Bio Medical Projects – ElysiumPro

Bio Medical Projects

CSE Projects, ECE Projects
Description
Bio-Medical engineering is the application of engineering principles to the fields of biology and health care. Students can work with doctors, therapists, and researchers to develop systems, equipment, and devices that solve clinical problems. Projects can cover bio gadgets, biomedical devices, and RFID applications.
Download Project List

Quality Factor

  • 100% Assured Results
  • Best Project Explanation
  • Tons of References
  • Cost Optimized
  • Control Panel Access


1. Detection of Age-Related Macular Degeneration in Fundus Images by an Associative Classifier
In this paper, we propose the application of a novel associative classifier, the Heaviside's Classifier, for the early detection of Age-Related Macular Degeneration in retinal fundus images. The retinal fundus images are first processed by a simple method based on homomorphic filtering and basic mathematical morphology operations; in the second phase, we extract relevant features from the images using Zernike moments and apply a feature selection method to choose the best features from the original feature set. The dataset created from the images with the best features is used to train and test a new classification model whose learning and classification phases are based on the Heaviside function. Experimental results show that our method achieves an accuracy of about 94.12% on a dataset created from images belonging to well-known image repositories.
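
As a rough illustration of the preprocessing stage described above, the sketch below applies a homomorphic filter followed by a grayscale morphological opening with NumPy and scikit-image. It is not the authors' implementation; the cutoff, gains, file name, and structuring-element size are assumed values.

```python
# Minimal sketch: homomorphic filtering + morphological opening (illustrative only).
import numpy as np
from skimage import io, morphology, img_as_float

def homomorphic_filter(image, cutoff=0.1, gain_low=0.5, gain_high=2.0):
    """Suppress illumination (low frequencies) and boost reflectance (high frequencies)."""
    log_img = np.log1p(img_as_float(image))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = log_img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    dist2 = (u[:, None] / rows) ** 2 + (v[None, :] / cols) ** 2
    # Gaussian-shaped high-pass transfer function
    h = (gain_high - gain_low) * (1 - np.exp(-dist2 / (2 * cutoff ** 2))) + gain_low
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
    return np.expm1(filtered)

fundus = io.imread("fundus_green_channel.png", as_gray=True)  # hypothetical input file
enhanced = homomorphic_filter(fundus)
opened = morphology.opening(enhanced, morphology.disk(3))     # simple morphology step
```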

2. Health of Things Algorithms for Malignancy Level Classification of Lung Nodules
Lung cancer is one of the leading causes of death worldwide. Several computer-aided diagnosis systems have been developed to help reduce lung cancer mortality rates. This paper presents a novel structural co-occurrence matrix (SCM)-based approach for classifying nodules as malignant or benign and also by their malignancy levels. The SCM technique is used to extract features from nodule images. The computed tomography exams from the Lung Image Database Consortium and Image Database Resource Initiative datasets provide information concerning nodule positions and their malignancy levels. The SCM was applied to both grayscale and Hounsfield-unit images with four filters, to wit, mean, Laplace, Gaussian, and Sobel filters, creating eight different configurations. The classification stage used three well-known classifiers, multilayer perceptron, support vector machine, and k-nearest neighbors, applied to two tasks: (i) classifying the nodule images as malignant or benign and (ii) classifying the lung nodules by malignancy level (1 to 5). The results of this approach were compared with four other feature extraction methods: gray-level co-occurrence matrix, local binary patterns, central moments, and statistical moments.
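
A hedged sketch of the classification stage is shown below: given a feature matrix already extracted (SCM or otherwise), the three classifiers named in the abstract can be compared with scikit-learn. The feature and label files are placeholders.

```python
# Illustrative comparison of the three classifiers mentioned above (MLP, SVM, k-NN).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("nodule_features.npy")   # hypothetical SCM feature matrix, shape (n_nodules, n_features)
y = np.load("nodule_labels.npy")     # 0 = benign, 1 = malignant (or levels 1-5)

classifiers = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)    # feature scaling helps MLP, SVM and k-NN alike
    scores = cross_val_score(pipe, X, y, cv=5)     # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```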

3. Improved Evidential Fuzzy C-Means Method
Dempster-Shafer evidence theory (DS theory) is widely used in brain magnetic resonance imaging (MRI) segmentation due to its efficient combination of evidence from different sources. In this paper, an improved MRI segmentation method based on fuzzy c-means (FCM) and DS theory is proposed. First, an average fusion method is used to reduce the uncertainty and conflicting information in the images. Then, neighborhood information and the differing influence of the spatial locations of neighborhood pixels are taken into consideration to handle spatial information. Finally, segmentation and sensor data fusion are achieved using DS theory. Experiments on simulated images and MRI images illustrate that the proposed method is more effective for image segmentation.
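
For readers unfamiliar with the FCM component, the sketch below is a bare-bones fuzzy c-means on pixel intensities written from the standard update equations; it deliberately omits the evidential (DS theory) and neighborhood extensions that the paper actually contributes.

```python
# Plain fuzzy c-means on 1-D intensities (standard algorithm, no DS-theory extension).
import numpy as np

def fuzzy_cmeans(values, n_clusters=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    x = values.reshape(-1, 1).astype(float)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                     # random initial fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]    # membership-weighted cluster centers
        dist = np.abs(x - centers.T) + 1e-12              # distance of every pixel to each center
        inv = dist ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)      # standard membership update
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return centers.ravel(), u

# Example: segment an MRI slice into 3 tissue classes by intensity.
# mri = skimage.io.imread("brain_slice.png", as_gray=True)    # hypothetical input
# centers, memberships = fuzzy_cmeans(mri.ravel(), n_clusters=3)
# labels = memberships.argmax(axis=1).reshape(mri.shape)
```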

4. Retinal Microaneurysm Detection Using Local Convergence Index Features
Retinal microaneurysms (MAs) are the earliest clinical sign of diabetic retinopathy. Detection of MAs is crucial for the early diagnosis of diabetic retinopathy and the prevention of blindness. In this paper, a novel and reliable method for automatic detection of MAs in retinal images is proposed. In the first stage of the proposed method, several preliminary microaneurysm candidates are extracted using a gradient weighting technique and an iterative thresholding approach. In the next stage, in addition to intensity and shape descriptors, a new set of features based on local convergence index filters is extracted for each candidate. Finally, the collective set of features is fed to a hybrid sampling/boosting classifier to discriminate MA from non-MA candidates. The method is evaluated on images with different resolutions and modalities (color and scanning laser ophthalmoscope) using six publicly available data sets, including the Retinopathy Online Challenge (ROC) data set. The proposed method achieves an average sensitivity score of 0.471 on the ROC data set, outperforming state-of-the-art approaches in an extensive comparison. The experimental results on the other five data sets demonstrate the effectiveness and robustness of the proposed MA detection method regardless of image resolution and modality.

5. Delineation of Carpal Bones from Hand X-Ray Images through Prior Model, and Integration of Region-Based and Boundary-Based Segmentations
Image segmentation is critical and challenging in computer vision and medical image analysis. Despite decades of research, existing segmentation algorithms are still subject to typical segmentation problems such as over-segmentation, under-segmentation, and non-closed and spurious edges. In this paper, taking the carpal bones from hand X-ray images as the foreground regions, we propose an approach that integrates region-based and boundary-based segmentations to tackle these typical problems. First, adaptive local thresholding and adaptive Canny edge detection are explored to extract the foreground regions and the edge map. Second, the edge map and foreground regions are integrated by XORing: over-segmentation is tackled by adding a background boundary from the edge map near the carpal bone boundary, breaking the connection between the foreground and the over-segmented background; under-segmentation is handled by adding a foreground boundary from the edge map near the carpal bone boundary, enclosing the foreground missed by under-segmentation; and non-closed and spurious edges in the edge map are complemented by the carpal bone regions from the local adaptive thresholding. Optionally, marker-controlled watershed segmentation or an active contour-based method is employed to refine the integrated segmentation.
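
The first stage is easy to make concrete. The sketch below, using scikit-image, extracts the two cues (local adaptive thresholding and Canny edges) and shows a naive XOR-style combination; the paper's actual integration logic is considerably more involved, and the block size, sigma, and file name are assumed values.

```python
# Simplified sketch of stage one: local thresholding and Canny edges, then an XOR-style combination.
import numpy as np
from skimage import io, feature, filters, img_as_float

xray = img_as_float(io.imread("hand_xray.png", as_gray=True))   # hypothetical input

# Region-based cue: adaptive (local) thresholding.
local_thresh = filters.threshold_local(xray, block_size=51, offset=0.01)  # block size is an assumed value
foreground = xray > local_thresh

# Boundary-based cue: Canny edge map.
edges = feature.canny(xray, sigma=2.0)

# Naive integration: XOR inserts edge pixels as boundaries that split merged regions
# (only a toy stand-in for the paper's integration scheme).
integrated = np.logical_xor(foreground, edges & foreground)
```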

6. Robust Single-Image Super-Resolution Based on Adaptive Edge-Preserving Smoothing Regularization
Single-image super-resolution (SR) reconstruction via sparse representation has recently attracted broad interest. It is known that a low-resolution (LR) image is susceptible to noise or blur due to the degradation of the observed image, which would lead to a poor SR performance. In this paper, we propose a novel robust edge-preserving smoothing SR (REPS-SR) method in the framework of sparse representation. An EPS regularization term is designed based on gradient-domain-guided filtering to preserve image edges and reduce noise in the reconstructed image. Furthermore, a smoothing-aware factor adaptively determined by the estimation of the noise level of LR images without manual interference is presented to obtain an optimal balance between the data fidelity term and the proposed EPS regularization term. An iterative shrinkage algorithm is used to obtain the SR image results for LR images. The proposed adaptive smoothing-aware scheme makes our method robust to different levels of noise. Experimental results indicate that the proposed method can preserve image edges and reduce noise and outperforms the current state-of-the-art methods for noisy images.

7. Glioma Segmentation with a Unified Algorithm in Multimodal MRI Images
To achieve better segmentation performance, we propose a unified algorithm for automatic glioma segmentation. In this paper, we first use spatial fuzzy c-means clustering to estimate the region of interest in multimodal MRI images, and then extract seed points from it for region growing based on a new notion of "affinity". In the end, we design a two-step strategy to refine the glioma border with region merging and an improved distance-regularized level set method. On the BRATS 2015 database, we evaluate the accuracy and robustness of our method with performance scores including Dice, positive predictive value (PPV), and sensitivity metrics, as well as Hausdorff and Euclidean distance (HD & ED). The high metric values (Dice = 0.86, PPV = 0.90, and sensitivity = 0.84) and small distance errors (HD = 14.39 mm and ED = 3.31 mm) indicate a remarkable accuracy, and the method ranks first in terms of Dice and PPV compared with state-of-the-art methods. In addition, the robustness is also at a high level thanks to the refinement structure, and Spearman's rank coefficient test verifies a significant correlation between the high-grade and low-grade glioma results. Overall, the proposed method is effective in segmenting gliomas in multimodal or FLAIR images and has potential for routine examination of gliomas in daily clinical practice.

8. BULDP: Biomimetic Uncorrelated Locality Discriminant Projection for Feature Extraction in Face Recognition
This paper develops a new dimensionality reduction method, named biomimetic uncorrelated locality discriminant projection (BULDP), for face recognition. It is based on unsupervised discriminant projection and two human bionic characteristics: the principle of homology continuity and the principle of heterogeneous similarity. With these two human bionic characteristics, we propose a novel adjacency coefficient representation, which not only captures the category information between different samples but also reflects the continuity between similar samples and the similarity between different samples. By applying this new adjacency coefficient in the unsupervised discriminant projection, it can be shown that we can transform the original data space into an uncorrelated discriminant subspace. A detailed solution of the proposed BULDP is given based on singular value decomposition. Moreover, we also develop a nonlinear version of BULDP using kernel functions for nonlinear dimensionality reduction. The performance of the proposed algorithms is evaluated and compared with state-of-the-art methods on four public benchmarks for face recognition. Experimental results show that the proposed BULDP method and its nonlinear version achieve highly competitive recognition performance.

9. Material Decomposition Using Ensemble Learning for Spectral X-ray Imaging
Material decomposition allows the reconstruction of material-specific images in spectral X-ray imaging, which requires efficient decomposition models. Due to the presence of nonideal effects in X-ray imaging systems, it is difficult to explicitly estimate the imaging systems for material decomposition tasks. As an alternative to previous empirical material decomposition methods, we investigated material decomposition using ensemble learning methods in this paper. Three ensemble methods with two decision trees as the base learning algorithms were investigated to perform material decomposition in both simulation and experiment. The results were quantitatively evaluated for comparison studies. In general, the results demonstrate that the proposed ensemble learning methods often outperform their base learning algorithms, and rarely reduce performance. Compared to the reference methods and its base learning algorithm, the performance of the Boosting method using REPTree with regularization is improved by over 42% and 13%, respectively, in the noiseless simulated scenario of the XCAT phantom with cardiac and respiratory motion, and over 36% and 17%, respectively, in the noisy scenario. Simultaneously, the performance is improved by over 9% and 8%, respectively, in the original torso phantom scenario, and over 13% and 12%, respectively, in the denoising scenario. The results indicate that ensemble learning with gradient descent optimization algorithms is more appropriate for material decomposition tasks.
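
The general idea of boosting a weak tree learner for a decomposition task can be sketched with scikit-learn. REPTree is a Weka learner, so the pruned DecisionTreeRegressor below is only a stand-in; the data files and hyperparameters are assumptions, not the paper's setup.

```python
# Sketch of boosting an ensemble over a base tree for material decomposition, treated as regression.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = np.load("spectral_measurements.npy")   # hypothetical multi-bin X-ray measurements, (n_pixels, n_bins)
y = np.load("material_thickness.npy")      # hypothetical ground-truth amount of one basis material

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base = DecisionTreeRegressor(max_depth=6, random_state=0)          # stand-in for REPTree
boosted = AdaBoostRegressor(estimator=base, n_estimators=100, random_state=0)

for name, model in [("base tree", base), ("boosted ensemble", boosted)]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5    # root-mean-square error
    print(f"{name}: RMSE = {rmse:.4f}")
```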

10. Artificial Neural Network Enhanced Bayesian PET Image Reconstruction
In positron emission tomography (PET) image reconstruction, the Bayesian framework with various regularization terms has been implemented to constrain the radio tracer distribution. Varying the regularizing weight of a maximum a posteriori (MAP) algorithm specifies a lower bound of the tradeoff between variance and spatial resolution measured from the reconstructed images. The purpose of this paper is to build a patch-based image enhancement scheme to reduce the size of the unachievable region below the bound and thus to quantitatively improve the Bayesian PET imaging. We cast the proposed enhancement as a regression problem which models a highly nonlinear and spatial-varying mapping between the reconstructed image patches and an enhanced image patch. An artificial neural network model named multilayer perceptron (MLP) with backpropagation was used to solve this regression problem through learning from examples. Using the BrainWeb phantoms, we simulated brain PET data at different count levels of different subjects with and without lesions. The MLP was trained using the image patches reconstructed with a MAP algorithm of different regularization parameters for one normal subject at a certain count level. To evaluate the performance of the trained MLP, reconstructed images from other simulations and two patient brain PET imaging data sets were processed.
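
The patch-to-patch regression idea can be illustrated with a small scikit-learn sketch. The patch arrays and layer sizes below are hypothetical; the paper's MLP, patch extraction, and reassembly details are not reproduced here.

```python
# Sketch of the patch-based regression idea: map patches from MAP reconstructions to enhanced patches.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training arrays: each row is a flattened patch.
X_patches = np.load("map_recon_patches.npy")      # patches from MAP reconstructions (inputs)
y_patches = np.load("target_patches.npy")         # corresponding high-quality target patches

mlp = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
mlp.fit(X_patches, y_patches)                     # learn the nonlinear patch-to-patch mapping

# At test time, enhance a new reconstruction patch by patch, then reassemble
# (overlapping patches would normally be averaged back onto the image grid).
test_patches = np.load("test_recon_patches.npy")
enhanced_patches = mlp.predict(test_patches)
```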

11. Image Segmentation for Intensity Inhomogeneity in Presence of High Noise
Automated segmentation of fine object details in a given image is of crucial interest in different imaging fields. In this paper, we propose a new variational level-set model for both global and interactive/selective segmentation tasks, which can deal with intensity inhomogeneity and the presence of noise. The proposed method maintains the same performance on clean and noisy vector-valued images. The model utilizes a combination of a locally computed denoising constrained surface and a denoising fidelity term to ensure fine segmentation of the local and global features of a given image. The two-phase level-set formulation has been extended to a multi-phase formulation to successfully segment medical images of the human brain. Comparative experiments with state-of-the-art models show the advantages of the proposed method.

12. Deconvolution and Restoration of Optical Endomicroscopy Images
Optical endomicroscopy (OEM) is an emerging technology platform with preclinical and clinical imaging applications. Pulmonary OEM via fiber bundles has the potential to provide in vivo, in situ molecular signatures of disease such as infection and inflammation. However, enhancing the quality of the data acquired by this technique for better visualization and subsequent analysis remains a challenging problem. Cross coupling between fiber cores and sparse sampling by imaging fiber bundles are the main causes of image degradation and poor detection performance (i.e., of inflammation, bacteria, etc.). In this paper, we address the problem of deconvolution and restoration of OEM data. We propose a hierarchical Bayesian model to solve this problem and compare three estimation algorithms that exploit the resulting joint posterior distribution. The first method is based on Markov chain Monte Carlo methods; however, it exhibits a relatively long computational time. The second and third algorithms address this issue and are based on a variational Bayes approach and an alternating direction method of multipliers algorithm, respectively. Results on both synthetic and real datasets illustrate the effectiveness of the proposed methods for the restoration of OEM images.

13. Can Signal-to-Noise Ratio Perform as a Baseline Indicator for Medical Image Quality Assessment?
Natural image quality assessment (NIQA) is attracting increasing attention, yet NIQA models are rarely used in the medical community. A few studies employ NIQA methodologies for medical image quality assessment (MIQA), but building the benchmark data sets requires considerable time and professional skill. In particular, the characteristics of synthesized distortions differ from those of clinical distortions, which makes the results less convincing. In the clinic, the signal-to-noise ratio (SNR) is widely used; it is defined as the quotient of the mean signal intensity measured in a tissue region of interest (ROI) and the standard deviation of the signal intensity in an air region outside the imaged object, with both regions outlined by specialists. We take advantage of the fact that SNR is routinely used and ask whether the SNR measure can serve as a baseline metric for the development of MIQA algorithms. To address this question, the inter-observer reliability of the SNR measure is investigated for different tissue ROIs [white matter (WM); cerebral spinal fluid (CSF)] in magnetic resonance (MR) images. A total of 192 T2, 88 T1, 76 T2, and 55 contrast-enhanced T1 (T1C) weighted images are analyzed. Statistical analysis indicates that SNR values are consistent between different observers for the same ROI in each modality (Wilcoxon rank sum test, pw ≥ 0.11; paired sample t-test, pp ≥ 0.28).
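
The SNR definition quoted above translates directly into code. The sketch below assumes the tissue and air ROI masks have already been drawn, as described in the abstract.

```python
# Direct implementation of the SNR definition used above: mean signal in a tissue ROI
# divided by the standard deviation in an air ROI outside the imaged object.
import numpy as np

def roi_snr(image, tissue_mask, air_mask):
    signal = image[tissue_mask].mean()   # mean intensity inside the tissue ROI (e.g., WM or CSF)
    noise = image[air_mask].std()        # intensity spread in the background air ROI
    return signal / noise

# mr_slice, wm_mask, air_mask would come from the specialist-drawn ROIs described above.
# print(f"SNR(WM) = {roi_snr(mr_slice, wm_mask, air_mask):.2f}")
```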

14. Incorporating a Noise Reduction Technique Into X-Ray Tensor Tomography
X-ray tensor tomography (XTT) is a novel imaging modality for the three-dimensional reconstruction of X-ray scattering tensors from dark-field images obtained in a grating interferometry setup. The two-dimensional dark-field images measured in XTT are degraded by noise effects, such as detector readout noise and insufficient photon statistics, and consequently, the three-dimensional volumes reconstructed from this data exhibit noise artifacts. In this paper, we investigate the best way to incorporate a denoising technique into the XTT reconstruction pipeline, i.e., the popular total variation denoising technique. We propose two different schemes of including denoising in the reconstruction process, one using a column block-parallel iterative scheme and one using a whole-system approach. In addition, we compare the results when using a simple denoising approach applied either before or after reconstruction. The effectiveness is evaluated qualitatively and quantitatively based on datasets from an industrial sample and a clinical sample. The results clearly demonstrate the superiority of including denoising in the reconstruction process, along with slight advantages of the whole-system approach.
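
As a point of reference for the "denoise before reconstruction" baseline mentioned above, total variation denoising of a single dark-field projection looks roughly like this in scikit-image; the file name and weight are assumptions.

```python
# Simple "denoise before reconstruction" baseline using total variation denoising.
from skimage import io, img_as_float
from skimage.restoration import denoise_tv_chambolle

dark_field = img_as_float(io.imread("darkfield_projection.png", as_gray=True))  # hypothetical input
denoised = denoise_tv_chambolle(dark_field, weight=0.1)   # weight balances smoothness vs. data fidelity
```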

15. Deep Regression Segmentation for Cardiac Bi-Ventricle MR Images
Cardiac bi-ventricle segmentation can help physicians obtain clinical indices, such as the mass and volume of the left ventricle (LV) and right ventricle (RV). In this paper, we propose a regression segmentation framework to delineate the boundaries of the bi-ventricle from cardiac magnetic resonance (MR) images by building a regression model automatically and accurately. First, we extract DAISY features from the images. Then, a point-based representation method is employed to depict the boundaries. Finally, we use DAISY features as input and boundary points as labels to train a regression model based on a deep belief network. Combining deep learning with DAISY features, the regression model can capture high-level image information and accurately segment the bi-ventricle with fewer assumptions and lower computational cost. In our experiment, the performance of the proposed framework is compared with manual segmentation on 145 clinical subjects (2900 images in total), collected from three hospitals affiliated with two health care centers (London Healthcare Center and St. Josephs HealthCare). The results of our method and the manual segmentation are highly consistent. The Pearson's correlation coefficient between automated boundaries and manual annotations reaches 0.995 (endocardium of LV), 0.997 (epicardium of LV), and 0.985 (RV). The average Dice metric reaches 0.916 (endocardium of LV), 0.941 (epicardium of LV), and 0.844 (RV). Altogether, the experimental results demonstrate the efficacy of our regression segmentation framework for cardiac MR images.
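
The feature step can be illustrated in a few lines with scikit-image's built-in DAISY extractor; the parameters and input file below are assumed, not the paper's configuration.

```python
# Sketch of the feature step: dense DAISY descriptors from a cardiac MR slice (scikit-image).
from skimage import io, img_as_float
from skimage.feature import daisy

mr_slice = img_as_float(io.imread("cardiac_mr_slice.png", as_gray=True))   # hypothetical input
descriptors = daisy(mr_slice, step=8, radius=15, rings=3, histograms=8, orientations=8)
# descriptors has shape (rows, cols, feature_dim); each spatial cell yields one DAISY vector,
# which the paper pairs with boundary-point labels to train the deep-belief-network regressor.
print(descriptors.shape)
```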

16. Mass Segmentation in Automated 3-D Breast Ultrasound Using Adaptive Region Growing and Supervised Edge-Based Deformable Model
Automated 3-D breast ultrasound has been proposed as a complementary modality to mammography for the early detection of breast cancers. To facilitate the interpretation of these images, computer-aided detection systems are being developed, in which mass segmentation is an essential component for feature extraction and temporal comparisons. However, automated segmentation of masses is challenging because of the large variety in shape, size, and texture of these 3-D objects. In this paper, the authors aim to develop a computerized segmentation system which uses a seed position as the only prior of the problem. A two-stage segmentation approach is proposed incorporating shape information from training masses. In the first stage, a new adaptive region growing algorithm is used to give a rough estimation of the mass boundary. The similarity threshold of the proposed algorithm is determined using a Gaussian mixture model based on the volume and circularity of the training masses. In the second stage, a novel geometric edge-based deformable model is introduced using the result of the first stage as the initial contour. In a data set of 50 masses, including 38 malignant and 12 benign lesions, the proposed segmentation method achieved a mean Dice of 0.74 ± 0.19, which outperformed the adaptive region growing with a mean Dice of 0.65 ± 0.2 (p-value < 0.02).

17. Image Segmentation Using Disjunctive Normal Bayesian Shape and Appearance Models
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. For instance, most active shape and appearance models require landmark points and assume unimodal shape and appearance distributions, and the level set representation does not support construction of local priors. In this paper, we present novel appearance and shape models for image segmentation based on a differentiable implicit parametric shape representation called a disjunctive normal shape model (DNSM). The DNSM is formed by the disjunction of polytopes, which themselves are formed by the conjunctions of half-spaces. The DNSM's parametric nature allows the use of powerful local prior statistics, and its implicit nature removes the need to use landmarks and easily handles topological changes. In a Bayesian inference framework, we model arbitrary shape and appearance distributions using nonparametric density estimations, at any local scale. The proposed local shape prior results in accurate segmentation even when very few training shapes are available, because the method generates a rich set of shape variations by locally combining training samples. We demonstrate the performance of the framework by applying it to both 2-D and 3-D data sets with emphasis on biomedical image segmentation applications.

18. Deep Neural Networks for the Recognition and Classification of Heart Murmurs Using Neuromorphic Auditory Sensors
Auscultation is one of the most widely used techniques for detecting cardiovascular disease, one of the main causes of death in the world. Heart murmurs are the most common abnormal finding when a patient visits the physician for auscultation. These heart sounds can either be innocent, which are harmless, or abnormal, which may be a sign of a more serious heart condition. However, the accuracy of primary care physicians and expert cardiologists during auscultation is not good enough to avoid most type-I errors (healthy patients sent for an echocardiogram) and type-II errors (pathological patients sent home without medication or treatment). In this paper, the authors present a novel convolutional neural network based tool for classifying healthy people versus pathological patients using a neuromorphic auditory sensor for FPGA that is able to decompose the audio into frequency bands in real time. For this purpose, different networks have been trained with the heart murmur information contained in heart sound recordings obtained from nine different heart sound databases sourced from multiple research groups. These samples are segmented and preprocessed using the neuromorphic auditory sensor to decompose their audio information into frequency bands, and sonogram images of the same size are then generated. These images have been used to train and test different convolutional neural network architectures. The best results have been obtained with a modified version of the AlexNet model, achieving 97% accuracy (specificity: 95.12%, sensitivity: 93.20%) while reducing type-II errors.

19. A Meshfree Representation for Cardiac Medical Image Computing
The prominent advantage of the meshfree method is the way it builds the representation of the computational domain, based on nodal points without any explicit meshing connectivity. Therefore, the meshfree method can conveniently perform numerical computation inside domains of interest with large deformation or inhomogeneity. In this paper, we adopt the idea of a meshfree representation for cardiac medical image analysis in order to overcome the difficulties caused by the large deformation and inhomogeneous materials of the heart. In our implementation, as the element-free Galerkin method can efficiently build a meshfree representation using its shape function with moving least squares fitting, we apply this meshfree method to handle large deformation and inhomogeneity when solving cardiac segmentation and motion tracking problems. We evaluate the performance of the meshfree representation on synthetic heart data and an in-vivo cardiac MRI image sequence. Results show that the error of our framework against the ground truth was 0.1189 ± 0.0672, while the error of the traditional FEM was 0.1793 ± 0.1166. The proposed framework has minimal consistency constraints, handles large deformation and material discontinuities simply and efficiently, and provides a way to avoid complicated meshing procedures while preserving accuracy with a relatively small number of nodes.

20. Multimodal Breast Parenchymal Patterns Correlation Using a Patient-Specific Biomechanical Model
In this paper, we aim to produce a realistic 2-D projection of the breast parenchymal distribution from a 3-D breast magnetic resonance image (MRI). To evaluate the accuracy of our simulation, we compare our results with the local breast density (i.e., density map) obtained from the complementary full-field digital mammogram. To achieve this goal, we have developed a fully automatic framework which registers MRI volumes to X-ray mammograms using a subject-specific biomechanical model of the breast. The optimization step modifies the position, orientation, and elastic parameters of the breast model to perform the alignment between the images. When the model reaches an optimal solution, the MRI glandular tissue is projected and compared with the one obtained from the corresponding mammograms. To reduce the loss of information during ray-casting, we introduce a new approach that avoids resampling the MRI volume. In the results, we focus on evaluating the agreement of the distributions of glandular tissue, the degree of structural similarity, and the correlation between the real and synthetic density maps. Our approach obtained a high structural agreement regardless of the glandularity of the breast, whilst the similarity of the glandular tissue distributions and the correlation between both images increase in denser breasts. Furthermore, the synthetic images show continuity with respect to large structures in the density maps.

21. Texture Classification and Visualization of Time Series of Gait Dynamics in Patients with Neuro-Degenerative Diseases
The analysis of gait dynamics is helpful for predicting and improving the quality of life, morbidity, and mortality in neuro-degenerative patients. Feature extraction of physiological time series and classification between gait patterns of healthy control subjects and patients are usually carried out on the basis of 1-D signal analysis. The approach presented in this paper departs from conventional gait-analysis methods by transforming time series into images, from which texture features can be extracted using methods of texture analysis. Here, the fuzzy recurrence plot algorithm is applied to transform gait time series into texture images, which can be visualized to gain insight into disease patterns. Several texture features are then extracted from the fuzzy recurrence plots using the gray-level co-occurrence matrix for pattern analysis and machine classification to differentiate healthy control subjects from patients with Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis. Experimental results using only the right stride intervals of the four groups show the effectiveness of the proposed approach.
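
The texture-feature step is a standard gray-level co-occurrence computation and can be sketched with scikit-image. The recurrence-plot file, distances, and angles below are assumptions; the fuzzy recurrence plot itself is not reimplemented here.

```python
# Sketch of the texture-feature step: gray-level co-occurrence statistics from a recurrence-plot image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# rp_image is a hypothetical fuzzy recurrence plot rescaled to 8-bit gray levels.
rp_image = (np.load("fuzzy_recurrence_plot.npy") * 255).astype(np.uint8)

glcm = graycomatrix(rp_image, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # these scalars would feed the gait classifier described above
```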

22. Non-Rigid Contour-Based Registration of Cell Nuclei in 2-D Live Cell Microscopy Images Using a Dynamic Elasticity Model
The analysis of the pure motion of subnuclear structures without influence of the cell nucleus motion and deformation is essential in live cell imaging. In this paper, we propose a 2-D contour-based image registration approach for compensation of nucleus motion and deformation in fluorescence microscopy time-lapse sequences. The proposed approach extends our previous approach, which uses a static elasticity model to register cell images. Compared with that scheme, the new approach employs a dynamic elasticity model for the forward simulation of nucleus motion and deformation based on the motion of its contours. The contour matching process is embedded as a constraint into the system of equations describing the elastic behavior of the nucleus. This results in better performance in terms of the registration accuracy. Our approach was successfully applied to real live cell microscopy image sequences of different types of cells including image data that was specifically designed and acquired for evaluation of cell image registration methods. An experimental comparison with the existing contour-based registration methods and an intensity-based registration method has been performed. We also studied the dependence of the results on the choice of method parameters.

23. Optic Disk Detection in Fundus Image Based on Structured Learning
Automated optic disk (OD) detection plays an important role in developing computer-aided systems for eye diseases. In this paper, we propose an algorithm for OD detection based on structured learning. A classifier model is trained based on structured learning and then used to obtain the edge map of the OD. Thresholding is performed on the edge map to obtain a binary image of the OD. Finally, the circle Hough transform is carried out to approximate the boundary of the OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results. The results (an area overlap and Dice coefficient of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and true positive and false positive fractions of 0.9183 and 0.0102) show that the proposed method is very competitive with state-of-the-art methods and is a reliable tool for the segmentation of the OD.
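
The final circle-fitting step can be reproduced directly with scikit-image's circle Hough transform; the edge-map file and radius range below are assumed values, not the paper's settings.

```python
# Sketch of the final step above: approximate the optic disc boundary with a circle Hough transform.
import numpy as np
from skimage.transform import hough_circle, hough_circle_peaks

# edge_binary is a hypothetical binary edge map of the OD produced by the structured-learning step.
edge_binary = np.load("od_edge_map.npy").astype(bool)

radii = np.arange(30, 120, 2)                        # plausible OD radii in pixels (assumed range)
accumulator = hough_circle(edge_binary, radii)       # one accumulator plane per candidate radius
_, cx, cy, rad = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
print(f"OD centre: ({cx[0]}, {cy[0]}), radius: {rad[0]} px")
```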

24. Automatic Detection of Retinal Lesions for Screening of Diabetic Retinopathy
Objective: Diabetic retinopathy (DR) is characterized by the progressive deterioration of retina with the appearance of different types of lesions that include micro-aneurysms, hemorrhages, exudates, etc. Detection of these lesions plays a significant role for early diagnosis of DR. Methods: To this aim, this paper proposes a novel and automated lesion detection scheme, which consists of the four main steps: vessel extraction and optic disc removal, preprocessing, candidate lesion detection, and postprocessing. The optic disc and the blood vessels are suppressed first to facilitate further processing. Curvelet-based edge enhancement is done to separate out the dark lesions from the poorly illuminated retinal background, while the contrast between the bright lesions and the background is enhanced through an optimally designed wideband bandpass filter. The mutual information of the maximum matched filter response and the maximum Laplacian of Gaussian response are then jointly maximized. Differential evolution algorithm is used to determine the optimal values for the parameters of the fuzzy functions that determine the thresholds of segmenting the candidate regions. Morphology-based postprocessing is finally applied to exclude the falsely detected candidate pixels. Results and Conclusions: Extensive simulations on different publicly available databases highlight an improved performance over the existing methods with an average accuracy of 97.71 % and robustness in detecting the various types of DR lesions irrespective of their intrinsic properties.

25. 3D Feature Constrained Reconstruction for Low-Dose CT Imaging
Low-dose computed tomography (LDCT) images are often highly degraded by amplified mottle noise and streak artifacts. Maintaining image quality under low-dose scan protocols is a well-known challenge. Recently, sparse representation-based techniques have been shown to be efficient in improving such CT images. In this paper, we propose a 3D feature constrained reconstruction (3D-FCR) algorithm for LDCT image reconstruction. The feature information used in the 3D-FCR algorithm relies on a 3D feature dictionary constructed from available high quality standard-dose CT sample. The CT voxels and the sparse coefficients are sequentially updated using an alternating minimization scheme. The performance of the 3D-FCR algorithm was assessed through experiments conducted on phantom simulation data and clinical data. A comparison with previously reported solutions was also performed. Qualitative and quantitative results show that the proposed method can lead to a promising improvement of LDCT image quality.

26. A Novel Method to Predict Knee Osteoarthritis Progression on MRI Using Machine Learning Methods
This study explored the hidden biomedical information in knee MR images for osteoarthritis (OA) prediction. We computed the Cartilage Damage Index (CDI) from 36 informative locations on the tibiofemoral cartilage compartment from 3D MR imaging and used principal component analysis (PCA) to process the feature set. Four machine learning methods (artificial neural network (ANN), support vector machine (SVM), random forest, and naïve Bayes) were employed to predict the progression of OA, measured by change of Kellgren and Lawrence (KL) grade, Joint Space Narrowing on the Medial compartment (JSM) grade, and Joint Space Narrowing on the Lateral compartment (JSL) grade. To examine the different effects of medial and lateral informative locations, we divided the 36-dimensional feature set into an 18-dimensional medial feature set and an 18-dimensional lateral feature set and ran the experiment on the four classifiers separately. Experimental results show that the medial feature set generated better prediction performance than the lateral feature set, while the full 36-dimensional feature set generated the best. PCA is helpful for feature space reduction and performance improvement. For KL grade prediction, the best performance was achieved by the ANN with AUC = 0.761 and F-measure = 0.714. For JSM grade prediction, the best performance was achieved by the random forest with AUC = 0.785 and F-measure = 0.743, while for JSL grade prediction, the best performance was achieved by the ANN with AUC = 0.695 and F-measure = 0.796. The results indicate that the informative locations on the medial compartment provide more distinguishing features than those on the lateral compartment.
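
A hedged sketch of the evaluation loop is given below: PCA on the feature set followed by the four classifiers named in the abstract, scored by AUC and F-measure with scikit-learn. Feature files, PCA variance target, and hyperparameters are assumptions.

```python
# Sketch of the evaluation loop: PCA on CDI features, then four classifiers scored by AUC/F-measure.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, f1_score

X = np.load("cdi_features_36.npy")     # hypothetical 36-dimensional CDI feature matrix
y = np.load("oa_progression.npy")      # hypothetical binary progression label (e.g., KL grade change)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), PCA(n_components=0.95), model)   # keep 95% of variance
    proba = cross_val_predict(pipe, X, y, cv=5, method="predict_proba")[:, 1]
    pred = (proba > 0.5).astype(int)
    print(f"{name}: AUC = {roc_auc_score(y, proba):.3f}, F-measure = {f1_score(y, pred):.3f}")
```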

27. Learning to Detect Blue-White Structures in Dermoscopy Images with Weak Supervision
We propose a novel approach to identify one of the most significant dermoscopic criteria in the diagnosis of cutaneous melanoma: the blue-whitish structure (BWS). In this paper, we achieve this goal in a multiple instance learning (MIL) framework using only image-level labels indicating whether the feature is present or not. To this aim, each image is represented as a bag of (non-overlapping) regions, where each region may or may not be identified as an instance of BWS. A probabilistic graphical model [1] is trained (in MIL fashion) to predict the bag (image) labels. As output, we predict the classification label for the image (i.e., the presence or absence of BWS) and also localize the feature within the image. Experiments are conducted on a challenging dataset, with BWS detection outperforming state-of-the-art and competing methods. This study broadens the scope of modelling for computerized image analysis of skin lesions; in particular, it propounds a framework for identification of dermoscopic local features from weakly-labelled data.

28. Optimized Optical Coherence Tomography Imaging with Hough Transform-Based Fixed-Pattern Noise Reduction
Fixed-pattern noise seriously affects the clinical application of optical coherence tomography (OCT), especially in the imaging of tumorous tissue. We propose a Hough transform-based fixed-pattern noise reduction (HTFPNR) method to reduce fixed-pattern noise and optimize imaging of tumorous tissue with an OCT system. Using the HTFPNR method, we detect and map the outline of the fixed-pattern noise in the OCT images and then efficiently reduce it with a longitudinal and horizontal intelligent processing procedure. We adopt the image-to-noise ratio with full information (INRfi) and the noise reduction ratio (NRR) to evaluate the outcome of fixed-pattern noise reduction. The INRfi for noise reduction of an OCT image of an ex vivo brainstem tumor is approximately 21.92 dB. Six groups of OCT images containing three types of fixed-pattern noise have been validated via experimental evaluation of ex vivo gastric tumor. For the different types of fixed-pattern noise, the mean INRfi values are 25.24 dB, 23.04 dB, and 19.35 dB, respectively, demonstrating that the method is highly efficient in fixed-pattern noise reduction. The NRR ranges from 0.84 to 0.88 for the three types of added noise in the OCT images, demonstrating that the HTFPNR method preserves as much useful information as possible compared with previous research. The proposed HTFPNR method can be applied to fixed-pattern noise reduction of OCT images of other soft biological tissues in the future.

29. Automated Region of Interest Detection Method in Scintigraphic Glomerular Filtration Rate Estimation
The glomerular filtration rate (GFR) is a crucial index to measure renal function. In daily clinical practice, the GFR can be estimated using the Gates method, which requires the clinicians to define the region of interest (ROI) for the kidney and the corresponding background in dynamic renal scintigraphy. The manual placement of ROIs to estimate the GFR is subjective and labor-intensive, however, making it an undesirable and unreliable process. This work presents a fully automated ROI detection method to achieve accurate and robust GFR estimations. After image preprocessing, the ROI for each kidney was delineated using a shape prior constrained level set (spLS) algorithm and then the corresponding background ROIs were obtained according to the defined kidney ROIs. In computer simulations, the spLS method had the best performance in kidney ROI detection compared with the previous threshold method (Threshold) and the Chan-Vese level set (cvLS) method. In further clinical applications, 223 sets of 99mTc-diethylenetriaminepentaacetic acid (99mTc-DTPA) renal scintigraphic images from patients with abnormal renal function were reviewed. Compared with the former ROI detection methods (Threshold and cvLS), the GFR estimations based on the ROIs derived by the spLS method had the highest consistency and correlations (r=0.98, p<0.001) with the reference estimated by experienced physicians.

30. Automatic Retinal Vessel Segmentation via Deeply Supervised and Smoothly Regularized Network
In recent years, retinal vessel segmentation technology has become an important component of disease screening and diagnosis in clinical medicine. However, retinal vessel segmentation is a challenging task due to the complex distribution of blood vessels, relatively low contrast between target and background, and the potential presence of illumination artifacts and pathologies. In this paper, we propose an automatic retinal vessel segmentation network using deep supervision and smoothness regularization, which integrates the holistically-nested edge detector (HED) and global smoothness regularization from conditional random fields (CRFs). It is an end-to-end, pixel-to-pixel deep convolutional network that can produce better results than HED-based methods and methods in which CRF inference is applied as a post-processing step. With co-constraints between pixels, the proposed DSSRN obtains better results. Finally, we show that our proposed method achieves state-of-the-art vessel segmentation performance on all three benchmarks: DRIVE, STARE, and CHASE DB1.

31. A Hybrid Model for Image Denoising Combining Modified Isotropic Diffusion Model and Modified Perona-Malik Model
In this article, a hybrid image denoising algorithm based on directional diffusion is proposed. Specifically, we develop a new noise-removal model by combining a modified isotropic diffusion (ID) model and a modified Perona-Malik (PM) model. The hybrid model can adapt the diffusion process along the tangential direction of edges in the original image via a new control function based on the patch similarity modulus. In addition, the patch similarity modulus is used as the new structure indicator for the modified Perona-Malik model. The use of the second-order directional derivative along the edges' tangential direction allows the proposed model to reduce aliasing and noise around edges during edge-preserving smoothing. The proposed method is thus able to efficiently preserve edges, textures, thin lines, weak edges, and fine details, while preventing staircase effects. Computer experiments on synthetic and natural images demonstrate that the proposed model achieves better performance than conventional partial differential equation (PDE) models and some recent advanced models.
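
For reference, the classical (unmodified) Perona-Malik diffusion that the hybrid above builds on can be written in a few lines of NumPy; the iteration count, kappa, and step size are assumed values, and the paper's modified ID/PM terms are not included.

```python
# Minimal classical Perona-Malik diffusion (the unmodified model the hybrid above builds on).
import numpy as np

def perona_malik(image, n_iter=30, kappa=0.15, step=0.2):
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # differences in the four principal directions (periodic borders via np.roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conduction coefficients (exponential variant)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# denoised = perona_malik(noisy_image)   # noisy_image: a float image scaled to [0, 1]
```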

32. Pattern Classification for Gastrointestinal Stromal Tumors by Integration of Radiomics and Deep Convolutional Features
Predicting malignant potential is one of the most critical components of a computer-aided diagnosis (CAD) system for gastrointestinal stromal tumors (GISTs). These tumors have been studied only on the basis of subjective computed tomography (CT) findings. Among various methodologies, radiomics and deep learning algorithms, specifically convolutional neural networks (CNNs), have recently been confirmed to achieve significant success by outperforming the state-of-the-art performance in medical image pattern classification and have rapidly become leading methodologies in this field. However, the existing methods generally use radiomics or deep convolutional features independently for pattern classification, which tends to take into account only global or local features, respectively. In this paper, we introduce and evaluate a hybrid structure that includes different features selected with a radiomics model and a CNN and integrates these features for GIST classification. A radiomics model and a CNN architecture are constructed for global radiomics and local convolutional feature selection, respectively. Subsequently, we utilize the distinct radiomics and deep convolutional features to perform pattern classification for GISTs. Specifically, we propose a new pooling strategy to assemble the deep convolutional features of 54 3D patches from the same case and integrate these features with the radiomics features for each case, followed by a random forests (RF) classifier. Our method can be extensively evaluated using multiple clinical datasets.

33. Supervised Saliency Map Driven Segmentation of Lesions in Dermoscopic Images
Lesion segmentation is the first step in most automatic melanoma recognition systems. Deficiencies and difficulties in dermoscopic images such as color inconstancy, hair occlusion, dark corners, and color charts make lesion segmentation an intricate task. In order to detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images based on discriminative regional feature integration (DRFI). The DRFI method incorporates multi-level segmentation, regional contrast, property, and background descriptors, and a random forest regressor to create saliency scores for each region in the image. In our improved saliency detection method, mDRFI, we add new features to the regional property descriptors. Also, in order to achieve more robust regional background descriptors, a thresholding algorithm is proposed to obtain a new pseudo-background region. Findings reveal that mDRFI is superior to DRFI in detecting the lesion as the salient object in dermoscopic images. The proposed overall lesion segmentation framework uses the detected saliency map to construct an initial mask of the lesion through thresholding and post-processing operations. The initial mask is then evolved in a level set framework to better fit the lesion's boundaries. The results of evaluation tests on three public datasets show that our proposed segmentation method outperforms other conventional state-of-the-art segmentation methods.

34. Structure-Preserving Guided Retinal Image Filtering and Its Application for Optic Disc Analysis
Retinal fundus photographs have been used in the diagnosis of many ocular diseases such as glaucoma, pathological myopia, age-related macular degeneration, and diabetic retinopathy. With the development of computer science, computer-aided diagnosis has been developed to process and analyse retinal images automatically. One of the challenges in this analysis is that the quality of the retinal image is often degraded. For example, a cataract in the human lens will attenuate the retinal image, just as a cloudy camera lens reduces the quality of a photograph. It often obscures details in the retinal images and poses challenges for retinal image processing and analysis tasks. In this paper, we approximate the degradation of retinal images as a combination of human-lens attenuation and scattering. A novel structure-preserving guided retinal image filtering (SGRIF) is then proposed to restore images based on the attenuation and scattering model. The proposed SGRIF consists of a global structure transferring step and a global edge-preserving smoothing step. Our results show that the proposed SGRIF method is able to improve the contrast of retinal images, measured by histogram flatness, histogram spread, and variability of local luminosity. In addition, we further explored the benefits of SGRIF for subsequent retinal image processing and analysis tasks in two applications: deep learning based optic cup segmentation and sparse learning based cup-to-disc ratio (CDR) computation.

35. Disc-Aware Ensemble Network for Glaucoma Screening from Fundus Image
Glaucoma is a chronic eye disease that leads to irreversible vision loss. Most of the existing automatic screening methods firstly segment the main structure, and subsequently calculate the clinical measurement for detection and screening of glaucoma. However, these measurement-based methods rely heavily on the segmentation accuracy, and ignore various visual features. In this paper, we introduce a deep learning technique to gain additional image-relevant information, and screen glaucoma from the fundus image directly. Specifically, a novel Disc-aware Ensemble Network (DENet) for automatic glaucoma screening is proposed, which integrates the deep hierarchical context of the global fundus image and the local optic disc region. Four deep streams on different levels and modules are respectively considered as global image stream, segmentation-guided network, local disc region stream, and disc polar transformation stream. Finally, the output probabilities of different streams are fused as the final screening result. The experiments on two glaucoma datasets (SCES and new SINDI datasets) show our method outperforms other state-of-the-art algorithms.

36. Pulmonary Artery-Vein Classification in CT Images Using Deep Learning
Recent studies show that pulmonary vascular diseases may specifically affect arteries or veins through different physiologic mechanisms. To detect changes in the two vascular trees, physicians manually analyze the chest computed tomography (CT) image of the patients in search of abnormalities. This process is time-consuming, difficult to standardize and thus not feasible for large clinical studies or useful in real-world clinical decision making. Therefore, automatic separation of arteries and veins in CT images is becoming of great interest, as it may help physicians accurately diagnose pathological conditions. In this work, we present a novel, fully automatic approach to classifying vessels from chest CT images into arteries and veins. The algorithm follows three main steps: first, a scale-space particles segmentation to isolate vessels; then a 3D convolutional neural network (CNN) to obtain a first classification of vessels; finally, graph-cuts (GC) optimization to refine the results. To justify the usage of the proposed CNN architecture, we compared different 2D and 3D CNNs that may use local information from bronchus- and vessel-enhanced images provided to the network with different strategies. We also compared the proposed CNN approach with a Random Forests (RF) classifier. The methodology was trained and evaluated on the superior and inferior lobes of the right lung of eighteen clinical cases with non-contrast chest CT scans, in comparison with manual classification.

37. Enhancing the Image Quality via Transferred Deep Residual Learning of Coarse PET Sinograms
Increasing the image quality of positron emission tomography (PET) is an essential topic in the PET community. For instance, thin pixelated crystals have been used to provide high-spatial-resolution images, but at the cost of sensitivity and manufacturing expense. In this study, we propose an approach to enhance PET image resolution and noise properties for PET scanners with large pixelated crystals. To address the problem of coarse, blurred sinograms with large parallax errors associated with large crystals, we developed a data-driven, single-image super-resolution (SISR) method for sinograms, based on a deep residual convolutional neural network (CNN). Unlike CNN-based SISR on natural images, periodically padded sinogram data and a dedicated network architecture are used to make the method more efficient for PET imaging. Moreover, we include a transfer learning scheme to handle cases with poor labeling and small training data sets. The approach was validated via analytically simulated data (with and without noise), Monte Carlo simulated data, and pre-clinical data. Using the proposed method, we achieve comparable image resolution and better noise properties with large crystals whose bin sizes are four times those of thin crystals, for bin sizes from 1×1 mm2 to 1.6×1.6 mm2. Our approach uses external PET data as prior knowledge for training and does not require additional information during inference.

38. Design of a Gabor Filter HW Accelerator for Applications in Medical Imaging
The Gabor filter (GF) has been shown to provide good spatial-frequency and position selectivity, which makes it a very suitable solution for visual search, object recognition, and, in general, multimedia processing applications. GFs also prove useful in the processing of medical imaging, improving several of the filtering operations used for enhancement, denoising, and mitigation of artifact issues. However, the good performance of GFs comes at the price of a hardware complexity that translates into a large amount of mapped physical resources. This paper presents three different designs of a GF, showing different tradeoffs between accuracy, area, power, and timing. From the comparative study, it is possible to highlight the strong points of each one and choose the best design. The designs have been targeted to a Xilinx field-programmable gate array (FPGA) platform and synthesized to 90-nm CMOS standard cells. FPGA implementations achieve a maximum operating frequency among the different designs of 179 MHz, while 350 MHz is obtained from the CMOS synthesis. Therefore, 86 and 168 full-HD (1920 x 1080) f/s can be processed with the FPGA and standard-cell implementations, respectively. In order to meet space constraints, several considerations are proposed to achieve an optimization in terms of power consumption while still ensuring real-time performance.
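
As a software reference point for what the hardware designs above accelerate, a single Gabor filtering pass looks like this in scikit-image; the frequency, orientation, and input file are assumed values.

```python
# Software reference for the hardware designs above: one Gabor filtering pass in scikit-image.
from skimage import io, img_as_float
from skimage.filters import gabor

image = img_as_float(io.imread("medical_slice.png", as_gray=True))   # hypothetical input
# frequency and theta select the spatial-frequency band and orientation the filter responds to
real_response, imag_response = gabor(image, frequency=0.2, theta=0.0)
magnitude = (real_response ** 2 + imag_response ** 2) ** 0.5          # orientation-selective energy
```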

39. Deep Neural Networks for Ultrasound Beamforming
We investigate the use of deep neural networks (DNNs) for suppressing off-axis scattering in ultrasound channel data. Our implementation operates in the frequency domain via the short-time Fourier transform. The inputs to the DNN consisted of the separated real and imaginary components (i.e. inphase and quadrature components) observed across the aperture of the array, at a single frequency and for a single depth. Different networks were trained for different frequencies. The output had the same structure as the input and the real and imaginary components were combined as complex data before an inverse short-time Fourier transform was used to reconstruct channel data. Using simulation, physical phantom experiment, and in vivo scans from a human liver, we compared this DNN approach to standard delay-and-sum (DAS) beamforming and an adaptive imaging technique that uses the coherence factor (CF). For a simulated point target, the side lobes when using the DNN approach were about 60 dB below those of standard DAS. For a simulated anechoic cyst, the DNN approach improved contrast ratio (CR) and contrast-to-noise (CNR) ratio by 8.8 dB and 0.3 dB, respectively, compared to DAS. For an anechoic cyst in a physical phantom, the DNN approach improved CR and CNR by 17.1 dB and 0.7 dB, respectively. For two in vivo scans, the DNN approach improved CR and CNR by 13.8 dB and 9.7 dB, respectively. We also explored methods for examining how the networks in this work function.

40. Kidney Detection in 3D Ultrasound Imagery Via Shape to Volume Registration Based on Spatially Aligned Neural Network
This paper introduces a computer-aided kidney shape detection method suitable for volumetric (3D) ultrasound images. Using shape and texture priors, the proposed method automates the process of kidney detection, which is a problem of great importance in computer-assisted trauma diagnosis. The paper introduces a new complex-valued implicit shape model that represents the multi-regional structure of the kidney shape. A spatially aligned neural network classifier with complex-valued output is designed to classify voxels into the background and the multi-regional structure of the kidney. The complex values of the shape model and the classification outputs are incorporated into a new similarity metric such that the shape-to-volume registration process fits the shape model only on the actual kidney in the input ultrasound volumes. The algorithm's accuracy and sensitivity are evaluated using both simulated and actual 3D ultrasound images and compared against the performance of the state-of-the-art. The results support the claims about the accuracy and robustness of the proposed kidney detection method, and statistical analysis validates its superiority over the state-of-the-art.

41Surrogate-assisted Retinal OCT Image Classification Based on Convolutional Neural Networks
Optical Coherence Tomography (OCT) is becoming one of the most important modalities for the noninvasive assessment of retinal eye diseases. As the number of acquired OCT volumes increases, automating the OCT image analysis is becoming increasingly relevant. In this paper, we propose a surrogate-assisted classification method to classify retinal OCT images automatically based on convolutional neural networks (CNNs). Image denoising is first performed to reduce the noise. Thresholding and morphological dilation are applied to extract the masks. The denoised images and the masks are then employed to generate a large number of surrogate images, which are used to train the CNN model. Finally, the prediction for a test image is determined by the average of the outputs from the trained CNN model on the surrogate images. The proposed method has been evaluated on different databases. The results (AUC of 0.9783 on the local database and AUC of 0.9856 on the Duke database) show that the proposed method is a very promising tool for classifying retinal OCT images automatically.
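A minimal sketch of the surrogate-image idea is shown below: denoise the OCT image, extract a mask by thresholding plus dilation, build several surrogate images (here, masked random shifts, an assumed augmentation), and average the CNN outputs over them. The `cnn_predict` function is a hypothetical stand-in for the trained model, not the authors' network.

```python
# Surrogate-assisted prediction sketch (augmentation choice is an assumption).
import numpy as np
from scipy.ndimage import median_filter, binary_dilation, shift

def make_surrogates(oct_image, n_surrogates=8, rng=np.random.default_rng(0)):
    denoised = median_filter(oct_image, size=3)                 # simple denoising
    mask = binary_dilation(denoised > denoised.mean(), iterations=3)
    surrogates = []
    for _ in range(n_surrogates):
        dy, dx = rng.integers(-5, 6, size=2)                    # small random shift
        surrogates.append(shift(denoised * mask, (dy, dx), order=1))
    return surrogates

def cnn_predict(image):
    """Placeholder for the trained CNN; returns a class-probability vector."""
    return np.array([0.5, 0.5])

test_image = np.random.rand(224, 224)
prediction = np.mean([cnn_predict(s) for s in make_surrogates(test_image)], axis=0)
```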

42Automatic Segmentation of Acute Ischemic Stroke from DWI using 3D Fully Convolutional DenseNets
Acute ischemic stroke is recognized as a common cerebral vascular disease in aging people. Accurate diagnosis and timely treatment can effectively improve the blood supply of the ischemic area and reduce the risk of disability or even death. Understanding the location and size of infarcts plays a critical role in the diagnosis decision. However, manual localization and quantification of stroke lesions are laborious and time-consuming. In this paper, we propose a novel automatic method to segment acute ischemic stroke from diffusion-weighted images (DWI) using deep 3D convolutional neural networks (CNNs). Our method can efficiently utilize 3D contextual information and automatically learn very discriminative features in an end-to-end and data-driven way. To relieve the difficulty of training a very deep 3D CNN, we equip our network with dense connectivity to enable the unimpeded propagation of information and gradients throughout the network. We train our model with a Dice objective function to combat the severe class imbalance problem in the data. A DWI dataset containing 242 subjects (90 for training, 62 for validation and 90 for testing) with various types of acute ischemic stroke was constructed to evaluate our method. Our model achieved high performance on various metrics (Dice similarity coefficient: 79.13%, lesion-wise precision: 92.67%, lesion-wise F1 score: 89.25%), outperforming other state-of-the-art CNN methods by a large margin.
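The Dice objective mentioned above is commonly implemented as a soft Dice loss over the predicted probability volume; the following is a generic PyTorch sketch of that idea for a binary 3D mask, not the authors' exact code.

```python
# Generic soft Dice loss for 3D binary segmentation (sketch).
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """logits: (N, 1, D, H, W) raw network output; target: binary mask, same shape."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(1, 2, 3, 4))
    union = probs.sum(dim=(1, 2, 3, 4)) + target.sum(dim=(1, 2, 3, 4))
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()

# Example: random volume and mask just to exercise the function.
logits = torch.randn(2, 1, 16, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 16, 64, 64) > 0.9).float()
loss = soft_dice_loss(logits, target)
loss.backward()
```

Because lesion voxels are a tiny fraction of a DWI volume, an overlap-based loss like this is less biased toward the background class than plain voxel-wise cross-entropy.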

43Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning
Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2D segmentation of multiple organs from fetal Magnetic Resonance (MR) slices, where only two types of these organs were annotated for training; and 3D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training.
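The image-specific fine-tuning step can be illustrated as a few gradient updates on the test image alone, using the user scribbles as sparse labels and a per-pixel weight map (for example, higher weight on scribbled or low-uncertainty pixels). All names and the simple weighting below are assumptions for illustration; they are not the authors' exact formulation.

```python
# Illustrative image-specific fine-tuning loop with a weighted, scribble-sparse loss.
import torch
import torch.nn.functional as F

def fine_tune_on_image(model, image, scribble_labels, weight_map, steps=20, lr=1e-4):
    """image: (1, C, H, W); scribble_labels: (1, H, W) with -1 for unlabeled pixels;
    weight_map: (1, H, W) per-pixel weights."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(image)                                   # (1, n_classes, H, W)
        per_pixel = F.cross_entropy(logits, scribble_labels.clamp(min=0),
                                    reduction='none')           # per-pixel loss
        labelled = (scribble_labels >= 0).float()               # scribbled pixels only
        loss = (per_pixel * weight_map * labelled).sum() / labelled.sum().clamp(min=1)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return model
```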

44Rapid contour detection for image classification
The author introduces a contour detection method that has relatively low complexity yet is still highly accurate. The method is based on extrema detection along the four principal orientations, a trick that can be used to detect not only edges but, in particular, also ridges and rivers. The author makes a comparison to the popular Canny algorithm and shows that the proposed method's only downside is that it cannot detect very high curvatures in edge contours. The method is applied to the task of image classification (satellite images, Caltech-101, etc.) and it is demonstrated that the use of all three contour types (edges, ridges, and rivers) improves classification accuracy compared to the use of only edge contours. Thus, for image classification, it is more important to extract multiple contour features; the exact detection method used appears to play a smaller role. The author's simple method is also appealing for use on individual frames, due to its low complexity.
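The core operation, detecting local extrema along the four principal orientations, can be sketched with plain array shifts: a pixel is kept if it exceeds both of its neighbours along the horizontal, vertical, or either diagonal direction. The response image and threshold below are illustrative assumptions, not the author's exact formulation.

```python
# Simplified numpy sketch of extrema detection along four orientations.
import numpy as np

def directional_extrema(response, threshold=0.1):
    """Mark pixels that are local maxima along any of the four orientations."""
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]   # horizontal, vertical, two diagonals
    r = np.pad(response, 1, mode='edge')
    out = np.zeros_like(response, dtype=bool)
    for dy, dx in offsets:
        centre = r[1:-1, 1:-1]
        fwd = r[1 + dy:r.shape[0] - 1 + dy, 1 + dx:r.shape[1] - 1 + dx]
        bwd = r[1 - dy:r.shape[0] - 1 - dy, 1 - dx:r.shape[1] - 1 - dx]
        out |= (centre > fwd) & (centre > bwd) & (centre > threshold)
    return out

# Example on a stand-in response image (vertical gradient magnitude of noise).
edges = directional_extrema(np.abs(np.gradient(np.random.rand(128, 128))[0]))
```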

45Joint Optic Disc and Cup Segmentation Based on Multi-label Deep Network and Polar Transformation
Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup-to-disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately and rely on hand-crafted visual features from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of a multi-scale input layer, a U-shape convolutional network, a side-output layer, and a multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple levels of receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. To further improve the segmentation performance, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system.
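The polar transformation mentioned above can be sketched with plain numpy interpolation: the fundus image is resampled around an assumed optic-disc centre so that the roughly circular disc and cup boundaries become roughly horizontal bands in the (angle, radius) grid. The centre, grid sizes, and stand-in image below are assumptions for illustration.

```python
# Cartesian-to-polar resampling sketch around an assumed optic-disc centre.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, centre, n_radii=256, n_angles=256, max_radius=None):
    """Resample a 2D image onto an (angle, radius) grid around `centre`."""
    cy, cx = centre
    if max_radius is None:
        max_radius = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx)
    radii = np.linspace(0, max_radius, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles)            # shape (n_angles, n_radii)
    ys = cy + rr * np.sin(aa)
    xs = cx + rr * np.cos(aa)
    return map_coordinates(image, [ys, xs], order=1)

fundus = np.random.rand(512, 512)                   # stand-in fundus image
polar = to_polar(fundus, centre=(256, 256))
```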

47SDI+: a Novel Algorithm for Segmenting Dermoscopic Images
Malignant skin lesions are among the most common types of cancer, and automated systems for their early detection are of fundamental importance. We propose SDI+, an unsupervised algorithm for the segmentation of skin lesions in dermoscopic images. It is articulated into three steps, aimed at extracting preliminary information on possible confounding factors, accurately segmenting the lesion, and post-processing the result. The overall method achieves high accuracy on dark skin lesions and can handle several cases where confounding factors could inhibit a clear interpretation by a human operator. We present extensive experimental results and comparisons achieved by the SDI+ algorithm on the ISIC 2017 dataset, highlighting its advantages and disadvantages.

48Glaucoma Detection from Fundus Images Using MATLAB GUI
Glaucoma is a troublesome disease in which the optic nerve of the eye is damaged, causing irretrievable loss of vision. If treatment is delayed, the person can go blind. Glaucoma is normally detected when there is a build-up of fluid in the front of the eye; as that extra fluid increases, the pressure in the eye also rises. Accordingly, the optic disc and optic cup enlarge and their diameters increase. The ratio of the cup diameter to the disc diameter is called the cup-to-disc ratio (CDR). In this system, a threshold-based segmentation method is used to localize the optic disc and optic cup, together with an edge detection and ellipse fitting algorithm. The proposed system for optic disc and optic cup localization and CDR calculation is implemented as MATLAB GUI software.
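The original project is built as a MATLAB GUI; the same threshold-and-measure idea can be sketched in Python as below. The use of the green channel and the fixed thresholds are illustrative assumptions, not values from the project, and the vertical-extent measurement is a simplification of the edge-detection and ellipse-fitting steps.

```python
# Rough CDR estimate from intensity thresholding of a fundus image (sketch).
import numpy as np

def cup_to_disc_ratio(fundus_rgb, disc_thresh=0.75, cup_thresh=0.9):
    """Vertical cup-to-disc ratio from simple thresholding (illustrative only)."""
    green = fundus_rgb[..., 1].astype(np.float64)
    green = green / green.max()
    disc_mask = green > disc_thresh                 # bright optic-disc region
    cup_mask = green > cup_thresh                   # even brighter cup region

    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return (rows[-1] - rows[0] + 1) if rows.size else 0

    disc_d, cup_d = vertical_extent(disc_mask), vertical_extent(cup_mask)
    return cup_d / disc_d if disc_d else float('nan')

cdr = cup_to_disc_ratio(np.random.rand(400, 400, 3))   # stand-in fundus image
print('CDR:', cdr)                                     # a larger CDR suggests higher glaucoma risk
```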

49Classification of Medical Images in the Biomedical Literature by Jointly Using Deep and Handcrafted Visual Features
The classification of medical images and illustrations from the biomedical literature is important for automated literature review, retrieval and mining. Although deep learning is effective for large-scale image classification, it may not be the optimal choice for this task as there is only a small training dataset. We propose a combined deep and handcrafted visual feature (CDHVF) based algorithm that uses features learned by three fine-tuned and pre-trained deep convolutional neural networks (DCNNs) and two handcrafted descriptors in a joint approach. We evaluated the CDHVF algorithm on the ImageCLEF 2016 Subfigure Classification dataset and it achieved an accuracy of 85.47%, which is higher than the best performance of other purely visual approaches listed in the challenge leaderboard. Our results indicate that handcrafted features complement the image representation learned by DCNNs on small training datasets and improve accuracy in certain medical image classification problems.
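The joint deep-plus-handcrafted feature idea can be sketched by concatenating DCNN features with a handcrafted descriptor and training a conventional classifier on the result. The specific descriptor (a grey-level histogram), the SVM, and the `cnn_features` placeholder below are illustrative choices, not necessarily the descriptors or classifier used in the paper.

```python
# Sketch: concatenate deep and handcrafted features, then train a classifier.
import numpy as np
from sklearn.svm import SVC

def handcrafted_descriptor(image, bins=32):
    """A simple grey-level histogram as a stand-in handcrafted descriptor."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def cnn_features(image):
    """Placeholder for features taken from a fine-tuned, pre-trained DCNN."""
    return np.random.rand(128)

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]     # stand-in figure images
labels = rng.integers(0, 2, size=20)                   # stand-in class labels
X = np.stack([np.concatenate([cnn_features(im), handcrafted_descriptor(im)])
              for im in images])
clf = SVC(kernel='rbf').fit(X, labels)
```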

50Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images



Topic Highlights



Bio Medical Projects:

Generally, Bio Medical Projects keep advancing along with new treatments and equipment. Indeed, Bio Medical Engineering has developed rapidly, and electronics projects have benefited from it, because it is an evergreen domain whose importance lies in saving thousands of people. ElysiumPro, your project partner, supports you.

Thus, this revolutionary field keeps changing according to its needs.

Trending Electronics Projects:

These projects are trending because of their live implementation. Many industries are seeking students with hands-on experience, so take up a project and get placed in top MNCs. Projects are proof that you have the potential to implement your ideas practically. ElysiumPro, your project partner, will help you reach your goal. Do your projects and be in demand. Bio gadgets, biomedical and RFID projects can be done; just check out our projects.

Indeed, we provide services to enhance students' knowledge. Our academy helps them achieve their milestones and shape their careers in a productive and attainable manner. Furthermore, you can gain exposure to modern technology. In particular, choose a domain that is currently in trend, so that it becomes an added advantage for your presentation as well as for your interview.

 

 
