Data Mining Projects – ElysiumPro

Data Mining Projects

CSE Projects
Description
Data Mining is the computing process of discovering patterns in large data sets, drawing on methods at the intersection of machine learning, statistics, and database systems. We provide data mining algorithms with source code to students that can solve many real-time issues with various software-based systems.

Quality Factor

  • 100% Assured Results
  • Best Project Explanation
  • Tons of References
  • Cost Optimized
  • Control Panel Access


1. Heterogeneous Information Network Embedding for Recommendation
Due to its flexibility in modelling data heterogeneity, the heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in recommender systems, called HIN based recommendation. It is challenging to develop effective methods for HIN based recommendation in both extraction and exploitation of the information from HINs. Most HIN based recommendation methods rely on path based similarity, which cannot fully mine latent structure features of users and items. In this paper, we propose a novel heterogeneous network embedding based approach for HIN based recommendation, called HERec. To embed HINs, we design a meta-path based random walk strategy to generate meaningful node sequences for network embedding. The learned node embeddings are first transformed by a set of fusion functions, and subsequently integrated into an extended matrix factorization (MF) model. The extended MF model and the fusion functions are jointly optimized for the rating prediction task. Extensive experiments on three real-world datasets demonstrate the effectiveness of the HERec model. Moreover, we show the capability of the HERec model for the cold-start problem, and reveal that the transformed embedding information from HINs can improve the recommendation performance.
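
As a hedged illustration of the meta-path guided random walk described above (not the authors' released code), the sketch below walks a toy user–movie–actor HIN so that node types follow a chosen symmetric meta-path; the graph, node types and meta-path here are invented for the example.

```python
import random

# Toy HIN (invented for illustration): adjacency lists plus a node-type map.
adj = {
    "u1": ["m1", "m2"], "u2": ["m2"],
    "m1": ["u1", "a1"], "m2": ["u1", "u2", "a1"],
    "a1": ["m1", "m2"],
}
node_type = {"u1": "U", "u2": "U", "m1": "M", "m2": "M", "a1": "A"}

def metapath_walk(start, metapath=("U", "M", "U"), length=8):
    """Random walk that only steps to neighbours whose type matches the next
    symbol of a symmetric meta-path (first and last types equal), e.g. U-M-U."""
    walk = [start]
    for step in range(1, length):
        wanted = metapath[step % (len(metapath) - 1)]
        candidates = [n for n in adj[walk[-1]] if node_type[n] == wanted]
        if not candidates:
            break
        walk.append(random.choice(candidates))
    return walk

# Such node sequences would then be fed to a skip-gram model (e.g. word2vec)
# to learn the embeddings that are later fused into matrix factorization.
print([metapath_walk("u1") for _ in range(3)])
```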

2. Efficient Vertical Mining of High Average-Utility Itemsets Based on Novel Upper-Bounds
Mining High Average-Utility Itemsets (HAUIs) in a quantitative database is an extension of the traditional problem of frequent itemset mining, with several practical applications. Discovering HAUIs is more challenging than mining frequent itemsets using the traditional support model, since the average-utilities of itemsets do not satisfy the downward-closure property. To design algorithms for mining HAUIs that reduce the search space of itemsets, prior studies have proposed various upper-bounds on the average-utilities of itemsets. However, these algorithms can generate a huge amount of unpromising HAUI candidates, which results in high memory consumption and long runtimes. To address this problem, this paper proposes four tight average-utility upper-bounds, based on a vertical database representation, and three efficient pruning strategies. Furthermore, a novel generic framework for comparing average-utility upper-bounds is presented. Based on these theoretical results, an efficient algorithm named dHAUIM is introduced for mining the complete set of HAUIs. dHAUIM represents the search space and quickly computes upper-bounds using a novel IDUL structure. Extensive experiments show that dHAUIM outperforms three state-of-the-art algorithms for mining HAUIs in terms of runtime, on both real-life and synthetic databases. Moreover, results show that the proposed pruning strategies dramatically reduce the number of candidate HAUIs.
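
The sketch below only makes the average-utility notion above concrete; the toy quantitative database and the deliberately loose upper bound are invented for illustration and are much weaker than the vertical, IDUL-based bounds used by dHAUIM.

```python
# Toy quantitative database (invented): each transaction maps item -> utility.
db = [
    {"a": 5, "b": 2, "c": 1},
    {"a": 10, "c": 6},
    {"b": 4, "c": 3, "d": 7},
]

def average_utility(itemset):
    """Sum of the itemset's utility over its supporting transactions,
    divided by the number of items (the average-utility measure)."""
    supporting = [t for t in db if set(itemset) <= t.keys()]
    return sum(sum(t[i] for i in itemset) for t in supporting) / len(itemset)

def naive_upper_bound(itemset):
    """A loose bound: the largest single-item utility of each supporting
    transaction upper-bounds the per-transaction average utility of any
    extension, so it can prune, though far less tightly than dHAUIM's bounds."""
    return sum(max(t.values()) for t in db if set(itemset) <= t.keys())

print(average_utility(("a", "c")), naive_upper_bound(("a", "c")))
```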

3. Privacy Characterization and Quantification in Data Publishing
The increasing interest in collecting and publishing large amounts of individuals' data to the public for purposes such as medical research, market analysis and economic measures has created major privacy concerns about individuals' sensitive information. To deal with these concerns, many Privacy-Preserving Data Publishing (PPDP) techniques have been proposed in the literature. However, they lack a proper privacy characterization and measurement. In this paper, we first present a novel multi-variable privacy characterization and quantification model. Based on this model, we are able to analyze the prior and posterior adversarial belief about attribute values of individuals. Then we show that privacy should not be measured based on one metric, and demonstrate how this could result in privacy misjudgment. We propose two different metrics for quantification of privacy leakage: distribution leakage and entropy leakage. Using these metrics, we analyze some of the most well-known PPDP techniques, such as k-anonymity, l-diversity and t-closeness. Based on our framework and the proposed metrics, we determine that all the existing PPDP schemes have limitations in privacy characterization. Our proposed privacy characterization and measurement framework contributes to better understanding and evaluation of these techniques. Thus, this paper provides a foundation for the design and analysis of PPDP schemes.
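
As a rough, hedged sketch of an entropy-based leakage measure in the spirit of the description above (the paper's exact formulation may differ), one can compare the adversary's uncertainty about a sensitive attribute before and after seeing the published equivalence class; the data below is invented.

```python
import math
from collections import Counter

def shannon_entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Invented example: sensitive attribute over the whole table (prior view)...
population = ["flu", "flu", "flu", "cancer", "cancer", "hiv"]
# ...and within the anonymized equivalence class containing the target (posterior view).
equivalence_class = ["flu", "flu", "cancer"]

prior_h = shannon_entropy(population)
posterior_h = shannon_entropy(equivalence_class)
print(f"entropy leakage ~ {prior_h - posterior_h:.3f} bits")
```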

4. Efficient Mining of Frequent Patterns on Uncertain Graphs
Uncertainty is intrinsic to a wide spectrum of real-life applications, which inevitably applies to graph data. Representative uncertain graphs are seen in bio-informatics, social networks, etc. This paper motivates the problem of frequent subgraph mining on single uncertain graphs, and investigates two different - probabilistic and expected - semantics in terms of support definitions. First, we present an enumeration-evaluation algorithm to solve the problem under probabilistic semantics. By showing the support computation under probabilistic semantics is #P-complete, we develop an approximation algorithm with accuracy guarantee for efficient problem-solving. To enhance the solution, we devise computation sharing techniques to achieve better mining performance. Afterwards, the algorithm is extended in a similar flavor to handle the problem under expected semantics, where checkpoint-based pruning and validation techniques are integrated. Experiment results on real-life datasets confirm the practical usability of the mining algorithms.
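
A hedged toy sketch of the probabilistic-semantics idea (not the paper's enumeration-evaluation algorithm): estimate, by sampling possible worlds, the probability that a fixed set of pattern edges all exist. The edge probabilities and the pattern are invented, and real subgraph support would additionally require subgraph-isomorphism counting.

```python
import random

# Invented uncertain graph: edge -> existence probability.
edges = {("a", "b"): 0.9, ("b", "c"): 0.5, ("a", "c"): 0.3}
pattern = [("a", "b"), ("b", "c")]   # edges of one fixed pattern occurrence

def sampled_probability(pattern, trials=20000):
    """Monte Carlo estimate of Pr[all pattern edges co-exist] over possible worlds."""
    hits = 0
    for _ in range(trials):
        world = {e for e, p in edges.items() if random.random() < p}
        hits += all(e in world for e in pattern)
    return hits / trials

print(sampled_probability(pattern))   # should be close to 0.9 * 0.5 = 0.45
```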

5. Harnessing Multi-source Data about Public Sentiments and Activities for Informed Design
The intelligence of Smart Cities (SC) is represented by its ability in collecting, managing, integrating, analyzing and mining multi-source data for valuable insights. In order to harness multi-source data for an informed place design, this paper presents "Public Sentiments and Activities in Places" multi-source data analysis flow (PSAP) in an Informed Design Platform (IDP). In terms of key contributions, PSAP implements 1) an Interconnected Data Model (IDM) to manage multi-source data independently and integrally, 2) an efficient and effective data mining mechanism based on multi-dimension and multi-measure queries (MMQs), and 3) concurrent data processing cascades with Sentiments in Places Analysis Mechanism (SPAM) and Activities in Places Analysis Mechanism (APAM), to fuse social network data with other data on public sentiment and activity comprehensively. As proved by a holistic evaluation, both SPAM and APAM outperform compared methods. Specifically, SPAM improves its classification accuracy gradually and significantly from 72.37% to about 85% within 9 crowd-calibration cycles, and APAM with an ensemble classifier achieves the highest precision of 92.13%, which is approximately 13% higher than the second best method. Finally, by applying MMQs on "Sentiment&Activity Linked Data", various place design insights of our testbed are mined to improve its livability.

6. An Efficient Method for High Quality and Cohesive Topical Phrase Mining
A phrase is a natural, meaningful, and essential semantic unit. In topic modeling, visualizing phrases for individual topics is an effective way to explore and understand unstructured text corpora. However, from the perspectives of phrase quality and topical cohesion, the outcomes of existing approaches remain to be improved. Usually, the process of topical phrase mining is twofold: phrase mining and topic modeling. For phrase mining, existing approaches often suffer from order-sensitivity and inappropriate segmentation, which often cause them to extract inferior-quality phrases. For topic modeling, traditional topic models do not fully consider the constraints induced by phrases, which may weaken the cohesion. Moreover, existing approaches often lose domain terminologies since they neglect the impact of the domain-level topical distribution. In this paper, we propose an efficient method for high quality and cohesive topical phrase mining. In our framework, we integrate a quality-guaranteed phrase mining method, a novel topic model incorporating the constraint of phrases, and a novel document clustering method into an iterative framework to improve both phrase quality and topical cohesion. We also describe efficient algorithmic designs to execute these methods. The empirical verification demonstrates that our method outperforms the state-of-the-art methods in terms of both interpretability and efficiency.

7. Semi-supervised Ensemble Clustering Based on Selected Constraint Projection
Traditional cluster ensemble approaches have several limitations. (1) Few make use of prior knowledge provided by experts. (2) It is difficult to achieve good performance in high-dimensional datasets. (3) All of the weight values of the ensemble members are equal, which ignores different contributions from different ensemble members. (4) Not all pairwise constraints contribute to the final result. In the face of this situation, we propose double weighting semi-supervised ensemble clustering based on selected constraint projection (DCECP) to address these limitations. Specifically, DCECP first adopts the random subspace technique in combination with the constraint projection procedure to handle high-dimensional datasets. Second, it treats prior knowledge of experts as pairwise constraints, and assigns different subsets of pairwise constraints to different ensemble members. An adaptive ensemble member weighting process is designed to associate different weight values with different ensemble members. Third, the weighted normalized cut algorithm is adopted to summarize clustering solutions and generate the final result. Finally, nonparametric statistical tests are used to compare multiple algorithms on real-world datasets. Our experiments on 15 high-dimensional datasets show that DCECP performs better than most clustering algorithms.

8. Approximate Order-Sensitive k-NN Queries over Correlated High-Dimensional Data
The k Nearest Neighbor (k-NN) query has been gaining more importance in extensive applications involving information retrieval, data mining and databases. Specifically, in order to trade off accuracy for efficiency, approximate solutions for the k-NN query are extensively explored. However, the precision is usually order-insensitive, which is defined on the result set instead of the result sequence. In many situations, it cannot reasonably reflect the query result quality. In this paper, we focus on the approximate k-NN query problem with the order-sensitive precision requirement and propose a novel scheme based on the projection-filter-refinement framework. Basically, we adopt PCA to project the high-dimensional data objects into the low-dimensional space. Then, a filter condition is inferred to execute efficient pruning over the projected data. In addition, an index strategy named OR-tree is proposed to reduce the I/O cost. The extensive experiments based on several real-world data sets and a synthetic data set are conducted to verify the effectiveness and efficiency of the proposed solution. Compared to the state-of-the-art methods, our method can support order-sensitive k-NN queries with higher result precision while retaining satisfactory CPU and I/O efficiency.
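
The following sketch (using scikit-learn, with random data standing in for a real correlated dataset) illustrates the generic projection-filter-refinement idea: project with PCA, filter candidates in the low-dimensional space, then refine with exact distances. The OR-tree index and the order-sensitive precision guarantee of the paper are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(10000, 64))          # stand-in for correlated high-dim data
query = rng.normal(size=64)

pca = PCA(n_components=8).fit(data)
low = pca.transform(data)
low_q = pca.transform(query.reshape(1, -1))[0]

# Filter: shortlist candidates by distance in the projected space.
candidates = np.argsort(np.linalg.norm(low - low_q, axis=1))[:200]
# Refine: exact distances in the original space on the shortlist only.
exact = np.linalg.norm(data[candidates] - query, axis=1)
top10 = candidates[np.argsort(exact)[:10]]   # approximate, order-sensitive answer
print(top10)
```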

9. Mining Summaries for Knowledge Graph Search
Querying heterogeneous and large-scale knowledge graphs is expensive. This paper studies a graph summarization framework to facilitate knowledge graph search. (1) We introduce a class of reduced summaries. Characterized by approximate graph pattern matching, these summaries are capable of summarizing entities in terms of their neighborhood similarity up to a certain hop, using small and informative graph patterns. (2) We study a diversified graph summarization problem. Given a knowledge graph, it is to discover top-k summaries that maximize a bi-criteria function, characterized by both informativeness and diversity. We show that diversified summarization is feasible for large graphs, by developing both sequential and parallel summarization algorithms. (a) We show that there exists a 2-approximation algorithm to discover diversified summaries. We further develop an anytime sequential algorithm which discovers summaries under resource constraints. (b) We present a new parallel algorithm with quality guarantees. The algorithm is parallel scalable, which ensures its feasibility in distributed graphs. (3) We also develop a summary-based query evaluation scheme, which only refers to a small number of summaries. Using real-world knowledge graphs, we experimentally verify the effectiveness and efficiency of our summarization algorithms, and query processing using summaries.

10. CRAFTER: A Tree-ensemble Clustering Algorithm for Static Datasets with Mixed Attributes and High Dimensionality
Clustering is an important aspect of data mining, while clustering high-dimensional mixed-attribute data in a scalable fashion remains a challenging problem. In this paper, we propose a tree-ensemble clustering algorithm for static datasets, CRAFTER, to tackle this problem. CRAFTER is able to handle categorical and numeric attributes simultaneously, and scales well with the dimensionality and the size of datasets. CRAFTER leverages the advantages of a tree-ensemble to handle mixed attributes and high dimensionality. The concept of class probability estimates is utilized to identify the representative data points for clustering. Through a series of experiments on both synthetic and real datasets, we demonstrate that CRAFTER is superior to Random Forest Clustering (RFC), an existing tree-based clustering method, in terms of both clustering quality and computational cost.

11. Paradoxical Correlation Pattern Mining
Given a large transactional database, correlation computing/association analysis aims at efficiently finding strongly correlated items. In traditional association analysis, relationships among variables are usually measured at a global level. In this study, we investigate confounding factors that can help to capture abnormal correlation behaviors at a local level. Indeed, many real-world phenomena are localized to specific markets or subpopulations. Such local relationships may not be visible or may be miscalculated when collectively analyzing the entire data. In particular, confounding effects that change the direction of correlation are the most severe problem, because the global correlations alone lead to erroneous conclusions. To this end, we propose CONFOUND, an efficient algorithm to identify paradoxical correlation patterns (i.e., where controlling for a third item changes the direction of association for strongly correlated pairs) using effective pruning strategies. Moreover, we also provide an enhanced version of this algorithm, called CONFOUND+, which substantially speeds up the confounder search step. Finally, experimental results show that our proposed CONFOUND and CONFOUND+ algorithms can effectively identify confounders, and that their computational performance is orders of magnitude faster than benchmark methods.
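
The toy sketch below (synthetic data, not the CONFOUND algorithm itself) shows the core test: compare the correlation of an item pair computed globally with the correlations computed inside the strata of a candidate confounding item, and flag a paradoxical pattern when the direction changes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
c = rng.integers(0, 2, n)                                  # candidate confounder item
a = (rng.random(n) < np.where(c == 1, 0.8, 0.2)).astype(float)
b = (rng.random(n) < np.where(c == 1, 0.8, 0.2)).astype(float)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]          # phi coefficient for binary items

global_corr = corr(a, b)
stratum_corrs = [corr(a[c == v], b[c == v]) for v in (0, 1)]
flipped = all(np.sign(s) != np.sign(global_corr) for s in stratum_corrs)
print(global_corr, stratum_corrs, "paradoxical" if flipped else "not paradoxical")
```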

12. Deconvolution and Restoration of Optical Endomicroscopy Images
Optical endomicroscopy (OEM) is an emerging technology platform with preclinical and clinical imaging applications. Pulmonary OEM via fibre bundles has the potential to provide in vivo, in situ molecular signatures of disease such as infection and inflammation. However, enhancing the quality of data acquired by this technique for better visualization and subsequent analysis remains a challenging problem. Cross coupling between fiber cores and sparse sampling by imaging fiber bundles are the main reasons for image degradation and poor detection performance (e.g., of inflammation, bacteria, etc.). In this paper, we address the problem of deconvolution and restoration of OEM data. We propose a hierarchical Bayesian model to solve this problem and compare three estimation algorithms to exploit the resulting joint posterior distribution. The first method is based on Markov chain Monte Carlo methods; however, it exhibits a relatively long computational time. The second and third algorithms deal with this issue and are based on a variational Bayes approach and an alternating direction method of multipliers algorithm, respectively. Results on both synthetic and real datasets illustrate the effectiveness of the proposed methods for the restoration of OEM images.

13. Can Signal-to-Noise Ratio Perform as a Baseline Indicator for Medical Image Quality Assessment
Natural image quality assessment (NIQA) has attracted increasing attention, while NIQA models are rarely used in the medical community. A couple of studies employ NIQA methodologies for medical image quality assessment (MIQA), but building the benchmark data sets necessitates considerable time and professional skills. In particular, the characteristics of synthesized distortions are different from those of clinical distortions, which makes the results less convincing. In clinic, the signal-to-noise ratio (SNR) is widely used; it is defined as the quotient of the mean signal intensity measured in a tissue region of interest (ROI) and the standard deviation of the signal intensity in an air region outside the imaged object, where both regions are outlined by specialists. We take advantage of the fact that SNR is routinely used and investigate whether the SNR measure can serve as a baseline metric for the development of MIQA algorithms. To address the issue, the inter-observer reliability of the SNR measure is investigated with respect to different tissue ROIs [white matter (WM); cerebral spinal fluid (CSF)] in magnetic resonance (MR) images. A total of 192 T2, 88 T1, 76 T2 and 55 contrast-enhanced T1 (T1C) weighted images are analyzed. Statistical analysis indicates that SNR values show consistency between different observers for the same ROI in each modality (Wilcoxon rank-sum test, pw ≥ 0.11; paired-sample t-test, pp 0.28).
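
A minimal sketch of the SNR definition used above (mean signal in a tissue ROI divided by the standard deviation in an air ROI), run on a synthetic slice; in real use the ROIs would be drawn by specialists on MR images.

```python
import numpy as np

def snr(image, tissue_mask, air_mask):
    """SNR = mean intensity inside the tissue ROI / std of intensity in the air ROI."""
    return image[tissue_mask].mean() / image[air_mask].std()

rng = np.random.default_rng(0)
img = rng.normal(5.0, 2.0, (128, 128))       # noisy background standing in for air
img[40:90, 40:90] += 100.0                   # bright block standing in for tissue

tissue = np.zeros(img.shape, dtype=bool); tissue[50:80, 50:80] = True
air = np.zeros(img.shape, dtype=bool); air[:20, :20] = True
print(round(snr(img, tissue, air), 2))
```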

14. Incorporating a Noise Reduction Technique Into X-Ray Tensor Tomography
X-ray tensor tomography (XTT) is a novel imaging modality for the three-dimensional reconstruction of X-ray scattering tensors from dark-field images obtained in a grating interferometry setup. The two-dimensional dark-field images measured in XTT are degraded by noise effects, such as detector readout noise and insufficient photon statistics, and consequently, the three-dimensional volumes reconstructed from this data exhibit noise artifacts. In this paper, we investigate the best way to incorporate a denoising technique, namely the popular total variation denoising technique, into the XTT reconstruction pipeline. We propose two different schemes of including denoising in the reconstruction process, one using a column block-parallel iterative scheme and one using a whole-system approach. In addition, we compare the results with those of a simple denoising approach applied either before or after reconstruction. The effectiveness is evaluated qualitatively and quantitatively based on datasets from an industrial sample and a clinical sample. The results clearly demonstrate the superiority of including denoising in the reconstruction process, along with slight advantages for the whole-system approach.
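
For the total variation denoising step mentioned above, a stand-alone sketch with scikit-image's Chambolle TV filter on synthetic data is shown below; the paper's contribution is where this step sits inside the XTT reconstruction loop, which is not reproduced here.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0   # toy dark-field projection
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

# Applied before reconstruction this is plain pre-denoising; in the paper's
# schemes it would instead be interleaved with the iterative XTT updates.
denoised = denoise_tv_chambolle(noisy, weight=0.15)
print(float(np.abs(denoised - clean).mean()) < float(np.abs(noisy - clean).mean()))
```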

15. Deep Regression Segmentation for Cardiac Bi-Ventricle MR Images
Cardiac bi-ventricle segmentation can help physicians to obtain clinical indices, such as the mass and volume of the left ventricle (LV) and right ventricle (RV). In this paper, we propose a regression segmentation framework to delineate boundaries of the bi-ventricle from cardiac magnetic resonance (MR) images by building a regression model automatically and accurately. First, we extract DAISY features from the images. Then, a point based representation method is employed to depict the boundaries. Finally, we use the DAISY features as input and boundary points as labels to train the regression model based on a deep belief network. Regression combining deep learning and the DAISY feature can capture high level image information and accurately segment the bi-ventricle with fewer assumptions and lower computational cost. In our experiment, the performance of the proposed framework is compared with manual segmentation on 145 clinical subjects (2900 images in total), which were collected from three hospitals affiliated with two health care centers (London Healthcare Center and St. Joseph's HealthCare). The results of our method and the manual segmentation are highly consistent. The Pearson's correlation coefficient between automated boundaries and manual annotation is up to 0.995 (endocardium of LV), 0.997 (epicardium of LV), and 0.985 (RV). The average Dice metric is up to 0.916 (endocardium of LV), 0.941 (epicardium of LV), and 0.844 (RV). Altogether, the experimental results demonstrate the efficacy of our regression segmentation framework for cardiac MR images.
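
A hedged, toy-scale sketch of the "DAISY features in, boundary points out" regression idea: here scikit-image's daisy descriptor feeds a scikit-learn MLP regressor instead of the deep belief network used in the paper, and the images and boundary labels are random placeholders.

```python
import numpy as np
from skimage.feature import daisy
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))            # placeholder cardiac MR slices
boundaries = rng.random((20, 2 * 16))        # 16 (x, y) boundary points per image

def daisy_features(img):
    # Coarse DAISY grid flattened into one feature vector per image.
    return daisy(img, step=16, radius=8, rings=2, histograms=4, orientations=4).ravel()

X = np.array([daisy_features(im) for im in images])
model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
model.fit(X, boundaries)
predicted_points = model.predict(X[:1]).reshape(-1, 2)   # regressed boundary points
print(predicted_points.shape)
```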

16. Mass Segmentation in Automated 3-D Breast Ultrasound Using Adaptive Region Growing and Supervised Edge-Based Deformable Model
Automated 3-D breast ultrasound has been proposed as a complementary modality to mammography for early detection of breast cancers. To facilitate the interpretation of these images, computer aided detection systems are being developed in which mass segmentation is an essential component for feature extraction and temporal comparisons. However, automated segmentation of masses is challenging because of the large variety in shape, size, and texture of these 3-D objects. In this paper, the authors aim to develop a computerized segmentation system which uses a seed position as the only prior of the problem. A two-stage segmentation approach is proposed that incorporates shape information of training masses. In the first stage, a new adaptive region growing algorithm is used to give a rough estimation of the mass boundary. The similarity threshold of the proposed algorithm is determined using a Gaussian mixture model based on the volume and circularity of the training masses. In the second stage, a novel geometric edge-based deformable model is introduced using the result of the first stage as the initial contour. On a data set of 50 masses, including 38 malignant and 12 benign lesions, the proposed segmentation method achieved a mean Dice of 0.74 ± 0.19, which outperformed the adaptive region growing with a mean Dice of 0.65 ± 0.2 (p-value < 0.02).
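
A simplified sketch of seed-based region growing (6-connected, intensity within a fixed threshold of the running region mean) on a synthetic volume. The paper's adaptive similarity threshold comes from a Gaussian mixture model over training-mass volume and circularity, which is not modelled here.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, threshold):
    """Grow a 3-D region from `seed`, accepting 6-connected voxels whose
    intensity stays within `threshold` of the current region mean."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    total, count = float(volume[seed]), 1
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - total / count) <= threshold:
                    mask[n] = True
                    total += float(volume[n])
                    count += 1
                    queue.append(n)
    return mask

vol = np.random.default_rng(0).normal(0.0, 0.05, (32, 32, 32))
vol[10:20, 10:20, 10:20] += 1.0                 # synthetic "mass"
print(int(region_grow(vol, (15, 15, 15), threshold=0.5).sum()))
```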

17. Image Segmentation Using Disjunctive Normal Bayesian Shape and Appearance Models
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. For instance, most active shape and appearance models require landmark points and assume unimodal shape and appearance distributions, and the level set representation does not support construction of local priors. In this paper, we present novel appearance and shape models for image segmentation based on a differentiable implicit parametric shape representation called a disjunctive normal shape model (DNSM). The DNSM is formed by the disjunction of polytopes, which themselves are formed by the conjunctions of half-spaces. The DNSM's parametric nature allows the use of powerful local prior statistics, and its implicit nature removes the need to use landmarks and easily handles topological changes. In a Bayesian inference framework, we model arbitrary shape and appearance distributions using nonparametric density estimations, at any local scale. The proposed local shape prior results in accurate segmentation even when very few training shapes are available, because the method generates a rich set of shape variations by locally combining training samples. We demonstrate the performance of the framework by applying it to both 2-D and 3-D data sets with emphasis on biomedical image segmentation applications.

18. Deep Neural Networks for the Recognition and Classification of Heart Murmurs Using Neuromorphic Auditory Sensors
Auscultation is one of the most used techniques for detecting cardiovascular diseases, which are among the main causes of death in the world. Heart murmurs are the most common abnormal finding when a patient visits the physician for auscultation. These heart sounds can either be innocent, which are harmless, or abnormal, which may be a sign of a more serious heart condition. However, the accuracy rate of primary care physicians and expert cardiologists when auscultating is not high enough to avoid most type-I (healthy patients are sent for echocardiogram) and type-II (pathological patients are sent home without medication or treatment) errors. In this paper, the authors present a novel convolutional neural network based tool for classifying between healthy people and pathological patients using a neuromorphic auditory sensor for FPGA that is able to decompose the audio into frequency bands in real time. For this purpose, different networks have been trained with the heart murmur information contained in heart sound recordings obtained from nine different heart sound databases sourced from multiple research groups. These samples are segmented and preprocessed using the neuromorphic auditory sensor to decompose their audio information into frequency bands, and, after that, sonogram images of the same size are generated. These images have been used to train and test different convolutional neural network architectures. The best results have been obtained with a modified version of the AlexNet model, achieving 97% accuracy (specificity: 95.12%, sensitivity: 93.20%).

19. A Meshfree Representation for Cardiac Medical Image Computing
The prominent advantage of the meshfree method is the way it builds the representation of the computational domain, based on nodal points without any explicit meshing connectivity. Therefore, the meshfree method can conveniently process numerical computation inside domains of interest with large deformation or inhomogeneity. In this paper, we adopt the idea of meshfree representation for cardiac medical image analysis in order to overcome the difficulties caused by large deformation and inhomogeneous materials of the heart. In our implementation, as the element-free Galerkin method can efficiently build a meshfree representation using its shape function with moving least squares fitting, we apply this meshfree method to handle large deformation and inhomogeneity when solving cardiac segmentation and motion tracking problems. We evaluate the performance of the meshfree representation on synthetic heart data and an in-vivo cardiac MRI image sequence. Results show that the error of our framework against the ground truth was 0.1189 ± 0.0672, while the error of the traditional FEM was 0.1793 ± 0.1166. The proposed framework has minimal consistency constraints, handles large deformation and material discontinuities simply and efficiently, and provides a way to avoid complicated meshing procedures while preserving accuracy with a relatively small number of nodes.

20. Multimodal Breast Parenchymal Patterns Correlation Using a Patient-Specific Biomechanical Model
In this paper, we aim to produce a realistic 2-D projection of the breast parenchymal distribution from a 3-D breast magnetic resonance image (MRI). To evaluate the accuracy of our simulation, we compare our results with the local breast density (i.e., density map) obtained from the complementary full-field digital mammogram. To achieve this goal, we have developed a fully automatic framework, which registers MRI volumes to X-ray mammograms using a subject-specific biomechanical model of the breast. The optimization step modifies the position, orientation, and elastic parameters of the breast model to perform the alignment between the images. When the model reaches an optimal solution, the MRI glandular tissue is projected and compared with the one obtained from the corresponding mammograms. To reduce the loss of information during ray-casting, we introduce a new approach that avoids resampling the MRI volume. In the results, we focus our efforts on evaluating the agreement of the distributions of glandular tissue, the degree of structural similarity, and the correlation between the real and synthetic density maps. Our approach obtained a high structural agreement regardless of the glandularity of the breast, whilst the similarity of the glandular tissue distributions and the correlation between both images increase in denser breasts. Furthermore, the synthetic images show continuity with respect to large structures in the density maps.

21. Texture Classification and Visualization of Time Series of Gait Dynamics in Patients with Neuro-Degenerative Diseases
The analysis of gait dynamics is helpful for predicting and improving the quality of life, morbidity, and mortality of neuro-degenerative patients. Feature extraction from physiological time series and classification between gait patterns of healthy control subjects and patients are usually carried out on the basis of 1-D signal analysis. The approach presented in this paper departs from conventional methods for gait analysis by transforming time series into images, from which texture features can be extracted using methods of texture analysis. Here, the fuzzy recurrence plot algorithm is applied to transform gait time series into texture images, which can be visualized to gain insight into disease patterns. Several texture features are then extracted from the fuzzy recurrence plots using the gray-level co-occurrence matrix for pattern analysis and machine classification to differentiate healthy control subjects from patients with Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis. Experimental results using only the right stride-intervals of the four groups show the effectiveness of the proposed approach.
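
A short sketch of the texture-feature step only: gray-level co-occurrence matrix (GLCM) statistics via scikit-image (0.19+ naming). The input here is a random image used as a stand-in for a fuzzy recurrence plot computed from a stride-interval time series; the fuzzy recurrence plot construction itself is not shown.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
texture = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in for a fuzzy recurrence plot

glcm = graycomatrix(texture, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # such features then feed an ordinary classifier (SVM, k-NN, ...)
```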

22. Non-Rigid Contour-Based Registration of Cell Nuclei in 2-D Live Cell Microscopy Images Using a Dynamic Elasticity Model
The analysis of the pure motion of subnuclear structures without influence of the cell nucleus motion and deformation is essential in live cell imaging. In this paper, we propose a 2-D contour-based image registration approach for compensation of nucleus motion and deformation in fluorescence microscopy time-lapse sequences. The proposed approach extends our previous approach, which uses a static elasticity model to register cell images. Compared with that scheme, the new approach employs a dynamic elasticity model for the forward simulation of nucleus motion and deformation based on the motion of its contours. The contour matching process is embedded as a constraint into the system of equations describing the elastic behavior of the nucleus. This results in better performance in terms of the registration accuracy. Our approach was successfully applied to real live cell microscopy image sequences of different types of cells including image data that was specifically designed and acquired for evaluation of cell image registration methods. An experimental comparison with the existing contour-based registration methods and an intensity-based registration method has been performed. We also studied the dependence of the results on the choice of method parameters.

23. Optic Disk Detection in Fundus Image Based on Structured Learning
Automated optic disk (OD) detection plays an important role in developing a computer aided system for eye diseases. In this paper, we propose an algorithm for OD detection based on structured learning. A classifier model is trained based on structured learning. Then, we use the model to obtain the edge map of the OD. Thresholding is performed on the edge map to obtain a binary image of the OD. Finally, the circle Hough transform is carried out to approximate the boundary of the OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results. The results (an area overlap and Dice coefficient of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and true positive and false positive fractions of 0.9183 and 0.0102) show that the proposed method is very competitive with state-of-the-art methods and is a reliable tool for the segmentation of the OD.
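
The circle-fitting step can be sketched with scikit-image's circular Hough transform; the synthetic disc below merely stands in for the edge map produced by the structured-learning classifier.

```python
import numpy as np
from skimage.draw import disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

img = np.zeros((200, 200))
rr, cc = disk((100, 120), 35)
img[rr, cc] = 1.0                      # synthetic binary OD region
edges = canny(img, sigma=2)            # stand-in for the learned OD edge map

radii = np.arange(25, 46, 2)
accumulator = hough_circle(edges, radii)
_, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
print(cx[0], cy[0], r[0])              # circle approximating the OD boundary
```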

24. Automatic Detection of Retinal Lesions for Screening of Diabetic Retinopathy
Objective: Diabetic retinopathy (DR) is characterized by the progressive deterioration of retina with the appearance of different types of lesions that include micro-aneurysms, hemorrhages, exudates, etc. Detection of these lesions plays a significant role for early diagnosis of DR. Methods: To this aim, this paper proposes a novel and automated lesion detection scheme, which consists of the four main steps: vessel extraction and optic disc removal, preprocessing, candidate lesion detection, and postprocessing. The optic disc and the blood vessels are suppressed first to facilitate further processing. Curvelet-based edge enhancement is done to separate out the dark lesions from the poorly illuminated retinal background, while the contrast between the bright lesions and the background is enhanced through an optimally designed wideband bandpass filter. The mutual information of the maximum matched filter response and the maximum Laplacian of Gaussian response are then jointly maximized. Differential evolution algorithm is used to determine the optimal values for the parameters of the fuzzy functions that determine the thresholds of segmenting the candidate regions. Morphology-based postprocessing is finally applied to exclude the falsely detected candidate pixels. Results and Conclusions: Extensive simulations on different publicly available databases highlight an improved performance over the existing methods with an average accuracy of 97.71 % and robustness in detecting the various types of DR lesions irrespective of their intrinsic properties.

25. 3D Feature Constrained Reconstruction for Low-Dose CT Imaging
Low-dose computed tomography (LDCT) images are often highly degraded by amplified mottle noise and streak artifacts. Maintaining image quality under low-dose scan protocols is a well-known challenge. Recently, sparse representation-based techniques have been shown to be efficient in improving such CT images. In this paper, we propose a 3D feature constrained reconstruction (3D-FCR) algorithm for LDCT image reconstruction. The feature information used in the 3D-FCR algorithm relies on a 3D feature dictionary constructed from available high quality standard-dose CT sample. The CT voxels and the sparse coefficients are sequentially updated using an alternating minimization scheme. The performance of the 3D-FCR algorithm was assessed through experiments conducted on phantom simulation data and clinical data. A comparison with previously reported solutions was also performed. Qualitative and quantitative results show that the proposed method can lead to a promising improvement of LDCT image quality.

26. A Novel Method to Predict Knee Osteoarthritis Progression on MRI Using Machine Learning Methods
This study explored hidden biomedical information from knee MR images for osteoarthritis (OA) prediction. We computed the Cartilage Damage Index (CDI) information from 36 informative locations on the tibiofemoral cartilage compartment from 3D MR imaging and used PCA analysis to process the feature set. Four machine learning methods (artificial neural network (ANN), support vector machine (SVM), random forest and naïve Bayes) were employed to predict the progression of OA, which was measured by the change of Kellgren and Lawrence (KL) grade, Joint Space Narrowing on Medial compartment (JSM) grade and Joint Space Narrowing on Lateral compartment (JSL) grade. To examine the different effects of medial and lateral informative locations, we divided the 36-dimensional feature set into an 18-dimensional medial feature set and an 18-dimensional lateral feature set and ran the experiment on the four classifiers separately. Experimental results showed that the medial feature set generated better prediction performance than the lateral feature set, while using the full 36-dimensional feature set generated the best performance. PCA analysis is helpful in feature space reduction and performance improvement. For KL grade prediction, the best performance was achieved by the ANN with AUC = 0.761 and F-measure = 0.714. For JSM grade prediction, the best performance was achieved by the random forest with AUC = 0.785 and F-measure = 0.743, while for JSL grade prediction, the best performance was achieved by the ANN with AUC = 0.695 and F-measure = 0.796. The experimental results show that the informative locations on the medial compartment provide more distinguishing features than those on the lateral compartment.
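
A hedged scikit-learn sketch of the evaluation recipe above (PCA on the feature set, then cross-validated AUC for two of the four classifiers); the synthetic 36-dimensional features and labels below merely stand in for the CDI features and OA progression labels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))                          # stand-in for 36 CDI features
y = (X[:, :18].mean(axis=1) + 0.5 * rng.normal(size=200) > 0).astype(int)

for name, clf in [("ANN", MLPClassifier(max_iter=1000, random_state=0)),
                  ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(name, round(auc, 3))
```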

27. Learning to Detect Blue-white Structures in Dermoscopy Images with Weak Supervision
We propose a novel approach to identify one of the most significant dermoscopic criteria in the diagnosis of cutaneous Melanoma: the blue-whitish structure (BWS). In this paper, we achieve this goal in a Multiple Instance Learning (MIL) framework using only image-level labels indicating whether the feature is present or not. To this aim, each image is represented as a bag of (non-overlapping) regions where each region may or may not be identified as an instance of BWS. A probabilistic graphical model [1] is trained (in MIL fashion) to predict the bag (image) labels. As output, we predict the classification label for the image (i.e., the presence or absence of BWS in each image) and as well we localize the feature in the image. Experiments are conducted on a challenging dataset with results outperforming state-of-the-art techniques, with BWS detection besting competing methods in terms of performance. This study provides an improvement on the scope of modelling for computerized image analysis of skin lesions. In particular, it propounds a framework for identification of dermoscopic local features from weakly-labelled data.

28. Optimized Optical Coherence Tomography Imaging with Hough Transform-based Fixed-pattern Noise Reduction
Fixed-pattern noise seriously affects the clinical application of optical coherence tomography (OCT), especially in the imaging of tumorous tissue. We propose a Hough transform-based fixed-pattern noise reduction (HTFPNR) method to reduce the fixed-pattern noise and thereby optimize the imaging of tumorous tissue with an OCT system. Using the HTFPNR method, we detect and map the outline of the fixed-pattern noise in OCT images, and then efficiently reduce it through longitudinal and horizontal intelligent processing procedures. We adopt the image-to-noise ratio with full information (INRfi) and the noise reduction ratio (NRR) to evaluate the outcome of fixed-pattern noise reduction. The INRfi of the noise-reduced OCT image of an ex vivo brainstem tumor is approximately 21.92 dB. Six groups of OCT images containing three types of fixed-pattern noise have been validated via experimental evaluation of ex vivo gastric tumors. For the different types of fixed-pattern noise, the mean INRfi values are 25.24 dB, 23.04 dB and 19.35 dB, respectively. This result demonstrates that the method is highly efficient and useful in fixed-pattern noise reduction. The fluctuating range of the NRR is 0.84-0.88 for three types of added noise in the OCT images. This result demonstrates that the HTFPNR method preserves as much useful information as possible compared with previous research. The proposed HTFPNR method can be applied to fixed-pattern noise reduction of OCT images of other soft biological tissues in the future.

29. Automated Region of Interest Detection Method in Scintigraphic Glomerular Filtration Rate Estimation
The glomerular filtration rate (GFR) is a crucial index to measure renal function. In daily clinical practice, the GFR can be estimated using the Gates method, which requires the clinicians to define the region of interest (ROI) for the kidney and the corresponding background in dynamic renal scintigraphy. The manual placement of ROIs to estimate the GFR is subjective and labor-intensive, however, making it an undesirable and unreliable process. This work presents a fully automated ROI detection method to achieve accurate and robust GFR estimations. After image preprocessing, the ROI for each kidney was delineated using a shape prior constrained level set (spLS) algorithm and then the corresponding background ROIs were obtained according to the defined kidney ROIs. In computer simulations, the spLS method had the best performance in kidney ROI detection compared with the previous threshold method (Threshold) and the Chan-Vese level set (cvLS) method. In further clinical applications, 223 sets of 99mTc-diethylenetriaminepentaacetic acid (99mTc-DTPA) renal scintigraphic images from patients with abnormal renal function were reviewed. Compared with the former ROI detection methods (Threshold and cvLS), the GFR estimations based on the ROIs derived by the spLS method had the highest consistency and correlations (r=0.98, p<0.001) with the reference estimated by experienced physicians.

30. Automatic Retinal Vessel Segmentation via Deeply Supervised and Smoothly Regularized Network
In recent years, retinal vessel segmentation technology has become an important component of disease screening and diagnosis in clinical medicine. However, retinal vessel segmentation is a challenging task due to the complex distribution of blood vessels, the relatively low contrast between target and background, and the potential presence of illumination artifacts and pathologies. In this paper, we propose an automatic retinal vessel segmentation network using deep supervision and smoothness regularization, which integrates a holistically-nested edge detector (HED) and global smoothness regularization from conditional random fields (CRFs). It is an end-to-end, pixel-to-pixel deep convolutional network and can achieve better results than HED-based methods and methods where CRF inference is applied as a post-processing step. With co-constraints between pixels, the proposed DSSRN obtains better results. Finally, we show that our proposed method obtains state-of-the-art vessel segmentation performance on all three benchmarks, DRIVE, STARE and CHASE DB1.

31. A Hybrid Model for Image Denoising Combining Modified Isotropic Diffusion Model and Modified Perona-Malik Model
In this article, a hybrid image denoising algorithm based on directional diffusion is proposed. Specifically, we develop a new noise-removal model by combining the modified isotropic diffusion (ID) model and the modified Perona-Malik (PM) model. The hybrid model can adapt the diffusion process along the tangential direction of edges in the original image via a new control function based on the patch similarity modulus. In addition, the patch similarity modulus is used as the new structure indicator for the modified Perona-Malik model. The use of the second-order directional derivative along the edge's tangential direction allows the proposed model to reduce aliasing and noise around edges during edge-preserving smoothing. The proposed method is thus able to efficiently preserve edges, textures, thin lines, weak edges and fine details, while preventing staircase effects. Computer experiments on synthetic and natural images demonstrate that the proposed model achieves better performance than conventional partial differential equation (PDE) models and some recent advanced models.
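
For orientation, here is the classic Perona-Malik diffusion iteration that the hybrid model modifies; the patch-similarity control function and the modified isotropic diffusion term of the paper are not included in this sketch.

```python
import numpy as np

def perona_malik(image, iterations=20, kappa=20.0, lam=0.2):
    """Classic Perona-Malik anisotropic diffusion with the exponential
    edge-stopping function g(|grad u|) = exp(-(|grad u| / kappa) ** 2)."""
    u = image.astype(float).copy()
    for _ in range(iterations):
        # Differences towards the four neighbours (periodic borders via np.roll,
        # which is adequate for a sketch).
        north = np.roll(u, -1, axis=0) - u
        south = np.roll(u, 1, axis=0) - u
        east = np.roll(u, -1, axis=1) - u
        west = np.roll(u, 1, axis=1) - u
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in (north, south, east, west))
    return u

noisy = np.random.default_rng(0).normal(0, 10, (64, 64)) + 100.0
smoothed = perona_malik(noisy)
print(round(float(smoothed.std()), 2), "<", round(float(noisy.std()), 2))
```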

32. Pattern Classification for Gastrointestinal Stromal Tumors by Integration of Radiomics and Deep Convolutional Features
Predicting malignant potential is one of the most critical components of a computer-aided diagnosis (CAD) system for gastrointestinal stromal tumors (GISTs). These tumors have been studied only on the basis of subjective computed tomography (CT) findings. Among various methodologies, radiomics and deep learning algorithms, specifically convolutional neural networks (CNNs), have recently been confirmed to achieve significant success by outperforming the state-of-the-art performances in medical image pattern classification and have rapidly become leading methodologies in this field. However, the existing methods generally use radiomics or deep convolutional features independently for pattern classification, which tend to take into account only global or local features, respectively. In this paper, we introduce and evaluate a hybrid structure that includes different features selected with radiomics model and CNN and integrates these features to deal with GIST classification. Radiomics model and CNN architecture are constructed for global radiomics and local convolutional feature selections, respectively. Subsequently, we utilize distinct radiomics and deep convolutional features to perform pattern classification for GIST. Specifically, we propose a new pooling strategy to assemble the deep convolutional features of 54 3D patches from the same case and integrate these features with the radiomics features for independent case, followed by random forests (RF) classifier. Our method can be extensively evaluated using multiple clinical datasets.

33. Supervised Saliency Map Driven Segmentation of Lesions in Dermoscopic Images
Lesion segmentation is the first step in most automatic melanoma recognition systems. Deficiencies and difficulties in dermoscopic images such as color inconstancy, hair occlusion, dark corners and color charts make lesion segmentation an intricate task. In order to detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images based on discriminative regional feature integration (DRFI). The DRFI method incorporates multi-level segmentation, regional contrast, property and background descriptors, and a random forest regressor to create saliency scores for each region in the image. In our improved saliency detection method, mDRFI, we have added new features to the regional property descriptors. Also, in order to achieve more robust regional background descriptors, a thresholding algorithm is proposed to obtain a new pseudo-background region. Findings reveal that mDRFI is superior to DRFI in detecting the lesion as the salient object in dermoscopic images. The proposed overall lesion segmentation framework uses the detected saliency map to construct an initial mask of the lesion through thresholding and post-processing operations. The initial mask is then evolved in a level set framework to better fit the lesion's boundaries. The results of evaluation tests on three public datasets show that our proposed segmentation method outperforms other conventional state-of-the-art segmentation methods.

34. Structure-preserving Guided Retinal Image Filtering and Its Application for Optic Disc Analysis
Retinal fundus photographs have been used in the diagnosis of many ocular diseases such as glaucoma, pathological myopia, age-related macular degeneration and diabetic retinopathy. With the development of computer science, computer aided diagnosis has been developed to process and analyse retinal images automatically. One of the challenges in this analysis is that the quality of the retinal image is often degraded. For example, a cataract in the human lens will attenuate the retinal image, just as a cloudy camera lens reduces the quality of a photograph. It often obscures the details in the retinal images and poses challenges for retinal image processing and analysis tasks. In this paper, we approximate the degradation of the retinal images as a combination of human-lens attenuation and scattering. A novel structure-preserving guided retinal image filtering (SGRIF) is then proposed to restore images based on the attenuation and scattering model. The proposed SGRIF consists of a step of global structure transferring and a step of global edge-preserving smoothing. Our results show that the proposed SGRIF method is able to improve the contrast of retinal images, measured by the histogram flatness measure, histogram spread and variability of local luminosity. In addition, we further explored the benefits of SGRIF for subsequent retinal image processing and analysis tasks, in the two applications of deep learning based optic cup segmentation and sparse learning based cup-to-disc ratio (CDR) computation.

35. Disc-aware Ensemble Network for Glaucoma Screening from Fundus Image
Glaucoma is a chronic eye disease that leads to irreversible vision loss. Most of the existing automatic screening methods firstly segment the main structure, and subsequently calculate the clinical measurement for detection and screening of glaucoma. However, these measurement-based methods rely heavily on the segmentation accuracy, and ignore various visual features. In this paper, we introduce a deep learning technique to gain additional image-relevant information, and screen glaucoma from the fundus image directly. Specifically, a novel Disc-aware Ensemble Network (DENet) for automatic glaucoma screening is proposed, which integrates the deep hierarchical context of the global fundus image and the local optic disc region. Four deep streams on different levels and modules are respectively considered as global image stream, segmentation-guided network, local disc region stream, and disc polar transformation stream. Finally, the output probabilities of different streams are fused as the final screening result. The experiments on two glaucoma datasets (SCES and new SINDI datasets) show our method outperforms other state-of-the-art algorithms.

36. Pulmonary Artery-Vein Classification in CT Images Using Deep Learning
Recent studies show that pulmonary vascular diseases may specifically affect arteries or veins through different physiologic mechanisms. To detect changes in the two vascular trees, physicians manually analyze the chest computed tomography (CT) image of the patients in search of abnormalities. This process is time-consuming, difficult to standardize and thus not feasible for large clinical studies or useful in real-world clinical decision making. Therefore, automatic separation of arteries and veins in CT images is becoming of great interest, as it may help physicians accurately diagnose pathological conditions. In this work, we present a novel, fully automatic approach to classifying vessels from chest CT images into arteries and veins. The algorithm follows three main steps: first, a scale-space particles segmentation to isolate vessels; then a 3D convolutional neural network (CNN) to obtain a first classification of vessels; finally, graph-cuts (GC) optimization to refine the results. To justify the usage of the proposed CNN architecture, we compared different 2D and 3D CNNs that may use local information from bronchus- and vessel-enhanced images provided to the network with different strategies. We also compared the proposed CNN approach with a Random Forests (RF) classifier. The methodology was trained and evaluated on the superior and inferior lobes of the right lung of eighteen clinical cases with non-contrast chest CT scans, in comparison with manual classification.

37. Enhancing the Image Quality via Transferred Deep Residual Learning of Coarse PET Sinograms
Increasing the image quality of positron emission tomography (PET) is an essential topic in the PET community. For instance, thin pixelated crystals have been used to provide high spatial resolution images, but at the cost of sensitivity and manufacturing expense. In this study, we propose an approach to enhance the PET image resolution and noise properties for PET scanners with large pixelated crystals. To address the problem of coarse, blurred sinograms with large parallax errors associated with large crystals, we developed a data-driven, single-image super-resolution (SISR) method for sinograms, based on a novel deep residual convolutional neural network (CNN). Unlike CNN-based SISR on natural images, periodically padded sinogram data and a dedicated network architecture are used to make it more efficient for PET imaging. Moreover, we include a transfer learning scheme in the approach to process cases with poor labeling and small training data sets. The approach was validated via analytically simulated data (with and without noise), Monte Carlo simulated data, and pre-clinical data. Using the proposed method, we could achieve comparable image resolution and better noise properties with large crystals whose bin sizes are 4 times those of thin crystals, for bin sizes from 1×1 mm2 to 1.6×1.6 mm2. Our approach uses external PET data as prior knowledge for training and does not require additional information during inference.

38. Design of a Gabor Filter HW Accelerator for Applications in Medical Imaging
The Gabor filter (GF) has been proved to show good spatial frequency and position selectivity, which makes it a very suitable solution for visual search, object recognition, and, in general, multimedia processing applications. GFs also prove useful in the processing of medical imaging, improving several filtering operations for enhancement, denoising, and mitigation of artifact issues. However, the good performance of GFs comes with a hardware complexity that translates into a large amount of mapped physical resources. This paper presents three different designs of a GF, showing different tradeoffs between accuracy, area, power, and timing. From the comparative study, it is possible to highlight the strong points of each one and choose the best design. The designs have been targeted to a Xilinx field-programmable gate array (FPGA) platform and synthesized to 90-nm CMOS standard cells. FPGA implementations achieve a maximum operating frequency among the different designs of 179 MHz, while 350 MHz is obtained from the CMOS synthesis. Therefore, 86 and 168 full-HD (1920 x 1080) f/s could be processed with the FPGA and std_cell implementations, respectively. In order to meet space constraints, several considerations are proposed to achieve an optimization in terms of power consumption, while still ensuring real-time performance.
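
The filtering operation itself (before any hardware mapping) can be sketched in a few lines with scikit-image's Gabor filter; a fixed-point HW accelerator essentially quantizes these kernels and the convolution data path. The image and bank parameters below are placeholders.

```python
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
img = rng.random((128, 128))                # placeholder for a medical image slice

# Small Gabor bank: one spatial frequency, four orientations.
responses = []
for theta in np.linspace(0.0, np.pi, 4, endpoint=False):
    real, imag = gabor(img, frequency=0.2, theta=theta)
    responses.append(np.hypot(real, imag))  # magnitude response per orientation
print(np.stack(responses).shape)            # (4, 128, 128)
```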

39. Deep Neural Networks for Ultrasound Beamforming
We investigate the use of deep neural networks (DNNs) for suppressing off-axis scattering in ultrasound channel data. Our implementation operates in the frequency domain via the short-time Fourier transform. The inputs to the DNN consisted of the separated real and imaginary components (i.e. inphase and quadrature components) observed across the aperture of the array, at a single frequency and for a single depth. Different networks were trained for different frequencies. The output had the same structure as the input and the real and imaginary components were combined as complex data before an inverse short-time Fourier transform was used to reconstruct channel data. Using simulation, physical phantom experiment, and in vivo scans from a human liver, we compared this DNN approach to standard delay-and-sum (DAS) beamforming and an adaptive imaging technique that uses the coherence factor (CF). For a simulated point target, the side lobes when using the DNN approach were about 60 dB below those of standard DAS. For a simulated anechoic cyst, the DNN approach improved contrast ratio (CR) and contrast-to-noise (CNR) ratio by 8.8 dB and 0.3 dB, respectively, compared to DAS. For an anechoic cyst in a physical phantom, the DNN approach improved CR and CNR by 17.1 dB and 0.7 dB, respectively. For two in vivo scans, the DNN approach improved CR and CNR by 13.8 dB and 9.7 dB, respectively. We also explored methods for examining how the networks in this work function.

40. Kidney Detection in 3D Ultrasound Imagery Via Shape to Volume Registration Based on Spatially Aligned Neural Network
This paper introduces a computer-aided kidney shape detection method suitable for volumetric (3D) ultrasound images. Using shape and texture priors, the proposed method automates the process of kidney detection, which is a problem of great importance in computer-assisted trauma diagnosis. The paper introduces a new complex-valued implicit shape model which represents the multi-regional structure of the kidney shape. A spatially aligned neural network classifier with complex-valued output is designed to classify voxels into the background and the multi-regional structure of the kidney shape. The complex values of the shape model and the classification outputs are selected and incorporated in a new similarity metric such that the shape-to-volume registration process only fits the shape model to the actual kidney shape in input ultrasound volumes. The algorithm's accuracy and sensitivity are evaluated using both simulated and actual 3D ultrasound images, and it is compared against the performance of the state-of-the-art. The results support the claims about the accuracy and robustness of the proposed kidney detection method, and statistical analysis validates its superiority over the state-of-the-art.

41Surrogate-assisted Retinal OCT Image Classification Based on Convolutional Neural Networks
Optical Coherence Tomography (OCT) is becoming one of the most important modalities for the noninvasive assessment of retinal eye diseases. As the number of acquired OCT volumes increases, automating OCT image analysis is becoming increasingly relevant. In this paper, we propose a surrogate-assisted classification method to classify retinal OCT images automatically based on convolutional neural networks (CNNs). Image denoising is first performed to reduce the noise. Thresholding and morphological dilation are then applied to extract masks. The denoised images and the masks are used to generate a large number of surrogate images, which are used to train the CNN model. Finally, the prediction for a test image is determined by averaging the outputs of the trained CNN model on its surrogate images. The proposed method has been evaluated on different databases. The results (AUC of 0.9783 on the local database and AUC of 0.9856 on the Duke database) show that the proposed method is a very promising tool for automatically classifying retinal OCT images.
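
A rough sketch of the surrogate-and-average idea, under the pipeline described above (median-filter denoising, thresholding plus morphological dilation for the mask, random shifts of the masked image as surrogates, and averaging of the CNN outputs). The tiny CNN and the three-class output below are placeholders, not the trained model from the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy import ndimage

def make_surrogates(oct_image, n_surrogates=8, max_shift=10):
    """Denoise, build a retina mask, then create shifted, masked surrogate images."""
    denoised = ndimage.median_filter(oct_image, size=3)
    mask = denoised > denoised.mean()                       # simple intensity threshold
    mask = ndimage.binary_dilation(mask, iterations=5)      # morphological dilation
    surrogates = []
    for _ in range(n_surrogates):
        dy, dx = np.random.randint(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(denoised * mask, shift=(dy, dx), axis=(0, 1))
        surrogates.append(shifted.astype(np.float32))
    return surrogates

# Placeholder classifier standing in for the trained CNN (3 disease classes assumed)
cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(8, 3))

test_image = np.random.rand(224, 224)
with torch.no_grad():
    outputs = [torch.softmax(cnn(torch.from_numpy(s)[None, None]), dim=1)
               for s in make_surrogates(test_image)]
prediction = torch.stack(outputs).mean(dim=0)   # average over the surrogate images
```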

42Automatic Segmentation of Acute Ischemic Stroke from DWI using 3D Fully Convolutional DenseNets
Acute ischemic stroke is recognized as a common cerebrovascular disease in aging people. Accurate diagnosis and timely treatment can effectively improve the blood supply to the ischemic area and reduce the risk of disability or even death. Understanding the location and size of infarcts plays a critical role in the diagnostic decision. However, manual localization and quantification of stroke lesions are laborious and time-consuming. In this paper, we propose a novel automatic method to segment acute ischemic stroke from diffusion-weighted images (DWI) using deep 3D convolutional neural networks (CNNs). Our method can efficiently utilize 3D contextual information and automatically learn very discriminative features in an end-to-end, data-driven way. To ease the difficulty of training a very deep 3D CNN, we equip our network with dense connectivity to enable the unimpeded propagation of information and gradients throughout the network. We train our model with a Dice objective function to combat the severe class imbalance in the data. A DWI dataset containing 242 subjects (90 for training, 62 for validation and 90 for testing) with various types of acute ischemic stroke was constructed to evaluate our method. Our model achieved high performance on various metrics (Dice similarity coefficient: 79.13%, lesion-wise precision: 92.67%, lesion-wise F1 score: 89.25%), outperforming other state-of-the-art CNN methods by a large margin.
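
The Dice objective mentioned above is commonly implemented as a soft Dice loss; one standard PyTorch formulation (not necessarily the paper's exact variant) is:

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    logits:  raw network outputs, shape (N, 1, D, H, W)
    targets: binary ground-truth masks of the same shape
    """
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, probs.ndim))                 # sum over all but the batch axis
    intersection = (probs * targets).sum(dim=dims)
    denominator = probs.sum(dim=dims) + targets.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()

# Illustrative call on random tensors
loss = soft_dice_loss(torch.randn(2, 1, 16, 64, 64),
                      torch.randint(0, 2, (2, 1, 16, 64, 64)).float())
```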

43Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning
Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2D segmentation of multiple organs from fetal Magnetic Resonance (MR) slices, where only two types of these organs were annotated for training; and 3D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training.
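
A minimal sketch of the image-specific fine-tuning idea: a pretrained segmentation network is adapted to a single test image, with user scribbles weighted heavily and the remaining pixels weighted by the network's own confidence. The small network, the pseudo-label rule, and the weighting constants below are illustrative assumptions rather than the authors' exact scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pretrained binary segmentation CNN (foreground/background)
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))

def fine_tune_on_test_image(model, image, scribble_mask, scribble_label, steps=20, lr=1e-3):
    """image: (1, 1, H, W); scribble_mask: bool, same shape; scribble_label: 0/1 floats."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(image)
        probs = torch.sigmoid(logits)
        # Pseudo-labels from the current prediction, overridden by user scribbles
        targets = torch.where(scribble_mask, scribble_label, (probs > 0.5).float())
        # High weight on scribbled pixels, confidence-based weight elsewhere
        confidence = torch.abs(probs - 0.5) * 2.0
        weights = torch.where(scribble_mask, torch.full_like(probs, 5.0), confidence.detach())
        loss = (weights * F.binary_cross_entropy_with_logits(logits, targets,
                                                             reduction="none")).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model

# Illustrative call with random data and a small square of foreground scribbles
img = torch.rand(1, 1, 64, 64)
scribbles = torch.zeros(1, 1, 64, 64, dtype=torch.bool)
scribbles[..., 30:34, 30:34] = True
fine_tune_on_test_image(model, img, scribbles, torch.ones(1, 1, 64, 64))
```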

44Rapid contour detection for image classification
The author introduces a contour detection method that has relatively low complexity yet is still highly accurate. The method is based on extrema detection along the four principal orientations, an approach that can be used to detect not only edges but also ridges and rivers. The author compares it with the popular Canny algorithm and shows that the proposed method's only downside is that it cannot detect very high curvatures in edge contours. The method is applied to the task of image classification (satellite images, Caltech-101, etc.), and it is demonstrated that using all three contour types (edges, ridges, and rivers) improves classification accuracy over using edge contours alone. Thus, for image classification, extracting multiple contour features is more important than the exact detection method, which appears to play a smaller role. The author's simple method is also appealing for use on individual frames, due to its low complexity.
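
A minimal sketch of extrema detection along the four principal orientations: a pixel is flagged as a ridge (local maximum) or a river (local minimum) if its value exceeds, or falls below, both of its neighbours along any of the four directions. The neighbourhood size and threshold are illustrative.

```python
import numpy as np

def directional_extrema(image, threshold=0.05):
    """Flag pixels that are local maxima (ridges) or minima (rivers) along
    the horizontal, vertical and two diagonal directions."""
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]          # the four principal orientations
    ridges = np.zeros(image.shape, dtype=bool)
    rivers = np.zeros(image.shape, dtype=bool)
    for dy, dx in offsets:
        fwd = np.roll(image, shift=(-dy, -dx), axis=(0, 1))
        bwd = np.roll(image, shift=(dy, dx), axis=(0, 1))
        ridges |= (image - fwd > threshold) & (image - bwd > threshold)
        rivers |= (fwd - image > threshold) & (bwd - image > threshold)
    return ridges, rivers

ridges, rivers = directional_extrema(np.random.rand(128, 128))
```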

45Joint Optic Disc and Cup Segmentation Based on Multi-label Deep Network and Polar Transformation
Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup-to-disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately and rely on hand-crafted visual features from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of a multi-scale input layer, a U-shape convolutional network, a side-output layer, and a multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple levels of receptive field size. The U-shape convolutional network is employed as the main body of the network structure to learn a rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for the different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. To further improve segmentation performance, we also introduce a polar transformation, which provides a representation of the original image in the polar coordinate system.
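
The polar transformation step can be sketched with SciPy's map_coordinates: the fundus image is resampled around the disc centre onto a (radius, angle) grid. The centre, radius, and grid resolution below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, max_radius, n_radii=256, n_angles=256):
    """Resample a 2D image around `center` onto a (radius, angle) grid."""
    radii = np.linspace(0, max_radius, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = center[0] + r * np.sin(a)
    cols = center[1] + r * np.cos(a)
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")

# Illustrative call: transform a random "fundus" channel around an assumed disc centre
fundus = np.random.rand(512, 512)
polar = to_polar(fundus, center=(256, 256), max_radius=200)   # shape (256, 256)
```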

47SDI+: a Novel Algorithm for Segmenting Dermoscopic Images
Malignant skin lesions are among the most common types of cancer, and automated systems for their early detection are of fundamental importance. We propose SDI+, an unsupervised algorithm for the segmentation of skin lesions in dermoscopic images. It is articulated into three steps, aimed at extracting preliminary information on possible confounding factors, accurately segmenting the lesion, and post-processing the result. The overall method achieves high accuracy on dark skin lesions and can handle several cases where confounding factors could inhibit a clear understanding by a human operator. We present extensive experimental results and comparisons for the SDI+ algorithm on the ISIC 2017 dataset, highlighting its advantages and disadvantages.
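
The paper's three steps are not spelled out here, so the following is only a generic three-step sketch in the same spirit (preprocessing, unsupervised thresholding, morphological post-processing) using scikit-image, not the SDI+ algorithm itself.

```python
import numpy as np
from skimage import color, filters, morphology, measure

def segment_lesion(rgb_image):
    """Generic three-step pipeline: preprocess, threshold, post-process."""
    # Step 1: preprocessing -- grey-level conversion and light smoothing
    gray = filters.gaussian(color.rgb2gray(rgb_image), sigma=2)
    # Step 2: segmentation -- lesions are typically darker than the surrounding skin
    mask = gray < filters.threshold_otsu(gray)
    # Step 3: post-processing -- close small gaps and keep the largest region
    mask = morphology.binary_closing(mask, morphology.disk(5))
    labels = measure.label(mask)
    if labels.max() == 0:
        return mask
    largest = max(measure.regionprops(labels), key=lambda r: r.area).label
    return labels == largest

lesion_mask = segment_lesion(np.random.rand(256, 256, 3))
```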

48Glaucoma Detection from Fundus Images Using MATLAB GUI
Glaucoma is a troublesome disease in which the optic nerve of the eye is damaged, causing irretrievable loss of vision; if treatment is delayed, the patient can go blind. Glaucoma is normally detected when fluid builds up in the front of the eye. As this excess fluid accumulates, the pressure inside the eye increases, and the optic disc and optic cup enlarge, so their diameters increase as well. The ratio of the cup diameter to the disc diameter is called the cup-to-disc ratio (CDR). A threshold-based segmentation method is used in this system to localize the optic disc and optic cup, together with edge detection and ellipse-fitting algorithms. The proposed system for optic disc and optic cup localization and CDR calculation is implemented as a MATLAB GUI.
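
The described system is a MATLAB GUI; for consistency with the other sketches in this list, the following Python snippet illustrates the same threshold-based idea: segment the disc and the (brighter) cup with two intensity thresholds and take the ratio of their vertical diameters. The thresholds and the use of the vertical diameter are illustrative assumptions.

```python
import numpy as np
from skimage import measure

def cup_to_disc_ratio(green_channel, disc_thresh=0.7, cup_thresh=0.85):
    """Threshold-based CDR: the disc and cup appear as the brightest regions."""
    def vertical_diameter(mask):
        labels = measure.label(mask)
        if labels.max() == 0:
            return 0
        region = max(measure.regionprops(labels), key=lambda r: r.area)
        min_row, _, max_row, _ = region.bbox
        return max_row - min_row

    disc_d = vertical_diameter(green_channel > disc_thresh)
    cup_d = vertical_diameter(green_channel > cup_thresh)   # cup is brighter than the rim
    return cup_d / disc_d if disc_d else float("nan")

cdr = cup_to_disc_ratio(np.random.rand(512, 512))
print("CDR:", cdr)   # a larger CDR generally indicates a higher glaucoma risk
```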

49Classification of Medical Images in the Biomedical Literature by Jointly Using Deep and Handcrafted Visual Features
The classification of medical images and illustrations from the biomedical literature is important for automated literature review, retrieval and mining. Although deep learning is effective for large-scale image classification, it may not be the optimal choice for this task as there is only a small training dataset. We propose a combined deep and handcrafted visual feature (CDHVF) based algorithm that uses features learned by three fine-tuned and pre-trained deep convolutional neural networks (DCNNs) and two handcrafted descriptors in a joint approach. We evaluated the CDHVF algorithm on the ImageCLEF 2016 Subfigure Classification dataset and it achieved an accuracy of 85.47%, which is higher than the best performance of other purely visual approaches listed in the challenge leaderboard. Our results indicate that handcrafted features complement the image representation learned by DCNNs on small training datasets and improve accuracy in certain medical image classification problems.
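
A minimal sketch of the joint deep-plus-handcrafted feature idea: deep features from a pretrained torchvision ResNet-18 are concatenated with a local binary pattern histogram and a colour histogram, and a shallow classifier is trained on the fused vector. The specific DCNNs and descriptors used in the paper differ; the networks, descriptors, and data below are illustrative.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Deep extractor: a pretrained ResNet-18 with its classification head removed
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()
preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224)),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def combined_features(rgb_image):
    """Concatenate deep CNN features with handcrafted LBP and colour histograms."""
    with torch.no_grad():
        deep = resnet(preprocess(rgb_image).unsqueeze(0)).squeeze(0).numpy()   # 512-D
    gray = (rgb_image.mean(axis=2) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    color_hist, _ = np.histogram(rgb_image, bins=16, range=(0, 1), density=True)
    return np.concatenate([deep, lbp_hist, color_hist])

# Illustrative training of a shallow classifier on the fused features
images = [np.random.rand(300, 300, 3).astype(np.float32) for _ in range(8)]
labels = [0, 1, 0, 1, 0, 1, 0, 1]
clf = SVC().fit(np.stack([combined_features(im) for im in images]), labels)
```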

50Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images
Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal biometry, and because manual measurement is time-consuming, there is great demand for automatic estimation. However, the automated analysis of ultrasound images is complicated because they are patient-specific, operator-dependent, and machine-specific. Among the various types of fetal biometry, the accurate estimation of the abdominal circumference (AC) is especially difficult to automate because, compared to other parameters, the abdomen has low contrast against its surroundings, non-uniform contrast, and an irregular shape. We propose a method for the automatic estimation of the fetal AC from 2D ultrasound data through a specially designed convolutional neural network (CNN), which takes into account doctors' decision process, anatomical structure, and the characteristics of the ultrasound image. The proposed method uses the CNN to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein) and a Hough transformation to measure the AC. We test the proposed method using clinical ultrasound data acquired from 56 pregnant women. Experimental results show that, with relatively small training samples, the proposed CNN provides sufficient classification results for AC estimation through the Hough transformation. The proposed method automatically estimates the AC from ultrasound images. The method is quantitatively evaluated and shows stable performance in most cases, even for ultrasound images degraded by shadowing artifacts.
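
A minimal sketch of the measurement step only: once a suitable abdominal plane has been identified, an approximately circular abdominal boundary is fitted with a Hough transform and the circumference is read off the fitted radius. The edge detector, radius range, and pixel spacing below are illustrative assumptions (the paper measures the AC after CNN-based classification of the plane).

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def estimate_abdominal_circumference(ultrasound_image, pixel_spacing_mm=0.2):
    """Fit the dominant circle to the abdominal boundary and return its circumference (mm)."""
    edges = canny(ultrasound_image, sigma=3)
    candidate_radii = np.arange(40, 120, 2)                  # candidate radii in pixels
    accumulator = hough_circle(edges, candidate_radii)
    _, cx, cy, radii = hough_circle_peaks(accumulator, candidate_radii, total_num_peaks=1)
    if len(radii) == 0:
        return float("nan")                                  # no circular boundary found
    return 2.0 * np.pi * radii[0] * pixel_spacing_mm

ac_mm = estimate_abdominal_circumference(np.random.rand(256, 256))
```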
