Digital Image Processing Projects – ElysiumPro

Image Processing Projects

CSE Projects, ECE Projects
Description
Image processing means processing images using mathematical algorithms. ElysiumPro provides a comprehensive set of reference-standard algorithms and workflow processes for students to implement image segmentation, image enhancement, geometric transformation, and 3D image processing for research.
Download Project List

Quality Factor

  • 100% Assured Results
  • Best Project Explanation
  • Tons of References
  • Cost Optimized
  • Control Panel Access


1. Discriminative Transfer Learning for General Image Restoration
Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving a convincing tradeoff between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, and demosaicing) and problem condition (e.g., noise level of input images). This makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single-pass discriminative training and allows for reuse across various problems and conditions while achieving an efficiency comparable to previous discriminative approaches. Furthermore, after being trained, our model can be easily transferred to new likelihood terms to solve untrained tasks, or be combined with existing priors to further improve image restoration quality.

2. Image Segmentation for Intensity Inhomogeneity in Presence of High Noise
Automated segmentation of fine object details in a given image is becoming of crucial interest in different imaging fields. In this paper, we propose a new variational level-set model for both global and interactive/selective segmentation tasks, which can deal with intensity inhomogeneity and the presence of noise. The proposed method maintains the same performance on clean and noisy vector-valued images. The model utilizes a combination of a locally computed denoising constrained surface and a denoising fidelity term to ensure a fine segmentation of local and global features of a given image. A two-phase level-set formulation has been extended to a multi-phase formulation to successfully segment medical images of the human brain. Comparative experiments with state-of-the-art models show the advantages of the proposed method.
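
As a rough, runnable point of reference for this kind of variational segmentation, the sketch below runs a classical two-phase Chan-Vese level set from scikit-image on a noisy test image; the paper's denoising-constrained surface, fidelity term, and multi-phase extension are not reproduced here.

```python
# Rough two-phase level-set baseline (classical Chan-Vese), not the paper's
# noise-robust multi-phase model; it illustrates the kind of variational
# segmentation being extended. Requires scikit-image and numpy.
import numpy as np
from skimage import data, util
from skimage.segmentation import morphological_chan_vese

image = util.img_as_float(data.camera())
noisy = image + 0.1 * np.random.randn(*image.shape)   # simulate acquisition noise

# Evolve the level set for 200 iterations from a checkerboard initialization;
# `smoothing` plays a role loosely similar to a regularizing (denoising) term.
mask = morphological_chan_vese(noisy, 200, init_level_set="checkerboard", smoothing=3)
print("foreground fraction:", mask.mean())
```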

3. Low-Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss
The continuous development and extensive use of computed tomography (CT) in medical practice has raised a public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists' judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve the diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of the optimal transport theory and promises to improve the performance of GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses more on migrating the data noise distribution from strong to weak statistically.
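
To make the combined objective concrete, here is a minimal PyTorch sketch of a Wasserstein-style adversarial loss plus a feature-space (perceptual) loss. The tiny critic and the random-weight feature extractor are stand-ins for the paper's trained critic and a pretrained VGG network, and the Lipschitz constraint (weight clipping or gradient penalty) is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy critic: scores 64x64 single-channel patches with a scalar "realness" value.
critic = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

# Random-weight stand-in for a pretrained VGG feature extractor (perceptual space).
features = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

def critic_loss(real, fake):
    # Wasserstein critic objective (Lipschitz constraint omitted in this sketch)
    return critic(fake).mean() - critic(real).mean()

def generator_loss(denoised, target, lam=0.1):
    adv = -critic(denoised).mean()                           # push outputs toward "real" statistics
    perc = F.mse_loss(features(denoised), features(target))  # compare feature maps, not raw pixels
    return adv + lam * perc

low_dose = torch.randn(4, 1, 64, 64)      # stand-ins for low-dose / normal-dose CT patches
normal_dose = torch.randn(4, 1, 64, 64)
print(float(critic_loss(normal_dose, low_dose)), float(generator_loss(low_dose, normal_dose)))
```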

4. Towards Optimal Denoising of Image Contrast
Most conventional imaging modalities detect light indirectly by observing high-energy photons. The random nature of photon emission and detection is often the dominant source of noise in imaging. Such a case is referred to as photon-limited imaging, and the noise distribution is well modeled as Poisson. Multiplicative multiscale innovation (MMI) presents a natural model for Poisson count measurement, where the inter-scale relation is represented as random partitioning (binomial distribution) or local image contrast. In this paper, we propose a nonparametric empirical Bayes estimator that minimizes the mean square error of MMI coefficients. The proposed method achieves better performance compared with state-of-the-art methods in both synthetic and real sensor image experiments under low illumination.
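
The inter-scale relation the MMI model relies on can be checked numerically in a few lines: two independent Poisson "child" counts sum to a Poisson "parent", and conditioned on the parent count the split is binomial with success probability given by the local contrast. The intensities below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 3.0, 7.0                      # true intensities of two neighboring pixels
x1 = rng.poisson(lam1, size=200_000)       # photon counts (Poisson-limited imaging)
x2 = rng.poisson(lam2, size=200_000)
parent = x1 + x2                           # coarse-scale count (also Poisson)

# Among realizations with the same parent count n, x1 should look Binomial(n, p)
n = 10
p_theory = lam1 / (lam1 + lam2)            # the "local contrast" of the two pixels
sel = x1[parent == n]
print("empirical mean of x1 | parent=10 :", sel.mean())
print("binomial prediction n*p          :", n * p_theory)
```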

5. Denoising of Microscopy Images: A Review of the State-of-the-Art, and a New Sparsity-Based Method
This paper reviews the state-of-the-art in denoising methods for biological microscopy images and introduces a new and original sparsity-based algorithm. The proposed method combines total variation (TV) spatial regularization, enhancement of low-frequency information, and aggregation of sparse estimators, and is able to handle simple and complex types of noise (Gaussian, Poisson, and mixed) without any a priori model and with a single set of parameter values. An extended comparison is also presented that evaluates the denoising performance of thirteen state-of-the-art methods (including ours) specifically designed to handle the different types of noise found in bioimaging. Quantitative and qualitative results on synthetic and real images show that the proposed method outperforms the others in the majority of the tested scenarios.

6. Dehazing for Multispectral Remote Sensing Images Based on a Convolutional Neural Network with the Residual Architecture
Multispectral remote sensing images are often contaminated by haze, which causes low image quality. In this paper, a novel dehazing method based on a deep convolutional neural network (CNN) with the residual structure is proposed for multispectral remote sensing images. First, multiple CNN individuals with the residual structure are connected in parallel, and each individual is used to learn a regression from the hazy image to the clear image. Then, the outputs of the CNN individuals are fused with weight maps to produce the final dehazing result. In the designed network, the CNN individuals, which mine multiscale haze features through multiscale convolutions, are trained using different levels of haze samples to achieve different dehazing abilities. In addition, the weight maps change with the haze distribution, so the fusion of the CNN individuals is adaptive. The designed network is end-to-end: given a hazy input image, it restores the clear scene. To train the network, a wavelength-dependent haze simulation method is proposed to generate labeled data, which can synthesize hazy multispectral images highly close to real conditions. Experimental results show that the proposed method can accurately remove the haze in each band of multispectral images under different scenes.

7. Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree
The advent of depth sensing technologies means that the extraction of object contours in images, a common and important pre-processing step for later higher-level computer vision tasks such as object detection and human action recognition, has become easier. However, captured depth images contain acquisition noise, and the detected contours suffer from errors as a result. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out the aforementioned application-specific tasks using the decoded contours as input. First, we prove theoretically that, in general, a joint denoising/compression approach can outperform a separate two-stage approach that first denoises and then encodes contours lossily. Adopting a joint approach, we propose a burst error model that captures typical errors encountered in an observed string of directional edges. We then formulate a rate-constrained maximum a posteriori problem that trades off the posterior probability of an estimated string, given the observed string, against its code rate. We design a dynamic programming algorithm that solves the posed problem optimally, and propose a compact context representation called the total suffix tree that can reduce the complexity of the algorithm dramatically. To the best of our knowledge, we are the first in the literature to study the problem of joint denoising/compression of image contours and to offer a computation-efficient optimization algorithm.

8. Fast Superpixel Based Subspace Low Rank Learning Method for Hyperspectral Denoising
Sequential data, such as video frames and event data, have been widely applied in the real world. As a special kind of sequential data, hyperspectral images (HSIs) can be regarded as a sequence of 2-D images in the spectral dimension, which can be effectively utilized for distinguishing different land covers according to their spectral sequences. This paper presents a novel noise reduction method based on superpixel-based subspace low rank representation for hyperspectral imagery. First, under the framework of a linear mixture model, the original hyperspectral cube is assumed to be low rank in the spectral domain, which can be represented by decomposing the HSI data into two sub-matrices of lower ranks. Meanwhile, due to the high correlation of neighboring pixels, the spectra within each neighborhood also promote low rankness, and this local spatial low rankness can be exploited by enforcing the nuclear norm within superpixel-based regions in the decomposed subspace. The superpixels are easily and effectively generated by applying state-of-the-art superpixel segmentation algorithms to the first principal component of the original HSI. Moreover, benefiting from the subspace decomposition, the proposed method has an overwhelming advantage in computational cost over state-of-the-art low-rank-based methods. The final model can be efficiently solved by the augmented Lagrangian method.

9. InSAR-BM3D: A Nonlocal Filter for SAR Interferometric Phase Restoration
The block-matching 3-D (BM3D) algorithm, based on the nonlocal approach, is one of the most effective methods to date for additive white Gaussian noise image denoising. Likewise, its extension to synthetic aperture radar (SAR) amplitude images, SAR-BM3D, is a state-of-the-art SAR despeckling algorithm. In this paper, we further extend BM3D to address the restoration of SAR interferometric phase images. While keeping the general structure of BM3D, its processing steps are modified to take into account the peculiarities of the SAR interferometry signal. Experiments on simulated and real-world TanDEM-X SAR interferometric pairs prove the effectiveness of the proposed method.

10. Automatic Contrast-Limited Adaptive Histogram Equalization with Dual Gamma Correction
We propose automatic contrast-limited adaptive histogram equalization (CLAHE) for image contrast enhancement. We automatically set the clip point for CLAHE based on textureness of a block. Also, we introduce dual gamma correction into CLAHE to achieve contrast enhancement while preserving naturalness. First, we redistribute the histogram of the block in CLAHE based on the dynamic range of each block. Second, we perform dual gamma correction to enhance the luminance, especially in dark regions while reducing over-enhancement artifacts. Since automatic CLAHE adaptively enhances contrast in each block while boosting luminance, it is very effective in enhancing dark images and daylight ones with strong dark shadows. Moreover, automatic CLAHE is computationally efficient, i.e., more than 35 frames/s at 1024 × 682 resolution, due to the independent block processing for contrast enhancement. Experimental results demonstrate that automatic CLAHE with dual gamma correction achieves good performance in contrast enhancement and outperforms state-of-the-art methods in terms of visual quality and quantitative measures.
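
A simplified OpenCV stand-in for this pipeline is sketched below: fixed-clip CLAHE on the luminance channel followed by a single global gamma correction. The paper's automatic, per-block clip point and dual gamma correction are not reproduced, and the synthetic dark image only keeps the snippet self-contained.

```python
import cv2
import numpy as np

# In practice the input would be a real photo, e.g. cv2.imread("dark_scene.jpg");
# a synthetic dark image keeps the snippet runnable as-is.
rng = np.random.default_rng(0)
bgr = (rng.random((256, 256, 3)) * 60).astype(np.uint8)

lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # fixed clip point here
L_eq = clahe.apply(L)                                         # block-wise equalization

gamma = 0.8                                                   # <1 brightens dark regions
lut = np.array([255 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
L_out = cv2.LUT(L_eq, lut)

enhanced = cv2.cvtColor(cv2.merge([L_out, a, b]), cv2.COLOR_LAB2BGR)
print("mean luminance before/after:", L.mean(), L_out.mean())
```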

11. Gabor feature-based composite kernel method for hyperspectral image classification
Different from traditional kernel classifiers that map the original data into a high-dimensional kernel space, a novel classifier is presented that projects Gabor features of the hyperspectral image into the kernel-induced space through a composite kernel technique. The proposed method can not only improve the flexibility of exploiting spatial information but also successfully apply the kernel technique from a very different perspective to strengthen the discriminative ability. Experiments on the Indian Pines dataset demonstrate the superiority of the proposed method.
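
A minimal sketch of the composite-kernel idea is shown below: one RBF kernel on spectral vectors and one on Gabor-response features, combined as a weighted sum and fed to an SVM with a precomputed kernel. The random arrays stand in for labeled hyperspectral pixels (e.g., Indian Pines samples), and the weighting is illustrative.

```python
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
patches = rng.random((200, 16, 16)).astype(np.float32)   # per-pixel spatial patches (stand-ins)
spectral = rng.random((200, 30))                          # per-pixel spectral vectors (stand-ins)
labels = rng.integers(0, 3, 200)

# Gabor filter bank over 4 orientations; mean response per patch as a spatial feature
kernels = [cv2.getGaborKernel((9, 9), 2.0, theta, 4.0, 0.5) for theta in np.arange(4) * np.pi / 4]
gabor_feats = np.array([[cv2.filter2D(p, cv2.CV_32F, k).mean() for k in kernels] for p in patches])

mu = 0.6                                                  # weight between the two kernels
K = mu * rbf_kernel(spectral) + (1 - mu) * rbf_kernel(gabor_feats)
clf = SVC(kernel="precomputed").fit(K, labels)
print("training accuracy:", clf.score(K, labels))
```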

12. Detection and classification of mammary lesions using artificial neural networks and morphological wavelets
Breast cancer is a worldwide public health problem, with a high rate of incidence and mortality. The method most widely used for the early detection of possible abnormalities in breast tissue is mammography. In this work, we aim to verify and analyze the application of classifiers based on neural networks (multi-layer perceptrons, MLP, and radial basis functions, RBF) and support vector machines (SVM) with several different kernels, in order to detect the presence of breast lesions and classify them as malignant or benign. We used the IRMA database, composed of 2,796 patch images, which consist of 128×128-pixel regions of interest from real mammography images. The IRMA database is organized by BI-RADS classification (normal, benign, and malignant) and tissue type (dense, extremely dense, adipose, and fibroglandular), generating 12 classes. Each image was represented by texture patterns (Haralick and Zernike moments) extracted from the components of a two-level decomposition by morphological wavelets. Multi-layer perceptrons with two layers were the most successful method, reaching an accuracy rate of 96.20% and demonstrating the possibility of building a computer-aided diagnosis system to improve the accuracy of mammogram analysis and thereby improve prognosis.
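
The feature-plus-classifier stage can be sketched as follows with gray-level co-occurrence (Haralick-style) texture features and a two-hidden-layer MLP; random patches stand in for the IRMA regions of interest, and the morphological-wavelet decomposition and Zernike moments are omitted for brevity.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
patches = (rng.random((120, 128, 128)) * 255).astype(np.uint8)   # stand-ins for 128x128 ROIs
labels = rng.integers(0, 3, 120)                                 # e.g., normal / benign / malignant

def texture_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

X = np.array([texture_features(p) for p in patches])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```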

13. Detection of Age-Related Macular Degeneration in Fundus Images by an Associative Classifier
In this paper, we propose the application of a novel associative classifier, the Heaviside's Classifier, for the early detection of age-related macular degeneration in retinal fundus images. Retinal fundus images are first processed by a simple method based on homomorphic filtering and some basic mathematical morphology operations; in the second phase, we extract relevant features from the images using Zernike moments and apply a feature selection method to select the best features from the original feature set. The dataset created from the images with the best features is used to train and test a new classification model whose learning and classification phases are based on the Heaviside function. Experimental results show that our method is capable of achieving an accuracy of about 94.12% on a dataset created from images belonging to well-known image repositories.
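
The homomorphic filtering step mentioned above can be written compactly with NumPy alone: take the log of the image, emphasize high frequencies in the Fourier domain, and exponentiate back. The cutoff and gain values below are illustrative, not those of the paper.

```python
import numpy as np

def homomorphic_filter(img, cutoff=30, low_gain=0.5, high_gain=2.0):
    rows, cols = img.shape
    log_img = np.log1p(img.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(log_img))

    # Gaussian high-frequency emphasis filter
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (high_gain - low_gain) * (1 - np.exp(-D2 / (2 * cutoff ** 2))) + low_gain

    filtered = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.expm1(filtered)          # undo the log, suppressing uneven illumination

image = np.random.rand(256, 256) * 255  # stand-in for one fundus image channel
corrected = homomorphic_filter(image)
print(corrected.shape, corrected.min(), corrected.max())
```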

14. Classification of Breast Cancer Based on Histology Images Using Convolutional Neural Networks
In recent years, the classification of breast cancer has been a topic of interest in the field of healthcare informatics, because it is the second main cause of cancer-related deaths in women. Breast cancer can be identified using a biopsy, in which tissue is removed and studied under a microscope. The diagnosis is based on the qualification of the histopathologist, who will look for abnormal cells. However, if the histopathologist is not well trained, this may lead to a wrong diagnosis. With the recent advances in image processing and machine learning, there is interest in developing reliable pattern-recognition-based systems to improve the quality of diagnosis. In this paper, we compare two machine learning approaches for the automatic classification of breast cancer histology images into benign and malignant and into benign and malignant sub-classes. The first approach is based on the extraction of a set of handcrafted features encoded by two coding models (bag of words and locality-constrained linear coding) and trained by support vector machines, while the second approach is based on the design of convolutional neural networks. We have also experimentally tested dataset augmentation techniques to enhance the accuracy of the convolutional neural network, as well as “handcrafted features + convolutional neural network” and “convolutional neural network features + classifier” configurations.

15. Photoacoustic Source Detection and Reflection Artifact Removal Enabled by Deep Learning
Interventional applications of photoacoustic imaging typically require visualization of point-like targets, such as the small, circular, cross-sectional tips of needles, catheters, or brachytherapy seeds. When these point-like targets are imaged in the presence of highly echogenic structures, the resulting photoacoustic wave creates a reflection artifact that may appear as a true signal. We propose to use deep learning techniques to identify these types of noise artifacts for removal in experimental photoacoustic data. To achieve this goal, a convolutional neural network (CNN) was first trained to locate and classify sources and artifacts in pre-beamformed data simulated with k-Wave. Simulations initially contained one source and one artifact with various medium sound speeds and 2-D target locations. Based on 3,468 test images, we achieved a 100% success rate in classifying both sources and artifacts. After adding noise to assess potential performance in more realistic imaging environments, we achieved at least 98% success rates for channel signal-to-noise ratios (SNRs) of -9dB or greater, with a severe decrease in performance below -21dB channel SNR. We then explored training with multiple sources and two types of acoustic receivers and achieved similar success with detecting point sources.

16. An Improved Heuristic Optimization Algorithm for Feature Learning Based on Morphological Filtering and its Application
Hyperspectral remote sensing sensors can provide plenty of valuable information, with hundreds of spectral bands at each pixel. Feature selection and spectral-spatial information play an important role in the field of hyperspectral image (HSI) classification. In this paper, a novel two-stage spectral-spatial HSI classification method is proposed. In the first stage, standard particle swarm optimization (PSO) is adopted to optimize the parameters, and a novel binary PSO with a mutation mechanism is used for feature selection simultaneously; a support vector machine classifier is then applied. In the second stage, in order to reduce the salt-and-pepper phenomenon, mathematical morphology post-processing is used to further refine the results obtained in the first stage. Experiments are conducted on two real hyperspectral data sets. The evaluation results show that the proposed approach achieves better accuracy than several state-of-the-art methods.

17. Medical Image Forgery Detection for Smart Healthcare
With the invention of new communication technologies, new features and facilities are provided in a smart healthcare framework. The features and facilities aim to provide a seamless, easy-to-use, accurate, and real-time healthcare service to clients. As health is a sensitive issue, it should be taken care of with utmost security and caution. This article proposes a new medical image forgery detection system for the healthcare framework to verify that images related to healthcare are not changed or altered. The system works on a noise map of an image, applies a multi-resolution regression filter on the noise map, and feeds the output to support-vector-machine-based and extreme-learning-based classifiers. The noise map is created in an edge computing resource, while the filtering and classification are done in a core cloud computing resource. In this way, the system works seamlessly and in real time. The bandwidth requirement of the proposed system is also reasonable.

18. Atherosclerotic Plaque Pathological Analysis by Unsupervised K-Means Clustering
This paper introduces a high-throughput pathological analysis algorithm based on the unsupervised K-means clustering principle and the Lab color space. The accuracy of this algorithm was verified by comparison with well-established commercially available software. For each type of pathological staining specific to the analysis of atherosclerotic plaque components, accurate pathological analysis results could be obtained by selecting an appropriate number of cluster classes (usually 3 to 5, but not limited to this range). Bland-Altman and linear regression analysis further confirmed that the self-developed algorithm correlated well with the well-established software (correlation coefficient R2 ranging from 0.72 to 0.99). Moreover, the intra- and interobserver coefficients of variation were relatively minor, indicating very good reproducibility. We therefore conclude that the self-developed algorithm can reduce human interference factors, improve efficiency, and is suitable for large numbers of analyses of atherosclerotic pathology.
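
A bare-bones version of the clustering step is sketched below: convert the stained section to the Lab color space and partition the chromaticity values into K clusters with OpenCV's k-means, then report the area fraction of each cluster. The synthetic input and the choice K=4 are placeholders; mapping clusters to specific plaque components is left to the analyst, as in the paper.

```python
import cv2
import numpy as np

# In practice: bgr = cv2.imread("plaque_stain.png"); random colors keep this runnable.
rng = np.random.default_rng(1)
bgr = (rng.random((128, 128, 3)) * 255).astype(np.uint8)

lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)   # chromaticity only, ignore lightness

K = 4                                                   # cluster count (3-5 in the paper)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1.0)
_, cluster_ids, _ = cv2.kmeans(ab, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

label_map = cluster_ids.reshape(lab.shape[:2])
for k in range(K):
    print(f"cluster {k}: {100 * float((label_map == k).mean()):.1f}% of the section area")
```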

19. Facial Expressions Recognition Based on Cognition and Mapped Binary Patterns
In this paper, a new expression recognition approach based on cognition and mapped binary patterns is presented. First, the approach uses the LBP operator to extract the facial contours. Second, a pseudo-3-D model is established to segment the face area into six facial expression sub-regions. The sub-regions and the global facial expression images then use the mapped LBP method for feature extraction, followed by two classifiers, the support vector machine and softmax, with two kinds of emotion classification models: the basic emotion model and the circumplex emotion model. Finally, we perform a comparative experiment on the extended Cohn-Kanade (CK+) facial expression data set and on test data sets collected from ten volunteers. The experimental results show that the method can effectively remove confounding factors in the image, and the results obtained with the circumplex emotion model are clearly better than those of the traditional emotion model. By referring to relevant studies of human cognition, we verified that the eyes and mouth express more emotion.

20. Breast Cancer Classification Based on Fully-Connected Layer First Convolutional Neural Networks
Both the Wisconsin diagnostic breast cancer (WDBC) database and the Wisconsin breast cancer database (WBCD) are structured datasets described by cytological features. In this paper, we sought to identify ways to improve the classification performance for each of these datasets based on convolutional neural networks (CNN). However, the CNN is designed for unstructured data, especially image data, for which it has proven very successful, and a typical CNN may not maintain that performance on structured data. In order to take advantage of CNNs to improve classification performance for structured data, we propose the fully-connected layer first CNN (FCLF-CNN), in which fully-connected layers are embedded before the first convolutional layer. We use the fully-connected layer as an encoder or an approximator to transform raw samples into representations with more locality. To obtain better performance, we trained four kinds of FCLF-CNNs and built an ensemble FCLF-CNN by integrating them. We then applied it to the WDBC and WBCD datasets and obtained results by fivefold cross-validation. The results show that the FCLF-CNN achieves better classification performance than pure multi-layer perceptrons and a pure CNN for both datasets. The ensemble FCLF-CNN achieves an accuracy of 99.28%, a sensitivity of 98.65%, and a specificity of 99.57% for WDBC, and an accuracy of 98.71%, a sensitivity of 97.60%, and a specificity of 99.43% for WBCD. The results for both datasets are competitive with those of other research.
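
A toy PyTorch sketch of the "fully-connected layer first" idea follows: dense layers map each structured sample (e.g., 30 cytological features) to a representation that is reshaped into a small 2-D map and passed to convolutional layers. Layer sizes are illustrative and not those used in the paper.

```python
import torch
import torch.nn as nn

class FCLFCNN(nn.Module):
    def __init__(self, n_features=30, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(              # fully-connected layers come first
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU())
        self.conv = nn.Sequential(                 # then an ordinary small CNN
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 8 * 8, n_classes))

    def forward(self, x):
        z = self.encoder(x).view(-1, 1, 8, 8)      # 64 units -> 8x8 single-channel "image"
        return self.conv(z)

model = FCLFCNN()
logits = model(torch.randn(5, 30))                 # 5 fake cytological feature vectors
print(logits.shape)                                # torch.Size([5, 2])
```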

21. Fusing DTCWT and LBP Based Features for Rotation, Illumination and Scale Invariant Texture Classification
Classification of texture images with different orientation, illumination, and scale changes is a challenging problem in computer vision and pattern recognition. This paper proposes two descriptors and uses them jointly to fulfill this task. An image pyramid is obtained by applying the dual-tree complex wavelet transform (DTCWT) to the original image, and local binary patterns (LBP) are generated in the DTCWT domain, called LBPDTCWT, as local texture features. Moreover, the log-polar (LP) transform is applied to the original image, and the energies of the DTCWT coefficients on the detail subbands of the LP image, called LPDTCWTE, are taken as global texture features. We fuse the two kinds of features for texture classification, and experimental results on benchmark data sets show that our proposed method achieves better performance than other state-of-the-art methods.

22. Superpixel Segmentation Using Gaussian Mixture Model
Superpixel segmentation partitions an image into perceptually coherent segments of similar size, namely, superpixels. It is becoming a fundamental preprocessing step for various computer vision tasks because superpixels significantly reduce the number of inputs and provide a meaningful representation for feature extraction. We present a pixel-related Gaussian mixture model (GMM) to segment images into superpixels. GMM is a weighted sum of Gaussian functions, each one corresponding to a superpixel, to describe the density of each pixel represented by a random variable. Different from previously proposed GMMs, our weights are constant, and Gaussian functions in the sums are subsets of all the Gaussian functions, resulting in segments of similar size and an algorithm of linear complexity with respect to the number of pixels. In addition to the linear complexity, our algorithm is inherently parallel and allows fast execution on multi-core systems. During the expectation-maximization iterations of estimating the unknown parameters in the Gaussian functions, we impose two lower bounds to truncate the eigenvalues of the covariance matrices, which enables the proposed algorithm to control the regularity of superpixels. Experiments on a well-known segmentation dataset show that our method can efficiently produce superpixels that adhere to object boundaries better than the current state-of-the-art methods.
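
To make "one Gaussian per superpixel" concrete, the sketch below fits a plain Gaussian mixture to (x, y, color) pixel features of a small image and reads the component labels back as segments. The paper's constant weights, per-pixel Gaussian subsets, eigenvalue truncation, and linear-time parallel algorithm are not reproduced here.

```python
import numpy as np
from skimage import data, color, transform
from sklearn.mixture import GaussianMixture

# Small Lab image so that the generic EM fit stays fast.
img = transform.resize(color.rgb2lab(data.astronaut()), (80, 80, 3), anti_aliasing=True)
h, w, _ = img.shape
yy, xx = np.mgrid[0:h, 0:w]
feats = np.column_stack([xx.ravel() / w, yy.ravel() / h, img.reshape(-1, 3) / 100.0])

n_superpixels = 40
gmm = GaussianMixture(n_components=n_superpixels, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(feats).reshape(h, w)      # each mixture component ~ one superpixel
print("segments produced:", len(np.unique(labels)))
```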

23. A Newly Developed Ground Truth Dataset for Visual Saliency in Videos
Visual saliency models aim to detect important and eye-catching portions of a scene by exploiting characteristics of the human visual system. The effectiveness of visual saliency models is evaluated by comparing saliency maps with a ground truth data set. In recent years, several visual saliency computation algorithms and ground truth data sets have been proposed for images. However, there is a lack of ground truth data sets for videos. A new human-labeled ground truth is prepared for video sequences that are commonly used in video coding. The selected videos are from different genres, including conversational, sports, outdoor, and indoor, with low, medium, and high motion. A saliency mask is obtained for each video from nine different subjects, who are asked to label the salient region in each frame in the form of a rectangular bounding box. A majority voting criterion is used to construct a final ground truth saliency mask for each frame. Sixteen different state-of-the-art visual saliency algorithms are selected for comparison, and their effectiveness is computed quantitatively on the newly developed ground truth. It is evident from the results that multiple kernel learning and spectral residual-based saliency algorithms perform best for different genres and motion-type videos in terms of F-measure and execution time, respectively.
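
The ground-truth construction step is easy to reproduce in miniature: each subject's rectangular annotation becomes a binary mask, and a pixel is marked salient when a majority of subjects selected it. The frame size and box coordinates below are made up.

```python
import numpy as np

H, W, n_subjects = 288, 352, 9
boxes = [(40 + i, 60 + i, 160, 200) for i in range(n_subjects)]   # (y0, x0, y1, x1) per subject

masks = np.zeros((n_subjects, H, W), dtype=np.uint8)
for s, (y0, x0, y1, x1) in enumerate(boxes):
    masks[s, y0:y1, x0:x1] = 1

votes = masks.sum(axis=0)
ground_truth = (votes > n_subjects // 2).astype(np.uint8)          # majority voting
print("salient pixels:", int(ground_truth.sum()))
```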

24. Text segmentation of health examination item based on character statistics and information measurement
This study explores a segmentation algorithm for item text data in health examinations, especially for single long-length data. In the specific implementation, a large amount of historical health examination data is analysed. Using the method of character statistics, the connection tightness values TABs between two adjacent characters are calculated. Three parameters are set: the candidate number N, the best position BP, and the balance weight BW. The total segmentation indexes SIs are calculated, thus determining the segmentation position Pos. The optimal parameter values are determined by the method of information measurement. Experimental results show that the accuracy rate is 78.6% and reaches 82.9% on the most frequently appearing text item. The complexity of the algorithm is O(n). Using no existing domain knowledge, it is very simple and fast. Executed repeatedly, it conveniently captures the characteristics of each single item of text data and, furthermore, distinguishes the expression preferences of different physicians for the same item. The assumption is verified that, without professional domain knowledge, a large amount of historical data can provide valuable clues for text understanding. The results of this research are being applied and verified in subsequent research work in the field of health examination.

25. Enhanced multidimensional field embedding method by potential fields for hyperspectral image classification and visualization
Multidimensional field embedding methods have been demonstrated to effectively characterise spectral signatures in hyperspectral images. However, high-dimensional data composed of a number of classes presents challenges to the existing embedding methods. This Letter proposes an enhanced multidimensional field embedding algorithm based on the force field formulation. The comparative performance of the proposed algorithm is evaluated in the classification and visualisation of commonly used hyperspectral images. Experimental results demonstrate its superiority over previously used field embedding techniques.

26. An Algorithm for Concrete Crack Extraction and Identification Based on Machine Vision
This paper proposes solutions to the large extraction error, the difficulty of identification, and other problems existing in crack processing. The first solution entails enlarging the grayscale difference between the crack and the background via an adaptive grayscale linear transformation, using the Otsu algorithm for segmentation, and combining the extension direction of the skeleton line with the grayscale features of the crack edge to fill the broken parts of the binary image and obtain a complete image of the crack. The second solution is to improve several major characteristic parameters of the crack image so that they are more suitable for the characteristic description of the crack. Finally, a comparison of different types of input features and the different accuracies obtained using the trained support vector machine verifies the accuracy and practicability of the proposed algorithm for extracting and recognizing cracks.
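
A rough version of the first stage is sketched below: stretch the grayscale range to enlarge the crack/background difference, then segment with Otsu thresholding. The synthetic dark line stands in for a crack, and the skeleton-guided gap filling and SVM identification stage are omitted.

```python
import cv2
import numpy as np

# In practice: gray = cv2.imread("concrete.jpg", cv2.IMREAD_GRAYSCALE);
# here a bright background with a dark line stands in for a cracked surface.
gray = np.full((200, 200), 180, dtype=np.uint8)
cv2.line(gray, (0, 0), (199, 199), 40, 3)

stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)   # grayscale linear stretch
blurred = cv2.GaussianBlur(stretched, (5, 5), 0)

# Cracks are darker than the background, so take the inverse binary Otsu threshold
_, crack_mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
print("crack pixel ratio:", float((crack_mask > 0).mean()))
```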

27. Skeletal Maturity Recognition Using a Fully Automated System With Convolutional Neural Networks
In this paper, we present an automated skeletal maturity recognition system that takes a single hand radiograph as input and outputs a bone age prediction. Unlike conventional manual diagnostic methods, which are laborious, fallible, and time-consuming, the proposed system takes input images and generates classification results directly. It first accurately detects the distal radius and ulna areas in the hand and wrist X-ray images using a faster region-based convolutional neural network (CNN) model. Then, a well-tuned CNN classification model is applied to estimate the bone ages. In the experimental section, we employed a data set of 1101 hand and wrist radiographs and conducted comprehensive experiments on the proposed system. We discuss the model performance under various network configurations, multiple optimization algorithms, and different training sample amounts. After parameter optimization, the proposed model finally achieved 92% and 90% classification accuracy for radius and ulna grades, respectively.

28. A Multiresolution Gray-Scale and Rotation Invariant Descriptor for Texture Classification
Texture classification algorithms using the local binary pattern (LBP) and its variants can usually achieve attractive results. However, the selected rotation-invariant structural patterns in numerous LBP variants are not continuously invariant to arbitrary rotation angles. To improve classification effectiveness in this situation, we introduce in this paper a robust descriptor based on principal curvatures (PCs) and the rotation-invariant version of the CLBP_Sign operator in the completed LBP (CLBP), namely PC-LBP. Different from the original LBP and many LBP variants, PCs are employed here to represent local structure information due to their continuous rotation invariance. Simultaneously, both micro- and macro-structure texture information can be captured through PCs, which comprise the maximum and minimum curvatures. Inspired by the similar coding strategy of the CLBP_Sign operator, a new operator, CLBP_PC, is developed. By exploiting the complementary information resulting from the combination of the two operators, the final PC-LBP descriptor has the properties of conspicuous rotation invariance, strong discriminativeness, gray-scale invariance, no need for pretraining, and high computational efficiency. In addition, to improve the robustness of texture classification with multiresolution, a multiscale sampling approach is designed by adjusting three parameters accordingly.
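
For reference, the rotation-invariant uniform LBP histogram that descriptors such as PC-LBP build on and are compared against can be computed with scikit-image as follows; the principal-curvature coding of the paper itself is not implemented here.

```python
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

image = data.brick()                       # sample texture image from scikit-image
P, R = 8, 1                                # 8 neighbors on a radius-1 circle
codes = local_binary_pattern(image, P, R, method="uniform")   # rotation-invariant uniform codes

hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)  # P+2 possible codes
print("LBP histogram:", np.round(hist, 3))
```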

29. Multi-Organ Plant Classification Based on Convolutional and Recurrent Neural Networks
Classification of plants based on a multi-organ approach is very challenging. Although additional data provide more information that might help to disambiguate between species, the variability in shape and appearance of plant organs also raises the degree of complexity of the problem. Although promising solutions built using deep learning enable representative features to be learned for plant images, the existing approaches focus mainly on generic features for species classification, disregarding the features representing plant organs. In fact, plants are complex living organisms sustained by a number of organ systems. In our approach, we introduce a hybrid generic-organ convolutional neural network (HGO-CNN), which takes into account both organ and generic information, combining them using a new feature fusion scheme for species classification. Next, instead of using a CNN-based method that operates on one image with a single organ, we extend our approach. We propose a new framework for plant structural learning using a recurrent neural network-based method. This novel approach supports classification based on a varying number of plant views, capturing one or more organs of a plant, by optimizing the contextual dependencies between them. We also present qualitative results of our proposed models based on feature visualization techniques and show that the outcomes of the visualizations support our hypothesis and expectations.

30. Digital Affine Shear Filter Banks with 2-Layer Structure and Their Applications in Image Processing
Digital affine shear filter banks with a 2-layer structure (DAS-2 filter banks) are constructed and shown to have the perfect reconstruction property. The implementation of digital affine shear transforms using the transition and subdivision operators is given. The redundancy rate analysis shows that our digital affine shear transforms have a redundancy rate of no more than 8, which decreases with respect to the number of directional filters. Numerical experiments on image processing demonstrate the advantages of our DAS-2 filter banks over many other state-of-the-art frame-based transforms. The connection between DAS-2 filter banks and affine shear tight frames with a 2-layer structure is established. Characterizations and constructions of affine shear tight frames with a 2-layer structure are provided.

31. ABO/Rh Blood Typing Method for Samples in Microscope Slides by Using Image Processing
The correct determination of blood groups is very important to prevent complications during transfusion, since there cannot be incompatibility between the donor and the blood recipient. The ABO and Rh blood group systems are currently the most widely used, because they allow results to be obtained in a simple and low-cost way. This work presents a method for blood typing of slide samples using digital image processing. In the sample analysis (48 samples from 30 different patients, with 18 high-resolution pictures taken with a 5-megapixel camera and 30 low-resolution pictures taken with a 640x360-pixel webcam), the proposed method presented a hit ratio of 97.92% for Anti-A samples, with a sensitivity of 100% and a specificity of 96.3%. The hit ratio in the Anti-B tests was 89.58%, with a sensitivity of 83.33% and a specificity of 92.86%. In the Anti-D reagent analysis, the developed method presented better efficacy on high-resolution pictures, with a hit ratio of 88.89%, a sensitivity of 84.62%, and a specificity of 100%.

32. Dilation and Erosion on the Triangular Tessellation: An Independent Approach
In this paper, a new idea for morphological operations, i.e., dilation and erosion, on the regular triangular tessellation is presented. The triangles have two orientations; they are addressed by zero-sum and one-sum triplets and called even and odd pixels, respectively. The triangular grid is not a lattice, that is, there are grid vectors that do not translate the grid to itself. Different sets of vectors translate the even and odd pixels into the grid: for even pixels, vectors with sum 0 and 1 can be used, while for odd pixels, vectors with sum 0 and -1 are appropriate. Based on this fact, we introduce a technique in which one can work “independently” with the even and the odd pixels in morphological operations. Examples and various properties of the “independent” dilation and erosion are analyzed.

33. Development and Validation of a Method for Measurement of Root Length in 2D Images
Analysis of plant root systems using traditional methods is complex and time-consuming, and in most cases does not provide the required accuracy. Therefore, new methods using digital image processing have been proposed. This paper presents a new algorithm for estimating the length of washed roots using digital image processing. First, the algorithm loads the image and extracts its resolution. Then the image is converted to grayscale and binarized. The objects in the binary image are detected using their contours. The binary image is also used to thin the objects. After that, the length is estimated based on the skeletons resulting from the previous procedure. The proposed algorithm was validated using copper wires. The copper wire lengths were measured using a caliper and compared to the lengths estimated by the proposed algorithm and by a method from the literature, at different angles of inclination. When comparing the estimated lengths from the proposed algorithm and the literature method, the proposed algorithm obtained the best results in 67.03% of the test cases. Since plant roots are normally randomly arranged in images, it is of the greatest importance to develop a method of measuring root length that is invariant or has low sensitivity to rotation, and which is still precise and accurate.
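
The core of the length estimation reduces to a few scikit-image calls: binarize, skeletonize, and convert the skeleton pixel count to physical units via the image resolution. The synthetic diagonal band and the assumed 300 dpi are placeholders, and the diagonal-step correction needed for rotation invariance (the issue raised above) is deliberately left out of this sketch.

```python
import numpy as np
from skimage.morphology import skeletonize

dpi = 300                                             # assumed scan resolution (dots per inch)
# Synthetic stand-in for a binarized root scan: a thin diagonal band of "root" pixels.
rows, cols = np.mgrid[0:400, 0:400]
binary = np.abs(rows - cols) < 4

skeleton = skeletonize(binary)
pixel_length = int(skeleton.sum())                    # counts skeleton pixels (no diagonal weighting)
length_cm = pixel_length / dpi * 2.54
print(f"skeleton pixels: {pixel_length}, estimated length: {length_cm:.2f} cm")
```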

34. Approximate DCT Image Compression Using Inexact Computing
This paper proposes a new framework for digital image processing; it relies on inexact computing to address some of the challenges associated with discrete cosine transform (DCT) compression. The proposed framework has three levels of processing. The first level uses an approximate DCT for image compression, eliminating all computationally intensive floating-point multiplications and executing the DCT processing with integer additions and, in some cases, logical right/left shifts. The second level further reduces the amount of data (from the first level) that needs to be processed by filtering those frequencies that cannot be detected by human senses. Finally, to reduce power consumption and delay, the third level introduces circuit-level inexact adders to compute the DCT. For assessment, a set of standardized images is compressed using the proposed three-level framework. Different figures of merit (such as energy consumption, delay, peak signal-to-noise ratio, average difference, and absolute maximum difference) are compared to existing compression methods; an error analysis is also pursued, confirming the simulation results. Results show very good improvements in energy and delay reduction while maintaining acceptable accuracy levels for image processing applications.
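
As a concrete example of a multiplication-free transform of the kind used at the first level, the sketch below applies the well-known signed DCT (the sign pattern of the exact 8x8 DCT matrix), whose 2-D transform needs only additions and subtractions; it is an illustrative approximation, not the specific transform or inexact adders of the paper.

```python
import numpy as np
from scipy.fft import dct

C = dct(np.eye(8), norm="ortho", axis=0)        # exact 8x8 DCT-II matrix
T = np.sign(C)                                   # entries in {-1, +1}: additions/subtractions only

block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
approx = T @ block @ T.T                         # approximate 2-D DCT of one image block
exact = C @ block @ C.T
print("correlation with exact DCT:", np.corrcoef(approx.ravel(), exact.ravel())[0, 1])
```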

35. Optical Fiber Bragg Grating Instrumentation Applied to Horse Gait Detection
This paper presents two in vivo instrumentation techniques, based on fiber Bragg gratings (FBG), to study the different types of gait of horses performing athletics. These techniques can be used as an auxiliary tool in the early diagnosis of injuries related to the horse's locomotor system, mainly in the distal portion of the digit, one of the most common causes of retirement for equine athletes. The first technique consists of fixing FBGs without encapsulation directly on the dorsal wall of the hoof of each limb. In the second technique, the FBG sensor is encapsulated in a prototype developed using a composite material reinforced with carbon fiber in a horseshoe shape. The second technique is associated with digital image processing (DIP) for better visualization of the deformation and compression forces that act upon the limbs. The first method showed sensitivity to the compression of the digit against the ground and was able to identify walking patterns. The second technique, with the encapsulated sensor elements, also allows the capture of characteristic gait signals, such as walk, trot, and gallop, under training conditions. Both the FBG sensor interrogation and DIP analysis techniques have shown good performance and promising results for the clinical and biomechanical study and medical evaluation of horses, even during dynamic training and competitions.

36. Extended StirTrace benchmarking of biometric and forensic qualities of morphed face images
Since its introduction in 2014, the face morphing forgery (FMF) attack has received significant attention from the biometric and media forensic research communities. The attack aims at creating artificially weakened templates which can be successfully matched against multiple persons. If successful, the attack has an immense impact on many biometric authentication scenarios including the application of electronic machine-readable travel document (eMRTD) at automated border control gates. We extend the StirTrace framework for benchmarking FMF attacks by adding five issues: a novel three-fold definition for the quality of morphed images, a novel FMF realisation (combined morphing), a post-processing operation to simulate the digital image format used in eMRTD (passport scaling 15 kB), an automated face recognition system (VGG face descriptor) as additional means for biometric quality assessment and two feature spaces for FMF detection (keypoint features and fusion of keypoint and Benford features) as additional means for forensic quality assessment. We show that the impact of StirTrace post-processing operations on the biometric quality of morphed face images is negligible except for two noise operators and passport scaling 15 kB, the impact on the forensic quality depends on the type of post-processing, and the new FMF realisation outperforms the previously considered ones.

37. Automated spectral domain approach of quasi-periodic denoising in natural images using notch filtration with exact noise profile
Removing noise from digital images has attracted enormous attention from researchers and has stood out in the field of image processing over the last few decades. Periodic noises are unintended spurious signals that often corrupt an image during acquisition or transmission, resulting in repetitive patterns with spatial dependency that extensively degrade the visual quality of the image. However, high-amplitude noisy spectral components are clearly distinguishable from the remaining uncorrupted ones in the Fourier-transformed spectrum of the corrupted image. Hence, it is easier to identify and minimise those noisy components using an appropriate thresholding and filtration technique. Therefore, to start with, a simple yet elegant model of the noise-free natural image is developed from the corrupted one, followed by a proper thresholding method to obtain the noisy bitmap. Finally, an adaptive sinc restoration filter built around the concept of extracting the exact shape of the noise spectrum profile is applied in the filtration phase. The performance of the proposed algorithm has been assessed both visually and statistically against other state-of-the-art algorithms in the literature in terms of various performance measurement attributes, providing evidence of more effective restoration with considerably lower computational time.
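
The core spectral-domain idea can be demonstrated in a few NumPy lines: periodic noise appears as isolated bright peaks in the magnitude spectrum, so outlier frequencies are detected by thresholding and suppressed. Simple peak zeroing stands in for the paper's noise-profile-shaped adaptive sinc filter, and the threshold is illustrative.

```python
import numpy as np

img = np.random.rand(256, 256)
x = np.arange(256)
img = img + 0.8 * np.sin(2 * np.pi * 32 * x / 256)[None, :]     # add horizontal periodic noise

F = np.fft.fftshift(np.fft.fft2(img))
mag = np.abs(F)

# Threshold the spectrum, excluding a small window around the DC component
mask = mag > 10 * np.median(mag)
c = 128
mask[c - 4:c + 5, c - 4:c + 5] = False
F[mask] = 0                                                      # notch out the noisy peaks

restored = np.fft.ifft2(np.fft.ifftshift(F)).real
print("suppressed", int(mask.sum()), "spectral peaks")
```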

38. Image completion using multispectral imaging
Here, the authors explore the potential of multispectral imaging applied to image completion. Snapshot multispectral cameras are breakthrough technologies that are suitable for everyday use and therefore an interesting alternative to digital cameras. In their experiments, multispectral images are acquired using an ultracompact snapshot camera-recorder that senses 16 different spectral channels in the visible spectrum. Direct exploitation of completion algorithms by extension of the spectral channels exhibits only minimal enhancement. A dedicated method that consists of a prior segmentation of the scene has been developed to address this issue. The segmentation derives from an analysis of the spectral data and is employed to constrain the search area of exemplar-based completion algorithms. The full processing chain benefits from standard methods developed by both the hyperspectral imaging and computer vision communities. Results indicate that image completion constrained by spectral presegmentation ensures better consideration of the surrounding materials and simultaneously improves rendering consistency, in particular for completion of flat regions that present no clear gradients and little structure variance. The authors validate their method with a perceptual evaluation based on 20 volunteers. This study shows for the first time the potential of multispectral imaging applied to image completion.

39. Urban Land Cover Classification with Missing Data Modalities Using Deep Convolutional Neural Networks
Automatic urban land cover classification is a fundamental problem in remote sensing, e.g., for environmental monitoring. The problem is highly challenging, as classes generally have high intraclass and low interclass variances. Techniques to improve urban land cover classification performance in remote sensing include fusion of data from different sensors with different data modalities. However, such techniques require all modalities to be available to the classifier in the decision-making process, i.e., at test time, as well as in training. If a data modality is missing at test time, current state-of-the-art approaches have in general no procedure available for exploiting information from these modalities. This represents a waste of potentially useful information. As a remedy, we propose a convolutional neural network (CNN) architecture for urban land cover classification which is able to embed all available training modalities in the so-called hallucination network. The network will in effect replace missing data modalities in the test phase, enabling fusion capabilities even when data modalities are missing in testing. We demonstrate the method using two datasets consisting of optical and digital surface model (DSM) images. We simulate missing modalities by assuming that DSM images are missing during testing. Our method outperforms both standard CNNs trained only on optical images and an ensemble of two standard CNNs. We further evaluate the potential of our method to handle situations where only some DSM images are missing during testing. Overall, we show that we can clearly exploit training-time information of the missing modality during testing.

40. Ultrasound Open Platforms for Next-Generation Imaging Technique Development
Open platform (OP) ultrasound systems are aimed primarily at the research community. They have been at the forefront of the development of synthetic aperture, plane wave, shear wave elastography, and vector flow imaging. Such platforms are driven by a need for broad flexibility of parameters that are normally preset or fixed within clinical scanners. OP ultrasound scanners are defined to have three key features including customization of the transmit waveform, access to the prebeamformed receive data, and the ability to implement real-time imaging. In this paper, a formative discussion is given on the development of OPs from both the research community and the commercial sector. Both software- and hardware-based architectures are considered, and their specifications are compared in terms of resources and programmability. Software-based platforms capable of real-time beamforming generally make use of scalable graphics processing unit architectures, whereas a common feature of hardware-based platforms is the use of field-programmable gate array and digital signal processor devices to provide additional on-board processing capacity. OPs with extended number of channels (>256) are also discussed in relation to their role in supporting 3-D imaging technique development. With the increasing maturity of OP ultrasound scanners, the pace of advancement in ultrasound imaging algorithms is poised to be accelerated.

41. Robust Multi-Classifier for Camera Model Identification Based on Convolution Neural Network
With the prevalence of adopting data-driven convolution neural network (CNN)-based algorithms into the community of digital image forensics, some novel supervised classifiers have indeed increasingly sprung up with nearly perfect detection rate, compared with the conventional supervised mechanism. The goal of this paper is to investigate a robust multi-classifier for dealing with one of the image forensic problems, referred to as source camera identification. The main contributions of this paper are threefold: (1) by mainly analyzing the image features characterizing different source camera models, we design an improved architecture of CNN for adaptively and automatically extracting characteristics, instead of hand-crafted extraction; (2) the proposed efficient CNN-based multi-classifier is capable of simultaneously classifying the tested images acquired by a large scale of different camera models, instead of utilizing a binary classifier; and (3) numerical experiments show that our proposed multi-classifier can effectively classify different camera models while achieving an average accuracy of nearly 100% relying on majority voting, which indeed outperforms some prior arts; meanwhile, its robustness has been verified by considering that the images are attacked by post-processing such as JPEG compression and noise adding.

42. Graph Signal Processing: Overview, Challenges, and Applications
Research in graph signal processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper, we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering, or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.

43. Statistical Iterative CBCT Reconstruction Based on Neural Network
Cone-beam computed tomography (CBCT) plays an important role in radiation therapy. Statistical iterative reconstruction (SIR) algorithms with specially designed penalty terms provide good performance for low-dose CBCT imaging. Among others, the total variation (TV) penalty is the current state-of-the-art in removing noise and preserving edges, but one of its well-known limitations is its staircase effect. Recently, various penalty terms with higher-order differential operators were proposed to replace the TV penalty to avoid the staircase effect, at the cost of slightly blurring object edges. We developed a novel SIR algorithm using a neural network for CBCT reconstruction. We used a data-driven method to learn the “potential regularization term” rather than designing a penalty term manually. This approach converts the problem of designing a penalty term in the traditional statistical iterative framework into designing and training a suitable neural network for CBCT reconstruction. We propose using transfer learning to overcome the data deficiency problem, together with an iterative deblurring approach specially designed for the CBCT iterative reconstruction process, during which the noise level and resolution of the reconstructed images may change. Through experiments conducted on two physical phantoms, two simulated digital phantoms, and patient data, we demonstrate the excellent performance of the proposed network-based SIR for CBCT reconstruction, both visually and quantitatively. Our proposed method can overcome the staircase effect, preserve both edges and regions with smooth intensity transition, and provide reconstruction results at high resolution and low noise level.

44. Low-Noise Readout Integrated Circuit for Terahertz Array Detector
In the field of terahertz (THz) imaging applications, using a 0.18-μm CMOS process, a 1 × 16 low-noise readout integrated circuit (ROIC) is developed for a 1 × 64 Nb5N6 microbolometer array detector. This circuit consists of a digitally programmable current digital-to-analog converter and an amplifier module, which are responsible for biasing the microbolometer and amplifying its output signals with minimum added noise, respectively. Test results show that the ROIC achieves an average gain of ~47 dB and a voltage noise spectral density of ~9.34 nV/√Hz at 10 kHz, which meet the requirements for the THz array detector. Moreover, the responsivity of the Nb5N6 microbolometer detector is -580 V/W, and the corresponding noise equivalent power is 17 pW/√Hz. Together with the ROIC, the 1 × 64 Nb5N6 microbolometer array detector is preliminarily used for THz imaging applications. The imaging results prove that the ROIC can be used with the detector to develop an efficient and low-cost THz imaging system.

46A Robust Image Watermarking Technique with an Optimal DCT-Psycho-visual Threshold
This paper presents a reliable digital watermarking technique that provides high imperceptibility and robustness for copyright protection using an optimal discrete cosine transform (DCT) psychovisual threshold. The embedding process in this watermarking technique utilizes certain frequency regions of the DCT such that insertion of the watermark bits causes the least image distortion. Thus, the optimal psychovisual threshold is determined to embed the watermark in the host image with the best image quality. During insertion, the watermark bits are not written directly into the frequency coefficients; rather, the selected coefficients are modified according to a set of rules to construct the watermarked image. The embedding frequencies are determined using a modified entropy measure that identifies large redundant areas. Furthermore, the watermark is scrambled before embedding to provide additional security. To verify the proposed technique, it is tested under several signal processing and geometric attacks. The experimental results show that our technique achieves higher invisibility and robustness than existing schemes. The watermark extraction produces high image quality after different types of attacks.
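The sketch below shows the general block-DCT embedding idea in simplified form: one watermark bit per 8 × 8 block is embedded by nudging a mid-frequency coefficient with a fixed strength alpha. The coefficient position, the additive rule, and alpha are illustrative assumptions; the paper's psychovisual-threshold selection and modification rules are more elaborate.

import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix, built directly so only NumPy is needed."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def embed(image, bits, alpha=8.0):
    """Embed one bit per 8x8 block by modifying a mid-frequency DCT coefficient."""
    C = dct_matrix()
    out = image.astype(float).copy()
    h, w = image.shape
    idx = 0
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if idx >= len(bits):
                return out
            block = out[r:r+8, c:c+8]
            B = C @ block @ C.T                        # forward 2-D DCT
            B[3, 4] += alpha if bits[idx] else -alpha  # nudge a mid-band coefficient
            out[r:r+8, c:c+8] = C.T @ B @ C            # inverse 2-D DCT
            idx += 1
    return out

# Toy usage on a random "host image" with a 4-bit watermark.
host = np.random.randint(0, 256, (16, 16))
watermarked = embed(host, [1, 0, 1, 1])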

47A Survey of Image-Based Techniques for Hair Modeling
With the tremendous performance increase of today's graphics technologies, visual details of digital humans in games, online virtual worlds, and virtual reality applications are becoming significantly more demanding. Hair is a vital component of a person's identity and can provide strong cues about age, background, and even personality. More and more researchers focus on hair modeling in the fields of computer graphics and virtual reality. Traditional methods rely on physics-based simulation driven by many parameters; the computation is expensive, and the construction process is non-intuitive and difficult to control. Conversely, image-based methods have the advantages of fast modeling and high fidelity. This paper surveys the state of the art in the major topics of image-based techniques for hair modeling, including single-view hair modeling, static hair modeling from multiple images, video-based dynamic hair modeling, and the editing and reuse of hair modeling results. We first summarize the single-view approaches, which can be divided into orientation-field-based and data-driven methods. The static methods from multiple images and dynamic methods are then reviewed in Sections III and IV. In Section V, we also review the editing and reuse of hair modeling results. Future development trends and challenges of image-based methods are discussed at the end.

48Compressive sensing unmixing algorithm for breast cancer detection
In this paper, we describe a novel unmixing algorithm for detecting breast cancer. In this approach, the breast tissue is separated into three components, low water content (LWC), high water content (HWC), and cancerous tissues, and the goal of the optimization procedure is to recover the mixture proportions for each component. By utilizing this approach in a hybrid DBT / NRI system, the unmixing reconstruction process can be posed as a sparse recovery problem, such that compressive sensing (CS) techniques can be employed. A numerical analysis is performed, which demonstrates that cancerous lesions can be detected from their mixture proportion under the appropriate conditions.
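The sketch below poses the unmixing step as a generic sparse-recovery problem and solves it with plain iterative soft thresholding (ISTA). The dictionary of tissue signatures (LWC, HWC, cancerous), the measurement vector, and all parameters are toy stand-ins rather than the hybrid DBT/NRI model used in the paper.

import numpy as np

def ista(D, y, lam=0.05, step=None, n_iters=500):
    """Minimize 0.5*||D a - y||^2 + lam*||a||_1 over mixture proportions a."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2           # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        g = a - step * D.T @ (D @ a - y)                 # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return a

# Toy example: 3 tissue signatures, sparse true proportions.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 3))          # columns: LWC, HWC, cancerous signatures
a_true = np.array([0.7, 0.0, 0.3])        # mostly LWC plus a cancerous component
y = D @ a_true + 0.01 * rng.standard_normal(20)
print(np.round(ista(D, y), 3))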

49 Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling
RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in “ultrafast” ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure, since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Its advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease data volume significantly (by a factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist sampling were compared in vitro and in vivo. The estimation errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced number of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
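The sketch below illustrates the bandpass-sampling principle on a synthetic RF pulse: a 5 MHz pulse occupying roughly 4-6 MHz is sampled at 7 MHz, well below the conventional 4x-carrier rate, and its spectrum folds to a predictable lower frequency without self-overlap. The carrier, bandwidth, and sampling rates are illustrative assumptions, not the paper's configurations.

import numpy as np

fc, bw = 5e6, 2e6          # carrier and bandwidth (band roughly 4-6 MHz)
fs_sub = 7e6               # bandpass sampling rate, far below the usual 4*fc = 20 MHz

def rf_pulse(t):
    """Gaussian-windowed RF pulse centred on the carrier."""
    return np.exp(-0.5 * ((t - 2e-6) / 0.4e-6) ** 2) * np.cos(2 * np.pi * fc * t)

t_sub = np.arange(0, 4e-6, 1 / fs_sub)
x_sub = rf_pulse(t_sub)

# The band aliases down without overlap; the carrier appears at |fc - fs_sub| = 2 MHz.
spectrum = np.abs(np.fft.rfft(x_sub))
freqs = np.fft.rfftfreq(len(x_sub), 1 / fs_sub)
print(f"peak at ~{freqs[np.argmax(spectrum)] / 1e6:.1f} MHz "
      f"(expected {(fs_sub - fc) / 1e6:.1f} MHz)")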

50 Secure and Robust Digital Image Watermarking Using Coefficient Differencing and Chaotic Encryption
This paper presents a chaotic encryption-based blind digital image watermarking technique applicable to both grayscale and color images. The discrete cosine transform (DCT) is applied before embedding the watermark in the host image. The host image is divided into 8 × 8 nonoverlapping blocks prior to DCT application, and the watermark bit is embedded by modifying the difference between DCT coefficients of adjacent blocks. The Arnold transform is used in addition to chaotic encryption to add a double layer of security to the watermark. Three different variants of the proposed algorithm have been tested and analyzed. The simulation results show that the proposed scheme is robust to most common image processing operations, such as JPEG compression, sharpening, cropping, and median filtering. To validate the efficiency of the proposed technique, the simulation results are compared with certain state-of-the-art techniques. The comparison results illustrate that the proposed scheme performs better in terms of robustness, security, and imperceptibility. Given the merits of the proposed scheme, it can be used in applications like e-healthcare and telemedicine to robustly hide electronic health records in medical images.
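The snippet below sketches the chaotic-encryption layer in isolation: a logistic map seeded by a secret key produces a binary keystream that is XORed with the watermark bits before embedding and again after extraction. The map parameters and the thresholding rule are illustrative assumptions, not the exact scheme of the paper.

import numpy as np

def logistic_keystream(n_bits, x0=0.3141, r=3.99):
    """Iterate x <- r*x*(1-x) and threshold at 0.5 to obtain a chaotic bit stream.
    x0 acts as the secret key; r = 3.99 keeps the map in its chaotic regime."""
    bits, x = [], x0
    for _ in range(n_bits):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return np.array(bits, dtype=np.uint8)

watermark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
key = logistic_keystream(len(watermark))
encrypted = watermark ^ key   # these bits would be embedded via coefficient differencing
recovered = encrypted ^ key   # the same key stream decrypts at extraction
assert np.array_equal(recovered, watermark)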




Topic Highlights



Digital Image Processing Projects

As an engineering student, completing a project in your final year is a must to earn your degree. Digital image processing is one of the best platforms to give a shot, because the discipline is easy to understand. Elysium Pro ECE Final Year Projects gives you better ideas in this field.

Elysium Pro ECE Final Year Project

DIP is nothing but the use of computer algorithms to act on images digitally, so that information can be extracted from an image for further use. Nowadays, many techniques incorporate or are impacted by DIP. Some of the common applications are in the medical stream, color and video processing, remote sensing, and transmission and encoding processes.

