FLANN Based Method for Extraction of Hard Exudates

DOI: 10.17577/IJERTV3IS111080

Udaya Bhaskar¹, E. Pranay Kumar¹ and Dr. N. B. Puhan²

¹Student, School of Electrical Sciences, IIT Bhubaneswar

²Assistant Professor, School of Electrical Sciences, IIT Bhubaneswar

Abstract: Hard exudates (HEs) in retinal fundus images are among the most prevalent early signs of Diabetic Retinopathy (DR) [1]. Diabetic retinopathy is one of the main causes of vision loss, and its prevalence keeps rising. The clinical examination of HEs is therefore essential to the early diagnosis and treatment of diabetic retinopathy. This paper illustrates a classification-based approach to detecting HEs. Classification performance was compared between Multi-Layered Perceptron (MLP) [11], Radial Basis Function (RBF) [11] and Functional Link Artificial Neural Network (FLANN) classifiers; the best classification performance was observed for the FLANN classifier. A GUI package was also developed.

Keywords: Diabetic Retinopathy; Exudates Detection; Functional Link Artificial Neural Network (FLANN); Image Processing

  1. INTRODUCTION

    Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world. It is a complication of diabetes and the most common cause of blindness in adults. Due to the increasing number of diabetic patients, the number of people affected by DR is expected to increase. Early diagnosis and treatment have been shown to prevent visual loss and blindness. Digital retinal images obtained with a fundus camera are used to diagnose diabetic retinopathy.

    Hard exudates are known as a specific marker of diabetic retinopathy (DR). Thus, the clinical examination of HEs is essential to the early diagnosis and treatment of diabetic retinopathy. Traditionally, HEs are detected manually, which is laborious and time-consuming work. As the DR population expands, the workload of ophthalmologists increases notably and the shortfall of manual checking becomes serious; it has prevented many patients from receiving effective treatment in time.

    Therefore, automatic detection of HEs is an important task in computer-aided diagnosis of DR. The main difficulties in HE detection are the interference of similarly colored objects, such as cotton wool spots (CWS), the optic disc (OD) and the circular scars left after pan-retinal photocoagulation (PRP) treatment for DR, and the noise caused by normal macular reflection [14]. To address these problems, we present an automated method for exudate detection in color retinal images. The method has three main stages: (a) extraction of the exudate candidate regions by luminosity and contrast normalization, (b) extraction of significant features, and (c) HE classification by means of three different classifiers: Multi-Layered Perceptron (MLP), Radial Basis Function (RBF) and Functional Link Artificial Neural Network (FLANN).

  2. LITERATURE

    The automated techniques for hard exudate detection presented in the literature can be roughly divided into four categories: (1) thresholding-based, (2) region-growing-based, (3) morphological-based and (4) classification-based methods [4], [5].

    The classification-based approach is discussed in the course of this paper. These approaches have proved to give better performance and also leave ample scope for applying existing evolutionary algorithms.

    The classification-based methods can be broadly divided into three steps: (A) candidate region extraction, (B) feature extraction and (C) classification. The following sections discuss the various methods employed in the literature for each step.

    1. Candidate region extraction

      This is the first step of the extraction process, in which all pixels likely to be exudate pixels are extracted. All the yellowish objects in the image are coarsely separated; these are the hard exudate candidate regions, which are processed further for classification. The following candidate region extraction methods have been employed in the literature.

      1. Luminosity and contrast normalization [9]

      2. K-means clustering [15]

      3. Fuzzy C-means Clustering [13]

      4. LoG Transformation on different Intensity Bands [5]

      5. Stationary Wavelet Transform [5]

      The performance measures for each candidate region extraction method are shown in Table 1. In this study, we have employed luminosity and contrast normalization due to its better estimation of exudate pixels.

    2. Feature Extraction

      Feature analysis for classification is based on the discriminatory power of features. Traditional feature selection methods are classifier-driven, i.e., they rely on the results that a particular classifier obtains for different subsets of features. Classifier-independent feature analysis, on the other hand, is more suitable for medical image analysis, as it gathers information about the structure of the data rather than the requirements of a particular classifier. Misclassification probability tends to increase with the number of features, and the classifier structure becomes more difficult to interpret. Feature selection tries to avoid these problems by choosing the subset of the extracted features that is most useful for a specific problem.

      Logistic regression is a classifier-independent method commonly used for feature selection. Following the literature, the 13 features below [10], [11] were selected, as they have the highest discriminatory power for separating hard exudates from other yellowish non-exudate objects (a computation sketch follows the list).

      1. Mean blue value inside the region

      2. Mean green value inside the region

      3. Standard deviation of the red channel inside the region

      4. Standard deviation of the blue channel inside the region

      5. Mean green value around the region

      6. Mean blue value around the region

      7. Region centroid in the blue channel: x

      8. Region centroid in the blue channel: y

      9. Color difference of the red channel

      10. Color difference of the green channel

      11. Color difference of the blue channel

      12. Region compactness

      13. Homogeneity
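      To make the feature definitions concrete, here is a minimal sketch of how most of these per-region features could be computed with NumPy and SciPy. The helper name, the dilation radius used for the "around the region" band, the crude perimeter estimate and the omission of the GLCM-based homogeneity feature are all our own illustrative assumptions, not part of the original method.

```python
import numpy as np
from scipy import ndimage

def region_features(rgb, mask, dilate_px=5):
    """Sketch of the per-region features; rgb is an HxWx3 float image,
    mask is a boolean mask of one candidate region. dilate_px is an
    assumed radius for the 'around the region' band."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Features 1-4: channel statistics inside the region.
    mean_blue_in = b[mask].mean()
    mean_green_in = g[mask].mean()
    std_red_in = r[mask].std()
    std_blue_in = b[mask].std()
    # Features 5-6: statistics in a band around the region.
    band = ndimage.binary_dilation(mask, iterations=dilate_px) & ~mask
    mean_green_out = g[band].mean()
    mean_blue_out = b[band].mean()
    # Features 7-8: region centroid (row, col).
    cy, cx = ndimage.center_of_mass(mask)
    # Features 9-11: color difference inside vs. around the region.
    diff_red = r[mask].mean() - r[band].mean()
    diff_green = mean_green_in - mean_green_out
    diff_blue = mean_blue_in - mean_blue_out
    # Feature 12: compactness = perimeter^2 / (4*pi*area); the perimeter
    # is estimated crudely as the count of boundary pixels.
    area = mask.sum()
    perimeter = mask.sum() - ndimage.binary_erosion(mask).sum()
    compactness = perimeter ** 2 / (4 * np.pi * area)
    # Feature 13 (homogeneity) would typically come from a grey-level
    # co-occurrence matrix and is omitted from this sketch.
    return [mean_blue_in, mean_green_in, std_red_in, std_blue_in,
            mean_green_out, mean_blue_out, cx, cy,
            diff_red, diff_green, diff_blue, compactness]
```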

    3. Classification

    The obtained candidate regions are then processed, using the information in the feature vector, to label each region as HE or non-HE; this process is called classification and is performed by a classifier. The most popular classifiers are logistic regression and neural networks. Among the neural networks, MLP [11] and RBF [11] have given better performance than the other networks employed in the literature. In this study, we have implemented a different classifier known as the Functional Link Artificial Neural Network (FLANN) [18]. Table 1 shows the performance measures of the methods employed in the literature.

    TABLE 1. LITERATURE COMPARISON (image-based criterion)

    Paper                          | Candidate Region Extraction           | Classifier             | SE (%) | SP (%) | AC (%)
    María García et al. [11]       | Luminosity and Contrast Normalization | MLANN                  | 100    | 87.5   | 93.8
    A. Osareh et al. [13]          | Fuzzy C-means Clustering              | NN                     | 93     | 94.1   | –
    Xiang Chen et al. [14]         | Morphological Reconstruction          | SVM                    | 94.7   | 100    | 90
    G. S. Annie Vimala et al. [15] | K-means Clustering                    | SVM                    | 96     | –      | –
    B. Harangi et al. [16]         | Active Contours                       | Naïve Bayes Classifier | 75     | 75     | –

  3. PROPOSED ALGORITHM

    The proposed algorithm pre-processes the image using Luminosity and Contrast Normalization (LCN), extracts and analyzes the features, and finally classifies the candidate regions with the FLANN classifier.

    1. Luminosity and Contrast Normalization

      Due to the inherent properties of retinal image acquisition, there is large variability within and between images. As a consequence, it becomes harder to distinguish retinal features and lesions in some areas. A preprocessing step is necessary to normalize the images and to increase the contrast between hard exudates and the background.

      This pre-processing step is further divided into three stages [9]: (a) modelling, (b) extraction of background pixels and (c) estimation of luminosity and contrast drifts.

      a) Modelling

        The luminosity and contrast correction system is based on the following model of the observed fundus image I:

        I = f(I_o) = f(I_B + I_F)    (1)

        where I_o is the original image, I_B is the original background image, I_F is the original foreground image, and the function f(·) represents the acquisition transformation.

        The background image I_B is the ideal image of a retinal fundus free of any vascular structure or visible lesion. The vascular structures, the optic disc and any visible lesion are modelled as an additive foreground term I_F on top of the background image. It is rather difficult to express properties of I_F, due to the wide variability of retinal features and lesions that can be found in a fundus image. The only assumption made regarding I_F is that the set of pixels not covered by vascular structures, optic disc or lesions, called the background set B, is not empty.

        I_B can be statistically modelled as

        I_B(x, y) ~ N(μ_B, σ_B)    (2)

        i.e., as a white random field with mean value μ_B, representing the ideally uniform luminosity value, and standard deviation σ_B, representing the natural variability of retinal fundus pigmentation. This model can be further simplified by imposing μ_B = 0 and σ_B = 1; this latter assumption is acceptable, as any bias or amplification can be arbitrarily lumped into the luminosity and contrast drifts introduced by the acquisition function.

        The non-uniform contrast and luminosity within an image can be described as

        I(x, y) = f(I_o(x, y)) = C(x, y) I_o(x, y) + L(x, y)    (3)

        where C(x, y) is the contrast drift factor and L(x, y) is the luminosity drift term. The recovery of an estimate Î_o of the original image I_o is based on the estimation of Ĉ and L̂ and the compensation of the observed image as

        Î_o(x, y) = (I(x, y) − L̂(x, y)) / Ĉ(x, y)    (4)

        Estimation of the drift images can be achieved by considering their effects on the background component of the observed image. Restricting the analysis to the background set B, the expression simplifies to

        I(x, y) = C(x, y) I_B(x, y) + L(x, y),  (x, y) ∈ B    (5)

        Using the statistical model of I_B, we obtain the statistical description of the background pixels of the observed image:

        I(x, y) ~ N(L(x, y), C(x, y)),  (x, y) ∈ B    (6)

        In summary, the proposed method derives the estimates L̂ and Ĉ from the background component of the observed image by estimating the mean and standard deviation in Eq. (6), and uses them to recover an estimate Î_o of the original image I_o.

        b) Extraction of Background Pixels

        The estimation of L̂ and Ĉ requires the preliminary extraction of the background set B. To achieve this goal, the following assumptions have been made: for any pixel (x, y) of the image, in a neighborhood N of appropriate size s:

        1. Both L and C are constant;

        2. At least 50% of the pixels are background pixels;

        3. All background pixels have intensity values significantly different from those of foreground pixels.

        The first assumption comes directly from the model hypothesis that the spectral content of L and C is concentrated in the low frequencies, whereas the second indicates that a sufficient portion of background area must be present in each neighborhood N. The third assumption allows us to determine whether a pixel belongs to the background simply by examining its intensity, i.e., any two pixels in N having the same intensity belong either both to the background or both to the foreground.

        For each pixel (x, y) in the image, the mean μ_N(x, y) and standard deviation σ_N(x, y) of the statistical distribution of intensities in N are estimated, using the sample mean and the sample standard deviation as estimators. Pixel (x, y) is considered to belong to the background set B if its intensity is close to the mean intensity in N. This is mathematically expressed by saying that (x, y) belongs to B if its Mahalanobis distance from μ_N, defined as

        d(x, y) = |I(x, y) − μ_N(x, y)| / σ_N(x, y)    (7)

        is lower than a given threshold t. In order to reduce the computational burden, the image was partitioned into a tessellation of squares of side s; the value s = 200 pixels was chosen empirically.

        c) Estimation of Luminosity and Contrast Drifts

        From the background set, the estimates L̂ and Ĉ are derived for each pixel. Under the assumption that the background pixel intensities in each neighborhood are independent, identically distributed random variables, L̂ and Ĉ can be derived for each pixel by estimating the mean value and standard deviation of this distribution in N.

        This approach has to cope with the same computational problems mentioned in the previous section. Moreover, it deals with a sparse set of pixels (background pixels are only a subset of all image pixels), which renders the application of filtering more difficult. A square-processing solution similar to the one presented in the previous section has therefore been adopted: the image was divided into the same tessellation of squares S_i, and from the set of background pixels B in each S_i, estimates of L̂ and Ĉ of the intensity values were obtained using the sample mean and standard deviation estimators.
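        Putting the three stages together, a compact Python sketch of the normalization is given below. The per-square statistics, the Mahalanobis test of Eq. (7), the drift estimation from background pixels and the compensation of Eq. (4) follow the description above; the Gaussian smoothing of the blockwise estimates, the default threshold and all names are our own illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from scipy import ndimage

def normalize_luminosity_contrast(img, s=200, t=1.0):
    """Luminosity/contrast normalization sketch following Eqs. (2)-(7).

    img : 2-D float image (e.g. the green channel); s : square side in
    pixels; t : Mahalanobis threshold. Both values are illustrative.
    """
    h, w = img.shape
    mu = np.zeros_like(img)   # per-square sample mean (mu_N)
    sd = np.ones_like(img)    # per-square sample std (sigma_N)
    for y0 in range(0, h, s):
        for x0 in range(0, w, s):
            sq = img[y0:y0+s, x0:x0+s]
            mu[y0:y0+s, x0:x0+s] = sq.mean()
            sd[y0:y0+s, x0:x0+s] = sq.std() + 1e-6
    # Eq. (7): background set = pixels with small Mahalanobis distance.
    background = np.abs(img - mu) / sd < t
    # Eqs. (5)-(6): estimate drifts from background pixels in each square.
    L = np.zeros_like(img)    # luminosity drift estimate
    C = np.ones_like(img)     # contrast drift estimate
    for y0 in range(0, h, s):
        for x0 in range(0, w, s):
            sq = img[y0:y0+s, x0:x0+s]
            bg = background[y0:y0+s, x0:x0+s]
            if bg.any():
                L[y0:y0+s, x0:x0+s] = sq[bg].mean()
                C[y0:y0+s, x0:x0+s] = sq[bg].std() + 1e-6
    # Smooth the blockwise estimates into slowly varying drift fields.
    L = ndimage.gaussian_filter(L, sigma=s / 2)
    C = ndimage.gaussian_filter(C, sigma=s / 2)
    # Eq. (4): compensate the observed image.
    return (img - L) / C
```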

    2. Classification

    The FLANN model, first proposed by Pao [18], is a single-layer neural network with a single neuron at the output. Owing to its single-layer structure, the FLANN architecture carries a lower computational load and converges faster than traditional neural networks. Unlike multiple regression, which can capture only a linear relationship between the inputs and the output, the FLANN model can capture non-linear relationships. In this study, only the trigonometric expansion is employed; a sketch follows.
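    As an illustration of the trigonometric functional expansion, a minimal FLANN sketch in Python is given below. The expansion order, learning rate, sigmoid output and plain gradient-descent training are our assumptions for the sketch; the paper does not specify its exact training setup.

```python
import numpy as np

def trig_expand(X, order=2):
    """Functional expansion: [x, sin(k*pi*x), cos(k*pi*x)] for k = 1..order."""
    parts = [X]
    for k in range(1, order + 1):
        parts.append(np.sin(k * np.pi * X))
        parts.append(np.cos(k * np.pi * X))
    return np.concatenate(parts, axis=-1)

class FLANN:
    """Single-layer functional link network with one sigmoid output neuron."""
    def __init__(self, n_features, order=2, lr=0.1):
        self.order, self.lr = order, lr
        self.w = np.zeros(n_features * (2 * order + 1) + 1)  # weights + bias

    def _phi(self, X):
        Z = trig_expand(X, self.order)
        return np.hstack([Z, np.ones((Z.shape[0], 1))])      # append bias

    def predict_proba(self, X):
        return 1.0 / (1.0 + np.exp(-self._phi(X) @ self.w))

    def fit(self, X, y, epochs=200):
        Phi = self._phi(X)
        for _ in range(epochs):
            err = y - 1.0 / (1.0 + np.exp(-Phi @ self.w))
            self.w += self.lr * Phi.T @ err / len(y)         # gradient step
        return self
```

    Inputs are assumed to be scaled to [0, 1] before expansion so that the sine and cosine terms span a single period; training then reduces to updating one weight vector, which is what gives the FLANN its low computational load.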

  4. PERFORMANCE MEASURES

    In a pixel-based segmentation process, the outcome is a pixel-level classification: each pixel is classified either as exudate or as background. The correct classifications are the true positives (TP), pixels identified as exudate in both the ground truth and the segmented image, and the true negatives (TN), pixels classified as non-exudate in both. The two misclassifications are the false negatives (FN), pixels classified as non-exudate in the segmented image but as exudate in the ground truth, and the false positives (FP), pixels marked as exudate in the segmented image but as non-exudate in the ground truth.

    Accordingly, the pixel-based performance measures used are (1) accuracy (AC), defined as the ratio of the total number of correctly classified pixels (the sum of true positives and true negatives) to the number of pixels in the image field of view, (2) sensitivity (SE), which reflects the ability of the algorithm to detect exudate pixels, and (3) specificity (SP), the ability to detect non-exudate pixels.
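    These measures follow directly from the four confusion counts; a short sketch, assuming boolean masks for the segmentation, the ground truth and the field of view (the function name is ours):

```python
import numpy as np

def pixel_measures(seg, gt, fov):
    """Sensitivity, specificity and accuracy from boolean masks.

    seg, gt, fov : boolean arrays (segmentation, ground truth, field of view).
    """
    seg, gt = seg[fov], gt[fov]
    tp = np.sum(seg & gt)
    tn = np.sum(~seg & ~gt)
    fp = np.sum(seg & ~gt)
    fn = np.sum(~seg & gt)
    se = tp / (tp + fn)          # sensitivity
    sp = tn / (tn + fp)          # specificity
    ac = (tp + tn) / seg.size    # accuracy over the field of view
    return se, sp, ac
```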

    In the exudate segmentation process, the outcome is a region-level classification result. Using pixel-based performance measures for region-level classification raises some problems: these measures do not indicate anything about the size of a misclassified region, i.e., a region classified as a false positive counts as one false positive irrespective of its size. This necessitates new performance measures. We have used the following, in addition to the pixel-based measures (a sketch of their computation follows the list).

    1. Maximum size of false positives

    2. Maximum size of false negatives
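    Both measures can be obtained by labelling the connected components of the misclassified pixels and taking the largest component; a sketch using scipy.ndimage (the function name is ours):

```python
from scipy import ndimage

def max_misclassified_size(seg, gt):
    """Maximum sizes (in pixels) of false-positive and false-negative regions."""
    def largest_component(mask):
        labels, n = ndimage.label(mask)
        return 0 if n == 0 else ndimage.sum(mask, labels, range(1, n + 1)).max()
    max_fp = largest_component(seg & ~gt)   # marked exudate, actually background
    max_fn = largest_component(~seg & gt)   # missed exudate
    return int(max_fp), int(max_fn)
```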

  5. RESULTS

    We have used the publicly available DIARETDB1 and DIARETDB0 databases [19], [20]. The databases provide rough ground truth; to obtain a more accurate classifier, we manually labeled hard exudates at the region level using the rough ground truth as a reference. For training the classifiers, we used 1290 exudate regions and 800 non-exudate regions, and the minimum size of an exudate region was assumed to be 10 pixels.

    Candidate regions are selected by thresholding the image at the intensity on the right tail of the histogram where the bin count falls to 10% of the peak value. The candidate regions obtained after thresholding are shown in Figure 1(f). The obtained candidate regions then pass to the second stage of the segmentation process, i.e., classification. The classified candidate regions are shown in Figure 2. A sketch of the thresholding rule follows.
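    This is our reading of the thresholding rule (walk right from the histogram peak until the bin count drops to 10% of the peak); the bin count and the function name are illustrative assumptions:

```python
import numpy as np

def candidate_threshold(img, n_bins=256, frac=0.10):
    """Threshold at the intensity on the right tail of the histogram
    where the count first drops to `frac` of the peak count."""
    hist, edges = np.histogram(img, bins=n_bins)
    peak = hist.argmax()
    idx = peak
    # Walk right from the peak until the count falls below frac * peak.
    while idx < n_bins - 1 and hist[idx] > frac * hist[peak]:
        idx += 1
    thr = edges[idx]
    return img >= thr            # boolean mask of candidate regions
```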

    The performance measures obtained by implementing the above methods on the DIARETDB databases are given in Table 2.

    Figure 1: Image-05 of DIARETDB1. (a) Green channel, (b) background pixels, (c) luminosity drift, (d) contrast drift, (e) intensity histogram, (f) candidate regions.

    Figure 2: Classifier Output

    TABLE 2. PERFORMANCE COMPARISON

                                                              | Image-Based Criterion    | Lesion-Based Criterion
    Paper                    | Method    | Hidden Neurons | SE (%) | SP (%) | AC (%) | SE (%) | PPV (%)
    María García et al. [11] | LCN-MLANN | 35             | 99     | 89     | 94.2   | 91.7   | 88
    María García et al. [11] | LCN-RBF   | 90             | 97.6   | 90     | 93.1   | 88     | 89
    This paper               | LCN-FLANN | –              | 99     | 89     | 94.3   | 91.7   | 90

    Training was done using 1300 exudate regions and 800 non-exudate regions, and the minimum size of an exudate region was assumed to be 10 pixels for the best results.

  6. GRAPHICAL USER INTERFACE

    We have developed a Graphical User Interface (GUI) package in MATLAB to segment the exudate regions of any input image using the three classifiers that we have studied. For a given input image, the GUI displays both image-based and lesion-based performance measures, together with image statistics such as true negatives (TN) and true positives (TP). The GUI also allows the user to compute the input feature vector and geometric features for an individual exudate region using the Region of Interest (ROI) selection tool.

    1. Selection of Input image

      We select the input fundus image using the Browse Image pushbutton in the Input Selection panel. We then select the mask that removes the unwanted part of the image using the Browse Mask pushbutton. Using the Browse Ground Truth pushbutton, we select the ground truth for the input image as a reference against which to compare the exudate regions output by the classifier at the end of the segmentation process. After selecting the classifier in the Algorithm Selection panel, we start the process by pressing the Start pushbutton. This process is shown in Figure 3. The Reset button clears all the selections, and the Results pushbutton displays the performance measures and image statistics.

    2. Region of Interest (ROI) selection tool

    The Select Exudate Regions pushbutton calls the selection tool, which enables the user to select part of the image and evaluate the features of the ROI. The tool gives the user the freedom to make a selection of any shape and size.

    Figure 4 shows the User Interface (UI) for the selection. Once the selection of the ROI is complete, the feature values of the ROI are displayed in the UI screen shown in Figure 5(a). This window lets the user inspect the feature vector values and the geometric features of the selected ROI; the features displayed are the same ones used during the training of the classifiers. Figure 5 shows the UI for the ROI selection tool and its output feature values; the selected ROI is also displayed.

    Figure 3: Graphical User Interface main window

    Figure 4: Region Of Interest (ROI) selection window

    Figure 5: Properties of selected Region Of Interest (ROI)

  7. CONCLUSION AND FUTURE WORK

    This paper proposed an effective segmentation method to automatically extract hard exudates in color fundus images, which is significant for the early clinical diagnosis of DR. The experimental results showed that FLANN can detect HEs effectively and distinguish them accurately from other interferences when compared with the MLP and RBF classifiers. A GUI was developed that performs the segmentation of the exudates for a given input fundus image and enables the user to select an ROI and obtain its feature vector and regional features. In future work, we intend to extend the proposed method to detect CWS and build an integrated diagnosis system for DR.

  8. ACKNOWLEDGMENT

    The authors would like to thank Dr. N. B. Puhan for his valuable comments, which improved the presentation and research quality of this paper.

  9. REFERENCES

  1. Harihar Narasimha-Iyer, Ali Can, Badrinath Roysam, Charles V. Stewart, Howard L. Tanenbaum, Anna Majerovics, and Hanumant Singh. "Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy". IEEE Transactions on Biomedical Engineering, 53(6):1084–1098, June 2006.

  2. Xiaohui Zhang and O. Chutatape. "Detection and classification of bright lesions in colour fundus images". In Proceedings of the IEEE International Conference on Image Processing (ICIP), volume 1, pages 139–142, October 2004.

  3. Alireza Osareh, Majid Mirmehdi, Barry Thomas, and Richard Markham. "Classification and localization of diabetic-related eye disease". In Proceedings of the 7th European Conference on Computer Vision (ECCV), pages 502–516, 2002.

  4. Huan Wang, Wynne Hsu, Kheng Guan Goh, and Mong Li Lee. "An effective approach to detect lesions in color retinal images". In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 181–186, 2000.

  5. T. Walter, J.-C. Klein, P. Massin, and A. Erginay. "A contribution of image processing to the diagnosis of diabetic retinopathy – detection of exudates in color fundus images of the human retina". IEEE Transactions on Medical Imaging, 21:1236–1243, October 2002.

  6. C. I. Sánchez, R. Hornero, M. I. López, and J. Poza. "Retinal image analysis to detect and quantify lesions associated with diabetic retinopathy". In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), pages 1624–1627, San Francisco, CA, USA, September 2004.

  7. M. Niemeijer, M. D. Abràmoff, and B. van Ginneken. "Automatic detection of the presence of bright lesions in color fundus photographs". In Proceedings of IFMBE, the 3rd European Medical and Biological Engineering Conference, volume 11, pages 1823–2839, Prague, Czech Republic, November 2005.

  8. C. Sinthanayothin, J. F. Boyce, T. H. Williamson, E. Mensah, S. Lal, and D. Usher. "Automated detection of diabetic retinopathy on digital fundus images". Diabetic Medicine, 19:105–112, 2002.

  9. Marco Foracchia, Enrico Grisan, and Alfredo Ruggeri. "Luminosity and contrast normalization in retinal images". Medical Image Analysis, 9(3), June 2005.

  10. M. García, et al. "Comparison of logistic regression and neural network classifiers in the detection of hard exudates in retinal images". Conf Proc IEEE Eng Med Biol Soc, 2013:5891–5894. doi:10.1109/EMBC.2013.6610892.

  11. María García, Roberto Hornero, Clara I. Sánchez, María I. López, and Ana Díez. "Feature Extraction and Selection for the Automatic Detection of Hard Exudates in Retinal Images". In Proceedings of the 29th Annual International Conference of the IEEE EMBS, Cité Internationale, Lyon, France, August 23–26, 2007.

  12. Huiqi Li and Opas Chutatape. "Automated feature extraction in color retinal images by a model based approach". IEEE Transactions on Biomedical Engineering, 51(2):246–254, February 2004.

  13. A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham. "Automated identification of diabetic retinal exudates in digital colour images". British Journal of Ophthalmology, 87:1220–1223, 2003.

  14. Xiang Chen, et al. "A novel method for automatic hard exudates detection in color retinal images". In Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, Xi'an, July 15–17, 2012.

  15. G. S. Annie Grace Vimala, et al. "Automatic Detection of Optic Disk and Exudate from Retinal Images Using Clustering Algorithm". In Proceedings of the 7th International Conference on Intelligent Systems and Control (ISCO 2013).

  16. B. Harangi, et al. "Automatic Exudate Detection using Active Contour Model and Regionwise Classification". 978-1-4577-1787-1/12, ©2012 IEEE.

  17. Vesna Zeljkovic, et al. "Classification Algorithm of Retina Images of Diabetic Patients Based on Exudates Detection". 978-1-4673-2362-8/12, ©2012 IEEE.

  18. Y.-H. Pao. "Adaptive Pattern Recognition and Neural Networks". Reading, MA: Addison-Wesley, 1989.

  19. Tomi Kauppi, et al. "DIARETDB0: Evaluation Database and Methodology for Diabetic Retinopathy Algorithms".

  20. Tomi Kauppi, et al. "DIARETDB1: Evaluation Database and Methodology for Diabetic Retinopathy Algorithms".
