Segmentation Of Optic Disc And Macula In Retinal Images

DOI : 10.17577/IJERTV2IS4910




Gogila Devi. K#1, Vasanthi. S *2

# PG Student, K.S.Rangasamy College of Technology Tiruchengode, Namakkal, Tamil Nadu, India.

* Associate Professor, K.S.Rangasamy College of Technology

Tiruchengode, Namakkal, Tamil Nadu, India.

Abstract

Image segmentation plays a vital role in image analysis for the diagnosis of various retinopathic diseases. Manual examination of the optic disc is the standard clinical procedure for the detection of glaucoma and diabetic retinopathy. The proposed method uses a radial line operator to automatically locate and extract the optic disc (OD) from retinal fundus images. The radial line operator places multiple radial line segments on every pixel of the image, and the pixels of maximum variation along the radial line segments are taken to detect and segment the OD. The input retinal images are preprocessed before the circular transform is applied. The optic disc diameter and the distance from the optic disc to the macula are extracted as features and classified using SVM (Support Vector Machine) and ELM (Extreme Learning Machine).

Keywords: Optic disc, Macula, Segmentation, SVM, ELM

  1. Introduction

Digital photography of the retina is used as a screening tool for patients suffering from sight-threatening diseases such as diabetic retinopathy (DR) and glaucoma. An automatic screening system for retinal image diagnosis requires reliable and efficient detection of normal features such as the optic disc, blood vessels and fovea in the retinal images. The OD location also helps to build a retinal coordinate system that can be used to determine the position of other retinal abnormalities, such as exudates, drusen, and hemorrhages.

In the literature, relatively few works have reported on locating the optic disc, and most of them have not addressed the boundary of the optic disc. OD localization methods can be classified into two main categories: appearance-based methods and model-based methods. Appearance-based methods identify the location of the OD as the location of the brightest round object within the retinal image. The optic disc represents the beginning of the optic nerve and is the point where the axons of the retinal ganglion cells come together; it is also the entry point for the major blood vessels that supply the retina. Techniques that use this property of the OD include intensity thresholding [1], [2], highest average variation [3], matched spatial filtering [4], and principal component analysis [5]. For detection of the optic disc, the optic disc centre has previously been approximated as the centroid of the largest and brightest connected object in a binary retinal image obtained by thresholding the intensity channel. Reza et al. [6] used the watershed transformation for OD segmentation. In [7], the approach is based on considering the largest area of pixels having the highest gray level in the image. In [8], a method is stated that detects the location of the OD as the area of the image with the highest variation in brightness; since the optic disc often appears as a bright disc crossed by dark vessels, the variance in pixel brightness is highest there.

Some other methods are based on the anatomical property that all major retinal blood vessels radiate from the OD. In [9], matching of the expected directional pattern of the retinal blood vessels is used as the OD detection algorithm, and the retinal blood vessels are segmented using a simple and standard 2-D Gaussian matched filter. In [10], two methods are described and combined: the first computes the fuzzy convergence of the blood vessels and then applies hypothesis generation, while the second equalizes the illumination of the image's green plane and then applies hypothesis generation. The hypothesis generator returns either a location for the optic disc or no location at all.

The proposed radial line operator works for all kinds of retinal fundus images, including those with pathological lesions, image artifacts, exudates, haemorrhages, etc. The proposed work detects and segments the OD simultaneously, whereas other state-of-the-art methods use separate methods for OD detection and segmentation. The radial line operator is not limited to OD detection; it can also be used to detect other circular-shaped objects, such as blood vessels, in fundus images. For classification of the OD, a Support Vector Machine and an Extreme Learning Machine [11] are used. ELM has been used for cancer classification [12] and for extracting lesions from dermoscopic images based on their size and shape [13]. ELM is applied here as a new classifier for optic disc classification.

  2. Methodology

A set of 20 images was collected from Aarthy Eye Hospital, Karur. The fundus images were captured using a Carl Zeiss digital fundus camera with photographic angles of 20°, 30° and 50° and an image size of 2136 × 1538 pixels. The aim of this work is to detect and extract the exact boundary of the optic disc using a circular transform. The preprocessing steps are histogram equalization, conversion of the RGB (Red Green Blue) image into an RG (Red Green) intensity image, downsampling, median filtering, and reduction of the search space for optic disc detection. The radial lines operate on every pixel of the image and measure the variation along each line segment; the pixel with maximum variation along all line segments is taken into account for optic disc detection and segmentation.

    1. Preprocessing

The first step in preprocessing is histogram equalization, which distributes the contrast evenly over the image. This method usually increases the global contrast of the image, especially when the usable data of the image is represented by close contrast values. Through this adjustment the intensities are better distributed on the histogram, which allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values. The retinal input image in Fig. 1(a) is histogram-matched to the reference image in Fig. 1(b), and the matched image in Fig. 2 is obtained with globally distributed contrast over the image.

Fig. 1. (a) Input retinal image. (b) Input reference image

Fig. 2. Obtained matched image
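An illustrative Python sketch of this histogram-matching step is given below, assuming OpenCV and scikit-image are available; the file names are placeholders and not part of this work.

```python
import cv2
from skimage.exposure import match_histograms

# Hypothetical file names for the input image (Fig. 1(a)) and the reference
# image (Fig. 1(b)); any retinal fundus images can be substituted.
input_img = cv2.imread("retina_input.png")
reference = cv2.imread("retina_reference.png")

# Match the intensity distribution of the input image to the reference image,
# channel by channel, spreading the contrast over the full tonal range.
matched = match_histograms(input_img, reference, channel_axis=-1)
cv2.imwrite("retina_matched.png", matched.astype("uint8"))
```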

Fig. 3 shows the image histograms, which plot the number of pixels at each tonal value: the horizontal axis represents the tonal value, while the vertical axis represents the number of pixels at that particular tone. As the blue colour component of the retinal image contains only little information about the optic disc, it is eliminated from the RGB image. The intensity image is obtained by combining the red and green colour components of the image using the equation,

I = c·Ir + (1 − c)·Ig    (1)

where Ir represents the red colour component, Ig represents the green colour component of the image, and c is a constant that controls the relative weights of Ir and Ig.

Fig. 3. Histogram analysis of the input, reference and matched images

The retinal vessels appear stronger in Ig, so Ir is given more weight in order to suppress the green colour component. The constant c is chosen as 0.75 in the proposed preprocessing step. The individual red, green and blue colour component planes are shown in Fig. 4. The intensity-converted image is shown in Fig. 5(a).

        Fig. 4. Individual colour planes
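A minimal sketch of Eq. (1) is shown below, assuming OpenCV and NumPy; the function name is illustrative and the weight c = 0.75 is the value chosen above.

```python
import cv2
import numpy as np

def red_green_intensity(bgr_image, c=0.75):
    """Combine the red and green planes as I = c*Ir + (1 - c)*Ig (Eq. 1).

    The blue plane is discarded, and c = 0.75 weights the red plane more
    heavily so that the vessel-rich green plane is suppressed.
    """
    b, g, r = cv2.split(bgr_image)        # OpenCV stores channels as B, G, R
    intensity = c * r.astype(np.float32) + (1.0 - c) * g.astype(np.float32)
    return intensity.astype(np.uint8)
```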

The next step is downsampling of the image to reduce its size and the computational cost. Downsampling reduces the number of samples that represent the image. In this work, downsampling is done with a factor of 0.25. The downsampled image is shown in Fig. 5(b).

Fig. 5. (a) Intensity-converted image. (b) Downsampled image
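A possible implementation of this downsampling step is sketched below; the interpolation kernel is not specified in the paper, so INTER_AREA is an assumption, and the file name is a placeholder.

```python
import cv2

# Hypothetical file name; in practice this is the RG intensity image above.
intensity = cv2.imread("retina_intensity.png", cv2.IMREAD_GRAYSCALE)

# Downsample by a factor of 0.25 to reduce the size and the computational cost.
small = cv2.resize(intensity, None, fx=0.25, fy=0.25,
                   interpolation=cv2.INTER_AREA)
```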

The median filter is used to eliminate speckle noise as well as small image variations due to the retinal blood vessels. The main idea of the median filter is to run through the image pixel by pixel, replacing each value with the median of the neighbouring values. The pattern of neighbours is called the window or template, which slides over the entire signal or image, and the pixel at its centre is replaced by the median of all pixel values inside the window. The operation of a median filter is illustrated in Fig. 6. For every input image a binary template, like the one in Fig. 7(a), is created for that image in order to avoid complications. The window is applied to every pixel of the image: all pixel values inside the window are arranged in ascending order and the middle value of the sequence replaces the centre pixel. In this way the median filtering of the image is performed.

Fig. 6. Representation of the median filter

The obtained median-filtered image is free from speckle noise and low-frequency blood vessel variations, as shown in Fig. 7(b).

Fig. 7. (a) Binary template for Fig. 1(a). (b) Median-filtered image
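A sketch of the median filtering with a binary field-of-view template follows; the 5 × 5 window and the background threshold are assumptions, since the paper does not state them.

```python
import cv2
import numpy as np

def median_filter_fov(image, window=5, bg_threshold=10):
    """Median-filter an 8-bit intensity image inside a binary template.

    The template (cf. Fig. 7(a)) is obtained here by thresholding the dark
    background; the window slides over the image and each pixel is replaced
    by the median of the values inside the window.
    """
    fov = (image > bg_threshold).astype(np.uint8)   # crude field-of-view mask
    filtered = cv2.medianBlur(image, window)        # window x window median
    return filtered * fov, fov
```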

In order to reduce the search space for optic disc detection, an optic disc probability map is obtained and the brightest 20% of its pixels are extracted for OD detection and segmentation. The map is built by projecting the image variation and image intensity along the horizontal and vertical directions.

HPM(x) = Σ_{y=1}^{R} (HG(x, y) + VG(x, y)) · I(x, y)    (2)

VPM(y) = Σ_{x=1}^{C} (HG(x, y) + VG(x, y)) · I(x, y)    (3)

where HG(x, y) is the image gradient along the horizontal direction, VG(x, y) is the image gradient along the vertical direction, I(x, y) is the intensity image, R is the number of rows and C is the number of columns of the image. Finally, the projections along the horizontal and vertical directions are combined to form the optic disc probability map,

OPM(x, y) = HPM(x) · VPM(y)    (4)

Following Mahfouz's method [13], the brightest 20% of pixels are extracted for optic disc detection, as shown in Fig. 8.

Fig. 8. Optic disc probability map
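The sketch below shows how the probability map of Eqs. (2)-(4) and the brightest-20% mask might be computed; the choice of gradient operator and the use of absolute gradient values are assumptions, not the paper's exact formulation.

```python
import numpy as np

def od_probability_map(intensity, keep_fraction=0.20):
    """Build an OD probability map by projecting gradient-weighted intensity
    onto the horizontal and vertical axes (Eqs. 2-4) and keep the brightest
    20% of its pixels as the OD search region."""
    I = intensity.astype(np.float32)
    VG, HG = np.gradient(I)               # vertical and horizontal gradients
    feat = (np.abs(HG) + np.abs(VG)) * I

    HPM = feat.sum(axis=0)                # projection onto columns x, Eq. (2)
    VPM = feat.sum(axis=1)                # projection onto rows y,    Eq. (3)

    OPM = np.outer(VPM, HPM)              # OPM(x, y) = HPM(x) * VPM(y), Eq. (4)
    OPM /= OPM.max() + 1e-12

    threshold = np.quantile(OPM, 1.0 - keep_fraction)
    return OPM, OPM >= threshold          # map and brightest-20% mask
```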

    2. Radial line operator

Here the OD is assumed to be a circular region. The circular transform places multiple oriented radial line segments, as shown in Fig. 9, on every pixel of the image in order to measure the image variation along those radial lines. Each radial line segment operates on the pixels lying on its own line and computes the variation between adjacent pixels, and the pixel of maximum image variation along each line segment is marked as a PM. This work uses 180 radial line segments, so 180 PMs are marked for each pixel. The PMs are then filtered in two stages. In the first filtering stage, PMs with zero or negative variation are eliminated, because they do not belong to the OD and typically correspond to the macula or the retinal blood vessels. In the second filtering stage, the remaining PMs are filtered based on a distance transform.

      Fig. 9. Representation of radial line operator

In the radial line representation, n is the number of radial lines and p is the length of each radial line. The image variation along each radial line is calculated by subtracting neighbouring pixel values. The image variation is obtained using the formula,

IV(x_{i,j}, y_{i,j}) = I(x_{i,j-1}, y_{i,j-1}) - I(x_{i,j+1}, y_{i,j+1})    (5)

where i = 1, 2, …, n and j = 1, 2, …, p.

The pixel (x_{i,j}, y_{i,j}) has its neighbouring pixels at (x_{i,j-1}, y_{i,j-1}) and (x_{i,j+1}, y_{i,j+1}). These image variations will be positive inside the OD because the OD is brighter than its surrounding regions. The positions of the PMs for a pixel at (x_0, y_0) along the n evenly oriented line segments are indexed by a vector,

M(x_0, y_0) = [m_1, …, m_i, …, m_n]    (6)

where m_i indicates the position of the PM along the i-th radial line segment. The maximum image variations along the radial line segments are denoted as,

IV(x_0, y_0) = [iv(x_{1,m_1}, y_{1,m_1}), …, iv(x_{i,m_i}, y_{i,m_i}), …, iv(x_{n,m_n}, y_{n,m_n})]    (7)

For a pixel at (x_0, y_0), the distances of the PMs are collected in the vector,

S(x_0, y_0) = [s_1(x_0, y_0), …, s_i(x_0, y_0), …, s_n(x_0, y_0)]    (8)

where s_i(x_0, y_0), the distance from (x_0, y_0) to the PM of the i-th radial line segment at (x_{i,m_i}, y_{i,m_i}), is obtained using the formula,

s_i(x_0, y_0) = ((x_{i,m_i} - x_0)^2 + (y_{i,m_i} - y_0)^2)^{1/2}    (9)

Some PMs may lie outside the OD boundary due to retinal vessels or the presence of abnormalities, so they have to be eliminated based on the OD constraints. The PMs with zero or negative variation are eliminated, and then a distance threshold is applied to reduce the number of PMs further. The final OD map is obtained by,

OD(x, y) = IV(x, y) / ((S(x, y) - S_µ(x, y))^2)^{1/2}    (10)

where IV(x, y) is the maximum image variation of the PMs, S(x, y) is the maximum PM distance and S_µ(x, y) is the mean PM distance. With these PMs, the pixel at the global peak of the map is the OD centre, and the remaining PMs are used with a fitting method for OD boundary extraction. The final optic disc map is obtained by combining,

ODM(x, y) = OPM(x, y) · OD(x, y)    (11)

The OD map is mapped back onto the retinal image for extraction of the optic disc boundary. The downsampled retinal input image with the PMs of the OD boundary is shown in Fig. 10(a). The PMs marked on the downsampled median-filtered image are then marked on the original input retinal image, and the obtained PMs are connected together to extract the optic disc boundary. The segmented optic disc boundary is shown in Fig. 10(b).

Fig. 10. (a) Marked PMs on the median-filtered image. (b) Segmented optic disc using the PMs
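A simplified, unoptimized sketch of the radial line operator of Eqs. (5)-(10) is given below; the segment length p, the candidate mask and the use of the spread of the PM distances in the score are assumptions made for illustration.

```python
import numpy as np

def radial_line_od_map(image, n_lines=180, p=40, mask=None):
    """For every candidate pixel, scan n_lines evenly oriented radial
    segments of length p, locate the pixel of maximum brightness drop (PM)
    on each segment, and score the pixel by combining the PM variations
    with the spread of the PM distances."""
    I = image.astype(np.float32)
    rows, cols = I.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_lines, endpoint=False)
    od_map = np.zeros_like(I)

    for y0 in range(p, rows - p):
        for x0 in range(p, cols - p):
            if mask is not None and not mask[y0, x0]:
                continue                       # restrict to the OD probability map
            variations, distances = [], []
            for theta in angles:
                dx, dy = np.cos(theta), np.sin(theta)
                js = np.arange(p)              # positions j = 0..p-1 along the line
                xs = np.round(x0 + js * dx).astype(int)
                ys = np.round(y0 + js * dy).astype(int)
                vals = I[ys, xs]
                iv = vals[:-2] - vals[2:]      # Eq. (5): I at j-1 minus I at j+1
                m = int(np.argmax(iv))
                if iv[m] > 0:                  # discard zero/negative-variation PMs
                    variations.append(iv[m])
                    distances.append(float(m + 1))   # PM distance s_i, Eq. (9)
            if len(distances) < n_lines // 2:
                continue                       # too few valid PMs for a disc
            S = np.asarray(distances)
            # Mean PM variation over the spread of the PM distances, cf. Eq. (10).
            od_map[y0, x0] = np.mean(variations) / (np.std(S) + 1e-6)
    return od_map
```

The OD centre is then taken at the global peak of the resulting map, optionally after combining it with the probability map as in Eq. (11), e.g. cy, cx = np.unravel_index(np.argmax(od_map), od_map.shape).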

    3. Feature Extraction

The optic disc diameter and the distance to the macula are extracted as features. The optic disc diameters (DD) obtained range from about 30 to 67 pixels (see Table 1). For segmentation of the macula, a search area is first defined: in a standard retinal image the macula lies at about two optic disc diameters from the OD, so the width of the search area is set to 2·DD. Within the obtained search area, the part with the lowest intensity is taken as the macula, since the macula is the darkest portion of the image. The segmented macula is shown in Fig. 11.

Fig. 11. Segmentation of the optic disc and macula
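A rough sketch of this macula search follows; the search band of one to three disc diameters around the OD centre and the smoothing window are assumptions based on the description above.

```python
import cv2
import numpy as np

def locate_macula(intensity, od_center, od_diameter):
    """Find the darkest region in a band around two OD diameters away from
    the OD centre; od_center is given as (row, column)."""
    rows, cols = intensity.shape
    cy, cx = od_center
    yy, xx = np.mgrid[0:rows, 0:cols]
    dist = np.hypot(yy - cy, xx - cx)

    # Search band centred at about 2*DD from the OD, roughly 2*DD wide.
    band = (dist > 1.0 * od_diameter) & (dist < 3.0 * od_diameter)

    # Smooth first so that the darkest region, not a single noisy pixel, wins.
    smooth = cv2.blur(intensity, (15, 15)).astype(np.float32)
    smooth[~band] = np.inf                  # exclude pixels outside the band
    return np.unravel_index(np.argmin(smooth), smooth.shape)
```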

Table 1. OD diameter (pixels) and OD centre (x, y coordinates)

Image    OD diameter    OD centre (x, y)
1        30             377, 249
2        37             300, 228
3        60             352, 224
4        65             344, 224
5        61             323, 245
6        58             359, 221
7        54             114, 251
8        66             143, 200
9        69             236, 254
10       64             162, 252
11       55             179, 281
12       58             102, 198
13       64             358, 200
14       64             364, 230
15       67             204, 223
16       55             209, 223

    4. Classification of Optic Disc

1. Support Vector Machine: A support vector machine (SVM) is a supervised learning method that classifies a set of input data by analyzing its features. The SVM classifier is trained with the extracted features to classify the optic disc in retinal images. On the basis of its prediction, the SVM decides which class an input sample belongs to; in this work the classes are whether the segmented region is an optic disc or not. The decision function is,

f(x) = sign(w · x + b)    (12)

where w represents the weight vector and b the bias of the decision hyperplane.
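A minimal example of such a classifier using scikit-learn's SVC is sketched below; the feature vectors ([OD diameter, OD-to-macula distance]) and labels are purely illustrative, not data from this work.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: each row is [OD diameter, OD-to-macula distance] and the
# label indicates whether the segmented region is a true optic disc (1) or not (0).
X_train = np.array([[30, 75], [67, 160], [55, 130], [12, 40]], dtype=float)
y_train = np.array([1, 1, 1, 0])

clf = SVC(kernel="linear", C=1.0)        # decision function f(x) = sign(w.x + b)
clf.fit(X_train, y_train)
print(clf.predict([[58, 140]]))          # classify a new candidate region
```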

2. Extreme Learning Machine: The Extreme Learning Machine is a feed-forward network consisting of three layers. It is used similarly to the SVM; the difference is that in ELM the input weights and hidden biases are randomly generated instead of tuned, whereby the nonlinear system is converted into a linear system,

H·β = T    (13)

where β is the weight vector between the hidden layer neurons and the output layer neuron, T is the target vector of the training dataset, and H is the hidden layer output matrix,

H = {h_{ij}}, i = 1, 2, …, N, j = 1, 2, …, K    (14)

h = g(w·x + b)    (15)

where N is the number of training samples, K is the number of hidden neurons and g(·) is the activation function. Since the hidden elements of the ELM are independent of the training data and target functions, the training time for classification is lower than that of the SVM. Fig. 12 shows the training time comparison of ELM and SVM.

Fig. 12. Time comparison of SVM and ELM
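A self-contained sketch of an ELM classifier following Eqs. (13)-(15) is shown below; the hidden-layer size, the sigmoid activation and the toy data are assumptions for illustration only.

```python
import numpy as np

class SimpleELM:
    """Single-hidden-layer ELM: random input weights and biases (not tuned),
    hidden output matrix H, and output weights solved by least squares."""

    def __init__(self, n_hidden=10, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):
        n_features = X.shape[1]
        # Randomly generated input weights w and hidden biases b.
        self.w = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)                    # h = g(w.x + b), Eqs. (14)-(15)
        self.beta = np.linalg.pinv(H) @ T      # solve H.beta = T, Eq. (13)
        return self

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))   # sigmoid g(.)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Usage with the same toy features as the SVM example above.
X = np.array([[30, 75], [67, 160], [55, 130], [12, 40]], dtype=float)
T = np.array([1.0, 1.0, 1.0, 0.0])
elm = SimpleELM().fit(X, T)
print((elm.predict(X) > 0.5).astype(int))
```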

  3. Conclusion

Thus, the OD centre and OD boundary were obtained simultaneously using a single method. The technique can be applied to images containing lesions and imaging artifacts such as uneven illumination and haze, and can still accurately detect the OD with the radial line operator. The radial line operator locates the OD more accurately than methods based on anatomical structures such as the blood vessels and optic nerves.

  4. References

[1] S. Tamura, Y. Okamoto and K. Yanashima, "Zero-crossing interval correction in tracking eye-fundus blood vessels," in Proc. Int. Conf. Pattern Recognition, 1998, pp. 227-233.

[2] Z. Liu, O. Chutatape and S. M. Krishnan, "Automatic image analysis of fundus photograph," in Proc. IEEE Eng. Med. Biol. Soc. Conf., Nov. 1997, pp. 524-525.

[3] M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter and R. Jain, "Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images," in Proc. IEEE Int. Conf. Image Processing, Sep. 1996, pp. 695-698.

[4] C. Sinthanayothin, J. F. Boyce, H. L. Cook and T. H. Williamson, "Automated location of the optic disk, fovea, and retinal blood vessels from digital color fundus images," Br. J. Ophthalmol., vol. 83, pp. 902-910, Aug. 1999.

[5] H. Li and O. Chutatape, "Automatic feature extraction in color retinal images by a model based approach," IEEE Trans. Biomed. Eng., vol. 51, no. 2, pp. 246-254, Feb. 2004.

[6] A. W. Reza, C. Eswaran and S. Hati, "Automatic tracing of optic disc and exudates from color fundus images using fixed and variable thresholds," J. Med. Syst., vol. 33, pp. 73-80, 2008.

[7] S. Tamura, Y. Okamoto and K. Yanashima, "Zero-crossing interval correction in tracking eye-fundus blood vessels," Pattern Recognition, vol. 21, no. 3, pp. 227-233, 1988.

[8] S. Sekhar, W. Al-Nuaimy and A. K. Nandi, "Automated localisation of retinal optic disk using Hough transform," in Proc. Int. Symp. Biomedical Imaging: From Nano to Macro, 2008, pp. 1577-1580.

[9] A. Youssif, A. Z. Ghalwash and A. Ghoneim, "Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter," IEEE Trans. Med. Imag., vol. 27, no. 1, pp. 11-18, Jan. 2008.

[10] A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels," IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951-958, Aug. 2003.

[11] G.-B. Huang, "Extreme learning machine for regression and multiclass classification," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 2, Apr. 2012.

[12] A. Bharathi et al., "Cancer classification using modified extreme learning machine based on ANOVA features," European Journal of Scientific Research, vol. 58, no. 2, pp. 156-165, 2011.

[13] G. Subha Vennila et al., "Dermoscopic image segmentation and classification using machine learning algorithms," in Proc. Int. Conf. Computing, Electronics and Electrical Technologies, 2012.
