Automatic Detection of Optic Disc and Optic Cup Segmentation for Glaucoma Screening

DOI : 10.17577/IJERTCONV3IS16129


  1. Divya Rekha (M.E. Scholar)

     Department of Electronics & Communication Engineering,

     Excel College of Engineering & Technology, Pallakapalayam, Tamil Nadu.

  2. Murugan, M.E., (Ph.D.), Associate Professor,

     Department of Electronics & Communication Engineering,

     Excel College of Engineering & Technology, Pallakapalayam, Tamil Nadu.

Abstract: The vascular tree observed in a retinal fundus image can provide clues for glaucoma and related diseases. Its analysis requires the identification of vessel bifurcations and crossovers. We use a set of trainable keypoint detectors, based on combinations of shifted filter or morphological filter responses, to automatically detect vascular bifurcations in segmented retinal images. We configure a set of filters that are selective for a number of prototype bifurcations and demonstrate that such filters can be effectively used to detect bifurcations that are similar to the prototypical ones. The automatic configuration of such a filter selects given channels of a bank of Gabor filters and determines certain blur and shift parameters. The response of a morphological filter is computed as the weighted geometric mean of the blurred and shifted responses of the selected Gabor filters. The morphological approach is inspired by the function of a specific type of shape-selective neuron in area V4 of the visual cortex.

Keywords: Glaucoma, fundus, bifurcations, morphological filter, Gabor filter, visual cortex.

I. INTRODUCTION

Diabetic retinopathy, hypertension, glaucoma, and macular degeneration are nowadays among the most common causes of visual impairment and blindness. Early diagnosis and appropriate referral for treatment of these diseases can prevent visual loss. All of these diseases can be detected through a direct and regular ophthalmologic examination of the at-risk population, so a system for automatic recognition of the characteristic patterns of these pathological cases would provide a great benefit. In this regard, optic disc (OD) segmentation is a key process in many algorithms designed for the automatic extraction of anatomical ocular structures, the detection of retinal lesions, and the identification of other fundus features. First, the OD location helps to avoid false positives in the detection of exudates associated with diabetic retinopathy, since both are spots of similar intensity. Secondly, the OD margin can be used for establishing standard, concentric areas in which retinal vessel diameter measurements are performed, by calculating important diagnostic indexes for hypertensive retinopathy such as the central retinal artery equivalent (CRAE) and the central retinal vein equivalent (CRVE). Thirdly, the relation between the size of the OD and the cup (the cup-to-disc ratio) has been widely utilized for glaucoma diagnosis. In addition, the relatively constant distance between the OD and the fovea is useful for estimating the location of the macula, the area of the retina related to fine vision.

  1. EXISTING SYSTEM

Glaucoma, a neurodegeneration of the optic nerve, is one of the most common causes of blindness. Because revitalization of the degenerated nerve fibers of the optic nerve is impossible, early detection of the disease is essential. This can be supported by robust, automated mass screening. A novel automated glaucoma detection system operates on digital color fundus images, which are inexpensive to acquire and widely used. After a glaucoma-specific preprocessing, different generic feature types are compressed by an appearance-based dimension reduction technique. Subsequently, a probabilistic two-stage classification scheme combines these feature types to extract the novel Glaucoma Risk Index (GRI), which shows a reasonable glaucoma detection performance. Work on the effect of preprocessing eye fundus images on appearance-based glaucoma classification confirms that early detection of glaucoma is essential for preventing one of the most common causes of blindness. The manual examination of the optic disc (OD) is a standard procedure used for detecting glaucoma; existing methods gather image information around each point of interest in a multi-dimensional feature space to provide robustness against variations found in and around the OD region. The DRIVE dataset is used to extract the histograms of each color component; then the average of the histograms for each color is calculated as a template for localizing the center of the optic disc. The DRIVE, STARE, and a local dataset including 273 retinal images are used to evaluate the algorithm, with success rates of 100%, 91.36%, and 98.9%, respectively. However, many glaucoma patients are unaware of the disease until it has reached its advanced stage. Current tests using intraocular pressure (IOP) are not sensitive enough for population-based glaucoma screening. Optic nerve head assessment in retinal fundus images is both more promising and superior.

  2. PROPOSED SYSTEM

One of the unique features of the retina is the bifurcation point. It is basically a junction on a vessel from which two child branches are generated. On the other hand, when two vessels or branches of two vessels meet at a point, it is termed a crossover. Statistical analysis has shown that in a retinal vascular structure, bifurcation points are more distinctive than crossovers. The proposed algorithm attempts to point out the prominent bifurcation points on the retinal blood vessels. The scheme takes a vessel-segmented binary image as input. The vessels can be extracted from the fundus image by various image filtering techniques. Our algorithm focuses on determining the potential bifurcation points by analyzing local neighborhood connectivity around junction points on the blood vessels. The scheme starts by selecting an arbitrary point on a vessel of the image. At each node, it considers a branching point surrounding the chosen point for analysis of the bounded region, and then slides the window by half the window size in row-major order to consider the next candidate point, as sketched below.
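The following is a minimal sketch of the sliding-window candidate scan described above. The window size, the stride of half the window size, and the junction test (at least three vessel pixels leaving the window border) are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def candidate_windows(vessel_mask, win=15):
    """Slide a win x win window in row-major order with a stride of half the
    window size over a binary vessel-segmented image and keep the centres of
    windows that look like junction candidates."""
    h, w = vessel_mask.shape
    step = win // 2
    candidates = []
    for r in range(0, h - win + 1, step):          # row-major scan
        for c in range(0, w - win + 1, step):
            patch = vessel_mask[r:r + win, c:c + win]
            if patch[win // 2, win // 2] == 0:     # centre must lie on a vessel
                continue
            border = np.concatenate([patch[0, :], patch[-1, :],
                                     patch[:, 0], patch[:, -1]])
            if border.sum() >= 3:                  # >= 3 branches leave the window
                candidates.append((r + win // 2, c + win // 2))
    return candidates
```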

    We propose three methods to detect glaucoma:

    1. Assessment of Eye image Segmented with Morphological Filter.

    2. Assessment of Bifurcation Algorithm.

    3. Detection of Optic Disc.

Glaucoma, as a neurodegeneration of the optic nerve, is one of the most common causes of blindness. Because revitalization of the degenerated nerve fibers of the optic nerve is impossible, early detection of the disease is essential. After a glaucoma-specific preprocessing, different generic feature types are compressed by a morphology-based dimension reduction technique. Subsequently, a probabilistic two-stage classification scheme combines these feature types to extract the novel Glaucoma Risk Index (GRI), which shows a reasonable glaucoma detection performance. A competitive classification accuracy has been achieved on a sample set of 575 fundus images, and the GRI gains a competitive area under the ROC curve of 88% compared with the established topography-based glaucoma probability score. The proposed color-fundus-image-based GRI achieves a competitive and reliable detection performance on a low-priced modality by statistical analysis of entire images of the optic nerve head. A minimal sketch of the two-stage scheme is given after Fig 1.

Fig 1. Block diagram of the proposed system
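The sketch below illustrates the two-stage scheme described above using scikit-learn. PCA stands in for the appearance/morphology-based dimension reduction and logistic regression for the probabilistic classifiers; the feature extraction itself, the number of components, and all hyper-parameters are placeholders, and the sketch trains and scores on the same data for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def glaucoma_risk_index(feature_sets, labels):
    """feature_sets: list of (n_images, n_features) arrays, one per feature type.
    Returns a per-image risk score in [0, 1] (a stand-in for the GRI)."""
    stage1_probs = []
    for X in feature_sets:
        Z = PCA(n_components=20).fit_transform(X)          # compress each feature type
        clf = LogisticRegression(max_iter=1000).fit(Z, labels)
        stage1_probs.append(clf.predict_proba(Z)[:, 1])    # per-type glaucoma probability
    P = np.column_stack(stage1_probs)                      # stage-2 input
    fusion = LogisticRegression(max_iter=1000).fit(P, labels)
    return fusion.predict_proba(P)[:, 1]                   # combined risk index per image
```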

3. IMAGE SEGMENTATION USING MORPHOLOGICAL FILTERS

Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions are significantly different with respect to the same characteristic(s). When applied to a stack of images, as is typical in medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with the help of interpolation algorithms such as marching cubes.

    Segmentation

The circular Hough transform can be employed to deduce the radius and centre coordinates of the pupil and iris regions. Using the edge map in Fig 2, derivatives in the horizontal direction are used for detecting the eyelids, and derivatives in the vertical direction are used for detecting the outer circular boundary of the iris.

    Fig 2. a) an eye image b) corresponding edge map c) edge map with only horizontal gradients d) edge map with only vertical gradients.

There are a number of problems with the Hough transform method. First of all, it requires threshold values to be chosen for edge detection, and this may result in critical edge points being removed, leading to a failure to detect circles or arcs. Secondly, the Hough transform is computationally intensive due to its brute-force approach. A minimal use of the circular Hough transform is sketched below.
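This sketch shows circular boundary detection with OpenCV's circular Hough transform. The file name, blur size, and the Hough parameters (dp, minDist, param1/param2, radius range) are illustrative values only and would need tuning for real fundus or iris images.

```python
import cv2
import numpy as np

gray = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)                      # suppress noise before edge detection

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
    param1=100,          # upper Canny threshold used internally
    param2=30,           # accumulator threshold: lower values detect more circles
    minRadius=20, maxRadius=120)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(gray, (x, y), r, 255, 2)         # draw each detected boundary
```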

    Optic Disc Segmentation:

The cup-to-disc ratio is an important statistic in diagnosing glaucoma. However, segmentation must otherwise be done manually, which is quite tedious and time-consuming; there is also large variability between specialists. This research is an attempt to apply graph search techniques to the segmentation problem in order to improve upon current pixel classification techniques.

    Morphological filter:

With segmentation, objects of interest are extracted from the image. Various techniques have been developed for segmentation; here the watershed algorithm is used. Watershed is also based on morphology. It is a region-based algorithm with low computational complexity and high efficiency, and it provides a complete division of the image. Besides all these advantages, it has a major drawback: it suffers from over-segmentation, due to which the image content can be distorted completely. So, some modifications are required to remove the problem of over-segmentation. Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image. Morphological operations rely only on the relative ordering of pixel values, not on their numerical values, and are therefore especially suited to the processing of binary images. Morphological operations can also be applied to grayscale images whose light transfer functions are unknown and whose absolute pixel values are therefore of no or minor interest. Erosion and dilation are the basic operations in morphological filters. Erosion with small (e.g. 2×2 to 5×5) square structuring elements shrinks an image by stripping away a layer of pixels from both the inner and outer boundaries of regions: the holes and gaps between different regions become larger, and small details are eliminated. A short example follows.
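The following is a small sketch of the basic morphological operations and a marker-based watershed, using OpenCV. The input file, the Otsu pre-threshold, and the structuring-element sizes are assumptions for illustration; marker-based seeding is shown because it reduces the over-segmentation mentioned above.

```python
import cv2
import numpy as np

gray = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
eroded  = cv2.erode(binary, kernel, iterations=1)    # strips a layer of boundary pixels
dilated = cv2.dilate(binary, kernel, iterations=1)   # grows regions, closes small gaps

# Marker-based watershed: derive sure background / foreground markers morphologically.
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                    # background label becomes 1
markers[unknown == 255] = 0              # unknown region left for the watershed to decide
markers = cv2.watershed(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR), markers)
```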

    Preprocessing of Images

Each component of each color image was normalized to the range [0, 1] by dividing by 255, the maximum possible value in the original 8-bit representation. Each image was converted to the luminance component Y, given by the standard weighted combination

Y = 0.299 R + 0.587 G + 0.114 B,

where R, G, and B are the red, green, and blue components, respectively, of the color image. The effective region of the image was thresholded using the normalized threshold of 0.1. The artifacts present at the edges of the images were removed by applying morphological erosion with a disk-shaped structuring element of diameter 10 pixels.

In order to prevent the detection of the edges of the effective region in subsequent steps, each image was extended beyond the limits of its effective region. First, a 4-pixel neighborhood was used to identify the pixels at the outer edge of the effective region. For each of the pixels identified, the mean gray level was computed over all pixels in a 21 × 21 neighborhood that were also within the effective region and assigned to the corresponding pixel location. The effective region was merged with the outer edge pixels, forming an extended effective region. The procedure was repeated 50 times, extending the image by a ribbon of width 50 pixels. After preprocessing, a 5 × 5 median filter was applied to the resulting image to remove outliers. Then, the maximum intensity in the image was calculated to serve as a reference intensity to assist in the selection of the ONH from candidates detected in the subsequent steps. A condensed sketch of these steps follows.
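The sketch below condenses the preprocessing chain described above: normalization, luminance conversion, thresholding of the effective region, erosion of edge artifacts, and median filtering. The 50-iteration region-extension step is omitted, and the file name and structuring-element size are assumptions.

```python
import numpy as np
import cv2

bgr = cv2.imread("fundus.png").astype(np.float64) / 255.0          # normalize to [0, 1]
B, G, R = bgr[..., 0], bgr[..., 1], bgr[..., 2]                     # OpenCV loads BGR
Y = 0.299 * R + 0.587 * G + 0.114 * B                               # luminance component

effective = (Y > 0.1).astype(np.uint8)                              # effective-region mask
disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))       # ~10-pixel disk
effective = cv2.erode(effective, disk)                              # remove edge artifacts

Y_med = cv2.medianBlur((Y * 255).astype(np.uint8), 5)               # 5 x 5 median filter
reference_intensity = Y_med[effective > 0].max() / 255.0            # reference for ONH selection
```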

    Detection of Blood Vessels Using Gabor Filters

The methods proposed in the present work for the detection of the ONH rely on the initial detection of blood vessels. We have previously proposed image processing techniques to detect blood vessels in images of the retina based upon Gabor filters, which are used in the proposed method. Gabor functions are sinusoidally modulated Gaussian functions that provide optimal localization in both the frequency and space domains; a significant amount of research has been conducted on the use of Gabor functions or filters for the segmentation, analysis, and discrimination of various types of texture and curvilinear structures. The basic, real Gabor filter kernel oriented at the angle θ = −π/2 may be formulated as

g(x, y) = [1 / (2π σx σy)] exp{ −(1/2) [ x²/σx² + y²/σy² ] } cos(2π f0 x),   (1)

where σx and σy are the standard deviation values in the x and y directions, and f0 is the frequency of the modulating sinusoid. Kernels at other angles are obtained by rotating the basic kernel. In the proposed method, a set of 180 kernels was used, with angles spaced evenly over the range [−π/2, π/2). Gabor filters may be used as line detectors. The parameters in Eq. (1), namely σx, σy, and f0, need to be specified by taking into account the size of the lines or curvilinear structures to be detected. Let τ be the thickness of the line detector. This parameter is related to σx and f0 as follows. The amplitude of the exponential (Gaussian) term in Eq. (1) is reduced to one half of its maximum at x = τ/2 and y = 0; therefore,

σx = τ / [2 √(2 ln 2)].

The cosine term has a period of τ; hence, f0 = 1/τ. The value of σy could be defined as σy = l σx, where l determines the elongation of the Gabor filter along its orientation with respect to its thickness. The value of τ could be varied to prepare a bank of filters at different scales for multiresolution filtering and analysis; however, in the present work, a single scale is used, with τ = 8 pixels and l = 2.9.

The Gabor filter designed as above can detect piecewise linear features of positive contrast, i.e., linear elements that are brighter than their immediate background. In the present work, the Gabor filter was applied to the inverted version of the preprocessed Y component. Blood vessels in the retina vary in thickness in the range 50–200 µm, with a median of 60 µm. Taking into account the size (565 × 584 pixels) and the spatial resolution (20 µm/pixel) of the images in the DRIVE database, the parameters for the Gabor filters were specified as τ = 8 pixels and l = 2.9 in the present work. For each image, a magnitude response image was composed by selecting the maximum response over all of the 180 Gabor filters for each pixel. An angle image was prepared by using the angle of the filter with the largest magnitude response. The magnitude response and angle images were filtered with a Gaussian filter having a standard deviation of 7 pixels (a description of the procedure for filtering of orientation fields is given by Ayres and Rangayyan) and downsampled by a factor of 4 for efficient analysis using phase portraits in the subsequent step. A sketch of such a Gabor filter bank is given below.
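The following sketch builds the filter bank directly from Eq. (1) with τ = 8 pixels and l = 2.9, rotates it over 180 orientations, and combines the responses into magnitude and angle images. The kernel half-size, the zero-mean adjustment, and the convolution boundary mode are assumptions.

```python
import numpy as np
from scipy import ndimage

TAU, ELL, N_ANGLES, HALF = 8.0, 2.9, 180, 16

sigma_x = TAU / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # sigma_x = tau / (2 sqrt(2 ln 2))
sigma_y = ELL * sigma_x                              # sigma_y = l * sigma_x
f0 = 1.0 / TAU                                       # f0 = 1 / tau

def gabor_kernel(theta):
    """Real Gabor kernel of Eq. (1), rotated by angle theta (radians)."""
    y, x = np.mgrid[-HALF:HALF + 1, -HALF:HALF + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = (1.0 / (2.0 * np.pi * sigma_x * sigma_y)) \
        * np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2)) \
        * np.cos(2.0 * np.pi * f0 * xr)
    return g - g.mean()                              # zero mean to suppress flat-region response

def gabor_bank_response(image):
    """Maximum magnitude response and the corresponding orientation per pixel."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, N_ANGLES, endpoint=False)
    mag = np.full(image.shape, -np.inf)
    ang = np.zeros(image.shape)
    for theta in angles:
        resp = ndimage.convolve(image, gabor_kernel(theta), mode='nearest')
        better = resp > mag
        mag[better], ang[better] = resp[better], theta
    return mag, ang
```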

    K-dictionary using histogram equalization

A common setup for the dictionary learning problem starts with access to a training set, a collection of training vectors, each of length N. This training set may be finite, in which case the training vectors are usually collected as columns in a matrix X of size N × L, or it may be infinite. For the finite case, the aim of dictionary learning is to find both a dictionary D of size N × K and a corresponding coefficient matrix W of size K × L such that the representation error R = X − DW is minimized and W fulfills some sparseness criterion. A toy alternating scheme is sketched below.
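This toy sketch of the setup above alternates between a sparse coding step (keep the s largest coefficients per column of W) and a least-squares dictionary update, reducing the residual R = X − DW. The values of K, s, and the iteration count are placeholders, and this is not a full K-SVD implementation.

```python
import numpy as np

def learn_dictionary(X, K=64, s=5, n_iter=20, seed=0):
    """X: data matrix of shape (N, L). Returns dictionary D (N, K) and sparse codes W (K, L)."""
    rng = np.random.default_rng(seed)
    N, L = X.shape
    D = rng.standard_normal((N, K))
    D /= np.linalg.norm(D, axis=0, keepdims=True)            # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding: least-squares fit, then keep only the s largest entries per column.
        W = np.linalg.lstsq(D, X, rcond=None)[0]
        idx = np.argsort(np.abs(W), axis=0)[:-s, :]
        np.put_along_axis(W, idx, 0.0, axis=0)
        # Dictionary update: least-squares fit of D to X given W, then renormalize atoms.
        D = np.linalg.lstsq(W.T, X.T, rcond=None)[0].T
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, W
```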

    Histogram equalization

Fig 3. Histogram Equalization

Histogram equalization is an approach to enhance a given image. We consider the gray values in the input and output images as random variables in the interval [0, 1], with r = 0 representing black and r = 1 representing white. We need to design a gray value transformation s = T(r), based on the histogram of the input image, such that the gray values in the output are uniformly distributed in [0, 1], i.e., pout(s) = 1 for 0 ≤ s ≤ 1; in terms of histograms, the output image will have all gray values in equal proportion. The transformation must satisfy two conditions: (1) T(r) is a monotonically increasing function for 0 ≤ r ≤ 1 (it preserves the order from black to white), and (2) T(r) maps [0, 1] into [0, 1] (it preserves the range of allowed gray values). Let us denote the inverse transformation by r = T⁻¹(s); we assume that the inverse transformation also satisfies these two conditions. Let pin(r) and pout(s) denote the probability densities of the gray values in the input and output images, respectively. If pin(r) and T(r) are known, and r = T⁻¹(s) satisfies condition 1, we can write (a result from probability theory)

pout(s) = pin(r) |dr/ds| evaluated at r = T⁻¹(s).   (2)

Consider the transformation

s = T(r) = ∫₀ʳ pin(w) dw,   0 ≤ r ≤ 1.   (3)

Note that this is the cumulative distribution function (CDF) of pin(r), and it satisfies the two conditions above. From Eq. (3) and the fundamental theorem of calculus, ds/dr = pin(r). Therefore, the output histogram is given by

pout(s) = [ pin(r) · 1/pin(r) ] evaluated at r = T⁻¹(s) = 1,   0 ≤ s ≤ 1.   (4)

The output probability density function is uniform, regardless of the input. Thus, using a transformation function equal to the CDF of the input gray values r, we can obtain an image with uniform gray values. This usually results in an enhanced image, with an increase in the dynamic range of pixel values. To implement histogram equalization:

Step 1: For images with discrete gray values, compute

pin(rk) = nk / n,   0 ≤ rk ≤ 1,   k = 0, 1, …, L − 1,   (5)

where L is the total number of gray levels, nk is the number of pixels with gray value rk, and n is the total number of pixels in the image.

Step 2: Based on the CDF, compute the discrete version of the previous transformation,

sk = T(rk) = Σⱼ₌₀ᵏ pin(rj),   0 ≤ k ≤ L − 1.   (6)

A short implementation of this discrete mapping is given after Fig 4.

      Fig 4. The shape of a histogram for contrast enhancement
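The short routine below illustrates the discrete mapping of Eqs. (5)-(6): the normalized histogram is accumulated into a CDF and used as a lookup table. It assumes an 8-bit grayscale image loaded as a NumPy array.

```python
import numpy as np

def equalize(img):
    L = 256
    hist = np.bincount(img.ravel(), minlength=L)     # n_k per gray level
    p_in = hist / img.size                           # Eq. (5): p_in(r_k) = n_k / n
    s = np.cumsum(p_in)                              # Eq. (6): CDF as the transformation
    return (s[img] * (L - 1)).astype(np.uint8)       # map each gray value through s_k
```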

  4. BIFURCATION ALGORITHM

By using the bifurcation algorithm, the nodal junction points are detected. Bifurcation algorithms are used to detect regions of multiple steady states and to delineate regions of qualitatively different behavior. The algorithms currently implemented include zeroth-order, first-order, and pseudo arc-length continuation algorithms, a turning-point bifurcation tracking algorithm, a pitchfork bifurcation tracking algorithm, and a Hopf tracking algorithm.

The different point types in the retinal images are bifurcation points, dummy bifurcation points, and trifurcation points, along with the real-life retinal vasculature and vessel centerlines. The bifurcation structure is composed of a master bifurcation point and three connected neighbors. It is invariant against translation, rotation, scaling, and even modest distortion, and it can deal with the registration of retinal images whenever a vasculature-like pattern is identifiable, even partially. The simplicity and efficiency of the proposed method make it readily applicable alone or in combination with other existing methods to form a hybrid or hierarchical scheme. The feature-based methods, namely point matching based on branching angles and structure matching based on bifurcation structures, are introduced briefly.

Fig 5. Bifurcation Point Matching

Bifurcation structure matching is one of the feature-based methods from which the image registration results are obtained. The following figures illustrate the proposed structure matching, based on which we can obtain the vascular trees of image pairs. A precondition to a bifurcation search is blood vessel detection. Several papers deal with these approaches; they can be divided into two groups. The first group is based on morphological operations. Our approach is based on the other group of papers, in which matched filtering is used for blood vessel detection. All following operations are applied on the green channel of the given image, because the blood vessels have the highest contrast in it.

Automatic contrast adjustment: Despite the use of the green channel, blood vessels may have low contrast in some images, due to the poor quality of these images. In this case, it is necessary to adjust the contrast. Commonly used methods, such as histogram equalization, are not very suitable for our case. Manual contrast adjustment mostly gives the best results, but unfortunately it cannot be applied. The method called Fast Gray Level Grouping gives satisfying results; its main advantage is the fact that the new histogram will have a nearly uniform distribution.

    III. ALGORITHM DESIGN

  1. Initialize the feature points from the binary image.

  2. Display the detected points using a rectangular grid.

  3. Link the current point to its neighbour points.

     The structure of a node: current point, number of neighbours, neighbour 1, linkpoint, neighbour 2, linkpoint, …

  4. Compute the bifurcation angle.

  5. Display some points together with their neighbours.

  6. Compute the angle of the seed from the start point.

  7. Extract the bifurcation coordinates from the node.

  8. Search the linked neighbours of the seed.

     Node structure: seed, number of neighbours, neigp, link1, neigp, …

     Featureless structure: lengtp, angs1(1:4), lengtp, …

  9. Verify the correspondence and find the best matched pair.

  10. Call sub-functions to generate the feature data.

  11. Select the optimal bifurcation points in the binary image and the radius of the search window for each candidate.

Num-angle: the number of local point-branch angles within the region.

Point types: 1: terminal, 2: branch, 3: bifurcation. Match the best point in the feature matrix. A small illustration of the branching-angle computation in step 4 is given below.
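The following is a tiny illustration of step 4 above: the branching angle at a junction is computed with atan2 from the directions of the linked neighbour points. Point coordinates are hypothetical (row, col) tuples.

```python
import math

def branch_angles(current, neighbours):
    """Angles (degrees) of each neighbour direction measured at the current point."""
    return [math.degrees(math.atan2(n[0] - current[0], n[1] - current[1]))
            for n in neighbours]

def bifurcation_angle(current, n1, n2):
    """Unsigned angle between two branches meeting at the current point."""
    a1, a2 = branch_angles(current, [n1, n2])
    d = abs(a1 - a2) % 360.0
    return min(d, 360.0 - d)
```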

    Process flow

Matched filter. The most important part of the blood vessel detection process is the segmentation of the vessels from the image background. We use a 2D filter response. The filter is rotated 12 times (each time by 15 degrees). All twelve filters are applied to the image, and their responses are added with weighting; a rough sketch is given below.
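This is a rough sketch of the matched-filter step: a Gaussian-profile line kernel is rotated 12 times in 15-degree steps and the responses are combined. The kernel size, sigma, segment length, and the uniform weighting are assumptions rather than the paper's exact filter design.

```python
import numpy as np
from scipy import ndimage

def matched_filter_kernel(sigma=2.0, length=9, half=7):
    """2-D kernel with a Gaussian cross-section across a short line segment."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    profile = -np.exp(-x**2 / (2.0 * sigma**2))      # dark vessels on a brighter background
    profile[np.abs(y) > length / 2.0] = 0.0          # limit the segment length
    return profile - profile.mean()                  # zero mean

def vessel_response(green_channel):
    base = matched_filter_kernel()
    response = np.zeros_like(green_channel, dtype=float)
    for k in range(12):                              # 12 orientations, 15 degrees apart
        kern = ndimage.rotate(base, 15.0 * k, reshape=False, order=1)
        response += ndimage.convolve(green_channel.astype(float), kern)  # equal weights assumed
    return response
```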

Thresholding. The threshold is computed for every input image. We proceed on the assumption that blood vessels take up approximately 3-10% of the retina image (according to the specific type of fundus camera). We apply the morphological closing operation after thresholding; this removes small holes and increases the connectivity of the vessels. A sketch follows.
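The sketch below picks a percentile-based threshold targeting roughly 3-10% vessel pixels and then applies morphological closing. The 7% target and the 3×3 structuring element are placeholder choices.

```python
import numpy as np
import cv2

def threshold_and_close(response, vessel_fraction=0.07):
    t = np.percentile(response, 100 * (1 - vessel_fraction))   # keep the top ~7% of responses
    binary = (response >= t).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fill small holes, connect vessels
```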

Thinning. The last step, preceding the bifurcation localization, is thinning. It is essential that the thinned vessel lies strictly in the middle of the original vessel. The thinning is executed from four directions, to ensure that the thinned vessel is positioned in the middle of the original one; a stand-in is sketched below.
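Here the four-direction thinning is approximated by the skeletonization routine from scikit-image, which also keeps a one-pixel-wide centreline near the middle of the vessel; this is a stand-in, not the exact procedure described above.

```python
from skimage.morphology import skeletonize

def thin_vessels(binary):
    return skeletonize(binary > 0)        # boolean skeleton, one pixel wide
```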

Bifurcation localization: Now, we can finally localize the bifurcations. This is done by checking the 8-neighbourhood of every positive pixel. When three vessels come out from one point, it is marked as a bifurcation. The only problem is caused by short pieces at the ends of vessels, created as a side effect of thinning. This problem is solved by defining a minimal length for all three vessels coming out from the point. A minimal sketch of the neighbourhood test is given below.
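This minimal sketch flags any skeleton pixel with three or more skeleton neighbours in its 8-neighbourhood as a bifurcation candidate. The minimum-branch-length filtering mentioned above is omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def bifurcation_points(skeleton):
    sk = skeleton.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = ndimage.convolve(sk, kernel, mode='constant')   # count 8-neighbours
    return np.argwhere((sk == 1) & (neighbours >= 3))            # (row, col) of candidates
```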

Cup-to-disc ratio: The cup-to-disc ratio compares the size of the optic cup with that of the optic disc and is the key statistic in this assessment. However, segmentation of the cup and disc must otherwise be done manually, which is quite tedious and time-consuming, and there is large variability between specialists. This research is an attempt to apply graph search techniques to the segmentation problem in order to improve upon current pixel classification techniques. A toy computation of the ratio from binary cup and disc masks is sketched below.
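The toy routine below measures the vertical extent of binary cup and disc masks and takes their ratio. The mask variables are assumptions; a real system would use the segmented boundaries produced by the methods described in this paper.

```python
import numpy as np

def vertical_diameter(mask):
    rows = np.where(mask.any(axis=1))[0]             # rows containing the region
    return rows.max() - rows.min() + 1 if rows.size else 0

def cup_to_disc_ratio(cup_mask, disc_mask):
    return vertical_diameter(cup_mask) / max(vertical_diameter(disc_mask), 1)
```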

  5. SIMULATION RESULTS

The retinal eye image shown is taken as the input for the forthcoming morphological processing and scanning steps.

This is the image obtained after the morphological process, where the RGB image is converted into a grayscale image; here we can clearly identify the veins and all other retinal points.

This is the simulated image obtained after the extraction of structures of the retinal eye image; regions such as the sclera, choroid, and retina are shown in different colors, which is very useful for finding the affected portion easily.

This image is called the shake of the image, where we determine whether the eye image belongs to a normal person or a blind person; if a blind spot is found during the shake of the image, the affected person is identified.

This is the final retinal output image obtained in the simulation, where a black spot is seen in the middle after many scans of the image; the black spot is the portion affected by the disease.

  6. CONCLUSION

The proposed solution for glaucoma assessment allows the derivation of various geometric parameters of the optic disc. This is in contrast to earlier approaches, which have largely focused on the estimation of the CDR, a quantity that varies considerably within normal eyes. Both segmentation methods have been extensively evaluated on a dataset of 138 images, with associated ground truth from three experts, and compared against existing approaches. In cup segmentation, it is observed that boundary estimation errors occur mostly in regions with no depth cues, which is consistent with the high inter-observer variability in these regions. This signals the ambiguity of 2D information and the importance of 3D information in cup segmentation, which will be investigated in our future work. Overall, the obtained results of the proposed method support glaucoma assessment and establish the potential for an effective solution for glaucoma screening.

7. REFERENCES

  1. Abràmoff, M., Lee, K., and Garvin, M. K. (2010), "Automated segmentation of neural canal opening and optic cup in 3-D spectral optical coherence tomography volumes of the optic nerve head," Invest. Ophthalmol. Vis. Sci., vol. 51, pp. 5708–5717.

  2. Bock, R., Michelson, G., Nyúl, L. G., and Hornegger, J. (2007), "Classifying glaucoma with image-based features from fundus photographs," Proc. 29th DAGM Conf. Pattern Recognit., pp. 355–364.

  3. Harizman, N., Oliveira, C., and Chiang, A. (2006), "The ISNT rule and differentiation of normal from glaucomatous eyes," Arch. Ophthalmol., vol. 124, pp. 1579–1583.

  4. Joshi, G. D., Sivaswamy, J., and Krishnadas, S. R. (2011), "Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment," IEEE Trans. Med. Imag., vol. 30, no. 6, pp. 1192–1205.

  5. Kim, C. Y., Fingert, J. K., and Kwon, Y. H. (2007), "Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features," Invest. Ophthalmol. Vis. Sci., vol. 48, pp. 1665–1673.

  6. Lee, K., Niemeijer, M., and Greenlee, E. (2009), "Automated segmentation of the cup and rim from spectral domain OCT of the optic nerve head," Invest. Ophthalmol. Vis. Sci., vol. 50, pp. 5778–5784.

  7. Meier, J., Bock, R., Michelson, G., Nyúl, L. G., and Hornegger, J. (2007), "Effects of preprocessing eye fundus images on appearance based glaucoma classification," in Proc. 12th Int. Conf. Comput. Anal. Images Patterns.

  8. Michelson, G., and Bock, R. (2010), "Glaucoma risk index: Automated glaucoma detection from color fundus images," Med. Image Anal., vol. 14, pp. 471–481.

  9. Quigley, H. A., and Broman, A. T. (2006), "The number of people with glaucoma worldwide in 2010 and 2020," Br. J. Ophthalmol., vol. 90, no. 3, pp. 262–267.

  10. Xu, J., Chutatape, O., and Sung, E. (2009), "Optic disk feature extraction via modified deformable model technique for glaucoma analysis," Pattern Recognit., vol. 40, pp. 2063–2076.

  11. Zhang, Z., Yin, F., and Wong, T. Y. (2010), "ORIGA-light: An online retinal fundus image database for glaucoma analysis and research," in Int. Conf. IEEE Eng. Med. Biol. Soc., pp. 3065–3068.
