Diagnosis of Diabetic Retinopathy using Segmentation of Blood Vessels and Optic Disk in Retinal Images


Pedda Shubham Sangappa

MTech Student

Dept. of Instrumentation and Electronics Engineering, Dayananda Sagar College of Engineering

Bangalore, India

Rajashekar J S

HOD & Associate Professor

Dept. of Instrumentation and Electronics Engineering, Dayananda Sagar College of Engineering

Bangalore, India

Abstract – Medical image analysis is one of the research areas currently attracting intense interest from scientists and physicians. Retinal image analysis is increasingly prominent as a non-intrusive diagnosis method in modern ophthalmology, because the retina is the only place in the body where blood vessels can be viewed directly and examined for pathological changes, such as those that occur with hypertension, diabetic retinopathy, and glaucoma.

The objective of this paper is to segment the blood vessels and optic disk in fundus retinal images in order to recognise the various stages of diabetic retinopathy.

The Markov random field (MRF) image reconstruction method and the compensation factor method are used for segmentation of the optic disk. The retinal image acquired from the database is pre-processed for extraction of retinal landmarks, and abnormal features are analysed for detection of the diabetic stage, in order to reduce the percentage of visual impairment caused by diabetes. A leading cause of blindness can thus be minimized.

Keywords – Segmentation, Retinal images, Optic disc, MRF image reconstruction, NPDR, PDR, Diabetic Retinopathy, Fundus Images.


    Diabetes is a disease which occurs when the pancreas does not secrete enough insulin or the body is unable to process it properly. As diabetes progresses, it slowly affects the circulatory system, including the retina, as a result of long-term accumulated damage to the blood vessels, impairing the vision of the patient and leading to diabetic retinopathy. After 15 years of diabetes, about 10% of people become blind and approximately 2% develop severe visual impairment.

    In its early stages, DR is usually local. It does not affect the whole retina, and therefore causes only gradual vision impairment. Consequently, the risk of visual disability and blindness due to DR could be greatly reduced by early diagnosis and effective treatments that inhibit the progression of the disease. However, patients suffering from DR usually do not notice any visual imperfections until the disease has affected a large area of the retina. The need for mass-screening of diabetic patients' eyes is clearly a vital concern.

    Moreover, given limited medical staff, an automated system can significantly decrease the manual labour involved in diagnosing large quantities of retinal images. While this represents an obvious and significant gain, there is a larger, logistical need for automated and immediate diagnoses in rural settings. With an automated system, the doctor or local health worker can be made aware of a diabetic retinopathy problem during a single session with a patient. This enables the medical personnel to immediately and visually demonstrate the existing problem to the patient, which makes it easier to convince them of the urgency of their situation. They can immediately schedule appointments for continued diagnosis and follow-up visits at a regular hospital without further delay. An automated system also helps local health workers to detect serious diabetic retinopathy cases without the need for local ophthalmology experts.

    With the new advances in digital modalities for retinal imaging, an ophthalmologist needs to examine a large number of retinal images to diagnose each patient. Although manual diagnosis of this large number of retinal images is possible, the process is prohibitively cumbersome and limits the mass-screening process.

    Therefore, there is a significant need to develop Computer-Assisted Diagnostic (CAD) tools for automatic retinal image analysis. Developing CAD systems has created a growing need for image processing tools that provide fast, reliable, and reproducible analysis of the major anatomical structures in retinal fundus images. Segmentation of these retinal anatomical structures is the first step in any automatic retina analysis system.


    Diabetic retinopathy can lead to poor vision and even blindness. Most of the time, it gets worse over many years. At first, the blood vessels in the eye become weak, which can lead to blood and other fluid leaking into the retina from the blood vessels. This is called nonproliferative retinopathy, and it is the most common form of retinopathy. If the fluid leaks into the centre of your eye, you may have blurry vision. Most people with nonproliferative retinopathy have no symptoms.

    If blood sugar levels stay high, diabetic retinopathy will keep getting worse. New blood vessels grow on the retina. This may sound good, but these new blood vessels are weak. They can break open very easily, even while you are sleeping. If they break open, blood can leak into the middle part of your eye in front of the retina and change your vision. This bleeding can also cause scar tissue to form, which can pull on the retina and cause the retina to move away from the wall of the eye (retinal detachment). This is called proliferative retinopathy. Sometimes people don't have symptoms until it is too late to treat them. This is why having eye exams regularly is so important.

    Fig 1. Examples of colour fundus images. The first two images show healthy eyes while the last two images contain exudates, manifestations of retinopathy.

    1. Detecting Diabetic Retinopathy

      As per the National Institute of Health (NIH 2009), diabetic retinopathy is the most common diabetic eye disease and is caused by changes in the blood vessels of the retina. In some people with diabetic retinopathy, blood vessels may swell and leak fluid. In other people, abnormal new blood vessels grow on the surface of the retina. As illustrated in Figure 1, images of patients with diabetic retinopathy can exhibit red and yellow spots, which are problematic areas indicative of haemorrhages and exudates. In many retinal images, such as image (C) in Figure 1, a central dark spot represents the macula of the eye. The presence of haemorrhages and exudates in this region is indicative of a serious diabetic retinopathy condition that can soon lead to blindness.

    2. In brief, diabetic retinopathy has four stages

    1. Mild Non-proliferative Retinopathy: At this early stage, micro-aneurysms may occur. These manifestations of the disease are small areas of balloon-like swelling in the retina's tiny blood vessels.

    2. Moderate Non-proliferative Retinopathy: As the disease progresses, some blood vessels that nourish the retina are blocked.

    3. Severe Non-proliferative Retinopathy: Many more blood vessels are blocked, depriving several areas of the retina of their blood supply. These areas of the retina send signals to the body to grow new blood vessels for nourishment.

    4. Proliferative Retinopathy: At this advanced stage, the signals sent by the retina for nourishment trigger the growth of new blood vessels. These new blood vessels are abnormal and fragile. They grow along the retina and along the surface of the clear, vitreous gel that fills the inside of the eye. By themselves, these blood vessels do not cause symptoms or vision loss. However, they have thin, fragile walls. If they leak blood, severe vision loss and even blindness can result.


    The application of Markov random fields (MRFs) to images was suggested in 1984 by Geman and Geman. Their strong mathematical foundation and ability to provide global optima even when defined on local features proved to be the foundation for novel research in the domains of image analysis, de-noising, and segmentation. MRFs are completely characterized by their prior probability distributions, marginal probability distributions, cliques, smoothing constraint, and criterion for updating values. The criterion for image segmentation using MRFs is restated as finding the labelling scheme which has maximum probability for a given set of features. The broad categories of image segmentation using MRFs are supervised and unsupervised segmentation.

      1. Supervised Image Segmentation using MRF and MAP

    In terms of image segmentation, the function that MRFs seek to maximize is the probability of identifying a labelling scheme given a particular set of features detected in the image. This is a restatement of the maximum a posteriori (MAP) estimation method.

    Fig 2. MRF neighbourhood for a chosen pixel

    The generic algorithm for image segmentation using MAP is given below:

    1. Define the neighbourhood of each feature (random variable in MRF terms). Generally this includes 1st order or 2nd order neighbours.

    2. Set initial probabilities P(fi) for each feature as 0 or 1, where fi is the set of features extracted for pixel i, and define an initial set of clusters.

    3. Using the training data, compute the mean μ(li) and variance σ²(li) for each label li. These are termed the class statistics.

    4. Compute the marginal distribution P(fi | li) for the given labelling scheme using Bayes' theorem and the class statistics calculated earlier. A Gaussian model is used for the marginal distribution:

    P(fi | li) = (1 / √(2πσ²(li))) exp(−(fi − μ(li))² / (2σ²(li)))

    5. Calculate the probability of each class label given the neighbourhood defined previously. Clique potentials are used to model the spatial interaction between neighbouring labels.

    6. Iterate over new prior probabilities and redefine clusters such that these probabilities are maximized. This is done using a variety of optimization algorithms.

    7. Stop when probability is maximized and labelling scheme does not change. The calculations can be implemented in log likelihood terms as well.
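The supervised MAP-MRF loop above can be sketched in code. The sketch below uses Iterated Conditional Modes (ICM) as the optimization step and a Potts-style clique potential; both are common illustrative choices and are assumptions, not the paper's exact implementation.

```python
import numpy as np

def icm_map_mrf(image, means, variances, beta=1.0, n_iter=5):
    """Toy supervised MAP-MRF segmentation via Iterated Conditional Modes.

    image     : 2-D float array of pixel intensities (the features f_i)
    means     : per-label means mu(l)      (the "class statistics")
    variances : per-label variances s2(l)
    beta      : weight of the Potts clique potential (smoothness)
    """
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    # Negative log of the Gaussian marginal P(f_i | l_i), per label
    diff = image[..., None] - means                     # H x W x L
    data = 0.5 * np.log(2 * np.pi * variances) + diff ** 2 / (2 * variances)
    labels = np.argmin(data, axis=2)                    # initial ML labelling
    h, w = image.shape
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                # Potts prior over the 1st-order (4-connected) neighbourhood
                neigh = []
                if y > 0: neigh.append(labels[y - 1, x])
                if y < h - 1: neigh.append(labels[y + 1, x])
                if x > 0: neigh.append(labels[y, x - 1])
                if x < w - 1: neigh.append(labels[y, x + 1])
                costs = data[y, x] + beta * np.array(
                    [sum(n != l for n in neigh) for l in range(len(means))])
                labels[y, x] = np.argmin(costs)         # local MAP update
    return labels
```

ICM only finds a local optimum; the paper's generic step 6 allows any of a variety of optimization algorithms (e.g. simulated annealing or graph cuts) in its place.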


    The optic disk segmentation starts by defining the location of the optic disk. This process uses the convergence of the vessels into the optic disk to estimate its location. The disk area is then segmented using two different automated methods (MRF image reconstruction and compensation factor). Both methods use the convergence feature of the vessels to identify the position of the disk. The MRF method is applied to eliminate the vessels from the optic disk region. This process is known as image reconstruction, and it is performed only on the vessel pixels to avoid modifying other structures of the image. The reconstructed image is free of vessels and is used to segment the optic disk via graph cut. In contrast, the compensation factor approach segments the optic disk using prior local intensity knowledge of the vessels. Fig. 3 shows an overview of both the MRF and the compensation factor methods [1].

    Fig. 3. Segmentation methods: (a) MRF image reconstruction method diagram; (b) compensation factor method diagram.

      1. Optic Disk Location

        The vessel image is pruned using a morphological opening to eliminate thin vessels and keep the main arcade. The centroid (x_c, y_c) of the arcade is calculated using the following formulation:

        x_c = (1/K) Σ x_i,    y_c = (1/K) Σ y_i

        The ROI is set to a square of 200 × 200 pixels concentric with the detected optic disk centre. Then, an automatic initialization of seeds (Fg and Bg) for the graph is performed. A neighbourhood of 20 pixels radius around the centre of the optic disk area is marked as the Fg pixels, and a band of pixels around the perimeter of the image is selected as the Bg seeds.
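The seed initialization described above can be sketched as follows; the width of the Bg perimeter band is an assumed parameter, since the text does not specify it:

```python
import numpy as np

def init_seeds(roi_shape, fg_radius=20, band=5):
    """Automatic Fg/Bg seed initialization inside the 200x200 ROI.

    fg_radius : radius (pixels) of the Fg neighbourhood around the disk centre
    band      : assumed width of the Bg band around the image perimeter
    """
    h, w = roi_shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2       # ROI is concentric with the disk centre
    fg = (yy - cy) ** 2 + (xx - cx) ** 2 <= fg_radius ** 2
    bg = np.zeros((h, w), bool)   # band of pixels around the perimeter
    bg[:band, :] = True
    bg[-band:, :] = True
    bg[:, :band] = True
    bg[:, -band:] = True
    return fg, bg
```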

      2. Optic Disk Segmentation with MRF Image Reconstruction

        The high contrast of blood vessels inside the optic disk presented the main difficulty for its segmentation as it misguides the segmentation through a short path, breaking the continuity of the optic disk boundary.



            where xi and yi are the coordinates of a pixel in the binary image and K is the number of pixels set to 1 (pixels marked as blood vessels). Given the gray-scale intensity of a retinal image, we select the 1% brightest region. The algorithm detects the brightest region with the largest number of pixels to determine the location of the optic disk with respect to the centroid point (right, left, up, or down). The algorithm adjusts the centroid point iteratively until it reaches the vessel convergence point, or the centre of the main arcade (the centre of the optic disk), by reducing the distance from one centroid point to the next in the direction of the brightest region, and correcting the central position inside the arcade accordingly. Fig. 4 shows the process of estimating the location of the optic disk in a retinal image. It is important to note that the vessel convergence point must be detected accurately, since this point is used to automatically mark Fg seeds; a point on the border of the optic disk may result in some false Fg seeds. After the detection of the vessel convergence point, the image is constrained to a region of interest (ROI) including the whole area of the optic disk, to minimize the processing time.
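The centroid computation reduces to averaging the coordinates of the vessel pixels in the pruned binary mask; a minimal sketch, assuming the mask is already given:

```python
import numpy as np

def arcade_centroid(vessel_mask):
    """Centroid of the main vessel arcade in a binary mask.

    vessel_mask : 2-D 0/1 (or boolean) array after morphological pruning,
                  where 1 marks blood-vessel pixels.
    Returns (x_c, y_c) = ((1/K) * sum(x_i), (1/K) * sum(y_i)),
    with K the number of pixels set to 1.
    """
    ys, xs = np.nonzero(vessel_mask)
    K = xs.size
    if K == 0:
        raise ValueError("empty vessel mask")
    return xs.sum() / K, ys.sum() / K
```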


            Fig. 4. Optic disk detection. (a) Original image, (b) greyscale image, (c) binary segmented blood vessels after pruning, (d) sequence of points from the centroid to the vessel convergence point (optic disk location), (e) ROI.



          Fig.5 Optic disk detection. (a) Optic Disk (b) Optic cup (c) Disk and Cup location

          Fig.6 MRF reconstruction applied to retinal images. Top: original gray scale images. Bottom: reconstructed images using the MRF-based method.

          To address this problem, the MRF-based reconstruction method is adapted. We have selected this approach because of its robustness. The objective of our algorithm is to find a best match for the missing pixels in the image; however, one of the weaknesses of MRF-based reconstruction is that it requires intensive computation. To overcome this problem, we have limited the reconstruction to the ROI, using the previously segmented retinal vascular tree. An overview diagram of the optic disk segmentation with the MRF image reconstruction is shown in Fig. 3.

          Let us consider a pixel neighbourhood w(p), defined as a square window of size W centred on pixel p. Let I be the image to be reconstructed, where some of the pixels in I are missing. Our objective is to find the best approximate values for the missing pixels in I. Let d(w1, w2) represent a perceptual distance between two patches that defines their similarity; an exactly matching patch corresponds to d(w', w(p)) = 0. If we define the set of such patches as Ω(p) = {w' ⊂ I : d(w', w(p)) = 0}, the probability density function of p can be estimated with a histogram of all centre pixel values in Ω(p). However, since we are considering a finite neighbourhood for p and the search is limited to the image area, there might not be any exact match for a patch. For this reason, we find a collection of patches whose distance falls between the best match and a threshold. The closest match is calculated as w_best = arg min_{w' ⊂ I} d(w(p), w'). All patches with d(w(p), w') < (1 + ε) d(w(p), w_best) are included in the collection Ω'(p). The distance d(w', w(p)) is defined as the sum of the absolute differences of the intensities between patches, so identical patches result in d(w', w(p)) = 0. Using the collection of patches, we create a histogram of the centre pixel values and select the one with the highest mode. Fig. 6 shows sample results of the reconstruction. The Fg and Bg seeds are initialized in the reconstructed image and then used in the graph cut formulation to segment the optic disk. Similar to Fig. 4, the initialization of the Fg and Bg seeds is performed using the reconstructed image.
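The patch-matching reconstruction can be sketched as a brute-force search. The SAD distance, the (1 + ε) threshold, and the mode selection follow the text, while the window handling and missing-pixel bookkeeping are illustrative assumptions:

```python
import numpy as np

def reconstruct_pixel(image, known, p, W=5, eps=0.1):
    """Fill one missing pixel by MRF-style patch matching (a sketch).

    image : 2-D float array; values at unknown pixels are ignored
    known : 2-D boolean array, True where the pixel value is valid
    p     : (row, col) of the missing pixel to reconstruct
    W     : odd patch size; d(w1, w2) is the sum of absolute differences
            over pixels that are known in both patches
    eps   : patches with d < (1 + eps) * d_best enter the collection
    """
    r = W // 2
    py, px = p
    h, w = image.shape
    target = image[py - r:py + r + 1, px - r:px + r + 1]
    tmask = known[py - r:py + r + 1, px - r:px + r + 1]
    candidates, dists = [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            if (y, x) == (py, px) or not known[y, x]:
                continue
            patch = image[y - r:y + r + 1, x - r:x + r + 1]
            pmask = known[y - r:y + r + 1, x - r:x + r + 1]
            m = tmask & pmask
            if not m.any():
                continue
            d = np.abs(patch[m] - target[m]).sum()   # SAD distance
            candidates.append(image[y, x])
            dists.append(d)
    if not dists:
        raise ValueError("no candidate patches found")
    dists = np.array(dists)
    best = dists.min()
    # collection of near-best patches; return the mode of their centres
    pool = np.array(candidates)[dists < (1 + eps) * best + 1e-12]
    vals, counts = np.unique(pool, return_counts=True)
    return vals[np.argmax(counts)]
```

A practical implementation would restrict the candidate search to the ROI, as the text describes, rather than scanning the whole image.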

          The graph cut algorithm is used to separate the Fg and the Bg by minimizing an energy function over the graph, producing the optimal segmentation of the optic disk in the image. The energy function of the graph consists of regional and boundary terms. The regional term (the likelihoods of Fg and Bg) is calculated from the intensity distributions of the Fg and Bg seeds, while the boundary term (the relationship between neighbouring pixels) is derived from the similarity of neighbouring pixel intensities. A grid of 16 neighbours N is selected to create links between pixels in the image Im. The max-flow algorithm is used to cut the graph and find the optimal segmentation.
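A toy version of the graph cut step can be sketched with a general-purpose max-flow solver. Note that this sketch uses a 4-neighbour grid rather than the paper's 16-neighbour grid, and assumes simple Gaussian seed models for the regional term:

```python
import numpy as np
import networkx as nx

def graphcut_segment(image, fg_mean, bg_mean, sigma=0.1, lam=1.0):
    """Tiny binary graph-cut segmentation sketch.

    Regional t-link weights: negative log-likelihood under per-terminal
    Gaussian intensity models (assumed). Boundary n-link weights:
    similarity of neighbouring pixel intensities.
    """
    h, w = image.shape
    G = nx.DiGraph()
    S, T = "S", "T"

    def nll(val, mean):                  # regional term: -ln Pr(I_p | seeds)
        return (val - mean) ** 2 / (2 * sigma ** 2)

    for y in range(h):
        for x in range(w):
            p = (y, x)
            # t-links: cutting S->p labels p as Bg, cutting p->T labels p as Fg
            G.add_edge(S, p, capacity=nll(image[y, x], bg_mean))
            G.add_edge(p, T, capacity=nll(image[y, x], fg_mean))
            # n-links on a 4-neighbour grid (the paper uses 16 neighbours)
            for q in ((y + 1, x), (y, x + 1)):
                if q[0] < h and q[1] < w:
                    wgt = lam * np.exp(-(image[y, x] - image[q]) ** 2
                                       / (2 * sigma ** 2))
                    G.add_edge(p, q, capacity=wgt)
                    G.add_edge(q, p, capacity=wgt)
    _, (src_side, _) = nx.minimum_cut(G, S, T)   # max-flow / min-cut
    mask = np.zeros((h, w), bool)
    for node in src_side:
        if node != S:
            mask[node] = True                    # source side = foreground
    return mask
```

Dedicated max-flow libraries are far faster than a general graph package, but the construction of t-links and n-links is the same.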

          Fig. 7. Process flow diagram

      3. Optic Disk Segmentation with a Compensation Factor

        In contrast to the MRF image reconstruction, we have incorporated the blood vessels into the graph cut formulation by introducing a compensation factor Vad. This factor is derived using prior information about the blood vessels. The energy function of the graph cut algorithm generally comprises boundary and regional terms. The boundary term is used to assign weights on the edges (n-links) to measure the similarity between neighbouring pixels with respect to the pixel properties (intensity, texture, and colour); therefore, pixels with similar intensities have a strong connection. The regional term defines the likelihood of a pixel belonging to the Bg or the Fg by assigning weights on the edges (t-links) between the image pixels and the two terminals.

        Fig. 8. Optic disk segmentation with the compensation factor Vad method: (a) Vad = 20, (b) Vad = 100, (c) Vad = 150, and (d) Vad = 250.

        In order to incorporate the blood vessels into the graph cut formulation, we derived the t-links as follows:

        TlinkFg(p) = −ln Pr(Ip | Fg seeds),          if p ∉ vessel
        TlinkFg(p) = −ln Pr(Ip | Fg seeds) + Vad,    if p ∈ vessel

        TlinkBg(p) = −ln Pr(Ip | Bg seeds),          for all p

    where p is a pixel in the image, Fg seeds is the intensity distribution of the Fg seeds, Bg seeds is the intensity distribution of the Bg seeds, and Vad is the compensation factor, given as

        Vad = max_{p ∈ vessel} { −ln Pr(Ip | Bg seeds) }.

    The intensity distribution of the blood vessel pixels in the region around the optic disk makes them more likely to belong to the Bg than to the Fg (the optic disk pixels). Therefore, the vessels inside the disk have weak connections with neighbouring pixels, making them likely to be segmented by the graph cut as Bg. To address this behaviour, we add the compensation factor Vad to the Fg t-links of all pixels belonging to the vascular tree. Consequently, vessels inside the optic disk are classified with respect to their neighbourhood connections instead of their likelihood with respect to the terminal Fg and Bg seeds. The segmentation of the disk is affected by the value of Vad: the method achieves poor segmentation results for low values of Vad, and performance improves as Vad increases, until Vad is high enough that the rest of the vessels are segmented as Fg.
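The t-links with the compensation factor can be sketched directly from the formulas above; the function names and the negative-log-likelihood model inputs are illustrative assumptions:

```python
import numpy as np

def fg_tlinks_with_compensation(image, vessel_mask, fg_nll, bg_nll):
    """Sketch of the compensation-factor t-links.

    image         : 2-D intensity array
    vessel_mask   : True where p belongs to the segmented vascular tree
    fg_nll/bg_nll : functions returning -ln Pr(I_p | Fg/Bg seeds) per pixel
    """
    nll_fg = fg_nll(image)
    nll_bg = bg_nll(image)
    # Vad = max over vessel pixels of -ln Pr(I_p | Bg seeds)
    v_ad = nll_bg[vessel_mask].max()
    # Fg t-link: add Vad only for vessel pixels; Bg t-link is unchanged
    tlink_fg = np.where(vessel_mask, nll_fg + v_ad, nll_fg)
    tlink_bg = nll_bg
    return tlink_fg, tlink_bg, v_ad
```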


Fig. 9 (a) and (c) retinal images, (b) and (d) segmentation results.


Fig. 10. Compensation factor method: (a) original image, (b) and (c) detected optic disk.

Fig. 11. Blood vessel output: (a) PDR, (b) NPDR, (c) advanced diabetic eye.


REFERENCES

  1. A. Salazar-Gonzalez, D. Kaba, Y. Li, and X. Liu, "Segmentation of the Blood Vessels and Optic Disk in Retinal Images," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 6, November 2014.

  2. N. Silberman, K. Ahlrich, R. Fergus, and L. Subramanian, "Case for Automated Detection of Diabetic Retinopathy."

  3. en.m.wikipedia.org/wiki/Image_segmentation

  4. "Detection and Classification of Diabetic Retinopathy using Retinal Images."

  5. K. Verma, P. Deep, and A. G. Ramakrishnan, "Segmentation of the Blood Vessels and Optic Disk in Retinal Images."

  6. "Optic Nerve Head Segmentation," February 2004.

  7. "Review of automated diagnosis of diabetic retinopathy using the support vector machine."

  8. "Retinal image analysis using Morphological process and clustering Technique," Signal & Image Processing: An International Journal (SIPIJ), vol. 4, no. 6, December 2013.

  9. en.wikipedia.org/wiki/Cup-to-disc_ratio
