Optic Disk and Fovea Localization in Fundus Retinal Images

DOI : 10.17577/IJERTCONV3IS04050


S. M. Ramya
PG Student,
Parisutham Institute of Technology and Science, Thanjavur, Tamilnadu, India

G. Jayanthi
Asst. Prof., ECE Dept.,
Parisutham Institute of Technology and Science, Thanjavur, Tamilnadu, India

Abstract: Medical image analysis and processing has great significance in the field of medicine and clinical study. The retinal fundus photograph is widely used in the diagnosis and treatment of various eye diseases such as diabetic retinopathy and glaucoma. Computer-aided fundus image analysis can provide an immediate detection and characterization of retinal features prior to specialist inspection. In particular, localization of the optic disk (OD) and the fovea center is a substantial problem in ophthalmic image processing. This paper presents a novel method that uses morphological techniques to detect the OD and Fuzzy C-means clustering (FCM) to detect the fovea center. The input images are the green channels of RGB fundus retinal images, resized to reduce processing time. The resized images are preprocessed to remove noise and non-uniform illumination, and morphological operations are applied to obtain the brightest region, i.e., the OD. The fovea is then localized using a sliding window combined with Fuzzy C-means clustering.

Keywords- Morphological operations, Fuzzy C-means clustering (FCM), optic disk (OD), fovea, sliding window technique

  I. INTRODUCTION

    Fundus images are most commonly used by ophthalmologists to monitor the severity of disease. They are captured using devices called ophthalmoscopes. Normally these images are manually graded by specially trained clinicians, which is a time-consuming and resource-intensive process. A typical retinal fundus image with its features labeled is shown in Fig. 1.

    Fig 1: Retinal Fundus Image

    Diabetic retinopathy (DR) is the most common complication of diabetes and one of the leading causes of blindness. Therefore, early detection of retinopathy is an essential part of diabetes care. In the eye, the optic disk and the fovea center are two vital structures whose appearance reflects the status of the disease [9].

    The optic disc represents the beginning of the optic nerve and is the point where the axons of retinal ganglion cells come together. The optic disc is also the entry point for the major blood vessels that supply blood to the retina. There are no light-sensitive rods or cones at this point to respond to a light stimulus, which causes a break in the visual field called "the blind spot". A pale disc is an optic disc whose colour has shifted from the normal pale pink or orange towards white, and it is an indication of a disease condition.

    The fovea is located in the center of the macula region of the retina. It is responsible for sharp central vision (also known as foveal vision), which is necessary for reading, driving, and any activity where visual detail is of primary importance. The fovea is small compared with the rest of the retina, but it is very important for seeing fine detail and color. The foveal zone is usually approximated by a circle of diameter 400 microns. The central region, called the macula, is a circular area measuring about 4 to 5 mm in diameter; the fovea is a small depression at its centre.

    II. RELATED WORKS

    1. Optic Disc Segmentation Based On Watershed Transform

      In [2], PCA is applied to the RGB fundus image in order to obtain a grey image in which the different structures of the retina, such as vessels and the OD, are differentiated more clearly, giving a more accurate detection of the OD. The vessels are then removed by morphological techniques to make the segmentation task easier. A variant of the watershed transformation, the stochastic watershed transformation, followed by a stratified watershed, is applied to a region of the original image. Finally, it must be decided which of the obtained watershed regions belong to the optic disc and which do not; a geodesic transformation and a further threshold are used for that purpose.

      The algorithm is fully automatic, so the process is speeded up and user intervention is avoided. Moreover, the method provides robustness at each processing step: i.) it is independent of the database used; ii.) it employs the grey-image centroid as the initial seed, so that not only the pixel intensity is taken into account; iii.) it makes use of the stochastic watershed in order to avoid sub-segmentation problems.

      The classical watershed transform of a gradient produces many catchment basins, each corresponding to a minimum of the gradient. These minima are produced by small variations in the grey values, mainly due to noise.

    2. Segmentation of Blood Vessels and Optic Disc in Retinal Images

      This method [4] initially extracts the retina vascular tree using the graph cut technique. The blood vessel information is then used to estimate the location of the optic disc. The optic disc segmentation is performed using two alternative methods. i.) Markov Random Field (MRF) image reconstruction method and ii.) Compensation Factor method

      Markov Random Field (MRF) image reconstruction method: the objective of the algorithm is to find the best match for some missing pixels in the image; however, one of the weaknesses of MRF-based reconstruction is its intensive computation. To overcome this problem, the reconstruction is limited to the region of interest (ROI), which is determined using the previously segmented retinal vascular tree.

      Compensation factor method: it incorporates the blood vessels into the graph cut formulation by introducing a compensation factor Vad, derived from prior blood-vessel information. The segmentation of the disc is affected by the value of Vad: the method achieves poor segmentation results for low values of Vad, but as Vad increases the performance improves, until Vad is high enough that the rest of the vessels are segmented as foreground.

    3. Automatic detection of optic disc based on PCA and stochastic watershed

      An automatic method [12] to detect the optic disc is presented. It is focused on using stochastic watershed transformation on a fundus image to obtain the optic disc contour. Pre-processing of the original RGB image is required.

      Pre-processing consists of applying PCA to transform the input image to grey scale. This technique combines the most significant information of the three components RGB in a single image so that it is a more appropriate input to

      the segmentation method. Mathematical morphology is a non-linear image processing methodology based on minimum and maximum operations whose aim is to extract relevant structures of an image. The watershed transformation is a powerful segmentation method whenever the minima of the image represent the objects of interest and the maxima are the separation boundaries between objects; for this reason, its input is usually a gradient image. Once the region of interest has been obtained by the watershed transformation, the result must be adjusted to eliminate false contours, which are generally due to the blood vessels that pass through the OD. A final OD approximation is usually done with an ellipse or circle. The fit is performed by means of Kasa's method, which computes by least squares the center and radius of the circle that best fits a binary region.

      In statistics, PCA is a method for simplifying a multidimensional dataset to lower dimensions for analysis, visualization or data compression. The price to be paid for PCA's flexibility is higher computational requirements compared with, e.g., the fast Fourier transform.
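      For illustration, a minimal Python/NumPy sketch of this idea (our own construction, not the cited authors' code) projects each pixel's RGB value onto the first principal component of the colour distribution to obtain a single grey image:

```python
# Hedged sketch: collapse the three RGB planes into one grey image via PCA.
import numpy as np

def pca_grey(rgb):
    pixels = rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)                      # centre the colour distribution
    _, vecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
    first_pc = vecs[:, -1]                             # eigenvector with the largest eigenvalue
    grey = pixels @ first_pc                           # project every pixel onto it
    grey = (grey - grey.min()) / (grey.max() - grey.min() + 1e-12)  # rescale to [0, 1]
    return grey.reshape(rgb.shape[:2])
```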

    4. An automatic screening method to detect Optic disc in the retina

      OD detection helps ophthalmologists determine whether a patient is affected by diabetic retinopathy. The proposed technique uses a line operator, which gives a higher percentage of detection than existing methods.

      The method [10] starts by converting the RGB input image into its LAB components. (The technique makes use of the circular brightness structure of the OD; the lightness component of the retinal image is extracted first. The lightness component of the LAB color space is used, where OD detection usually performs best.) This image is smoothed using a bilateral smoothing filter. (A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter: the intensity value at each pixel is replaced by a weighted average of intensity values from nearby pixels.) Filtering is then carried out using the line operator (to detect circular regions that have brightness structures similar to the OD). After this, grey orientation and binary map orientation are computed, and the resulting maximum image variation is used to find the area where the OD is present. The portions other than the OD are blurred using 2-D circular convolution. By applying steps such as peak classification, concentric circle design and image difference calculation, the OD is detected.

      The bilateral filter in its direct form can introduce several types of image artifacts:

      • Staircase effect – intensity plateaus that lead to images appearing like cartoons

      • Gradient reversal – introduction of false edges in the image

  III. PROPOSED SYSTEM

The proposed system proceeds through the following stages: the retinal image is read and its green channel is separated; preprocessing (resizing and median filtering) is applied; morphological operations locate the OD; a sliding window selects the fovea region; and FCM locates the fovea center.

Figure 2: Block diagram of the proposed system

  1. Detection of OD:

    i.) Separation of green channel

      The optic disc and fovea center are most prominent in the green layer of the retinal intensity image, and the blood vessels can also be easily identified in the green layer. Therefore, the green channels of the input images are used.

      An RGB image, sometimes referred to as a truecolor image, is stored as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. RGB images do not use a palette. The color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel's location. Graphics file formats store RGB images as 24-bit images, where the red, green, and blue components are 8 bits each. The three color components for each pixel are stored along the third dimension of the data array. For example, the red, green, and blue color components of the pixel (10,5) are stored in RGB(10,5,1), RGB(10,5,2), and RGB(10,5,3), respectively.

      ii.) Preprocessing

      Resizing: To standardize the retinal size, all input images are resized to have a retinal diameter D_FOV (diameter of the field of view) of 300 pixels. This gives a drastic reduction in computational time.

      Median filter: A median filter is then applied to the resized green-channel image to remove noise (noisy pixels appearing as outliers) introduced by the digital fundus camera or other sources. The median filter erases isolated black dots ("pepper") and fills in isolated white holes ("salt"), i.e., impulse noise. It is similar to the mean filter but better at (1) preserving sharp edges and (2) replacing a pixel with a value actually present in its neighborhood, so a single outlier does not significantly affect the result, which the mean filter does not guarantee.
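      As an illustration, a minimal Python/OpenCV sketch of this preprocessing stage could look as follows; the function name and the assumption that the field of view spans the image width are ours, not the paper's:

```python
# Hedged preprocessing sketch: green channel, resize to ~300 px FOV, median filter.
import cv2

def preprocess(path, fov_diameter=300):
    bgr = cv2.imread(path)                    # OpenCV loads images in BGR order
    green = bgr[:, :, 1]                      # green channel (index 1 in BGR as in RGB)
    scale = fov_diameter / green.shape[1]     # assumes the FOV spans the image width
    green = cv2.resize(green, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    return cv2.medianBlur(green, 3)           # 3x3 median filter removes impulse noise
```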

      iii.) Morphological operations: The OD may be distinguished in eye fundus images as a rounded shape with high intensity values (see Fig. 1). In order to locate the OD by exploiting these features, morphological processing is performed to obtain a smoothed image in which the largest bright region is enhanced. Specifically, a set of morphological opening and closing operations is iteratively applied four times to the green channel of the retinal image [5].

      Opening:

      The opening of an image f by a structuring element s (denoted f ∘ s) is an erosion followed by a dilation. Opening is so called because it can open up a gap between objects connected by a thin bridge of pixels.

      The basic effect of an opening is somewhat like erosion, in that it tends to remove some of the foreground (bright) pixels from the edges of foreground regions; however, it is less destructive than erosion in general. As with other morphological operators, the exact operation is determined by a structuring element: the operator preserves foreground regions that have a similar shape to the structuring element, or that can completely contain it, while eliminating all other foreground regions.

      Closing:


      The closing of an image f by a structuring element s (denoted f • s) is a dilation followed by an erosion. Closing is so called because it can fill holes in regions while keeping the initial region sizes.

      Closing is similar in some ways to dilation, in that it tends to enlarge the boundaries of foreground (bright) regions in an image (and shrink background-coloured holes in such regions), but it is less destructive of the original boundary shape. As with other morphological operators, the exact operation is determined by a structuring element: the operator preserves background regions that have a similar shape to the structuring element, or that can completely contain it, while eliminating all other background regions.

      The combination of opening and closing operations yields a smoothed image in which the brightest region corresponds to the OD. The selected structuring element is a disk whose radius r increases at each iteration (r = 2, 4, 6, 8), so small bright structures that may appear in the fundus image (e.g., hard exudates) are progressively removed. The brightest region can be assumed to belong to the OD; thus the pixels with the highest intensity are selected as potential OD pixel candidates, and the OD is finally located at the centroid of this set of pixels.
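      A hedged sketch of this OD localization step is given below; the structuring-element sizes follow the radii stated above, while the intensity percentile used to pick the brightest pixels is our own illustrative choice.

```python
# Hedged sketch: iterative opening/closing with growing disk elements,
# then the centroid of the brightest pixels as the OD location.
import cv2
import numpy as np

def locate_od(green):
    smoothed = green.copy()
    for r in (2, 4, 6, 8):
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
        smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, se)   # remove thin bright structures (vessels, small exudates)
        smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, se)  # fill dark gaps inside bright regions
    thresh = np.percentile(smoothed, 99.5)        # candidate OD pixels = highest intensities (percentile is our assumption)
    ys, xs = np.nonzero(smoothed >= thresh)
    return int(xs.mean()), int(ys.mean())          # centroid (column, row) of the candidate pixels
```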

  2. Detection of fovea:

We use Fuzzy C-means clustering as the technique for detecting the fovea, where each data point can belong to more than one cluster.

Once the center and boundary of the OD are found, the diameter d of the optic disk is computed. A point Q is then located at a distance 2.5 * d from the optic disk center, towards the left if the OD center is in the right half of the image and towards the right if it is in the left half. The fovea is localized using a sliding window technique and the Fuzzy C-means clustering algorithm [15].

A small sliding window centred at the point Q is taken, and a small region belonging to the fovea is identified. Features such as the intensity value, orientation and spatial connectivity of each pixel inside the window are extracted to form a feature set. FCM is applied to this feature set to separate the fovea pixels, and the centroid of the fovea pixels gives the fovea center. Both the accuracy and the complexity of FCM increase with the number of features selected, so there is a trade-off between complexity and accuracy of separation. Here, features such as orientation and spatial connectivity are used.
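The following sketch illustrates how the search window around Q might be selected and a per-pixel feature set formed. The window size and the simplified feature set (intensity plus spatial coordinates, rather than the orientation and connectivity features used in the paper) are our assumptions.

```python
# Hedged sketch: place Q at 2.5*d from the OD centre towards the macula,
# crop a small window and build a simple per-pixel feature set.
import numpy as np

def fovea_window(green, od_center, od_diameter, win=40):
    cx, cy = od_center
    offset = int(2.5 * od_diameter)
    # move towards the image centre: left if the OD is in the right half, else right
    qx = cx - offset if cx > green.shape[1] // 2 else cx + offset
    qy = cy
    half = win // 2
    window = green[max(qy - half, 0):qy + half, max(qx - half, 0):qx + half]
    rows, cols = np.indices(window.shape)
    # feature set: intensity plus (row, col) position of every pixel in the window
    features = np.stack([window.ravel(), rows.ravel(), cols.ravel()], axis=1).astype(float)
    return (qx, qy), window, features
```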

The algorithm works by assigning to each data point a membership in each cluster on the basis of the distance between the cluster center and the data point: the nearer a data point is to a cluster center, the higher its membership in that cluster. Clearly, the memberships of each data point sum to one. After every iteration, the memberships and cluster centers are updated according to the formulae given below.

Fuzzy C-means clustering algorithm:

1. Initialize the partition matrix U = [u_ij] as U(0).

2. From the partition matrix, compute the centroid of each cluster:

   c_j = sum_i (u_ij^m * x_i) / sum_i (u_ij^m)

   where m is the fuzziness exponent, whose value must be greater than 1. By adjusting the value of m, the error can be minimized, where the error (objective function) is the difference between two successive partition matrices.

3. Update the partition matrix from U(k) to U(k+1):

   u_ij = 1 / sum_l ( ||x_i - c_j|| / ||x_i - c_l|| )^(2/(m-1))

4. If ||U(k+1) - U(k)|| < ε, then STOP; otherwise return to step 2.

In our work ε is used as min_impro and is set to 1e-8; the smaller the value of ε, the higher the accuracy.
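A plain-NumPy sketch of this iteration is shown below (m = 2 and min_impro = 1e-8 as stated above; the cluster count and random initialization are illustrative choices, not taken from the paper).

```python
# Hedged FCM sketch: alternate centroid and membership updates until the
# change in the partition matrix falls below min_impro.
import numpy as np

def fcm(X, c=2, m=2.0, eps=1e-8, max_iter=100):
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(c), size=n)            # step 1: random partition matrix U(0)
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # step 2: centroids from the partition matrix
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)           # step 3: membership update
        if np.abs(U_new - U).max() < eps:                   # step 4: stop when change < min_impro
            return centers, U_new
        U = U_new
    return centers, U
```

Applied to the feature set extracted from the sliding window, the cluster whose pixels are darkest could be taken as the fovea region and its centroid as the fovea centre; this interpretation step is our assumption, stated here only to connect the sketch to the text.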


Fig 3: (a-f) Process of Fuzzy C-means clustering

IV. RESULTS AND DISCUSSION

  1. Test Measures:

    Three metrics are applied to quantitatively evaluate the performance of the proposed system: True Positive Rate (TPR), False Positive Rate (FPR), and Accuracy. In the scope of information retrieval, a contingency table involving the true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) is given below.

    Actual \ Predicted     Positive                 Negative
    Positive               True positive (TP)       False negative (FN)
    Negative               False positive (FP)      True negative (TN)

    Table 1: Confusion matrix representation

    Some pixels belong to the OD region and are tested as OD pixels; they are called true positives (TP). Some pixels are OD pixels, but the test claims they are not; they are called false negatives (FN). Some pixels do not belong to the OD region and the test says they do not; these are true negatives (TN). Finally, some pixels are non-OD pixels but the test says they belong to the OD; they are called false positives (FP). Thus, the numbers of true positives, false negatives, true negatives, and false positives add up to 100% of the set.

    True Positive Rate:

    Also known as recall, it is the proportion of pixels that tested positive (TP) out of all pixels that actually are positive (TP + FN). It can be seen as the probability that the test is positive given that the pixel is an OD pixel. With higher sensitivity, fewer actual OD pixels go undetected.

    False Positive Rate:

    The calculation also involves the false positive values (FPs); the FP values indicate the number of non-OD pixels that are incorrectly identified as OD pixels. Hence, the FPR is the proportion of negative instances (non-OD pixels) that are erroneously reported as positive (OD pixels).

    Accuracy:

    Accuracy measures the fraction of all instances that are correctly classified. It is the ratio of the number of correct classifications to the total number of classifications, correct or incorrect.

    TPR = TP / (TP + FN)

    FPR = FP / (FP + TN)

    Accuracy = (TP + TN) / (TP + TN + FP + FN)
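    For example, the three measures can be computed from pixel-level OD masks as follows (the boolean mask names `pred` and `truth` are hypothetical, used only for illustration):

```python
# Example computation of TPR, FPR and Accuracy from boolean OD masks.
import numpy as np

def od_metrics(pred, truth):
    tp = np.sum(pred & truth)        # OD pixels detected as OD
    tn = np.sum(~pred & ~truth)      # non-OD pixels detected as non-OD
    fp = np.sum(pred & ~truth)       # non-OD pixels detected as OD
    fn = np.sum(~pred & truth)       # OD pixels missed
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return tpr, fpr, acc
```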

    ROC Plot and ROC Area:

    • Developed to statistically model false positive and false negative detections of radar operators

    • Standard measure in medicine and biology

    • Properties of the ROC area:

    • 1.0: perfect prediction

    • 0.9: excellent prediction

    • 0.8: good prediction

    • 0.7: mediocre prediction

    • 0.6: poor prediction

    • 0.5: random prediction

    • <0.5: something wrong!

    Fig 4: Separation of RGB planes

    Fig 5: Morphological operations

    Fig 6. Location of optic disk

    Fig 7. Selection of Fovea region

    Fig 8: Location of fovea

    For the purpose of testing, we have randomly taken 20 images from each of the two databases, DRIVE and DIARETDB1.

    TABLE I. PERFORMANCE MEASUREMENTS FOR DIARETDB1 DATABASE

    For each retinal image, we measure the three parameters TPR, FPR and accuracy, and finally compute the average TPR, FPR and accuracy.

    DIARETDB1 database   TPR      FPR      Accuracy
    image050             0.9343   0.0983   0.9991
    image052             0.9951   0.1479   0.9992
    image053             0.9747   0.0905   0.9992
    image054             0.9912   0.0012   0.9987
    image055             0.9258   0.0611   0.9984
    image056             1        0.0033   0.9967
    image060             0.9429   0.0015   0.9984
    image061             0.959    0.0031   0.9965
    image062             0.974    0.0902   0.9993
    image066             0.9455   0.003    0.9965
    image070             0.9597   0.0024   0.9974
    image074             0.9878   0.0041   0.9958
    image075             0.9921   0.0068   0.9932
    image076             0.9259   0.0029   0.9969
    image078             0.9236   0.0033   0.9957
    image079             0.9588   0.0021   0.9976
    image080             0.9466   0.0013   0.9981
    image081             0.9781   0.0985   0.9993
    image082             0.9635   0.0985   0.9992
    image083             0.9663   0.0012   0.9985
    Average              0.9622   0.0361   0.998


    TABLE II. PERFORMANCE MEASUREMENTS FOR DRIVE DATABASE

    DRIVE database   TPR      FPR       Accuracy
    01_dr            0.9786   0.0054    0.9941
    01_g             0.9661   0.0065    0.9931
    02_dr            0.9365   0.0064    0.9925
    03_g             0.9796   0.01      0.9896
    03_h             0.9809   0.0111    0.9885
    04_g             1        0.0077    0.9924
    04_h             0.9793   0.0039    0.9953
    05_g             1        0.0059    0.9942
    05_h             0.9381   0.0042    0.9944
    06_dr            0.9884   0.0143    0.9858
    06_g             0.9449   0.0034    0.9933
    06_h             0.9686   0.0105    0.9889
    07_h             0.9756   0.0085    0.9908
    08_h             0.9606   0.0051    0.9941
    09_h             0.9688   0.0102    0.9889
    10_g             0.9858   0.0071    0.9927
    10_h             0.9905   0.0041    0.9958
    11_dr            0.9817   0.0062    0.9934
    13_g             0.9829   0.0029    0.9966
    15_g             0.941    0.004     0.9907
    Average          0.9724   0.00687   0.9922

    Fig 9 : ROC curve

    Since our ROC curve area is above 0.9, the proposed method provides excellent prediction.

    TABLE III. COMPARATIVE ANALYSIS

    Technique used                                                Database used   TPR      FPR      Accuracy
    Multi-level Otsu's thresholding [1]                           DRIVE           99.91%   43.25%   98.83%
                                                                  DIARETDB1       99.90%   43.11%   99.20%
    MRF image reconstruction [3]                                  DRIVE           75.12%   3.16%    94.12%
    Matched filter with first-order derivative of Gaussian [16]   DRIVE           71.20%   2.76%    93.86%
                                                                  DIARETDB1       71.66%   3.27%    94.39%
    Threshold probing of a matched filter response [7]            DIARETDB1       67.36%   5.28%    92.11%
    Proposed method                                               DRIVE           97.24%   0.687%   99.22%
                                                                  DIARETDB1       96.22%   3.61%    99.8%

    V. CONCLUSION

    This project presents a method to detect the optic disk and fovea region of the eye. Our system performed well with images of the DRIVE and DIARETDB1 databases. These two databases comprise a diverse and realistic set of images in terms of intra- and inter-image variations and the presence of lesions. The OD detection results of the system in terms of average True Positive Rate (TPR), False Positive Rate (FPR) and accuracy are 97.24%, 0.687% and 99.22% for the DRIVE database, and 96.22%, 3.61% and 99.80% for the DIARETDB1 database, respectively. The future scope of this project is to authenticate persons using blood-vessel intersection points as an identification feature, as these vary from one person to another.

    ACKNOWLEDGEMENT

    I would like to thank my guide Mrs. G. Jayanthi, Asst. Prof., Electronics and Communication Engineering Department, Parisutham Institute of Technology and Science, Thanjavur, for her help and guidance in proposing this system.

    REFERENCES

    1. Bahadir Karasulu, "An Automatic Optic Disk Detection and Segmentation System using Multi-level Thresholding," Advances in Electrical and Computer Engineering.

    2. Amandeep Kaur & Reecha Sharma, 2014, "Optic Disc Segmentation Based On Watershed Transform," International Journal of Advanced Research in Computer Science & Software Engineering. http://www.ijarcsse.com

    3. Amin Dehghani et al., 2012, "Optic disc localization in retinal images using histogram matching," EURASIP Journal on Image and Video Processing. http://jivp.eurasipjournals.com/content/2012/1/9

    4. Ana Salazar-Gonzalez, Djibril Kaba, Yongmin Li & Xiaohui Liu, 2014, "Segmentation of Blood Vessels and Optic Disc in Retinal Images," IEEE Journal of Biomedical and Health Informatics.

    5. Angel Suero, 2013, "Locating the Optic Disc in Retinal Images Using Morphological Techniques," IWBBIO Proceedings, Granada.

    6. Barriga S, Agurto C, Echegaray S, Pattichis M, Zamora G, Bauman W & Soliz P, "Fast Localization of Optic Disc and Fovea in Retinal Images for Eye Disease Screening."

    7. Hoover A, Kouznetsova V & Goldbaum M, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response," IEEE Transactions on Medical Imaging.

    8. Kaba D, Salazar-Gonzalez A G, Li Y, Liu Y & Serag A, 2013, "Segmentation of retinal blood vessels using Gaussian mixture models and expectation maximisation," in Health Information Science.

    9. Kavitha K & Malathi M, 2014, "Optic Disc and Optic Cup Segmentation for Glaucoma Classification," International Journal of Advanced Research in Computer Science & Technology (IJARCST).

    10. Murugan R & Reeba Korah, 2014, "An automatic screening method to detect Optic disc in the retina," International Journal of Advanced Information Technology (IJAIT).

    11. Rashid Jalal Qureshi, Laszlo Kovacs, Brigitta Nagy, Balazs Harangi & Andras Hajdu, 2012, "Automatic Detection of the Fovea and Optic Disk in Digital Retinal Images by Combining Algorithms."

    12. Sandra Morales, Valery Naranjo, David Perez, Amparo Navea & Mariano Alcaniz, 2012, "Automatic detection of optic disc based on PCA and stochastic watershed," European Signal Processing Conference.

    13. Soumitra Samanta, Sanjoy Kumar Saha & Bhabatosh Chanda, "A Simple and Fast Algorithm to Detect the Fovea Region in Fundus Retinal Image."

    14. Thomas Fuhrmann & Andreas Uhl, "Doubts about the Usefulness of Retina Codes in Biometric Recognition."

    15. Varalekshmi V R & Janardhanan P, "Detection of fovea region in fundus retinal image using wavelet transform and fuzzy c-means clustering."

    16. Zhang B, Zhang L & Karray F, 2010, "Retinal vessel extraction by matched filter with first-order derivative of Gaussian," Computers in Biology and Medicine.

    17. Yu H, Barriga S, Agurto C, Echegaray S, Pattichis M, Zamora G, Bauman W & Soliz P, "Fast Localization of Optic Disc and Fovea in Retinal Images for Eye Disease Screening."

    18. Yu H, "Fast Localization and Segmentation of Optic Disk in Retinal Images Using Directional Matched Filtering and Level Sets," IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 4, July 2012.

    19. Garway-Heath D F, "Measurement of optic disc size: equivalence of methods to correct for ocular magnification," British Journal of Ophthalmology (bjo.bmj.com).

    20. Siddalingaswamy P C & Gopalakrishna Prabhu K, 2010, "Automatic Localization and Boundary Detection of Optic Disc Using Implicit Active Contours," International Journal of Computer Applications.

BIOGRAPHY

S. M. Ramya received the B.E. (ECE) degree from Anjalai Ammal Mahalingam Engineering College, Tiruvarur, Tamilnadu, India, affiliated to Anna University Chennai, in 2013. She is currently pursuing the M.E. in Communication Systems at Parisutham Institute of Technology and Science, Thanjavur, Tamilnadu, India (affiliated to Anna University Chennai).
