A Survey To Increase Usability Of Images By Removing Red Eye

DOI : 10.17577/IJERTV2IS4925


Swati Deshmukh, M.E. (IT), Sipna COET, Amravati

Dr. A. D. Gawande, Sipna COET, Amravati

Prof. A. B. Deshmukh, Sipna COET, Amravati

Abstract

The red-eye effect is a common problem in photographs. It occurs when a photographic flash is used very close to the camera lens, as in most compact cameras, under low ambient light. The effect appears in the eyes of humans and of animals that have a tapetum lucidum. It can be removed with software available on the market, but most such tools are manual. The work proposed here corrects images without human effort: it detects red eye in a photograph and corrects it automatically in a proper manner. Three steps are used for correction: first the face is detected, then the red eye is detected within the face, and finally the red eye is corrected.

Keywords: tapetum lucidum, red eye, red detection, red eye correction.

  1. Introduction

    The light of the flash occurs too fast for the pupil to close, so much of the very bright light from the flash passes into the eye through the pupil, reflects off the fundus at the back of the eyeball, and passes back out through the pupil. The camera records this reflected light. The main cause of the red color is the ample amount of blood in the choroid, which nourishes the back of the eye and is located behind the retina. The blood in the retinal circulation is far less than in the choroid and plays virtually no role. The eye contains several photostable pigments that all absorb in the short-wavelength region and hence contribute somewhat to the red-eye effect [1]. The lens cuts off deep blue and violet light below 430 nm (depending on age), and macular pigment absorbs between 400 and 500 nm, but this pigment is located exclusively in the tiny fovea. Melanin, located in the retinal pigment epithelium (RPE) and the choroid, shows a gradually increasing absorption towards the short wavelengths. Blood, however, is the main determinant of the red color, because it is completely transparent at long wavelengths and starts absorbing abruptly at 600 nm. The amount of red light emerging from the pupil depends on the amount of melanin in the layers behind the retina, and this amount varies strongly between individuals. Light-skinned people with blue eyes have relatively little melanin in the fundus and thus show a much stronger red-eye effect than dark-skinned people with brown eyes; the same holds for animals. The color of the iris itself is of virtually no importance for the red-eye effect. This is obvious because the red-eye effect is most apparent when photographing dark-adapted subjects, i.e., with fully dilated pupils. Photographs taken with infra-red light through night-vision devices always show very bright pupils because, in the dark, the pupils are fully dilated and the infra-red light is not absorbed by any ocular pigment. Fig. 1 shows light reflecting from the eye.

    Fig. 1 Light reflecting from eye.

    When such light is reflected from the eye, the pupil appears red in the image, which makes the image useless. The red-eye effect can be prevented in a number of ways [2]. One option is bounce flash, in which the flash head is aimed at a nearby pale-colored surface such as a ceiling or wall, or at a specialist photographic reflector; this both changes the direction of the flash and ensures that only diffused flash light enters the eye. Placing the flash away from the camera's optical axis ensures that the light from the flash hits the eye at an oblique angle: the light enters the eye in a direction away from the optical axis of the camera and is refocused by the eye lens back along the same axis, so the retina is not visible to the camera and the eyes appear natural. Pictures can also be taken without flash by increasing the ambient lighting, opening the lens aperture, using a faster film or detector, or reducing the shutter speed. Many modern cameras have built-in red-eye reduction, which precedes the main flash with a series of short, low-power flashes, or a continuous bright light, that triggers the pupil to contract. Other measures include having the subject look away from the camera lens, photographing subjects wearing contact lenses with UV filtering, and increasing the lighting in the room so that the subject's pupils are more constricted.

    If direct flash must be used, a good rule of thumb is to separate the flash from the lens by 1/20 of the distance from the camera to the subject. For example, if the subject is 2 meters (6 feet) away, the flash head should be at least 10 cm (4 inches) away from the lens. Professional photographers prefer to use ambient light or indirect flash, since the red-eye reduction system does not always prevent red eyes, for example if people look away during the pre-flash. In addition, people do not look natural with small pupils, and direct lighting from close to the camera lens is considered to produce unflattering photographs.

    Here three main steps are used to remove red eye. The first step is to detect the face in the image; face detection is the most difficult task and is done here using the effective Viola-Jones face detection algorithm. Once the face is detected, the eyes are detected within the detected face, and the red eye is then removed using an in-painting method.
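    A minimal sketch of this three-step pipeline is given below, assuming OpenCV's bundled Haar cascades for the face and eye detectors and OpenCV's built-in inpaint() as a stand-in for the in-painting stage; the redness threshold and the example file name are illustrative assumptions, not values from this work.

# Sketch: face detection -> eye detection -> red-eye correction by in-painting.
# Haar cascades and cv2.inpaint are stand-ins; thresholds are illustrative.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def remove_red_eye(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    out = image_bgr.copy()
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face_gray = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_gray, 1.1, 5):
            eye = out[fy + ey:fy + ey + eh, fx + ex:fx + ex + ew]  # view into out
            ch = eye.astype(np.int32)
            b, g, r = ch[:, :, 0], ch[:, :, 1], ch[:, :, 2]
            # Flag pixels where red clearly dominates green and blue.
            mask = ((r > 100) & (r > (g + b) // 2)).astype(np.uint8) * 255
            mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
            # Fill the flagged pixels from the surrounding iris texture.
            eye[:] = cv2.inpaint(eye, mask, 3, cv2.INPAINT_TELEA)
    return out

# Example: corrected = remove_red_eye(cv2.imread("photo_with_red_eye.jpg"))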

  2. Previous Work

    M. Gaubatz and R. Ulichney [1], in their work "Automatic red-eye detection and correction", presented a method to automatically detect and correct red eye in digital images. First, faces are detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region. The masks are created by thresholding per-pixel metrics designed to detect red-eye artifacts. Once the red-eye pixels have been found, the redness is attenuated with a tapered color desaturation. In their experiments, the face detector returned only one false-positive non-face, and the red-eye detector found no red eye in this non-face region. In addition, the red-eye detector produced only one false-positive red-eye detection. Most of the artifacts missed by the system occurred in very small faces, which are often in the background. Because the detection algorithm is flexible, performance can be improved by adding a metric tailored for smaller images.

    R. Schettini, F. Gasparini, and F. Chazli [2], in "A modular procedure for automatic redeye correction in digital photos", used an adaptive color-cast algorithm to correct the color of the photo. This phase not only facilitates the subsequent processing but also improves the overall appearance of the output image. A multi-resolution neural network approach is used for locating candidate faces, and the search space is reduced by using information about skin and face distribution. The overall performance of this method could be improved by increasing the efficiency of the face detector and by introducing more geometric constraints.

    Huitao Luo, Jonathan Yen, and Dan Tretter [4], in "An Efficient Automatic Redeye Detection and Correction Algorithm", used AdaBoost to simultaneously select features and train the classifier. A new feature set is designed to address the orientation-dependency problem associated with the Haar-like features commonly used in object detection. For each detected red eye, a correction algorithm applies adaptive desaturation and darkening over the red-eye region. In their work, the verification classifiers were trained in two stages: a single-eye verification stage and a pairing verification stage. AdaBoost [1] is used to train the classifiers because of its ability to select relevant features from a large number of object features. This is partially motivated by the face detection work of Viola and Jones [5]. However, in comparison to that work, their contributions come in three aspects. First, in addition to grayscale features, their detection algorithm utilizes color information by exploring effective color-space projection and conversion in designing object features. Second, they design a set of non-orientation-sensitive features to address the orientation-sensitivity problem associated with the Haar-like features used by Viola and Jones. Third, their algorithm uses not only Haar-like rectangle features but also features with different semantics, such as object aspect ratio and percentage of skin-tone pixels.

    The red-eye removal system proposed in [4] contains two steps: red-eye detection and red-eye correction. The detection step contains three modules: initial candidate detection, single-eye verification, and pairing verification. Among them, initial candidate detection is a fast processing module designed to find all the red oval regions that could possibly be red eyes. The single-eye verification module verifies the red-eye candidates using various object features and eliminates many candidate regions corresponding to false alarms. Pairing verification further verifies the remaining red-eye candidates by grouping them into pairs.
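    As an illustration of what an initial candidate detection module can look like, the sketch below thresholds a simple per-pixel redness score and keeps roughly round connected components as candidates; the score, threshold, and area limits are assumptions for illustration and are not the exact metrics used in [4].

# Illustrative initial-candidate detection: threshold a per-pixel redness score,
# then keep roughly round connected components as red-eye candidates.
# The score and limits are assumptions, not the exact metrics of [4].
import cv2
import numpy as np

def red_eye_candidates(image_bgr, threshold=2.0, min_area=20):
    img = image_bgr.astype(np.float32) + 1.0          # +1 avoids division by zero
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    redness = r / (g + b)                             # high where red dominates
    mask = (redness > threshold).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    candidates = []
    for i in range(1, n):                             # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area and 0.5 <= w / float(h) <= 2.0:   # roughly oval/round
            candidates.append((int(x), int(y), int(w), int(h)))
    return candidates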

      Jutta Willamowski and Gabriela Csurka [5], in "Probabilistic Automatic Red Eye Detection and Correction", use a contrasting approach: their method does not require face detection and thus enables the correction of red eyes located on faces that are difficult to detect. A possible drawback is that it detects red-eye candidates that do not correspond to real eyes. However, their method was able to reject, or to assign a low probability to, most of these locations, which avoids introducing damaging artifacts. Some existing approaches use learning-based red-eye detection methods; such methods rely on the availability of a representative training set.

      In S. Ioffe, "Red eye detection with machine learning" (ICIP 2003), and L. Zhang, Y. Sun, M. Li, and H. Zhang, "Automated Red-Eye Detection and Correction in Digital Photographs" (ICIP 2004), this training set has to be collected manually, and the eyes properly cropped and aligned. Similarly, during detection or verification, the candidate patches or test images have to be tested at varying size and orientation. In their approach, the training set is instead constituted automatically through the initial candidate detection step; it only requires labeling the detected candidates as red eyes or non-red eyes. This has the advantage of concentrating the learning step on the differences between true- and false-positive patches. In contrast to H. Luo, J. Yen, and D. Tretter, "An Efficient Automatic Redeye Detection and Correction Algorithm" (ICPR 2004), they introduce an additional distinction between false positives on faces and on background. The major advantage of their approach resides in combining probabilistic red-eye detection with soft red-eye correction. Most previous approaches adopt a hard yes/no decision in the end and apply either no correction or the maximal possible correction. In difficult cases, such hard approaches make significant mistakes, completely missing certain red eyes or introducing disturbing artifacts on non-red-eye regions, and their correction is often unnatural, e.g. leaving a reddish ring around the central corrected part of an eye. To obtain a more natural correction, [5] introduces softness by blurring the detected red-eye region.

      F. Volken, J. Terrier, and P. Vandewalle [6], in "Automatic red-eye removal based on sclera and skin tone detection", used the basic knowledge that an eye is characterized by its shape and by the white color of the sclera. Combining this intuitive approach with the detection of skin around the eye, they obtain a higher success rate than most existing tools. Moreover, their algorithm works for any type of skin tone. Further work is oriented towards improving the overall quality of the correction; it would be interesting to address the problems encountered for people with glasses and to study more natural correction methods.

      R. Ulichney and M. Gaubatz [8], in "Perceptual-based correction of photo red-eye", presented a brief overview of facial image processing techniques. Since the original pupil color of a subject is often unrecoverable, a simple chrominance desaturation effectively removes the red hue from the artifact pixels. This is combined with a fully automated procedure designed to minimize intrusive effects associated with pixel re-coloration.

  3. Proposed Work

    This work proposes a red-eye removal algorithm using in-painting and eye-metric information. For red-eye detection, a face detection stage is included [9]: it is necessary to limit the candidate region by using face detection, and red eye is then detected using multiple cues such as redness, shape, and color information. After the red eyes are detected, the red-eye regions are expanded by region growing and morphology operations so that they fit the proposed correction algorithm. In the correction part, the red eyes are filled with textures of the surrounding iris by an exemplar-based in-painting method [10], [11]. Finally, pupils, whose size is computed from eye-size information, are painted, and a highlight is added to the eye for a natural appearance.
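    The mask-expansion step could be sketched, for example, with standard morphology operations as below; the kernel shapes and sizes are assumptions, and the region-growing part is not shown.

# Sketch of the mask-expansion step: clean the raw red-eye mask with a
# morphological closing, then dilate it so it fully covers the artifact
# before the in-painting stage. Kernel sizes are illustrative assumptions.
import cv2

def expand_red_eye_mask(mask, close_size=3, grow_size=5):
    close_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (close_size, close_size))
    grow_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (grow_size, grow_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, close_kernel)  # fill small holes
    mask = cv2.dilate(mask, grow_kernel, iterations=1)            # grow outward
    return mask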

    A face detection algorithm is used to limit the candidate region for red-eye detection. In this work, the Viola-Jones algorithm is used to detect faces [9].

    The features employed by the detection framework universally involve the sums of image pixels within rectangular areas. As such, they bear some resemblance to Haar basis functions, which have been used previously in the realm of image-based object detection [3]. However, since the features used by Viola and Jones all rely on more than one rectangular area, they are generally more complex. The value of any given feature is simply the sum of the pixels within the shaded rectangles minus the sum of the pixels within the clear rectangles. Rectangular features of this sort are rather primitive when compared to alternatives such as steerable filters: although they are sensitive to vertical and horizontal structure, their feedback is considerably coarser. However, with the use of an image representation called the integral image, rectangular features can be evaluated in constant time, which gives them a considerable speed advantage over their more sophisticated relatives. Because each rectangular area in a feature is always adjacent to at least one other rectangle, any two-rectangle feature can be computed in six array references, any three-rectangle feature in eight, and any four-rectangle feature in just nine.
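    The constant-time evaluation rests on the integral image; the short sketch below shows how it is built and how a rectangle sum, and hence a two-rectangle feature, is read back from a handful of array references.

# Integral image sketch: after one pass, the sum of any axis-aligned rectangle
# is obtained from four array references, so a two-rectangle Haar-like feature
# needs only six distinct references (two corners are shared).
import numpy as np

def integral_image(gray):
    # Extra zero row/column so that lookups need no boundary checks.
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum of pixels in the w x h rectangle whose top-left corner is (x, y).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    # Horizontal two-rectangle feature: left half (shaded) minus right half (clear).
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)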

      1. Learning algorithm

        The speed with which features can be evaluated does not, however, adequately compensate for their number. For example, in a standard 24×24 pixel sub-window there are a total of 45,396 possible features, and it would be prohibitively expensive to evaluate them all. Thus, the object detection framework employs a variant of the learning algorithm AdaBoost both to select the best features and to train classifiers that use them.
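        A minimal sketch of one boosting round is given below: each round picks the single decision stump (one feature, one threshold) with the lowest weighted error and then re-weights the samples. The threshold choice here is a crude stand-in; the actual Viola-Jones training searches thresholds and polarities exhaustively.

# Minimal AdaBoost feature-selection sketch with decision stumps.
# Illustrative only: threshold selection is simplified to the feature median.
import numpy as np

def adaboost_select(features, labels, rounds=10):
    # features: (n_samples, n_features) matrix of Haar-feature values
    # labels:   array of +1 (object) / -1 (non-object)
    n = len(labels)
    weights = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(rounds):
        best = None
        for j in range(features.shape[1]):
            theta = np.median(features[:, j])            # crude threshold choice
            pred = np.where(features[:, j] > theta, 1, -1)
            err = weights[pred != labels].sum()
            polarity = 1
            if err > 0.5:                                # flip the stump if needed
                err, pred, polarity = 1.0 - err, -pred, -1
            if best is None or err < best[0]:
                best = (err, j, theta, polarity, pred)
        err, j, theta, polarity, pred = best
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
        weights *= np.exp(-alpha * labels * pred)        # up-weight mistakes
        weights /= weights.sum()
        chosen.append((j, theta, polarity, alpha))       # one weak classifier per round
    return chosen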

      2. Cascade architecture

    The evaluation of the strong classifiers generated by the learning process can be done quickly, but it is not fast enough to run in real time. For this reason, the strong classifiers are arranged in a cascade in order of complexity, where each successive classifier is trained only on the selected samples that pass through the preceding classifiers. If at any stage in the cascade a classifier rejects the sub-window under inspection, no further processing is performed and the search continues with the next sub-window. The cascade therefore has the form of a degenerate tree. In the case of faces, the first classifier in the cascade, called the attentional operator, uses only two features to achieve a false-negative rate of approximately 0% and a false-positive rate of 40%. The effect of this single classifier is to reduce by roughly half the number of times the entire cascade is evaluated.
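    The early-rejection behaviour of the cascade can be sketched in a few lines, assuming each stage is represented as a boolean predicate over the sub-window.

# Cascade evaluation sketch: a sub-window is rejected as soon as any stage
# classifier says "no", so most non-face windows exit after the cheap early stages.
def cascade_classify(subwindow, stages):
    """stages: stage classifiers (callables returning True/False), ordered from
    the cheapest attentional stage to the most complex one."""
    for stage in stages:
        if not stage(subwindow):
            return False          # rejected: later stages are never evaluated
    return True                   # passed every stage: report a detection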

    Eyes are located in a specific area of the face region, owing to the properties of the face detection algorithm that is used. When the face region is divided into 20 cells, in four rows and five columns, it is observed that the two eyes are mostly placed in the second row. Fig. 2 shows an example of face detection.

    Fig. 2 Face Detection example

    Most faces are detected unless the face is occluded by other objects. If the face regions are found correctly, the eyes are detected in the specific area described above, and red eyes are then detected using red-eye features extracted from the eye region.
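    The sketch below expresses this heuristic in code: the face box is split into a 4 x 5 grid and only the second row of cells is searched for eyes, after which red pixels are flagged inside that region. The grid split follows the text; the red-pixel test and its thresholds are illustrative assumptions.

# Eye-region heuristic: search only the second row of a 4 x 5 grid over the face
# box, then flag strongly red pixels there. Thresholds are illustrative.
import numpy as np

def eye_search_region(face_box):
    x, y, w, h = face_box
    row_h = h // 4                        # four rows of cells
    return (x, y + row_h, w, row_h)       # the whole second row

def detect_red_pixels(image_bgr, region):
    x, y, w, h = region
    roi = image_bgr[y:y + h, x:x + w].astype(np.float32)
    b, g, r = roi[:, :, 0], roi[:, :, 1], roi[:, :, 2]
    # Flag pixels where red clearly dominates the other channels.
    return ((r > 120) & (r > 1.5 * g) & (r > 1.5 * b)).astype(np.uint8) * 255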

    The proposed correction algorithm uses a digital in-painting algorithm to correct the red-eye region, whereas other algorithms simply desaturate the red color components. The size of the iris depends on the person and on the scale of the input image. Since it is difficult to detect the iris directly, the result of red-eye detection is used instead: the iris radius is the distance between the center of the red-eye region and the boundary of the iris, i.e., the sum of the red-eye radius and the search distance from the red-eye boundary to the iris boundary. It is important to calculate the accurate size of the pupil in order to correct a pupil that is expanded in a dark environment: pupil size changes according to lighting conditions, while iris size is almost constant.
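    Written out as code, the geometry above amounts to a simple sum; the pupil-to-iris ratio used below is an illustrative assumption, since the work computes pupil size from eye-size information rather than from a fixed ratio.

# Iris/pupil geometry sketch: iris radius = red-eye radius + search distance.
# The pupil_ratio default is an assumption for illustration only.
def iris_and_pupil_radius(redeye_radius, search_distance, pupil_ratio=0.4):
    iris_radius = redeye_radius + search_distance
    pupil_radius = pupil_ratio * iris_radius
    return iris_radius, pupil_radius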

    In-painting is a process of restoring damaged parts of an image. In this work, an exemplar-based in-painting algorithm is used to fill the red-eye regions [10], [11]. Here, red-eye regions can be thought of as damaged or missing regions: they have lost part of their own color and texture due to the red-eye effect and the pupil expanded in a dark environment. Using an in-painting algorithm, these missing regions are in-painted naturally with iris texture. In an exemplar-based in-painting method, the filling order is important and affects performance; in general, the priority of filling is high when the surrounding region includes more of the source region and when there are strong edges normal to the boundary of the region.
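    A sketch of the correction stage is given below. OpenCV does not provide the exemplar-based method of [10]; cv2.inpaint (Telea's algorithm) is used here purely as a stand-in to show where in-painting, pupil re-painting, and the highlight fit into the pipeline, and the pupil color and highlight placement are illustrative assumptions.

# Correction-stage sketch: in-paint the red-eye mask, then repaint a dark pupil
# of the estimated size and add a small highlight for a natural appearance.
# cv2.inpaint (Telea) stands in for the exemplar-based method of [10].
import cv2

def correct_red_eye(image_bgr, mask, center, pupil_radius):
    corrected = cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)
    cx, cy = int(center[0]), int(center[1])            # pupil center in pixels
    r = max(1, int(pupil_radius))
    cv2.circle(corrected, (cx, cy), r, (20, 20, 20), -1)          # dark pupil
    cv2.circle(corrected, (cx - r // 3, cy - r // 3),             # specular highlight
               max(1, r // 4), (255, 255, 255), -1)
    return corrected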

    Fig. 3(a) shows an image containing red eye; the result after removal is shown in Fig. 3(b).

    Fig. 3(a) Image with red eye

    Fig. 3(b) Image after removal of red eye.

  4. Applications

    Red eye is a common problem in consumer photography. Many image processing applications on the market offer red-eye removal, but they are largely manual; this work can replace them because it is fully automatic. Digital photographs are used everywhere today, so it is important that a photograph be free of red eye. This work can be used to detect red eye and correct it efficiently for better results. It can be applied wherever a digital photograph is needed, such as online application form submission, or security systems based on face or iris recognition, where the stored photograph must always match for access to be granted. It can also be used for banking, identification proofs, driving licenses, PAN cards, voter ID cards, and other documents where a digital photograph is required.

  5. Conclusion

    We propose an efficient red-eye removal method that uses an in-painting method and biometric information. The distorted eye region is filled with natural texture by in-painting, and the biometric information makes it possible to find a more accurate size for the original eyes. The result is a photograph without red eye.

  6. References

[1] M. Gaubatz and R. Ulichney, "Automatic red-eye detection and correction," in Proc. IEEE Int. Conf. Image Processing, vol. 1, pp. 804-807, Rochester, NY, Sep. 2002.

[2] R. Schettini, F. Gasparini, and F. Chazli, "A modular procedure for automatic redeye correction in digital photos," in Proc. SPIE Conf. Color Imaging: Processing, Hardcopy, and Application, vol. 5293, pp. 139-147, San Jose, CA, Jan. 2004.

[3] S. Ioffe, "Red eye detection with machine learning," in Proc. IEEE Int. Conf. Image Processing, vol. 2, pp. 871-874, Barcelona, Spain, Sep. 2003.

[4] H. Luo, J. Yen, and D. Tretter, "An efficient automatic redeye detection and correction algorithm," in Proc. IEEE Int. Conf. Pattern Recognition, vol. 2, pp. 883-886, Cambridge, UK, Aug. 2004.

[5] J. Willamowski and G. Csurka, "Probabilistic automatic red eye detection and correction," in Proc. IEEE Int. Conf. Pattern Recognition, vol. 3, pp. 762-765, Hong Kong, China, Aug. 2006.

[6] F. Volken, J. Terrier, and P. Vandewalle, "Automatic red-eye removal based on sclera and skin tone detection," in Proc. European Conf. Color in Graphics, Imaging and Vision, pp. 359-364, Leeds, UK, June 2006.

[7] L. Zhang, Y. Sun, M. Li, and H. Zhang, "Automated red-eye detection and correction in digital photographs," in Proc. IEEE Int. Conf. Image Processing, vol. 4, pp. 2363-2366, Singapore, Oct. 2004.

[8] R. Ulichney and M. Gaubatz, "Perceptual-based correction of photo red-eye," in Proc. Int. Conf. Signal and Image Processing, pp. 526-531, Honolulu, HI, Aug. 2005.

[9] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, Kauai, HI, Dec. 2001.

[10] A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal by exemplar-based image inpainting," IEEE Trans. Image Processing, vol. 13, no. 9, pp. 1200-1212, Sep. 2004.

[11] B. Li, Y. Qi, and X. Shen, "An image inpainting method," in Proc. IEEE Int. Conf. Computer Aided Design and Computer Graphics, vol. 6, pp. 60-66, Hong Kong, China, Dec. 2005.

[12] J. J. de Dios and N. Garcia, "Face detection based on a new color space YCgCr," in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 902-912, Barcelona, Spain, Sep. 2003.

[13] J. J. de Dios and N. Garcia, "Fast face segmentation in component color space," in Proc. IEEE Int. Conf. Image Processing, vol. 1, pp. 191-194, Singapore, Oct. 2004.

[14] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, New York: McGraw-Hill, 1995.

[15] J. E. Richman, K. G. McAndrew, D. Decker, and S. C. Mullaney, "An evaluation of pupil size standards used by police officers for detecting drug impairment," Optometry, vol. 75, no. 3, pp. 175-182, Mar. 2004.

[16] D. K. Martin and B. A. Holden, "A new method for measuring the diameter of the in vivo human cornea," Am. J. Optometry Physiological Optics, vol. 59, no. 5, pp. 436-441, May 1982.

[17] Adobe Photoshop CS2, San Jose, CA: Adobe, 2005.

[18] STOIK RedEye AutoFix 3.0, Russia: STOIK Imaging, 2007.

[19] R. Ulichney, M. Gaubatz, and J. M. Van Thong, "Redbot - a tool for improving red-eye correction," in Proc. IS&T/SID Eleventh Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications, Scottsdale, AZ, Nov. 2003.

[20] S. Yoo and R.-H. Park, "Red-eye detection and correction using inpainting in digital photographs," IEEE Trans. Consumer Electronics, vol. 55, no. 3, Aug. 2009.
