Shadow Detection And Removal From Real And VHR Images

DOI : 10.17577/IJERTV2IS60723


Rakesh. B. N
Final year M.Tech student
Dept. of Electronics and Communication Engineering
Reva ITM, Bangalore, India

Dr. Bharathi. S. H
Professor
Dept. of Electronics and Communication Engineering
Reva ITM, Bangalore, India

Abstract

Shadow detection and removal in images is a challenging yet intriguing problem. Despite the rapidly expanding and continuing interest in this area, it remains hard to build a robust system that eliminates shadows in images. A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. Shadows are sometimes helpful, providing useful information about objects; however, they cause problems in computer vision applications such as segmentation, object detection and object counting. Thus shadow detection and removal is a pre-processing task in many computer vision applications. This paper aims to give a comprehensive method to remove shadows in two types of images, namely real images and Very High Resolution (VHR) images. Separate methods are used for each type of image. Experimental results for both methods are reported.

Key Terms: Shadow detection, Shadow removal, Shadow model, Paired regions, Support vector machines (SVMs), Self shadows, Cast shadows.

  1. Introduction

    Shadows in images have long been disruptive to computer vision algorithms. They appear as surface features, when in fact they are caused by the interaction between light and objects. This may lead to problems in scene understanding, object segmentation, tracking and recognition. Because of the undesirable effects of shadows on image analysis, much attention has been paid to shadow removal over the past decades, covering many specific applications such as traffic surveillance, face recognition, image segmentation and so on [1] [2] [3].

    A shadow occurs when an object partially or totally occludes direct light from a source of illumination. Shadows can be divided into two classes: self shadows and cast shadows. A self shadow occurs on the portion of an object that is not reached by the direct light, whereas a cast shadow is the dark area projected by the object onto other surfaces. Cast shadows can be further divided into umbra and penumbra regions, the penumbra being a result of multi-lighting. One crucial difference between these shadow types is their contrast with the background. Usually, self shadows are vague shadows which gradually change intensity and have no clear boundaries, whereas cast shadows are hard shadows with sharp boundaries.

    Figure 1 below shows the different types of shadows.

    Figure 1. Illustration of cast and self shadows

  2. Problem Formulation

    The presence of shadows in images may completely destroy the information contained in those images. For example, in VHR optical images, particularly over urban areas, the information missing in shadow areas directly influences common processing and analysis operations, such as the generation of classification maps [4]. The presence of shadows may also lead to problems in scene understanding, object segmentation, tracking and recognition.

  3. Proposed Method

    In this work, two different methods are used to detect and remove shadows: one method is applied to real images and the other to VHR images. The work is thus clearly separated between the two types of images.

    The proposed algorithms for each type of images are discussed one after the other in the following sections.

    1. For Real Images:

      Figure 2 shows the block diagram of the proposed method for detecting and removing shadows in real images.

      To detect the shadows, we consider the appearance of the local and surrounding regions. Shadowed regions tend to be dark, with little texture, but some non-shadowed regions also may have similar characteristics.

      We first segment the input image using the mean shift algorithm [5]. Then, by using a trained classifier, we estimate the confidence that each region is in shadow. We also find same-illumination pairs and different-illumination pairs of regions, which are confidently predicted to correspond to the same material and have either similar or different illumination, respectively.

      Figure 2. Proposed method of shadow detection and removal from real images using paired regions.

      We construct a relational graph using a sparse set of confident illumination pairs. Finally, we solve for the shadow labels y_i ∈ {-1, 1} (with 1 denoting shadow) that maximize the following objective:

      \hat{y} = \arg\max_{y} \sum_{i} c_i^{shadow} y_i + \alpha_1 \sum_{(i,j) \in E_{diff}} c_{ij}^{diff} \, \mathbf{1}(y_i = 1 \wedge y_j = -1) - \alpha_2 \sum_{(i,j) \in E_{same}} c_{ij}^{same} \, \mathbf{1}(y_i \neq y_j)   ...(1)

      where c_i^{shadow} is the single-region classifier confidence weighted by region area; E_{same} and E_{diff} are the sets of same-illumination and different-illumination pairs; c_{ij}^{same} and c_{ij}^{diff} are the area-weighted confidences of the pairwise classifiers; α_1 and α_2 are parameters; and 1(·) is an indicator function.
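      As a small illustration of how objective (1) is scored, the following Python sketch (not the original MATLAB implementation; the region confidences, pair lists and parameters α_1, α_2 are assumed to be given) evaluates a candidate labeling. For a handful of regions the maximizer can then be found by enumerating all labelings.

    import numpy as np

    def objective(y, c_shadow, diff_pairs, c_diff, same_pairs, c_same, alpha1, alpha2):
        """Score a candidate labeling y (+1 = shadow, -1 = non-shadow) under Eq. (1).

        c_shadow : area-weighted single-region confidences, one per region
        diff_pairs / same_pairs : lists of (i, j) region index pairs
        c_diff / c_same         : area-weighted pairwise confidences, one per pair
        alpha1, alpha2          : weighting parameters
        """
        unary = float(np.dot(c_shadow, y))
        # reward different-illumination pairs labeled (i in shadow, j lit)
        diff_term = sum(c for (i, j), c in zip(diff_pairs, c_diff)
                        if y[i] == 1 and y[j] == -1)
        # penalize same-illumination pairs that receive different labels
        same_term = sum(c for (i, j), c in zip(same_pairs, c_same)
                        if y[i] != y[j])
        return unary + alpha1 * diff_term - alpha2 * same_term

      For images with many regions, exhaustive enumeration is of course impractical and a discrete optimizer is needed; the sketch is only meant to make the three terms of (1) concrete.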

      1. Single Region Classification

        When a region becomes shadowed, it becomes darker and less textured. Thus, the color and texture of a region can help predict whether it is in shadow. We train our classifier from manually labeled regions using a kernel SVM (slack parameter C = 1). We define c_i^{shadow} as the output of this classifier multiplied by a_i, the pixel area of region i.
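        To make the single-region classifier concrete, the Python sketch below (the original implementation was in MATLAB) trains an SVM on per-region appearance descriptors. The specific features (per-channel color histograms and a texton histogram), the RBF kernel choice and the probability-based confidence scaling are assumptions, since the text only specifies a kernel SVM with C = 1.

    import numpy as np
    from sklearn.svm import SVC

    def region_descriptor(pixels_lab, texton_labels, n_textons=128):
        # per-channel color histograms over the region's pixels (assumed binning)
        color_hist = np.concatenate(
            [np.histogram(pixels_lab[:, c], bins=21, range=(0, 255), density=True)[0]
             for c in range(3)])
        # texton (texture) histogram over the region's pixels
        tex_hist, _ = np.histogram(texton_labels, bins=n_textons,
                                   range=(0, n_textons), density=True)
        return np.concatenate([color_hist, tex_hist])

    def train_single_region_classifier(X, labels):
        """X: one descriptor per manually labeled region; labels: 1 = shadow, 0 = lit."""
        return SVC(kernel='rbf', C=1.0, probability=True).fit(X, labels)

    def shadow_confidence(clf, descriptor, area):
        # map the class probability to [-1, 1] and weight by region area (assumed scaling)
        p_shadow = clf.predict_proba(descriptor.reshape(1, -1))[0, 1]
        return (2.0 * p_shadow - 1.0) * area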

      2. Pairwise Region Classification

        The presence of shadows in a region cannot be determined by considering only its internal appearance; the region must be compared with other regions of the same material. In particular, we want to find same-illumination pairs, regions that are of the same material and illumination, and different-illumination pairs, regions that are of the same material but different illumination. Differences in illumination can be caused by direct light being blocked by other objects or by a difference in surface orientation. In this way, we can account for both shadows and shading. When the appearance of a region alone is ambiguous, these paired regions help resolve whether it is in shadow.

        We train classifiers (SVM with RBF kernel; C = 1, γ = 1) to detect illumination pairs based on comparisons of their color and texture histograms, the ratio of their intensities, their chromatic alignment, and their distance in the image. These features encode the intuition that regions of the same reflectance share similar texture and color distributions when viewed under the same illumination; when viewed under different illuminations, they tend to have similar texture but differ in color and intensity.
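        The pairwise cues listed above can be turned into a feature vector along the following lines. This is a Python sketch under assumed definitions (a chi-squared histogram distance, per-channel mean-intensity ratios, a cosine measure of chromatic alignment, and a centroid distance normalized by the image diagonal); the resulting vectors for labeled pairs would then be fed to the SVM with RBF kernel, C = 1 and γ = 1 mentioned above.

    import numpy as np

    def pairwise_features(hist_i, hist_j, mean_rgb_i, mean_rgb_j,
                          centroid_i, centroid_j, image_diag):
        """Compare two regions i and j; all feature definitions are assumptions."""
        hist_i, hist_j = np.asarray(hist_i, float), np.asarray(hist_j, float)
        mean_rgb_i, mean_rgb_j = np.asarray(mean_rgb_i, float), np.asarray(mean_rgb_j, float)
        # chi-squared distance between (color or texture) histograms
        chi2 = 0.5 * np.sum((hist_i - hist_j) ** 2 / (hist_i + hist_j + 1e-10))
        # per-channel intensity ratios: a shadowed region is darker in every channel
        ratios = (mean_rgb_i + 1e-6) / (mean_rgb_j + 1e-6)
        # chromatic alignment: cosine similarity of the two mean colors
        align = np.dot(mean_rgb_i, mean_rgb_j) / (
            np.linalg.norm(mean_rgb_i) * np.linalg.norm(mean_rgb_j) + 1e-10)
        # distance between region centroids, normalized by the image diagonal
        dist = np.linalg.norm(np.asarray(centroid_i, float)
                              - np.asarray(centroid_j, float)) / image_diag
        return np.concatenate([[chi2], ratios, [align, dist]])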

      3. Shadow removal

        Our shadow removal approach is based on a simple shadow model in which lighting consists of direct light and environment light. We try to identify how much direct light is occluded at each pixel in the image and then relight the whole image using that information.

        3.1.3.1 Shadow model

        In our illumination model, there are two types of light sources: direct light and environment light. Direct light comes directly from the source (e.g., the sun), while environment light is from reflections of surrounding surfaces. Non-shadow areas are lit by both direct light and environment light, while for shadow areas, part or all of the direct light is occluded. The shadow model can be represented by the formula below.

        I_i = (t_i \cos\theta_i \, L_d + L_e) \, R_i   ...(2)

        where I_i is a vector representing the value of the i-th pixel in RGB space. Similarly, both L_d and L_e are vectors of size 3, representing the intensities of the direct light and the environment light, also measured in RGB space. R_i is the surface reflectance of that pixel, also a three-dimensional vector, each component corresponding to one channel. θ_i is the angle between the direct lighting direction and the surface normal, and t_i is a value in [0, 1] indicating how much direct light reaches the surface.

        When t_i = 1, the pixel is in a non-shadow area; when t_i = 0, the pixel is in an umbra; otherwise, the pixel is in a penumbra (0 < t_i < 1). For a shadow-free image, every pixel is lit by both direct light and environment light and can be expressed as:

        I_i^{shadow-free} = (\cos\theta_i \, L_d + L_e) \, R_i   ...(3)
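        Dividing (3) by (2) shows that the shadow-free value of a pixel is I_i^{shadow-free} = I_i (r + 1)/(t_i r + 1), where r = L_d cosθ_i / L_e is the per-channel ratio of direct to environment light. The Python sketch below applies this relighting given an estimated t map and ratio r, both assumed to be available from earlier steps; it is a rendering of the model, not the authors' MATLAB code.

    import numpy as np

    def relight(image, t_map, r):
        """Relight an image with the model of Eqs. (2)-(3).

        image : H x W x 3 float array (RGB)
        t_map : H x W array, fraction of direct light reaching each pixel
                (1 = fully lit, 0 = umbra, in between = penumbra)
        r     : length-3 array, per-channel ratio of direct to environment light,
                r = L_d * cos(theta) / L_e, assumed constant over the shadowed surface
        """
        r = np.asarray(r, dtype=float).reshape(1, 1, 3)
        t = np.asarray(t_map, dtype=float)[..., None]
        # I_shadow_free = I * (r + 1) / (t * r + 1), obtained by dividing (3) by (2)
        return image * (r + 1.0) / (t * r + 1.0)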

    2. For VHR Images:

      Figure 3 shows a flowchart with the principal steps of the proposed methodology.

      Figure 3. Flowchart of the proposed method for VHR images

      Briefly, let us consider a VHR image I of dimensions m × n, composed of B spectral bands and characterized by the presence of shadow areas.

      1. Mask Construction

        The shadow versus non-shadow mask is created in two steps, namely, a binary classification followed by a post-processing step.

        1. Binary Classification: The binary classification procedure (producing the mask M1 in Fig. 3) is implemented in a supervised way by means of a support vector machine (SVM), which has proved effective in the remote sensing data classification literature [6] [7]. The feature space where the classification task is performed is defined by the original image bands and by features extracted by means of the wavelet transform. In particular, a one-level stationary wavelet transform is applied to each spectral band, thus obtaining four space-frequency features per band. The symlet wavelet is adopted in order to maximize the sparseness of the transformation (most of the coefficients are near 0) while emphasizing textured areas (wavelet coefficients have high values in the presence of singularities). For an original image I composed of B spectral bands, the resulting feature space thus consists of B × (1 + 4) dimensions.
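        The feature construction and binary SVM classification can be sketched as follows in Python (the paper's implementation was in MATLAB). The PyWavelets/scikit-learn calls and the choice of the 'sym4' symlet filter are assumptions; the one-level stationary wavelet transform and the B × (1 + 4) feature space follow the description above.

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def swt_features(image):
        """Stack each band with its one-level stationary (undecimated) symlet wavelet
        coefficients, giving B * (1 + 4) features per pixel. pywt.swt2 requires the
        row and column sizes to be even; pad the image beforehand if necessary."""
        layers = []
        for b in range(image.shape[2]):
            band = image[:, :, b].astype(float)
            (cA, (cH, cV, cD)), = pywt.swt2(band, wavelet='sym4', level=1)
            layers += [band, cA, cH, cV, cD]
        return np.stack(layers, axis=-1)                 # shape (m, n, B * 5)

    # feats = swt_features(vhr_image)
    # X_train = feats[train_rows, train_cols]            # labeled shadow / non-shadow pixels
    # clf = SVC(kernel='rbf', C=1.0).fit(X_train, train_labels)
    # M1 = clf.predict(feats.reshape(-1, feats.shape[-1])).reshape(feats.shape[:2])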

        2. Postprocessing: The binary image M1 may be affected by a salt-and-pepper effect due to the presence of noise in the image. An opening by reconstruction, followed by a closing by reconstruction, is applied to M1 to attenuate this potential problem [8]. The choice of morphological filters for this task is motivated by their effectiveness and superior shape-preservation capability, as shown in the literature, and by the possibility of adapting them to the image filtering requirements, as is done in the border creation step. Both morphological operators are needed in order to remove isolated shadow pixels in nonshadow areas as well as isolated nonshadow pixels in shadow areas.
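        A possible Python sketch of this post-processing with scikit-image is given below; the structuring-element size is an assumption, and the original work used MATLAB.

    import numpy as np
    from skimage.morphology import erosion, dilation, reconstruction, disk

    def clean_mask(M1, radius=2):
        """Attenuate salt-and-pepper noise in the binary mask M1 with an opening by
        reconstruction followed by a closing by reconstruction."""
        mask = M1.astype(float)
        selem = disk(radius)                 # structuring-element radius is an assumption
        # opening by reconstruction: erode, then reconstruct by dilation under the mask
        opened = reconstruction(erosion(mask, selem), mask, method='dilation')
        # closing by reconstruction: dilate, then reconstruct by erosion above the result
        closed = reconstruction(dilation(opened, selem), opened, method='erosion')
        return closed > 0.5                  # cleaned shadow / non-shadow mask M2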

      2. Border Creation:

        The transition between shadow and nonshadow areas can raise problems such as boundary ambiguity, color inconstancy, and illumination variation [9]. Indeed, the presence of the penumbra induces mixed pixels which are difficult to classify. The penumbra is a region where the light source is only partially obscured. For this reason, a border between the shadow and nonshadow classes is defined in order to handle the border pixels appropriately. These pixels are not processed within the shadow reconstruction procedure but are treated separately. The border region is constructed by means of morphological operators. The post-processed mask M2 is dilated (δ) and eroded (ε). Then, the difference between these two images is computed to form the border image B:

        B = \delta(M_2) - \varepsilon(M_2)   ...(4)

        The final mask image then becomes

        M_3 = M_2 - B   ...(5)

        so that the border pixels are excluded from the shadow mask and handled separately.
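        A minimal Python sketch of the border construction follows; the structuring-element radius, which controls the border width, is an assumption, and the final mask simply sets the border pixels aside from the shadow mask, which is one way to read (5).

    import numpy as np
    from skimage.morphology import binary_dilation, binary_erosion, disk

    def make_border(M2, radius=1):
        """Border region B of Eq. (4) and final mask of Eq. (5) as interpreted here."""
        M2 = np.asarray(M2, dtype=bool)
        selem = disk(radius)
        B = binary_dilation(M2, selem) & ~binary_erosion(M2, selem)   # Eq. (4)
        M3 = M2 & ~B   # shadow pixels, with border pixels set aside for separate handling
        return B, M3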

      3. Shadow Reconstruction

      Image reconstruction is one of the most important steps in our methodology. For the sake of getting a simple but satisfactory reconstruction model, we assume that the underlying relationship between the nonshadow class (Y) and the corresponding shadow class (X) is of the linear type [10]. We have empirically observed that shadow classes and the corresponding nonshadow classes reasonably exhibit a linear relationship.

      Regarding the statistical model of the classes, three estimation approaches may be envisioned: histogram estimation by box counting, kernel density estimation, and parametric estimation. In our case, we adopt the last approach by assuming that the classes follow a Gaussian distribution. Although such a hypothesis does not always hold, it is useful for obtaining a simple and fast solution to the reconstruction problem. Indeed, denoting the shadow class as X ∼ N(μ_X, Σ_X) and the corresponding nonshadow class as Y ∼ N(μ_Y, Σ_Y), where μ and Σ stand for the mean vector and covariance matrix, respectively, the reconstruction of the shadow class reduces to a simple random-variable transformation. Since the two distributions are assumed to be linearly correlated, X and Y may be linked by

      Y = K^T X + c   ...(7)

      where K is a transformation matrix, K^T is its transpose, and c is a bias vector. To estimate K and c, the Cholesky factorization of the two covariance matrices is applied:

      \Sigma_Y = L_Y L_Y^T, \quad \Sigma_X = U_X^T U_X, \quad K = U_X^{-1} L_Y^T, \quad c = \mu_Y - K^T \mu_X   ...(8)

      where L_Y and U_X are the lower and upper triangular Cholesky matrices related to the nonshadow and shadow classes, respectively. Once K and c are estimated, (7) is applied to compensate the pixels of the shadow class. Note that this process needs to be carried out for each pair of shadow and nonshadow classes.
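      The class-wise compensation can be written compactly in Python as below. This is a sketch: the Cholesky-based choice of K shown here is one transformation consistent with (7) and (8), matching the first- and second-order statistics of the shadow class to those of its nonshadow counterpart, and is not claimed to be the authors' exact estimator.

    import numpy as np

    def compensate_shadow_class(X_shadow, Y_nonshadow):
        """Map shadow-class pixels so that their Gaussian statistics match the
        corresponding non-shadow class.

        X_shadow, Y_nonshadow : (num_pixels, B) arrays of pixel vectors for one
        shadow / non-shadow class pair.
        """
        mu_x, mu_y = X_shadow.mean(axis=0), Y_nonshadow.mean(axis=0)
        cov_x = np.cov(X_shadow, rowvar=False)
        cov_y = np.cov(Y_nonshadow, rowvar=False)
        Lx = np.linalg.cholesky(cov_x)             # cov_x = Lx @ Lx.T
        Ly = np.linalg.cholesky(cov_y)             # cov_y = Ly @ Ly.T
        K_T = Ly @ np.linalg.inv(Lx)               # then K_T @ cov_x @ K_T.T == cov_y
        c = mu_y - K_T @ mu_x                      # bias vector of Eq. (7)
        return X_shadow @ K_T.T + c                # Eq. (7): y = K^T x + c, per pixel

      Each shadow class is compensated against its corresponding nonshadow class in this way, while the border pixels in B are handled separately as described above.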

  4. Simulation Results

    Both of the methods described above for detecting and removing shadows from images were implemented in MATLAB v7.9.

    A) Real Image Simulation Results: Test Image 1, Test Image 2.

    B) VHR Image Simulation Results: Test Image 1. The VHR test image was a QuickBird satellite image representing a part of the coastal region of Boumerdes (Algeria).

  5. Drawbacks and Limitations

    The proposed method for detecting and removing shadows described in Section 3.1 may fail under some conditions; for example, darker regions of the image that are not shadows may be detected as shadows and removed in the process.

    Secondly, in the proposed method for detecting and removing shadows in Very High Resolution (VHR) images described in Section 3.2, the shadow reconstruction step removes shadows only partially rather than completely. This is due to the difficulty of obtaining an original database of high resolution images; since the proposed algorithm works only on very high resolution images, the desired output could not be fully achieved.

  6. Conclusion

This paper has dealt with the important and challenging problem of reconstructing two types of images (real and VHR) obscured by the presence of shadows.

For real images, we have shown that, for detecting shadows, pairwise relationships between regions provide valuable additional information about the illumination conditions of regions compared with simple appearance-based models.

For VHR images, the proposed methodology is supervised. The shadow areas are not only detected but also classified so as to allow their customized compensation. The classification tasks are implemented by means of the state-of-the-art SVM approach.

Acknowledgement

We take this opportunity to sincerely thank all the faculty members of the ECE Department, Reva Institute of Technology and Management. We also express our deepest gratitude to Dr. Bharathi. S. H, Professor, Dept. of ECE, Reva ITM, for her guidance and motivation throughout this work.

References

  1. J. M. Wang, Y. C. Chung, C. L. Chang, and S. W. Chen, "Shadow Detection and Removal for Traffic Images," Proc. IEEE International Conference on Networking, Sensing and Control, vol. 1, pp. 649-654, 2004.

  2. T. Chen, W. Yin, X. S. Zhou, D. Comaniciu, and T. S. Huang, "Illumination Normalization for Face Recognition and Uneven Background Correction Using Total Variation Based Image Models," Proc. CVPR, vol. 2, pp. 532-539, 2005.

  3. Y. Adini, Y. Moses, and S. Ullman, "Face recognition: The problem of compensating for changes in illumination direction," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, no. 7, pp. 721-732, 1997.

  4. T. Kasetkasem and P. K. Varshney, "An optimum land cover mapping algorithm in the presence of shadow," IEEE J. Select. Topics Signal Process., vol. 5, no. 3, pp. 592-605, Jun. 2011.

  5. D. Comaniciu and P. Meer, "Mean shift: A robust approach toward feature space analysis," IEEE Trans. Pattern Anal. Machine Intell., vol. 24, no. 5, pp. 603-619, 2002.

  6. N. Ghoggali and F. Melgani, "Genetic SVM approach to semisupervised multitemporal classification," IEEE Geosci. Remote Sens. Lett., vol. 5, no. 2, pp. 212-216, Apr. 2008.

  7. T. Habib, J. Inglada, G. Mercier, and J. Chanussot, "Support vector reduction in SVM algorithm for abrupt change detection in remote sensing," IEEE Geosci. Remote Sens. Lett., vol. 6, no. 3, pp. 606-610, Jul. 2009.

  8. P. Soille, Morphological Image Analysis. New York: Springer-Verlag, 1999.

  9. V. Tsai, "A comparative study on shadow compensation of color aerial images in invariant color models," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 6, pp. 1661-1671, Jun. 2006.

  10. S. Wang and Y. Wang, "Shadow detection and compensation in high resolution satellite images based on retinex," in Proc. 5th Int. Conf. Image Graph., 2009, pp. 209-212.
