Cloud Processing for NRSC Imagery to Acquire Reservoir Details

DOI : 10.17577/IJERTV4IS040212


  • Open Access
  • Authors : Selva Balan, Gino James, Akshay Kapadia, Ameya Deshpande
  • Paper ID : IJERTV4IS040212
  • Volume & Issue : Volume 04, Issue 04 (April 2015)
  • Published (First Online): 18-04-2015
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: This work is licensed under a Creative Commons Attribution 4.0 International License



Selva Balan

Central Water & Power Research Station Pune, India

Gino James

Electronics & Telecommunication Engineering Smt. Kashibai Navale College of Engineering Pune, India

Akshay Kapadia

Electronics & Telecommunication Engineering Smt. Kashibai Navale College of Engineering Pune, India

Ameya Deshpande

Electronics & Telecommunication Engineering Smt. Kashibai Navale College of Engineering Pune, India

Abstract: A significant obstacle to extracting information from satellite imagery is the presence of clouds. Reservoir boundaries cannot be extracted when the images are covered with clouds. The problem can largely be resolved by mosaicking the cloud-covered areas with cloud-free areas from other temporal images. In this project, a complete approach, including image enhancement, cloud detection and mosaicking of cloud areas, is proposed to generate cloud-free images from multi-temporal satellite images.

Index Terms: Mosaicking, multi-temporal, main image, reference image.


    The Instrumentation Division of CWPRS is involved in assessing sediment volume using integrated hydrographic survey techniques. In the case of large reservoirs, the contours extracted from satellite imagery at different levels are essential to limit the survey boat's movement to the required data logging (collection) area. However, most of the time the contours cannot be extracted because the images are covered with clouds. The variously dated scenes or cloud-free scene parts that might compose an image mosaic differ in atmospheric conditions, sun-target-sensor geometry, sensor calibration, soil moisture, and vegetation phenology. These differences cause the relationships between land-cover classes and pixel brightness values to vary across space over a mosaic period, i.e. the time period spanning the cloud-free scenes or scene parts that compose the mosaic.

    A significant obstacle to extracting information from remote sensing imagery is the missing information caused by clouds and their shadows. Some radar satellites do not suffer from cloud contamination because they operate in the microwave range of the electromagnetic spectrum, and microwave imagery from some of these satellites is available as far back as 1991. But such images cannot replace the information provided by optical remote sensing data. The radiation emitted in the microwave range is very low, whereas the maximum energy is emitted in the visible range. Consequently, to obtain imagery in the microwave region and measure these weak signals, large areas must be imaged, which results in relatively poor spatial resolution. Images in the visible range, by contrast, have high resolution.

    FENG Chun [1] put forward a method based on the statistical characteristics of image information: an improved homomorphic filtering. Instead of filtering in the frequency domain, it isolates the low-frequency component of the image, which represents the cloud information, by computing neighbourhood averages in the spatial domain. However, this method applies only to images with thin cloud cover. Surfaces under thick clouds have to be retrieved using patches from multi-temporal images, which the method does not consider at all.

    Bin WANG [2] used an image fusion technique to automatically recognize and remove the contamination of clouds and their shadows, integrating complementary information from multi-temporal images into a composite image. The cloud regions are detected on the basis of their reflectance differences from other regions. Based on the fact that shadows smooth the brightness changes of the ground, the shadow regions are detected successfully by means of the wavelet transform. Further, an area-based detection rule is developed, and the multispectral characteristics of Landsat TM images are used to alleviate the computational load.

    Chao-Hung Lin [3] proposed a patch-based approach that mathematically formulates the reconstruction problem as a Poisson equation and then solves it using a global optimization process. In the optimization, the selected cloud-free patches are globally and consistently cloned into the corresponding cloud-contaminated regions. This process yields good cloud removal results in terms of radiometric accuracy and consistency. However, the approach is semi-automatic: users have to manually refine the cloud detection results through an interface with selection and erase operations.

    Tapasmini Sahoo [4] used an image fusion technique to remove clouds from satellite images. The proposed method combines an auto-associative neural network based PCAT (principal component transform) with the SWT (stationary wavelet transform) to remove clouds recursively, integrating complementary information from multi-temporal images into a composite image. Evaluation measures are suggested and applied to compare the method with a covariance-based PCAT fusion method and a WT-based one. The PSNR and correlation coefficient values indicate that the method performs better than the others; it also enhances the visual effect.


    1. RGB to grey conversion

      Processing coloured images directly is a computationally time-consuming and processor-heavy task. Also, a threshold value can be selected more easily from a grey image than from a coloured one. Hence the images are converted to grayscale.

      RGB values of images are converted to grayscale values by forming a weighted sum of the R, G, and B components:

      Grey value = 0.2989 * R + 0.5870 * G + 0.1140 * B
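This weighted sum can be sketched in a few lines of NumPy (the function name is our own; the paper only gives the formula):

```python
import numpy as np

def rgb_to_grey(rgb):
    """Convert an RGB image (H x W x 3) to greyscale using the
    weighted sum from the paper: 0.2989 R + 0.5870 G + 0.1140 B."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb @ weights  # weighted sum over the channel axis

# Example: a 2 x 2 RGB image (pure red, green, blue, white)
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)
grey = rgb_to_grey(img)
```

A pure red pixel maps to 0.2989 * 255 = 76.2, and a white pixel to nearly 255, as expected for a luma-style weighting.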

    2. Correction of brightness

      The main image and the reference image defined in this paper are two images that were observed at different times but cover the same region on the ground. Due to differing solar irradiance and atmospheric effects, it is necessary to correct the brightness of the two images before image fusion. The brightness correction can be performed using

      B'r(i,j) = k × Br(i,j) (1)

      where Br(i,j) is the old brightness value of a pixel of the reference image, B'r(i,j) is its new value, and k is any real number greater than zero.


      Fig. 1. Main and reference images with their histograms.
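The paper leaves the choice of k open (any k > 0). One plausible choice, shown below as an assumption rather than the authors' method, is the ratio of the two images' mean brightnesses, which matches the reference image's overall brightness to the main image:

```python
import numpy as np

def correct_brightness(reference, main):
    """Scale the reference image per Eq. (1): B'r(i,j) = k * Br(i,j).
    Choosing k = mean(main) / mean(reference) is an assumption;
    the paper only requires k > 0."""
    k = main.mean() / reference.mean()
    return k * reference

main = np.array([[100., 120.], [140., 160.]])   # mean 130
ref = np.array([[50., 60.], [70., 80.]])        # mean 65, so k = 2
ref_corrected = correct_brightness(ref, main)
```

After correction, the reference image's mean brightness equals that of the main image, so the threshold C1 used later applies equally to both.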

    3. Detection of clouds

      In general, clouds reflect the solar radiation in the visible and infrared spectra to a much higher degree than the ground.

      By setting a threshold C1, we can distinguish the cloud regions from the ground regions, if

      , > 1


      , > 1 (2)

      Fig. 2. Intensity level slicing image enhancement transfer function S = T(R) with C1 = 120, shown with and without enhancement.

      where Bm(i,j) is the brightness value of a pixel of the main image and Br(i,j) that of the reference image, it can be assumed that there is a cloud in the main image or the reference image at location (i,j). The threshold value can easily be determined by inspecting the histogram of the image, or by trial and error with different values until the best result is obtained. Each cloudy pixel is then replaced with a marker value, say 1, and the result is saved as a new image (Tm for the main image, Tr for the reference image) so as to mark/tag the detected areas of both images. The following equations describe this:

      If Bm(i,j) > C1 then Tm(i,j) = 1 else Tm(i,j) = Bm(i,j)

      If Br(i,j) > C1 then Tr(i,j) = 1 else Tr(i,j) = Br(i,j) (3)
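The tagging step amounts to a single thresholding pass per image; a minimal NumPy sketch (function name assumed):

```python
import numpy as np

C1 = 120  # threshold value used in the paper's experiments

def tag_clouds(grey, c1=C1):
    """Eq. (3): pixels brighter than c1 are tagged as cloud (value 1);
    all other pixels keep their original brightness."""
    return np.where(grey > c1, 1, grey)

grey_main = np.array([[50, 200], [130, 90]])
t_main = tag_clouds(grey_main)  # 200 and 130 exceed C1 and become 1
```

The same function is applied to the reference image to produce Tr.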

    4. Restoration of land features

      Fig. 3. Main and reference thresholded images.


      Some objects on the ground, such as snow-covered mountains, have reflectance close to that of clouds and hence might get tagged as clouds; but, unlike clouds, they cannot move. This property is used to restore the land features. The algorithm is as follows:

      If Tm(i,j) is equal to 1 and Tr(i,j) is also equal to 1, then (i,j) is either a common cloud-covered area or a land feature. Hence it is restored:

      Tm(i,j) = Bm(i,j) (4)

      Fig. 4. Main thresholded and restored image.
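The restoration rule of Eq. (4) can be sketched as a masked copy (names are ours, chosen to match the notation above):

```python
import numpy as np

def restore_land(t_main, t_ref, grey_main):
    """Eq. (4): where both thresholded images are tagged (value 1),
    the feature is treated as static (e.g. a snowy peak) and the
    original main-image brightness is restored."""
    both = (t_main == 1) & (t_ref == 1)
    out = t_main.copy()
    out[both] = grey_main[both]
    return out

t_m = np.array([[1, 1], [30, 1]])      # thresholded main image
t_r = np.array([[1, 40], [1, 1]])      # thresholded reference image
g_m = np.array([[210, 250], [30, 240]])  # original main brightness
restored = restore_land(t_m, t_r, g_m)
```

Only locations tagged in both images, here (0,0) and (1,1), are restored; a pixel tagged in only one image, like (0,1), stays marked as cloud for the mosaicking stage.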

    5. Mosaicking the cloudy areas

    Mosaicking differs from restoration in that mosaicking uses an image patch from a different image (the reference image), whereas restoration uses a patch from the same image (before thresholding). The algorithm is:

    If Tm(i,j) is equal to 1,

    then Bm(i,j) = Br(i,j) (5)
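Eq. (5) is again a masked copy, this time filling the remaining cloud pixels from the reference image (a sketch with assumed names):

```python
import numpy as np

def mosaic(grey_main, grey_ref, t_main):
    """Eq. (5): pixels still tagged as cloud (value 1) in the restored
    thresholded main image are filled from the reference image."""
    cloudy = (t_main == 1)
    out = grey_main.copy()
    out[cloudy] = grey_ref[cloudy]
    return out

g_m = np.array([[210, 250], [30, 240]])   # main image brightness
g_r = np.array([[200, 90], [35, 110]])    # reference image brightness
t_m = np.array([[210, 1], [30, 240]])     # restored thresholded image
result = mosaic(g_m, g_r, t_m)
```

Only the pixel at (0,1), still tagged as cloud, is replaced by the reference value 90; all other pixels keep the main image's brightness.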

    The final cloud-free image is obtained in colour by using the intermediate grey images as a binary map/marker of the cloudy areas. All the processing, such as brightness correction, thresholding and restoring the land features, is first performed on the grey images. Only brightness correction and mosaicking are performed on the colour image, based on the grey images.

    Fig. 5. Final cloud-free colour image.
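Applying the grey binary map to the colour image works the same way; the boolean mask indexes the first two axes and broadcasts over the RGB channels (a sketch, with names of our choosing):

```python
import numpy as np

def mosaic_colour(rgb_main, rgb_ref, t_main):
    """Use the grey thresholded image as a binary cloud map and copy
    the reference image's colour pixels into the cloudy locations."""
    cloudy = (t_main == 1)
    out = rgb_main.copy()
    out[cloudy] = rgb_ref[cloudy]  # mask broadcasts over the RGB axis
    return out

rgb_main = np.zeros((2, 2, 3))        # dark main image
rgb_ref = np.full((2, 2, 3), 9.0)     # brighter reference image
t_main = np.array([[1, 0], [0, 1]])   # cloud at (0,0) and (1,1)
result = mosaic_colour(rgb_main, rgb_ref, t_main)
```

This keeps all per-channel detail of the cloud-free regions while only the tagged pixels are patched in from the reference.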





    Number of pixels


    Main image Cloud count



    Reference image Cloud count



    Restored cloud/land count



    Actual cloud count in main image



    Final(cloud-free) image mosaicked area pixel count



    Remaining cloud covered area pixel count in final image


    1. The above pixel counts are for threshold value C1=120


In this paper we presented a scheme to automatically detect and remove clouds. The algorithm is composed of four stages: brightness correction, detection of clouds, restoration of land features, and image mosaicking. The scheme has low computational complexity because the entire image processing is done in the spatial domain rather than the frequency domain. In addition, the algorithm can also be used to detect and remove fog, mist and haze contamination. Further, we believe that this automated removal of clouds can be considered a kind of preprocessing before quantitative study, and should be very useful for many practical applications such as environment monitoring.


  1. FENG Chun, MA Jian-wen, DAI Qin and CHEN Xue, "An Improved Method for Cloud Removal in ASTER Data Change Detection," 0-7803-8742-2/04, pp. 3387-3388, 2004.

  2. Bin WANG, Atsuo ONO, Kanako MURAMATSU and Noboru FUJIWARA, "Automated Detection and Removal of Clouds and Their Shadows from Landsat TM Images," IEICE Trans. Inf. & Syst., Vol. E82-D, No. 2, February 1999.

  3. Chao-Hung Lin, Po-Hung Tsai, Kang-Hua Lai and Jyun-Yuan, "Cloud Removal from Multi-temporal Satellite Images Using Information Cloning," IEEE Transactions on Geoscience and Remote Sensing, 2011.

  4. Tapasmini Sahoo, "Cloud Removal from Satellite Images using Auto Associative Neural Network and Stationary Wavelet Transform," 978-0-7695-3267-7/08, 2008.
