Certain Explorations On Removal Of Rain Streaks Using Morphological Component Analysis

DOI : 10.17577/IJERTV2IS2067


Jaina George1, S.Bhavani2, Dr.J.Jaya3

  1. PG Scholar, Sri.Shakthi Institute of Engineering and Technology, Coimbatore, Tamil Nadu.

  2. Head of the Department/ECE, Sri.Shakthi Institute of Engg and Tech, Coimbatore

  3. Principal, Akshaya College of Engineering and Technology, Coimbatore.

    Abstract

    Rain is a complex dynamic noise that hampers feature detection and extraction from images. The visual effects of rain are complex and degrade the performance of outdoor vision systems. In this work, we propose a single-image-based rain removal framework that formulates rain removal as an image decomposition problem solved using morphological component analysis. Instead of applying a conventional image decomposition technique directly, the proposed method first smooths the image using a bilateral filter and then splits the image into low-frequency and high-frequency components. The high-frequency portion then undergoes morphological component analysis: it is decomposed into a rain component and a non-rain component by means of patch extraction, dictionary learning and dictionary partitioning. As a result, the rain component can be successfully removed from the image while preserving most of the original image details.

    Keywords: Morphological Component Analysis, High frequency, Image Decomposition Technique, Dictionary learning, Dictionary partitioning.

    1. Introduction

      The visual effects of rain are complex. Nowadays, outdoor vision-based detection systems are used for various applications. Most of the time these systems are designed to work in clear weather conditions, which is obviously not always the case.

      Fig 1: Block diagram of the proposed rain streak removal method.

      Different weather conditions such as rain, snow or fog cause complex visual effects in the spatial and temporal domains of images. Such effects may significantly degrade the performance of outdoor vision systems. In a scene, rain produces a complex set of visual effects. Because raindrops fall at high speed, it is usually impossible for a camera to resolve each individual drop; except when a very high-speed camera is used, raindrops appear as streaks (or ripples).

      Dynamic weather effects such as rain cause rapid, distracting motion in a video sequence. If these streaks are removed, a tracker can work with greater accuracy.

    2. FILTERING

      Filtering is perhaps the most fundamental operation of image processing and computer vision. In the broadest sense of the term, the value of the filtered image at a given location is a function of the values of the input image in a small neighbourhood of the same location. In a wide variety of image processing applications, it is necessary to smooth an image while preserving its edges. Gray levels often overlap, which makes any post-processing task such as segmentation, feature extraction or labelling more difficult. Filtering is equally fundamental in many biomedical image processing applications, where it reduces the noise level and improves the quality of the image. In general, the choice of a suitable de-noising algorithm depends on the specific targeted application.

      1. BILATERAL FILTERING

        A bilateral filter is an edge-preserving, noise-reducing smoothing filter. Bilateral filtering is a non-linear filtering technique. It extends the concept of Gaussian smoothing by weighting the filter coefficients according to the corresponding relative pixel intensities; this weight is based on a Gaussian distribution. Pixels that are very different in intensity from the central pixel are weighted less, even though they may be in close proximity to it. The operation is effectively a convolution with a non-linear Gaussian filter, with weights based on pixel intensities. It is applied as two Gaussian filters over a localized pixel neighbourhood: one in the spatial domain, named the domain filter, and one in the intensity domain, named the range filter. Bilateral filters assume an explicit notion of distance in the domain and range of the image function; they can be applied to any function for which these two distances can be defined. The filter replaces each pixel value with an average of similar and nearby pixel values, preserving sharp edges by systematically looping through each pixel and weighting the adjacent pixels accordingly.

        1. Range Filter

          Range filters are nonlinear because their weights depend on image intensity. The range filter measures the photometric similarity between each pixel and the pixel at the neighbourhood centre. Range filters are contrast dependent; a coefficient is assigned to each pixel.

          Let f be the brightness function of the image, mapping the coordinates of a pixel (x, y) to a light intensity value. Then, for any pixel a_i within a neighbourhood of size n centred at a_0, the coefficient assigned by the range filter, r(a_i), is determined by the following function:

          r(a_i) = exp( -[f(a_i) - f(a_0)]^2 / (2σ^2) )

        2. Domain Filter

          Domain filters work in the spatial domain and are size dependent. The domain filter measures the geometric closeness between a pixel and the neighbourhood centre; domain filtering is based upon possible image locations.

          Similarly, the coefficient assigned by the domain filter, g(x, y; t), is determined by the closeness function:

          g(x, y; t) = exp( -(x^2 + y^2) / (2t) )

          where t is the scale parameter.
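The range and domain coefficients above combine multiplicatively into the full bilateral filter. The following is a minimal NumPy sketch, not the authors' implementation; the parameter names sigma_d and sigma_r (the domain scale t and range scale σ) and the default values are our own choices:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=2.0, sigma_r=25.0):
    """Edge-preserving smoothing: each output pixel is a weighted average
    of its neighbours, weighted by spatial closeness (domain filter) and
    intensity similarity (range filter)."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    # Domain weights depend only on position, so precompute them once:
    # g(x, y; t) = exp(-(x^2 + y^2) / (2 t))
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    domain = np.exp(-(xx**2 + yy**2) / (2 * sigma_d**2))
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range weights: r(a_i) = exp(-[f(a_i) - f(a_0)]^2 / (2 sigma^2))
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = domain * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

A pixel lying across a strong edge from the centre receives a near-zero range weight, so the edge is not averaged away.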

    3. DISCRETE WAVELET TRANSFORM (DWT)

    The wavelet transform has received considerable attention in the field of image processing due to its flexibility in representing non-stationary image signals and its ability to adapt to human visual characteristics. Wavelet transforms are among the most powerful and most widely used tools in image processing. Their inherent capacity for multi-resolution representation, akin to the operation of the human visual system, motivated their quick adoption and widespread use in image processing applications.

    The discrete wavelet transform is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over the Fourier transform is temporal resolution: it captures both frequency and time information.

    The DWT is obtained by filtering the signal through a series of digital filters at different scales. The scaling operation changes the resolution of the signal by sub-sampling. The input image is decomposed into low-pass and high-pass sub-bands, each consisting of half the number of samples of the original image.
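A one-level decomposition of this kind can be sketched with the Haar wavelet, the simplest DWT. This is a toy NumPy illustration for even-sized images, not the paper's implementation; a practical system would use a wavelet library:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns the low-frequency
    approximation (LL) and the three high-frequency detail sub-bands
    (LH, HL, HH), each with half the samples along every axis."""
    img = img.astype(np.float64)
    # Pairwise averages (low-pass) and differences (high-pass) along rows
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Repeat the same split along columns
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, (LH, HL, HH)
```

The LL band retains the basic image structure, while fine detail such as rain streaks falls into the high-frequency bands.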

    In our method, the filtered rain image is first roughly decomposed into a low-frequency (LF) part and a high-frequency (HF) part using the discrete wavelet transform, where the most basic information is retained in the LF part while the rain streaks and the other edge/texture information are included in the HF part of the image.

  4. MORPHOLOGICAL COMPONENT ANALYSIS

    The key idea of MCA is to exploit the morphological diversity of the different features contained in the data to be decomposed, and to associate each morphological component with a dictionary of atoms. Here, the conventional MCA-based image decomposition approach and the sparse coding, dictionary learning and dictionary partitioning techniques are briefly introduced. Morphological Component Analysis (MCA) has been proposed to separate the texture from the natural part of images. MCA relies on an iterative thresholding algorithm, using a threshold which decreases linearly towards zero along the iterations.

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. Sparse coding is the technique of finding a sparse representation of a signal, with a small number of nonzero or significant coefficients corresponding to atoms in a dictionary.
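As an illustrative sketch of sparse coding over a fixed dictionary, the following uses iterative soft-thresholding (ISTA), one standard solver for the l1-regularized objective (1/2)||y - Dθ||₂² + λ||θ||₁. This is our own toy example, not the solver used in the paper:

```python
import numpy as np

def sparse_code_ista(y, D, lam=0.1, n_iter=200):
    """Iterative soft-thresholding: finds sparse coefficients theta such
    that D @ theta approximates y with few nonzero entries."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    theta = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ theta - y)       # gradient of the quadratic data term
        z = theta - grad / L
        # Soft-thresholding is the proximal step for the l1 penalty;
        # it shrinks small coefficients to exactly zero, enforcing sparsity.
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return theta
```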

    1. Dictionary Learning

      In this step, we extract from I_HF a set of overlapping patches as the training exemplars y_k for learning the dictionary D_HF. We formulate the dictionary learning problem as

      min_{D_HF ∈ R^(n×m), {θ_k ∈ R^m}}  Σ_{k=1}^{P} ( (1/2) ||y_k − D_HF θ_k||_2^2 + λ ||θ_k||_1 )

      where θ_k denotes the sparse coefficients of y_k with respect to D_HF, and λ is a regularization parameter.

    2. Dictionary Partitioning

      The atoms constituting D_HF can be roughly divided into two clusters (sub-dictionaries) representing the geometric and rain components of I_HF. Intuitively, the most significant feature of a rain atom can be extracted via the image gradient. In the proposed method, the HOG descriptor is used to describe each atom in D_HF.

      1. Histogram of Oriented Gradients

        The basic idea of HOG is that local object appearance and shape can usually be well characterized by the distribution of local intensity gradients or edge directions, without precise knowledge of the corresponding gradient or edge positions. The image is divided into several small spatial regions, or cells, and for each cell a local 1-D histogram of gradient directions or edge orientations over the pixels of the cell is accumulated. The combined histogram entries of all cells form the HOG representation of the image. In our implementation, the size of a local image patch/dictionary atom is chosen to be 16×16, which leads to a reasonable computational cost in dictionary partitioning (which involves HOG feature extraction).

      2. K-Means Clustering

        After extracting the HOG feature for each atom in D_HF, we apply the K-means algorithm to classify all of the atoms in D_HF into two clusters, D1 and D2, based on their HOG feature descriptors. We must then identify which cluster consists of rain atoms and which consists of geometric (non-rain) atoms. First, we calculate the variance of gradient direction for each atom in each cluster; then we calculate the mean variance for each cluster. Based on the fact that the edge directions of rain streaks within an atom are usually consistent, i.e., the variance of gradient direction for a rain atom should be small, we identify the cluster with the smaller mean variance as the rain sub-dictionary and the other as the geometric (non-rain) sub-dictionary.
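The variance-based identification rule can be sketched as follows. This is a simplified NumPy illustration of our own; in the actual method, the atoms come from the learned dictionary D_HF and the two clusters from K-means on HOG descriptors:

```python
import numpy as np

def direction_variance(atom):
    """Variance of gradient direction over one patch-shaped atom.
    Rain streaks have consistent edge direction, hence low variance."""
    gy, gx = np.gradient(atom)
    angles = np.arctan2(gy, gx)
    return np.var(angles)

def identify_rain_cluster(cluster1, cluster2):
    """Given two clusters of atoms, return (rain, non_rain): the cluster
    whose mean gradient-direction variance is smaller is the rain one."""
    m1 = np.mean([direction_variance(a) for a in cluster1])
    m2 = np.mean([direction_variance(a) for a in cluster2])
    return (cluster1, cluster2) if m1 < m2 else (cluster2, cluster1)
```

An atom containing a single oriented ramp has near-zero direction variance, while a noise-like geometric atom spreads its gradient directions widely.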

      3. Orthogonal Matching Pursuit (OMP)

Matching pursuit is a numerical technique that finds the best matching projections of multidimensional data onto an over-complete dictionary. Given a fixed dictionary, matching pursuit first finds the one atom that has the largest inner product with the signal, then subtracts the contribution due to that atom, and repeats the process until the signal is satisfactorily decomposed. Orthogonal matching pursuit differs from MP in that after every step, all of the coefficients extracted so far are updated by computing the orthogonal projection of the signal onto the set of atoms selected so far.
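A minimal sketch of OMP in NumPy (assuming a column-normalized dictionary; the greedy selection and the least-squares re-fit correspond to the two steps just described):

```python
import numpy as np

def omp(y, D, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then re-fit ALL selected coefficients
    by least squares (the orthogonal-projection step that distinguishes
    OMP from plain matching pursuit)."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom with the largest inner product with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Orthogonal projection of y onto the span of the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs
```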

5. EXPERIMENTAL RESULTS

The step-by-step results of the proposed rain streak removal process, based upon dictionary learning and dictionary partitioning via morphological component analysis, are shown below.

Fig 2: Step-by-step results of the proposed rain streak removal process: (a) original image; (b) LF part of the image; (c) HF part of the image; (d) rain pixels; (e) dictionary trained on patches from images; (f) noiseless image.

PSNR VALUE (BEFORE PROCESSING): 6.5669 dB
PSNR VALUE (AFTER PROCESSING): 20.4826 dB

Table 7.1 PSNR value

MSE (BEFORE PROCESSING): 2761.62
MSE (AFTER PROCESSING): 585.7467

Table 7.2 MSE value
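The metrics reported in the tables follow the standard definitions for 8-bit images; a minimal sketch (the sample arrays in the test are our own):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between two images."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(ref, img)
    return float('inf') if e == 0 else 10.0 * np.log10(peak**2 / e)
```

As a sanity check, an MSE of 585.7467 corresponds to a PSNR of roughly 20.5 dB, in line with the after-processing values reported above.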

6. CONCLUSION

In this paper, a single-image-based rain streak removal framework is presented, formulating rain removal as an MCA-based image decomposition problem solved by performing dictionary learning, dictionary partitioning and sparse coding algorithms. The dictionary learning of the proposed method is automatic and self-contained: no extra training samples are required in the dictionary learning stage. Our experimental results show that the proposed scheme can effectively remove rain streaks without significantly blurring the original image. The peak signal-to-noise ratio of the rain-streak-removed image is much better than that of the original rainy image.

REFERENCES

[1]. Aharon M., Elad M., and Bruckstein A. M. (2006), "The K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311-4322.

[2]. Barnum P. C., Narasimhan S., and Kanade T. (2010), "Analysis of rain and snow in frequency space," Int. J. Comput. Vis., vol. 86, no. 2/3, pp. 256-274.

[3]. Bobin J., Starck J. L., Fadili J. M., Moudden Y., and Donoho D. L. (2007), "Morphological component analysis: An adaptive thresholding strategy," IEEE Trans. Image Process., vol. 16, no. 11, pp. 2675-2681.

[4]. Bossu J., Hautière N., and Tarel J. P. (2011), "Rain or snow detection in image sequences through use of a histogram of orientation of streaks," Int. J. Comput. Vis., vol. 93, no. 3, pp. 348-367.

[5]. Brewer N. and Liu N. (2008), "Using the shape characteristics of rain to identify and remove rain from video," Lecture Notes Comput. Sci., vol. 5342/2008, pp. 451-458.

[6]. Buades A., Coll B., and Morel J. M. (2005), "A review of image denoising algorithms, with a new one," Multisc. Model. Simul., vol. 4, no. 2, pp. 490-530.

[7]. Dalal N. and Triggs B. (2005), "Histograms of oriented gradients for human detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., San Diego, CA, vol. 1, pp. 886-893.

[8]. Duarte-Carvajalino J. M. and Sapiro G. (2009), "Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization," IEEE Trans. Image Process., vol. 18, no. 7, pp. 1395-1408.

[9]. Elad M. and Aharon M. (2006), "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736-3745.

[10]. Fadili J. M., Starck J. L., Bobin J., and Moudden Y. (2010), "Image decomposition and separation using sparse representations: An overview," Proc. IEEE, vol. 98, no. 6, pp. 983-994.

[11]. Fu Y. H., Kang L. W., Lin C. W., and Hsu C. T. (2011), "Single-frame-based rain removal via image decomposition," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., Prague, Czech Republic, pp. 1453-1456.

[12]. Garg K. and Nayar S. K. (2004), "Detection and removal of rain from videos," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., vol. 1, pp. 528-535.

[13]. Itti L., Koch C., and Niebur E. (1998), "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254-1259.

[14]. Kang L. W., Lin C. W., and Fu Y. H. (2012), "Automatic single-image-based rain streaks removal via image decomposition," IEEE Trans. Image Process., vol. 21, no. 4.

[15]. Ludwig O., Delgado D., Goncalves V., and Nunes U. (2009), "Trainable classifier-fusion schemes: An application to pedestrian detection," in Proc. IEEE Int. Conf. Intell. Transp. Syst., St. Louis, MO, pp. 1-6.

[16]. Mairal J., Bach F., and Ponce J. (2012), "Task-driven dictionary learning," IEEE Trans. Pattern Anal. Mach. Intell., to be published.

[17]. Olshausen B. A. and Field D. J. (1996), "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607-609.

[18]. Roser M. and Geiger A. (2009), "Video-based raindrop detection for improved image registration," in IEEE Int. Conf. Comput. Vis. Workshops, Kyoto, pp. 570-577.

[19]. Starck J. L., Elad M., and Donoho D. L. (2005), "Image decomposition via the combination of sparse representations and a variational approach," IEEE Trans. Image Process., vol. 14, no. 10, pp. 1570-1582.

[20]. Tomasi C. and Manduchi R. (1998), "Bilateral filtering for gray and color images," in Proc. IEEE Int. Conf. Comput. Vis., Bombay, India, pp. 839-846.
