An Efficient Rain Detection and Removal from Videos using Rain Pixel Recovery Algorithm

DOI : 10.17577/IJERTCONV3IS15007


J. Ramya[1], S. Dhanalakshmi[2], Dr. S. Karthick[3]

UG Scholar, Associate Professor, and Professor and Dean, Dept. of CSE, SNS College of Technology, Coimbatore, India

Abstract – Image segmentation is the process of subdividing an image into its constituent parts or objects, and rain removal is an important technique that builds on it. The visual effect of rain is complex: falling at high velocity, rain consists of spatially distributed drops that are motion blurred and produce high-intensity streaks in images and videos. Photometric and chromatic constraints combined with background subtraction have been used to remove the rain effect, but their performance is poor. In this paper, an efficient analysis and synthesis algorithm based on motion segmentation of dynamic scenes is developed to improve the performance of rain removal. Rain removal filters and the discrete wavelet transform (DWT) are used to decompose the image into high-frequency and low-frequency parts.

Keywords: Image segmentation, image processing, analysis and synthesis algorithm, threshold, rain removal filter, discrete wavelet transform.

1. INTRODUCTION

Image processing is a set of techniques for enhancing raw images received from cameras on satellites, or pictures taken in everyday life, for various applications. It is a form of signal processing in which the input is an image and the output may be an image or a set of characteristics; most images are two dimensional, and different algorithms can be applied to the input data. There are two types of images. A digital image has a finite set of digital values called picture elements, or pixels, and digital image processing focuses on two major tasks: improving pictorial information for interpretation, and processing image data for storage, transmission, and representation. The other type is the analog image, a physical image created when the film in a camera is exposed to light. Image processing includes several techniques such as segmentation, steganography, watermarking, recognition, image enhancement, compression, and retrieval. Segmentation is the process of partitioning an image into distinct, separable regions based on intensity. Before segmentation, the input image should be denoised by passing it through a high-pass or low-pass filter; the denoised image is then converted into a binary image, thresholding is performed, and segmentation is carried out by a clustering approach, as sketched below. Before recovering the rain-affected pixels, the image is segmented. In this paper, an efficient analysis and synthesis algorithm is developed in order to separate the low-frequency and high-frequency parts.
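As a minimal illustration of this pre-processing pipeline (denoise, threshold to a binary image, then segment by clustering), the following sketch assumes OpenCV and NumPy and a grayscale uint8 input; the function name and all parameter values are illustrative, not taken from the paper.

import cv2
import numpy as np

def preprocess_and_segment(gray, k=2):
    # Denoise with a low-pass (Gaussian) filter before thresholding.
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)

    # Convert to a binary image using Otsu's threshold.
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Cluster pixel intensities with k-means to get a simple segmentation.
    samples = denoised.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(samples, k, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)
    return binary, labels.reshape(gray.shape)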

Fig 1: Recovery of an image using the rain pixel recovery algorithm

2. MOTION SEGMENTATION

Motion segmentation is used to segment a dynamic scene [3]. The pixel intensity variation in a rainy scene is caused both by rain and by object motion: the variation caused by rain needs to be removed, while the variation caused by object motion must be kept as it is. Motion field segmentation therefore becomes a fundamental step of these algorithms. A proper threshold [6] is set to detect the intensity variation caused by rain; a minimal sketch of this temporal test follows. After applying the photometric and chromatic constraints for rain detection, rain removal filters are applied to the detected pixels such that each pixel's dynamic property as well as its motion occlusion clue are considered, and both spatial and temporal information are then adaptively used during rain pixel recovery. This algorithm gives better performance than others for rain removal in highly dynamic scenes with heavier rainfall.
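The temporal thresholding step can be illustrated as follows; this is a hedged sketch, assuming grayscale frames and an illustrative threshold c, and it ignores camera motion and frame alignment.

import numpy as np

def rain_candidates(prev_frame, cur_frame, next_frame, c=3.0):
    prev_f = prev_frame.astype(np.float32)
    cur_f = cur_frame.astype(np.float32)
    next_f = next_frame.astype(np.float32)

    # A rain streak brightens a pixel only briefly, so a candidate pixel in
    # the current frame should exceed both temporal neighbours by at least c.
    return ((cur_f - prev_f) >= c) & ((cur_f - next_f) >= c)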

3. CONSTRAINTS APPLIED FOR RAIN DETECTION

In [7], a photometric model is first developed that describes the intensities produced by individual rain streaks, followed by a dynamic model that captures the spatiotemporal properties of rain. Together, these models describe the complete visual appearance of rain, and they are used to develop an algorithm for rain detection and removal. The temporal property states that an image pixel is never covered by rain throughout the entire video. The chromatic property states that the changes in the R, G, and B values of rain-affected pixels are approximately the same; a sketch of this test is given after this paragraph. By using both the temporal and chromatic properties, the algorithm can detect and remove rain streaks in both stationary and dynamic scenes captured by stationary cameras, but it gives wrong results for scenes captured by moving cameras. To handle this situation, the video can be stabilized before rain removal and destabilized afterwards to restore the camera motion, which allows both light and heavy rain conditions to be handled. However, this method is only applicable to static backgrounds and produces false results for particular foreground colours. To overcome this problem, after applying the photometric and chromatic properties, the analysis and synthesis algorithm is used to remove the rain pixels in the image or video.
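The photometric and chromatic tests can be combined as in the following sketch; it assumes BGR frames and illustrative thresholds (c for the minimum brightening, tol for how similar the per-channel changes must be), and is not the exact formulation of the cited work.

import numpy as np

def chromatic_rain_mask(prev_bgr, cur_bgr, c=3.0, tol=3.0):
    diff = cur_bgr.astype(np.float32) - prev_bgr.astype(np.float32)

    # Photometric constraint: all three channels brighten by at least c.
    brightened = np.all(diff >= c, axis=2)

    # Chromatic constraint: the R, G and B changes are approximately equal.
    uniform = (diff.max(axis=2) - diff.min(axis=2)) <= tol

    return brightened & uniform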

4. TYPES OF FILTERS

A filter can be applied to eliminate pixels in the image that are not significant. After applying the photometric and chromatic constraints for rain detection, rain removal filters can be applied to the detected pixels by considering both the dynamic property and the motion of the image [5]. The filter is chosen based on the nature of the image, and it can be applied before or after detection. Different types of filters are used for rain recovery of an image. Purely temporal filtering methods are not very effective in removing rain, since they are spatially invariant and therefore degrade image quality in regions without rain; median filtering over time removes some rain but also alters the signal where there is motion, and it is hard to track individual raindrops or the appearance of rain. In this paper a bilateral filter is applied. Instead of directly applying a conventional image decomposition technique, this method first decomposes an image into low-frequency and high-frequency (HF) parts using a bilateral filter, as sketched below. The HF part is then decomposed into a rain component and a non-rain component by performing dictionary learning and sparse coding based on MCA (morphological component analysis). This is the first method that removes rain streaks while preserving geometrical details in a single frame [8], where no temporal or motion information among successive images is required. In this method, decomposing rain streaks from an image is fully automatic and self-contained, and no extra training samples are required.
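The low-/high-frequency split can be sketched as below, assuming OpenCV's bilateral filter; the filter parameters are illustrative and would normally be tuned per sequence.

import cv2
import numpy as np

def decompose_lf_hf(image, d=9, sigma_color=75, sigma_space=75):
    img = image.astype(np.float32)

    # Low-frequency part: edge-preserving smoothing with a bilateral filter.
    lf = cv2.bilateralFilter(img, d, sigma_color, sigma_space)

    # High-frequency part: the residual, containing rain streaks plus
    # fine texture detail.
    hf = img - lf
    return lf, hf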

5. ANALYSIS AND SYNTHESIS ALGORITHM

Detection and removal of rain is important in outdoor surveillance vision systems, since the appearance of rain streaks degrades the performance of various vision-based applications [9]. A new video-based rain removal framework is obtained by properly formulating rain removal as a video decomposition problem based on the analysis and synthesis (A&S) algorithm. Analysis-synthesis filters are often implemented with hierarchical sub-sampling, leading to a pyramid; the Laplacian pyramid is such a sub-sampled system of analysis and synthesis filters. The analysis filters are band-pass and the synthesis filters are low-pass, so the synthesis filters can remove high-frequency artifacts introduced by nonlinear processing, but not low-frequency artifacts: when nonlinearities introduce distortions that show up in low frequencies, the synthesis filters cannot remove them. In spite of these problems, fairly good results can be obtained with the Laplacian pyramid when smooth gain maps are computed. A minimal pyramid sketch follows.
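The following sketch builds and reconstructs a Laplacian pyramid with OpenCV's pyrDown/pyrUp; the band-pass levels correspond to the analysis step and the reconstruction to the synthesis step. This illustrates the general structure only, not the exact filters used in the paper.

import cv2
import numpy as np

def build_laplacian_pyramid(image, levels=4):
    pyramid, cur = [], image.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyramid.append(cur - up)   # band-pass (detail) level from analysis
        cur = down
    pyramid.append(cur)            # low-pass residual
    return pyramid

def reconstruct_from_pyramid(pyramid):
    cur = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(band.shape[1], band.shape[0])) + band
    return cur                     # synthesis: upsample and add each band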

Instead of directly applying a conventional image decomposition (DWT) technique, we first decompose an image into low-frequency and high-frequency parts using a bilateral filter. The high-frequency part is then decomposed into a rain component and a non-rain component by performing dictionary learning and sparse coding, as sketched below.
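The dictionary learning and sparse coding step on the HF part can be sketched as follows, assuming scikit-learn; splitting the learned atoms into rain and non-rain subsets (for example by their dominant orientation) is only indicated, not implemented, and all parameters are illustrative.

import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def sparse_code_hf(hf, patch_size=(8, 8), n_atoms=64):
    # Sample HF patches and remove their mean before dictionary learning.
    patches = extract_patches_2d(hf, patch_size, max_patches=2000,
                                 random_state=0)
    X = patches.reshape(patches.shape[0], -1)
    X = X - X.mean(axis=1, keepdims=True)

    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5)
    codes = dico.fit(X).transform(X)   # sparse coefficients per patch
    atoms = dico.components_           # learned dictionary atoms

    # A rain/non-rain decomposition would partition the atoms and rebuild
    # each component from its own subset of atoms.
    return codes, atoms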

Fig 2: Video sequence comparison. (a) Original video frames. (b) Photometric model. (c) Chromatic model. (d) Spatial-temporal model. (e) After applying the analysis and synthesis algorithm, the performance is improved.

High dynamic range (HDR) imaging is an area of increasing importance, yet display devices still have a limited dynamic range (LDR). Multiscale decomposition techniques have a reputation for causing halo artifacts when used for range compression. The synthesized LDR image contains much more scene detail than any of the captured LDR images. Moreover, the scheme also functions as a tone mapping of an HDR image to an LDR image, and it is superior to both global and local tone-mapping operators. The proposed method uses balanced analysis-synthesis filters and applies local gain control to the sub-bands of the system for decomposing and reconstructing images, together with a gradient-domain algorithm based on the properties of the HVS for high dynamic range compression. Experimental results on real images demonstrate that the algorithm is especially effective at preserving or enhancing local details. A sketch of sub-band gain control follows.
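Local gain control on the sub-bands can be sketched as below; the gain curve and its parameters (sigma, gamma) are illustrative assumptions, and the pyramid construction mirrors the earlier sketch.

import cv2
import numpy as np

def compress_range(image, levels=4, sigma=0.1, gamma=0.6):
    cur = image.astype(np.float32)
    bands = []
    for _ in range(levels):            # analysis: band-pass levels
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        bands.append(cur - up)
        cur = down

    out = cur
    for band in reversed(bands):       # synthesis with smooth gain maps
        # Smooth, signal-dependent gain: large coefficients are attenuated
        # more than small ones, compressing range while keeping detail.
        mag = cv2.GaussianBlur(np.abs(band), (9, 9), 0) + 1e-6
        gain = (mag / sigma) ** (gamma - 1.0)
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band * gain
    return out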

6. CONCLUSION

Compared with the analysis and synthesis algorithm, other algorithms perform poorly in highly dynamic scenes: serious pixel corruptions often occur in motion-intensive areas because motion occlusions are ignored during pixel recovery. Based on the motion segmentation scheme, the proposed method recovers the rain pixels such that each pixel's dynamic property as well as its motion occlusion clue is considered, and both spatial and temporal information are adaptively exploited during rain pixel recovery. Experimental results show that the algorithm performs better in highly dynamic scenarios.

7. REFERENCES

1. Jie Chen and Lap-Pui Chau, A Rain Pixel Recovery Algorithm for Videos With Highly Dynamic Scenes, IEEE Transactions on Image Processing, vol. 23, no. 3, March 2014.

2. K. Garg and S. K. Nayar, Detection and removal of rain from videos, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., vol. 1, pp. 528-535, Jul. 2004.

3. K. Garg and S. K. Nayar, Vision and rain, International Journal of Computer Vision, vol. 75, no. 1, pp. 3-27, 2007.

4. B. K. Horn and B. G. Schunck, Determining optical flow, Artificial Intelligence, vol. 17, pp. 185-203, Jan. 1981.

5. Li-Wei Kang, Automatic Single-Image-Based Rain Streaks Removal via Image Decomposition, IEEE Transactions on Image Processing, vol. 21, no. 4, April 2012.

6. A. Ogale, C. Fermüller, and Y. Aloimonos, Motion segmentation using occlusions, IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, pp. 988-992, Jun. 2005.

7. Robin Kalia and Amol Jaikar, Rain Removal From Videos Using the Temporal-Spatial Statistical Properties, communication paper, 2011.

8. S. Starik and M. Werman, Simulation of rain in videos, in Proc. ICCV Texture Workshop, vol. 2, pp. 13-18, 2003.

9. Shaik Nasreen, Detection and Removal of Rain in Videos Using Modern Approach, International Journal of Engineering Research, vol. 3, special issue 2, pp. 60-63, 22 March 2014.

10. X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, Rain removal in video by combining temporal and chromatic properties, in Proc. IEEE Int. Conf. Multimedia Expo, pp. 461-464, Jul. 2006.
