Rain Pixel Recovery for Videos in Dynamic Scenes

DOI : 10.17577/IJERTV3IS100437


Pooja Rade, Pratibha Pansare, Sandhya Kawale, Jyoti Jadhav

Department of Computer Engineering

Sandip Institute of Technology and Research Centre, Nasik.

Abstract: Rain removal is an important and useful technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed that detect and remove rain by exploiting its photometric, chromatic, and probabilistic properties.

The existing algorithms work well for light rain and static scenes, but they give poor visual results for heavy rain and dynamic scenes. The proposed algorithm is based on motion segmentation and rain detection for dynamic scenes. After photometric and chromatic constraints are applied for rain detection, rain removal filters are applied to each pixel so that its dynamic property as well as motion occlusion clues are considered; both spatial and temporal information are then adaptively exploited during rain pixel recovery. The results show that the proposed algorithm performs better than current algorithms on rainy scenes with large motion. The proposed system is a useful tool for movie editing, investigation software, and security surveillance, and the algorithms it uses remove rainfall from both static and dynamic scenes in video.

Keywords: Motion segmentation, motion occlusion, dynamic scene, motion buffering, adaptive filters, rain removal

INTRODUCTION

Rain removal is a tedious task. In some videos, fluctuations occur that are caused by dynamic objects, camera motion, or object motion; these fluctuations, together with the rain itself, change pixel intensities away from their original values. Many methods have been proposed to remove the rain and these fluctuations; the central requirement is to detect the rain-affected pixels and replace them with their original values.

The first analysis was done by Garg and Nayar, who took the photometric [1], [2] and physical properties of rain into account. They detected rain using intensity and temporal constraints derived from their observations, but the method was limited to rain drops with uniform velocities and directions.

Zhang et al. proposed a further method in which the chromatic properties [3] were taken into consideration, based on how the R, G, and B intensities change for objects in motion. This algorithm worked only for static scenes and only for certain background colours.

Tripathi et al. proposed a probabilistic spatio-temporal model [4], [5] that detects fluctuations and their intensity range. The model is applicable to both static and dynamic backgrounds, but it does not work for heavy rain or fast-moving objects.

The shortcomings of the existing methods lie in rain detection and in predicting the original values of rain-covered pixels. As a result, regions containing motion are affected, important information is erased, and a ghosting effect is observed.

The proposed algorithm is based on motion segmentation of the dynamic scene. After photometric and chromatic constraints are applied for rain detection, rain removal filters are applied to each pixel so that its dynamic property as well as motion occlusion clues are considered; both spatial and temporal information are then adaptively exploited during rain pixel recovery. Experimental results show that our algorithm outperforms existing ones in highly dynamic scenarios.

SYSTEM ARCHITECTURE

MOTION SEGMENTATION

Motion segmentation separates the pixel fluctuations of a rainy scene that are caused by object motion from those caused by rain, so that object motion is retained while the rain is removed.

Motion segmentation is divided into the following steps:

  • Estimation of motion field

  • Including local properties

  • Combination of motion and locality cues

The motion field is a 2-D vector field obtained by projecting the 3-D motion of a dynamic scene onto the image plane. Moving objects can be recognized through motion segmentation, and their motion is estimated with optical flow. The intensity conservation constraint is obtained by applying the chain rule for differentiation. Optical flow gives an accurate estimate of the relative displacement of objects between adjacent frames. Local properties such as pixel location and chromatic values are also incorporated into the segmentation, so that each pixel's colour information is used.
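The optical flow step above can be prototyped with any dense flow estimator. The sketch below uses OpenCV's Farneback dense optical flow as a stand-in and thresholds the flow magnitude to obtain a binary motion map; the function name, parameters, and threshold are illustrative, and the combination with locality cues (pixel position and colour) is omitted.

```python
import cv2
import numpy as np

def motion_map(prev_gray, curr_gray, mag_thresh=1.0):
    """Binary motion map from dense optical flow between two grayscale frames."""
    # Dense optical flow (Farneback): flow[..., 0] = u, flow[..., 1] = v.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    magnitude = np.sqrt(u * u + v * v)          # apparent speed per pixel
    return (magnitude > mag_thresh).astype(np.uint8)

# Usage: convert consecutive BGR frames to grayscale first, e.g.
#   prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
#   bm = motion_map(prev_gray, curr_gray)
```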

RAIN DETECTION

Rain detection consists of three main steps:

  1. Differencing and thresholding

  2. Applying photometric & chromatic constraints

  3. Motion exclusion

The difference between two successive frames is calculated and thresholded using the grey-scale intensity. Intensity fluctuations caused by rain can be detected by setting a suitable threshold value, as in the rule below (a code sketch of this step appears at the end of this section).

I_diff = { 1, if I_N - I_{N-1} >= D_th
           0, if I_N - I_{N-1} < D_th

First, the photometric constraints are applied as conditions. Constraints on the intensity fluctuation are applied to the pixels, taking into account the speed of the rain, the camera motion, and the direction in which the camera moves. After the photometric and chromatic constraints are applied, pixels that fail the differencing test are removed from the rain streak candidates.

Objects that are in motion and objects covered by rain are treated separately: the corresponding pixels are divided into two sets, and these sets are handled differently.
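A minimal sketch of the differencing, thresholding, and photometric check described above, assuming grayscale frames passed as floating-point arrays; the threshold value and the brighter-than-both-neighbours test (a common reading of the transient-brightening photometric property) are illustrative, and the chromatic constraint is omitted.

```python
import numpy as np

def rain_candidates(frame_prev, frame_curr, frame_next, d_th=3.0):
    """Binary rain map: pixels that brighten briefly in the current frame.

    frame_* are grayscale frames as float arrays; d_th plays the role of
    the threshold D_th in the differencing rule above (value illustrative).
    """
    # Differencing and thresholding: 1 where I_N - I_{N-1} >= D_th.
    brighter_than_prev = (frame_curr - frame_prev) >= d_th
    # Photometric check: a rain streak is a short-lived brightening, so the
    # pixel should also be brighter than the same pixel in the next frame.
    brighter_than_next = (frame_curr - frame_next) >= d_th
    return (brighter_than_prev & brighter_than_next).astype(np.uint8)
```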

FRAME, RAIN, AND MOTION BUFFERS

Three buffers are created for rain removal:

    • Video frames buffer

    • Rain buffer

    • Motion buffer

Each of these three buffers has three parameters: length, width, and depth. Each layer of the video frame buffer holds one video frame; a new frame is pushed onto the top of the buffer and the oldest frame drops out of the bottom. The rain buffer records the binary rain map for the corresponding video frame in the frame buffer, and the motion buffer records the corresponding binary motion map. A minimal buffer sketch follows below.
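A small sketch of the three aligned rolling buffers using fixed-depth deques; the class name, default depth, and interface are illustrative rather than the authors' implementation.

```python
from collections import deque

class RollingBuffers:
    """Three aligned buffers of depth stk: frames, rain maps, motion maps.

    New entries are appended on top; once the buffers are full, the oldest
    entry falls out of the bottom automatically (deque maxlen behaviour).
    """
    def __init__(self, stk=5):
        self.frames = deque(maxlen=stk)   # video frame buffer (B_I)
        self.rain = deque(maxlen=stk)     # binary rain maps   (B_R)
        self.motion = deque(maxlen=stk)   # binary motion maps (B_M)

    def push(self, frame, rain_map, motion_map):
        self.frames.append(frame)
        self.rain.append(rain_map)
        self.motion.append(motion_map)

    def full(self):
        return len(self.frames) == self.frames.maxlen

    def central_index(self):
        # Scene recovery operates on the central frame of the buffer.
        return len(self.frames) // 2
```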

      SCENE RECOVERY

The algorithm works on the central frame of the buffer, so that both past and future information in the video, rain, and motion buffers can be retrieved for better scene recovery performance. Three kinds of pixels are distinguished:

• Rain-covered pixels in the static background

• Rain-covered pixels in moving objects

• Pixels not covered by rain

For rain-covered pixels in the static background, the filter coefficients are set so that the filter is Gaussian-shaped with a chosen variance along the time axis; no spatial neighbour values are used, and the temporally closest samples receive the highest weights, because when a pixel is covered by rain, the values of the same pixel in nearby frames (barring camera motion or background lighting changes) remain close to its original value.

Pixels that are not covered by rain are simply kept unchanged. For rain-covered pixels belonging to moving objects, the fact that pixel values change quickly within a moving object is used, and a 2-D Gaussian-shaped spatial weighting is computed instead. A sketch of the temporal recovery of static-background rain pixels follows below.
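A minimal sketch of the temporal recovery step for rain-covered pixels in the static background, assuming the three buffers are stacked as arrays of shape (stk, H, W); the Gaussian variance, the validity test, and the function name are illustrative, and the separate 2-D spatial filtering of rain pixels inside moving objects is not shown.

```python
import numpy as np

def recover_static_rain_pixels(frame_buf, rain_buf, motion_buf, sigma=1.0):
    """Replace rain-covered background pixels in the central buffered frame.

    frame_buf, rain_buf, motion_buf: arrays of shape (stk, H, W) holding the
    frames, binary rain maps, and binary motion maps. Each target pixel is
    replaced by a Gaussian-weighted temporal average of the same pixel in
    other frames, skipping frames where that pixel is rainy or moving.
    """
    stk = frame_buf.shape[0]
    centre = stk // 2
    out = frame_buf[centre].astype(np.float64).copy()

    # Gaussian weights along the time axis, centred on the current frame.
    t = np.arange(stk) - centre
    w = np.exp(-(t ** 2) / (2.0 * sigma ** 2))

    # Targets: rain pixels of the central frame lying in the static background.
    targets = (rain_buf[centre] == 1) & (motion_buf[centre] == 0)

    # A temporal sample is usable only if it is neither rainy nor moving.
    valid = (rain_buf == 0) & (motion_buf == 0)        # (stk, H, W)
    weights = w[:, None, None] * valid                 # zero out bad samples
    weight_sum = weights.sum(axis=0)

    # Weighted temporal average wherever at least one clean sample exists.
    ok = targets & (weight_sum > 0)
    out[ok] = (weights * frame_buf).sum(axis=0)[ok] / weight_sum[ok]
    return out
```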

MATHEMATICAL MODEL

M=(V,A,R,VF,O,I)

Where,

V = Rainy video
A = Algorithms used
R = Rain pixels
VF = Video frames
O = Optical flow
I = Intensity (image brightness)

V = {VF1, VF2, VF3, ..., VFn}

where VF1, VF2, VF3, ..., VFn are the different frames in the video.

N = number of algorithms used for the resultant video

A = {MS, RD, SR}

where

MS = Motion Segmentation
RD = Rain Detection
SR = Scene Recovery

1. Motion Segmentation:

In motion segmentation, optical flow is used to evaluate the presence of motion.

∂I/∂x · dx/dt + ∂I/∂y · dy/dt + ∂I/∂t = 0        (1)

I(x, y, t) = image brightness at pixel p(x, y) at time t.

Let u = dx/dt, v = dy/dt, E_x = ∂I/∂x, E_y = ∂I/∂y, E_t = ∂I/∂t. Equation (1) can then be written as

E_x u + E_y v + E_t = 0        (2)

where u and v are the optical flow velocities (the derivation of this constraint is sketched below).
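Equation (1) follows from the brightness-constancy assumption via a first-order Taylor expansion; the standard derivation (as in Horn and Schunck [9]) is sketched below to make the chain-rule step explicit.

```latex
% Brightness constancy: a scene point keeps its intensity over a small motion (dx, dy) in time dt.
I(x + dx,\; y + dy,\; t + dt) = I(x, y, t)
% First-order Taylor expansion of the left-hand side:
I(x, y, t) + \frac{\partial I}{\partial x}\,dx + \frac{\partial I}{\partial y}\,dy + \frac{\partial I}{\partial t}\,dt \approx I(x, y, t)
% Cancel I(x, y, t), divide by dt, and substitute u = dx/dt, v = dy/dt:
\frac{\partial I}{\partial x}\,u + \frac{\partial I}{\partial y}\,v + \frac{\partial I}{\partial t} = 0
\quad\Longleftrightarrow\quad E_x u + E_y v + E_t = 0
```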

2. Rain Detection:

Intensity fluctuations caused by rain can be detected by setting a threshold value as follows:

I_diff = { 1, if I_N - I_{N-1} >= D_th
           0, if I_N - I_{N-1} < D_th

    Where,

I_diff = binary difference map

I_N - I_{N-1} = grey-scale intensity difference between two successive frames

D_th = threshold value

    • Motion Exclusion:

Rain pixels within moving objects and rain pixels in the background need to be treated separately. I_rain is divided into two sets, S_m and S_b:

S_m = {I(x, y) | I_rain(x, y) = 1 and B_M(x, y, n) = 1}

S_b = {I(x, y) | I_rain(x, y) = 1 and B_M(x, y, n) = 0}

S_p = S_c - S_m - S_b

where

S_m = rain candidate pixels in the moving object area
S_b = rain candidate pixels in the background area
S_p = pixels that are not covered by rain
S_c = complete set of frame pixels
B_M = motion buffer

A code sketch of this split follows below.
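A small sketch of the split above, assuming the rain candidate map and the n-th motion-buffer layer are binary arrays of the same size; the function and argument names mirror the symbols above and are illustrative.

```python
import numpy as np

def split_rain_candidates(i_rain, bm_n):
    """Split rain candidates into motion-area and background-area sets.

    i_rain : binary rain candidate map for frame n (I_rain above).
    bm_n   : binary motion map for frame n (the n-th layer of B_M).
    Returns boolean masks for S_m, S_b, and S_p.
    """
    i_rain = i_rain.astype(bool)
    bm_n = bm_n.astype(bool)
    s_m = i_rain & bm_n     # rain candidates inside moving objects
    s_b = i_rain & ~bm_n    # rain candidates in the static background
    s_p = ~i_rain           # pixels not covered by rain (S_c minus S_m and S_b)
    return s_m, s_b, s_p
```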

3. Scene Recovery:

Three buffers are used to hold the frames for rain removal:

B_I(len, wid, stk) = video frame buffer
B_R(len, wid, stk) = rain buffer
B_M(len, wid, stk) = motion buffer

where len * wid is the video frame size and stk is the depth of each buffer. An allocation sketch follows below.
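The buffers can be held as 3-D arrays with exactly these dimensions; a brief allocation-and-push sketch under that assumption (the sizes shown are illustrative).

```python
import numpy as np

# len * wid is the video frame size, stk is the depth of each buffer.
len_, wid, stk = 480, 640, 5                         # illustrative sizes

b_i = np.zeros((stk, len_, wid), dtype=np.float64)   # video frame buffer B_I
b_r = np.zeros((stk, len_, wid), dtype=np.uint8)     # binary rain buffer  B_R
b_m = np.zeros((stk, len_, wid), dtype=np.uint8)     # binary motion buffer B_M

def push(buffer, new_layer):
    """Return the buffer with new_layer on top; the oldest layer is dropped."""
    buffer = np.roll(buffer, -1, axis=0)   # shift layers towards the bottom
    buffer[-1] = new_layer                 # newest layer goes on top
    return buffer

# Usage: b_i = push(b_i, new_frame)
```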

    CONCLUSION

In this paper, a rain pixel removal technique is presented. It is achieved through motion segmentation, estimation of motion fields, and the inclusion of local properties. Earlier methods fail for dynamic scenes and heavy rain. The proposed system works for highly dynamic scenes and heavy rain, and rain streaks are removed as well. Using the motion segmentation scheme, our method recovers rain pixels so that each pixel's dynamic property as well as motion occlusion clue is considered; both spatial and temporal information are adaptively exploited during rain pixel recovery.

    REFERENCES

1. K. Garg and S. K. Nayar, "Detection and removal of rain from videos," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., vol. 1, Jul. 2004, pp. 528–535.

2. K. Garg and S. K. Nayar, "Vision and rain," Int. J. Comput. Vis., vol. 75, no. 1, pp. 3–27, 2007.

3. X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, "Rain removal in video by combining temporal and chromatic properties," in Proc. IEEE Int. Conf. Multimedia Expo, Jul. 2006, pp. 461–464.

4. A. K. Tripathi and S. Mukhopadhyay, "A probabilistic approach for detection and removal of rain from videos," IETE J. Res., vol. 57, no. 1, pp. 82–91, Mar. 2011.

5. A. Tripathi and S. Mukhopadhyay, "Video post processing: Low-latency spatiotemporal approach for detection and removal of rain," IET Image Process., vol. 6, no. 2, pp. 181–196, Mar. 2012.

6. A. Verri and T. Poggio, "Motion field and optical flow: Qualitative properties," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 5, pp. 490–498, May 1989.

7. A. Ogale, C. Fermuller, and Y. Aloimonos, "Motion segmentation using occlusions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, pp. 988–992, Jun. 2005.

8. J. P. Koh, "Automatic segmentation using multiple cues classification," M.S. dissertation, School Electr. Electron. Eng., Nanyang Technol. Univ., Singapore, 2003.

9. B. K. Horn and B. G. Schunck, "Determining optical flow," Artif. Intell., vol. 17, pp. 185–203, Jan. 1981.

10. M. Shen and P. Xue, "A fast algorithm for rain detection and removal from videos," in Proc. IEEE Int. Conf. Multimedia Expo, Jul. 2011, pp. 1–6.
