Rain and Snow Removal Using Multi-Guided Filter from a Single Color Image

DOI : 10.17577/IJERTCONV7IS10008



Irappa Belagali

Department of Electronics & Communication Engineering, SDM College of Engineering and Technology,

Dharwad, India

Prof. Sumangala N B

Assistant Professor

Department of Electronics & Communication Engineering, SDM College of Engineering and Technology

Dharwad, India

Abstract – In this paper, we introduce a rain and snow removal method that uses the low-frequency part of a single image. It serves mainly as a preprocessing step for applications such as image identification, where some objects cannot be recognized in heavy rain or snow; the method is chiefly helpful in such scenarios. It is based on a key difference between clear background edges and rain streaks or snowflakes: the low-frequency part, which is the non-rain and non-snow component, can distinguish their different properties. We refine this part into a guidance image, and the high-frequency part is fed to the guided filter as its input, so we obtain the non-rain or non-snow component of the high-frequency part; adding back the low-frequency part yields the restored image. We then sharpen the result further based on the properties of clear background edges. Experiments show good rain and snow removal results.

  1. INTRODUCTION

Consider a photo taken in rainy or snowy weather: it is heavily or lightly covered with bright streaks. Such bad weather degrades the performance of many image processing and computer vision algorithms, including object detection, tracking, recognition, retrieval, and surveillance. A study by Garg and Nayar explains that snow and rain belong to dynamic weather: their constituent particles are relatively large in area, which is why they are easily captured by cameras. This is not the case for haze particles, which are small and can hardly be filmed. As a result, rain or snow leads to complex pixel variations and obscures the information conveyed in the image or video.

Early work proposed a correlation model based on photometry (the science of measuring light in terms of the brightness perceived by the human eye) capturing the dynamics of rain, and a physics-based motion blur model (motion blur being the apparent streaking of a moving object) explaining the photometry of rain. Zhang et al. then proposed a detection method combining the temporal and chromatic properties of rain and snow. Fu et al. proposed a model to remove rain by image decomposition; rain and snow pixels can also be removed on the basis of dictionary learning.

Later work applied the guided filter to remove rain or snow streaks and then improved the performance by refining the guidance image. In this paper, a method is proposed based on the key difference between clear background edges and rain or snow streaks; it mainly uses the guided filter to remove rain streaks and snowflakes.

  2. LITERATURE SURVEY

    1. Detection and removal of rain from videos

Rain removal algorithms need a number of consecutive frames to reduce the visibility of rain in videos. These algorithms are designed for rain videos captured by a fixed camera; it is demonstrated that, using global motion compensation, all rain removal algorithms developed for a fixed camera can also be used for a moving camera. Qualitative analyses show that the proposed algorithm removes rain effectively under a constrained buffer size and delay, in comparison with most competing rain removal algorithms.

    2. Single image haze removal using dark channel prior

The quality of images taken outdoors is severely impaired in bad weather conditions such as haze, mist, fog, or rain. In this paper, a haze removal algorithm is proposed to improve visibility using only a single hazy image. First, the raw atmospheric transmission map is estimated using the dark channel prior. Then, a Fields of Experts model is adopted to amend the raw atmospheric transmission map. Finally, the scene albedo is restored based on the atmospheric scattering model. Experiments on a variety of outdoor hazy images verify the feasibility and validity of the algorithm.
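The dark channel prior mentioned above can be sketched in a few lines: for each pixel, take the minimum over the color channels, then a minimum over a local window. This is a minimal illustrative sketch, not the surveyed paper's code; the patch size of 15 is an assumption, chosen as a common choice in the haze-removal literature.

```python
import numpy as np

def dark_channel(img, patch=15):
    """img: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
    per_pixel = img.min(axis=2)            # min over the three color channels
    r = patch // 2
    padded = np.pad(per_pixel, r, mode='edge')
    H, W = per_pixel.shape
    out = np.empty_like(per_pixel)
    for i in range(H):                     # min over each local patch
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

In haze-free regions, at least one channel tends to be dark somewhere in the patch, so the dark channel is near zero; haze lifts it, which is what makes it usable as a transmission estimate.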

    3. The distribution of raindrops with size

The diurnal variability of the raindrop size distribution (DSD) in precipitating clouds over Kototabang, West Sumatra, Indonesia (0.20°S, 100.32°E), is studied using three types of Doppler radars, operated at VHF (47 MHz), UHF (1.3 GHz), and X-band (9.4 GHz) frequencies. Two precipitation events observed during the first Coupling Processes in the Equatorial Atmosphere (CPEA-I) campaign reveal a difference between clouds precipitating in the early afternoon and clouds precipitating at night. In the early afternoon, the precipitating clouds were dominated by shallow convective types with a high rainfall rate at the surface. At night, precipitating clouds were dominated by stratiform types with a small rainfall rate at the surface. A diurnal variation of the horizontal wind was observed over this area: the westerly in the lower troposphere and the easterly in the middle troposphere began to be enhanced in the afternoon (1400 to 1700 LT). DSD parameters were retrieved from the VHF-band Doppler radar data, with a modified gamma distribution used to model them. The shape parameter was larger during stratiform precipitation than during shallow convective precipitation events, as shown by previous studies. During stratiform rain events, the median volume diameter (D0) was dominantly greater than 1 mm, which is larger than D0 during shallow convective rain events. The results indicate that the DSD has a diurnal cycle over the mountainous region of Sumatra.
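The modified gamma distribution used above to model the DSD has the form N(D) = N0 · D^mu · exp(-Lambda·D). As a hedged numerical sketch (the parameter values N0, mu, and Lambda below are illustrative only, not the study's retrieved values):

```python
import numpy as np

def gamma_dsd(D, N0=8000.0, mu=3.0, lam=4.1):
    """Modified gamma DSD: number concentration at drop diameter D (mm)."""
    return N0 * D ** mu * np.exp(-lam * D)

D = np.linspace(0.1, 6.0, 600)   # drop diameters in mm
N = gamma_dsd(D)
D_peak = D[np.argmax(N)]         # numerically near the analytic mode mu/lam
```

Setting the derivative of N(D) to zero gives the mode at D = mu/Lambda, which is how the shape parameter controls where the distribution peaks.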

  3. METHODOLOGY

    1. Clear background and rain or snow steaks

Rain and snow streaks usually belong to the high-frequency part, because their pixel values change randomly compared with neighboring pixel values; this high-frequency part also contains some background edges. The remaining part of the image naturally falls into the low-frequency part. Due to the size and speed of raindrops and snowflakes, they are imaged as bright and blurry streaks.

2. Rain and snow removal block diagram

  [Figure 1 block diagram: the rain or snow image is decomposed by a guided filter into a high-frequency part and a low-frequency part; an edge-enhancement step refines the low-frequency part into the guidance image; the guided-filtered high-frequency part and the low-frequency part are summed, a MIN operation yields the clear recovered image, and a weighted summation followed by a final guided filter gives the recovered image.]

  Fig 1. Rain and snow removal in a colour image

  In the end we use the guided filter together with some other filters, such as the Laplacian and edge detection filters. The guided filter is chosen because it operates with high accuracy and smooths the image well, and smoothing is very important here in image processing; since some edges are smoothed as well, edge detection filters are further used. By studying the properties of rain and snow, we can also increase or decrease them.

  D. Spatio-temporal frequency analysis for removing rain and snow from videos

  In this paper, we propose an efficient algorithm to remove rain or snow from a single color image. Our algorithm takes advantage of two popular techniques employed in image processing, namely, image decomposition and dictionary learning. At first, a combination of rain/snow detection and a guided filter is used to decompose the input image into a complementary pair: 1) the low-frequency part that is almost completely free of rain or snow and 2) the high-frequency part that contains not only the rain/snow component but also some, or even many, details of the image. Then, we focus on extracting the image's details from the high-frequency part. To this end, we design a 3-layer hierarchical scheme. In the first layer, an overcomplete dictionary is trained and three classifications are carried out to classify the high-frequency part into rain/snow and non-rain/snow components, in which some common characteristics of rain/snow are utilized. In the second layer, another combination of rain/snow detection and guided filtering is performed on the rain/snow component obtained in the first layer. In the third layer, the sensitivity of variance across color channels is computed to enhance the visual quality of the rain/snow-removed image. The effectiveness of the algorithm is verified through both subjective (visual quality) and objective (rendering rain/snow on ground-truth images) approaches, which shows superiority over several state-of-the-art works.

    1. Guided filter

The guided filter has good edge-preserving smoothing and gradient-preserving properties, and it is effective in a variety of computer vision and computer graphics applications.

    2. Conceptual definition of guided filter

Let I be a guidance image and p be an input image. The guided filter is modeled as a local linear model between the guidance I and the filtering output q: for any pixel i in a window wk centered at pixel k, the output is a linear transform of I, i.e., qi = ak*Ii + bk, where the coefficients ak and bk are assumed constant within wk.

3. Guided filter algorithm

1. First, read the guidance image I and the input image p.

2. Enter the values of r and epsilon, where r is the local window radius and epsilon is the blur degree (regularization) of the filter.

3. Calculate the local statistics: the mean of I, the variance of I, the mean of p, and the average cross product of I and p.

4. Compute the linear coefficients: a = (cross_Ip - mean_I.*mean_p)./(var_I + epsilon) and b = mean_p - a.*mean_I.

5. Compute the means of a and b.

6. Then obtain the filter output image Q from the means of a and b:

Q = mean_a.*I + mean_b
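The six steps above can be sketched in Python rather than the paper's MATLAB; this is a minimal single-channel sketch, where box_mean (an integral-image box average) stands in for the "mean" operations, and the default r and eps values are illustrative assumptions.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image on an
    edge-padded copy, so image borders are handled."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    c = np.pad(np.cumsum(np.cumsum(xp, axis=0), axis=1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=8, eps=0.04):
    """Steps 3-6 above, for single-channel float guidance I and input p."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)       # step 3: local means
    cross_Ip = box_mean(I * p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = (cross_Ip - mean_I * mean_p) / (var_I + eps)      # step 4: coefficients
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)            # steps 5-6: Q

# Self-guided smoothing of a noisy horizontal ramp:
rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = ramp + 0.05 * rng.standard_normal(ramp.shape)
out = guided_filter(noisy, noisy)
```

Where the guidance has near-zero variance, a goes to 0 and b to the local mean, so flat regions are smoothed; where variance is large relative to eps, a approaches 1 and edges pass through.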

The guided filter offers the following useful aspects in image processing:

1. Edge preserving

2. Image denoising

3. Structure transfer

The guided filter behaves much like a low-pass filter, but with higher accuracy, stronger smoothing, and greater speed in its operation.

In this paper we use the common mathematical model of a rainy or snowy image: the input image can be decomposed into two components, a clean background image and a rain or snow component.

Iin = Ib + Ir

Because rain or snow and the background have different textures, the input image is first decomposed into a low-frequency part and a high-frequency part using the guided image filter. All the rain and snow streaks fall into the high-frequency part; the low-frequency part is the non-rain or non-snow component, which also contains non-rain or non-snow textures. So the above equation can be rewritten as

Iin = Ibl + Ibh + Irh

where Ibl is the low-frequency part of the background, Ibh is the high-frequency part of the background, and Irh denotes the rain or snow in the high-frequency part. Let Iguide denote the transformation of Iin by the guided filter; then

ILF = Iguide ≈ Ibl, IHF = Ibh + Irh, Iin = ILF + IHF
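The decomposition Iin = ILF + IHF can be sketched as follows; this is a hedged stand-in in which a simple box blur replaces the paper's guided self-filtering, purely to show that the high-frequency part is the residual of the smoothing.

```python
import numpy as np

def box_blur(x, r):
    """Box mean (integral-image), a stand-in for guided self-filtering."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    c = np.pad(np.cumsum(np.cumsum(xp, axis=0), axis=1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

rng = np.random.default_rng(1)
I_in = rng.random((32, 32))     # stand-in for a rainy/snowy grayscale image
I_LF = box_blur(I_in, 4)        # low-frequency part: smooth background (Ibl)
I_HF = I_in - I_LF              # high-frequency part: Ibh + Irh (edges + streaks)
```

Because the high-frequency part is defined as the residual, Iin = ILF + IHF holds exactly by construction, which is what makes the later recombination lossless apart from the filtering itself.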

The low-frequency part is the non-rain part and contains the edges of the background, so we obtain a non-rain or non-snow guidance image; using it, we can remove rain and snow, and the experimental results support this idea. The block diagram of the proposed removal method is shown in Figure 1, which describes the framework of our method. Initially, the input image is decomposed into a low-frequency part and a high-frequency part using the guided filter. As introduced before, the low-frequency part is not the rain or snow part, but due to the effect of the guided filter, the edges of the image become slightly smooth. To bring the existing edges closer to the edges of the input image, we apply edge enhancement as follows:

I*LF = ILF + λ∇ILF

where ∇ILF is the gradient of ILF and λ = 0.1 in this paper. The enhanced edges are closer to the edges of the background, and all the enhanced edges are still background textures, so we obtain a more refined guidance image. We do not use the input image itself but the high-frequency part as the input of the guided filter: since there is a big difference between the pixel values of the low-frequency image and the original input image, the restored image would easily contain unsmooth flakes, whereas the high-frequency part is closer to the low-frequency part and gives better performance. After applying the guided filter, the high-frequency part retains the non-rain or non-snow component. By adding back the low-frequency part, we get a rough recovered image. On the one hand, because we do not completely recover the edges and only raise the values of low-valued edge pixels, the recovered image is blurred; but since those edges cannot be rain or snow streaks, we do not want to change the low-valued edge pixels. On the other hand, the guided filter itself also blurs the recovered image. So we further sharpen the recovered image as follows:

    Icr = min(Ir, Iin)

Due to the effect of the guided filter, the pixel values in the removed rain or snow regions are slightly higher than the nearby pixel values, which is visually unpleasant. So we take a weighted summation of Ir and Icr to get the refined guidance image, and then use the guided filter once again to get the final result:

Iref = αIcr + (1 - α)Ir

where α = 0.8 for rain removal and α = 0.5 for snow removal in this paper.
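The three refinement steps above (edge enhancement, the pixel-wise MIN, and the weighted summation) can be sketched as below. This is a minimal sketch on single-channel float images, assuming a precomputed rough recovery I_r; interpreting the gradient term as the gradient magnitude is our assumption, and lam and alpha follow the paper's rain-removal values.

```python
import numpy as np

def refine(I_LF, I_r, I_in, lam=0.1, alpha=0.8):
    """Edge enhancement, MIN, and weighted summation steps."""
    gy, gx = np.gradient(I_LF)                  # gradient of the LF part
    I_LF_star = I_LF + lam * np.hypot(gx, gy)   # I*_LF = I_LF + lam*|grad I_LF|
    I_cr = np.minimum(I_r, I_in)                # streaks are bright, so MIN darkens them
    I_ref = alpha * I_cr + (1 - alpha) * I_r    # I_ref = a*I_cr + (1-a)*I_r
    return I_LF_star, I_cr, I_ref
```

The MIN works because rain and snow streaks only brighten pixels, so taking the minimum against the original input can never reintroduce a streak; the weighted summation then pulls the result back toward I_r to avoid over-darkening.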

E. Software Environment: MATLAB

MATLAB stands for Matrix Laboratory. According to The MathWorks, its producer, it is a "technical computing environment"; we take the more mundane view that it is a programming language. MATLAB was originally designed to simplify the implementation of numerical linear algebra routines. It has since grown into something much bigger and is used to implement numerical algorithms for a wide range of applications. The basic language is very similar to standard linear algebra notation, with a few extensions that may cause some initial difficulty. We use MATLAB R2018b in this work because it provides the guided filter as a built-in function, along with other built-in functions that are helpful for this paper.

  4. EXPERIMENTAL RESULT

Figure 1 shows the removal procedure and the intermediate results proposed in this paper. Figure 2 is the input image, and Figure 3 is the high-frequency part of the input image. From the result, we can see that the high-frequency part contains both background textures and rain streaks. We apply the guided filter in the horizontal direction (because rain streaks cannot be horizontal), which yields the low-frequency part shown in Figure 4; it indeed contains no rain information. The rough (min) recovered image in Figure 5 is rain-free but blurred because of the guided filter, so we then perform the weighted calculation, shown in Figure 6. Figure 7 shows our final result, which is clear and effective in rain removal. Moreover, our method uses the guided filter, which is an O(N) time algorithm.

    Fig 2.Original image

    Fig 3. High frequency image

    Fig 4.Low frequency image

    Fig 5.Min recovered image

Fig 6. Weighted image

    Fig 7.Clear recovered image

  5. CONCLUSION

In this paper, we propose a new method for rain and snow removal from a single image. By analysing the difference between clear background edges and rain or snow streaks, the low-frequency part can express their different characteristics, and a rain and snow removal method based on the low-frequency part is proposed. The removal stage is mainly built on the guided filter. The results show that our method is effective and efficient in both rain removal and snow removal; like earlier guided-filter-based approaches it removes rain and snow streaks with the guided filter, but it achieves better performance.

REFERENCES

1. Garg, K., Nayar, S.K.: Photorealistic rendering of rain streaks. Proc. SIGGRAPH 25(3), 996–1002 (2006)

2. Zhang, X., et al.: Rain removal in video by combining temporal and chromatic properties. In: Proc. ICME, pp. 461–464 (2006)

3. Garg, K., Nayar, S.K.: When does a camera see rain? In: Proc. CVPR, vol. 2, pp. 1067–1074 (2005)

4. Barnum, P., Narasimhan, S., Kanade, T.: Analysis of rain and snow in frequency space. Proc. IJCV 86(2-3), 256–274 (2010)

5. Chen, D.Y., Chen, C.C., Kang, L.W.: Visual depth guided image rain streaks removal via sparse coding. In: Proc. ISPACS, pp. 151–156 (2012)

6. Kang, L.W., Lin, C.W., Fu, Y.H.: Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing 21(4), 1742–1755 (2012)
