Applying Deep Convolutional Neural Network for Removing Rain Component from a Single-Color Image

DOI : 10.17577/IJERTV10IS110040


Priti S. Gokhale, Prof. P. S. Malge

Department of Electronics Engineering, WIT, Solapur, India.

Abstract:- Bad weather, e.g., haze, rain, or snow, severely degrades the quality of captured images and videos, which in turn degrades the performance of many image processing and computer vision algorithms. These algorithms are used in applications such as object detection, tracking, recognition, surveillance, and navigation. Rain removal from a video or a single image has been an active research topic over the past decade, and it continues to draw attention in outdoor vision systems (e.g., surveillance), where the ultimate goal is to produce a clear, clean image or video. The most critical task is to separate the rain component from the rest of the scene. In this paper, we propose an efficient algorithm to remove rain from a single color image using a convolutional neural network (CNN).

Keywords:- Convolutional Neural Network (CNN), raindrop removal

  1. INTRODUCTION

An image captured in rainy weather is heavily or lightly covered with bright rain streaks. The impact of rain streaks and raindrops on images and video is often undesirable, and rain can severely affect the performance of outdoor vision systems. Effective methods for removing the rain component are therefore needed for a wide range of practical applications. However, when an object's structure and orientation resemble those of rain streaks or drops, it is hard to simultaneously remove the rain and preserve the structure.

First, rain is semi-transparent: objects are not occluded completely, but some blurring may appear. Second, pixels with different intensities are affected by rain differently. When a pixel's original intensity is relatively low, rain increases its intensity; when a high-intensity pixel is affected by rain, its intensity decreases. In other words, rain-affected pixels tend toward similar intensities because the reflection of rain dominates in this scenario. In images taken outdoors, bad weather such as rain confuses human viewers, complicates image processing, and degrades the performance of computer vision algorithms.

In this paper, we address the problem by visually removing rain streaks, thus transforming a rain-degraded image into a clean one. The problem is intractable since, first, the regions occluded by raindrops are not given and, second, the information about the background scene in the occluded regions is for the most part completely lost. To resolve the problem, we apply a convolutional neural network (CNN). We propose a deep detail network that directly reduces the mapping range from input to output. To further improve the de-rained result, we use a priori image domain knowledge by focusing on the high-frequency detail during training, which removes background interference and focuses the model on the structure of rain in images.
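The sketch below is a minimal illustration of this detail-layer idea, not the exact network: a simple mean blur stands in for the guided filter commonly used for the base/detail split, and the layer sizes are arbitrary assumptions.

import tensorflow as tf
from tensorflow.keras import layers, Model

def detail_layer(img, ksize=15):
    """Split an image into base + detail; a mean blur stands in for the
    guided filter used in deep-detail-network papers (assumption)."""
    base = tf.nn.avg_pool2d(img, ksize=ksize, strides=1, padding="SAME")
    return img - base  # high-frequency detail layer

# Small residual CNN: predicts the negative rain residual from the detail layer.
inp = layers.Input(shape=(None, None, 3))            # rainy image
det = layers.Lambda(detail_layer)(inp)               # detail layer only
x = layers.Conv2D(16, 3, padding="same", activation="relu")(det)
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
residual = layers.Conv2D(3, 3, padding="same")(x)    # approx. negative rain streaks
out = layers.Add()([inp, residual])                  # de-rained = rainy + residual
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")          # fit on (rainy, clean) pairs

Training only on the detail layer narrows the range the network must map, which is the benefit claimed above.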

  2. LITERATURE REVIEW

In our previous work [1], we proposed an efficient algorithm to remove the rain component from a color image using a generative adversarial network (GAN) with adversarial training. The generator network follows two important steps: 1) creation of an attention map using a ResNet and an LSTM, and 2) a contextual autoencoder. The attentive recurrent network finds the regions in the input image that need attention; these are mainly the raindrop regions and their surrounding structures, which the contextual autoencoder must focus on to generate better local image restoration.

Xueyang et al. [2] proposed a new deep network architecture for removing rain streaks from individual images based on a deep convolutional neural network (CNN). They proposed a deep detail network that directly reduces the mapping range from input to output. To further improve the de-rained result, they used a priori image domain knowledge by focusing on high-frequency detail during training, which removes background interference and focuses the model on the structure of rain in images.

Wang et al. [3] proposed an efficient algorithm to remove rain or snow from a single-color image using image decomposition and dictionary learning. First, a combination of rain/snow detection and a guided filter is used to decompose the input image into a complementary pair: 1) a low-frequency part that is free of rain/snow and 2) a high-frequency part that contains the image details along with the rain/snow component. They then focus on extracting image details from the high-frequency part, designing a 3-layer hierarchical scheme to remove the rain component properly.

He Zhang et al. [4] proposed an efficient algorithm for image de-raining using a conditional generative adversarial network (GAN). They investigated the use of generative modeling for synthesizing a de-rained image from a given rainy input. For improved training stability and to reduce the artifacts that GANs introduce in output images, they proposed a new refined loss function in the GAN optimization framework. In addition, a multiscale discriminator is proposed that leverages features from different scales to determine whether the de-rained image is real or fake. They conducted extensive experiments on synthetic and real-world datasets to evaluate the performance of the proposed method. Furthermore, experimental results on object detection using Faster R-CNN demonstrated significant improvements in detection performance when the ID-CGAN method is used as a preprocessing step.

Arti et al. [5] attempted to solve the rain removal problem for a single-color image by utilizing the common characteristics of rain. They acquired the low- and high-frequency parts by applying rain/snow detection and a guided filter. For the high-frequency part, dictionary learning and a three-way classification of dictionary atoms are used to decompose it into non-dynamic components and dynamic (rain or snow) components, where some common characteristics of rain/snow are utilized. Moreover, they designed two additional layers for extracting image details from the high-frequency part.

Binju et al. [6] present a literature survey of various rain and snow removal techniques for a single-color image. Among the methods compared, the two-step processing of "A Hierarchical Approach for Rain or Snow Removing in a Single Color Image" is found to be the most effective, providing 98% accuracy, and it can enhance the visual quality of the rain/snow-removed images.

Wang et al. [7] derived a simple linear model to describe the physical principle of imaging rain pixels. To remove rain streaks from a rain image, they first detect the streaks using two characteristics of rain streaks. Once the binary location map of rain pixels is obtained, the original intensity of each rain pixel is approximated by a weighted average of all neighboring non-rain pixels. For every rain pixel, the parameters involved in the linear model are trained; once the parameters are determined, the rain-removed intensity is calculated by plugging the observed intensity of the rain pixel into the model. Subjective and objective evaluations demonstrate that the algorithm outperforms several state-of-the-art traditional rain removal methods. Compared with deep learning-based methods, the linear model obtains comparable rain-removed results both for light rain images and for the majority of heavy rain images.
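A minimal sketch of the weighted-average step described above; the distance-based weighting and the given rain mask are illustrative assumptions, not [7]'s exact scheme.

import numpy as np

def restore_rain_pixels(img, rain_mask, radius=3):
    """Replace each detected rain pixel of a color image (H, W, 3) with a
    distance-weighted average of neighboring non-rain pixels."""
    h, w = img.shape[:2]
    out = img.astype(np.float64).copy()
    ys, xs = np.nonzero(rain_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = img[y0:y1, x0:x1].astype(np.float64)
        clean = ~rain_mask[y0:y1, x0:x1]          # non-rain neighbors only
        if not clean.any():
            continue                               # no clean neighbors; skip
        yy, xx = np.mgrid[y0:y1, x0:x1]
        wgt = 1.0 / (1.0 + np.hypot(yy - y, xx - x))  # closer pixels weigh more
        wgt = wgt * clean
        out[y, x] = (patch * wgt[..., None]).sum((0, 1)) / wgt.sum()
    return out.astype(img.dtype)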

Chris et al. [8] studied the effects of rain and snow in the context of traffic surveillance and reviewed single-frame and video-based algorithms that artificially remove rain and snow from images and video sequences. The study shows that most of these algorithms are evaluated on synthetic rain or on short sequences with real rain, so their behavior in a realistic traffic surveillance context is undefined and not experimentally validated. To investigate how they behave in that context, the authors present the AAU RainSnow dataset, which features traffic surveillance scenes captured under rainfall or snowfall and challenging illumination conditions. They provide annotated ground truth for randomly selected image frames of these sequences in order to evaluate how preprocessing the input video with rain removal algorithms affects the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms.

Park et al. [9] introduce an efficient rain removal algorithm that includes a contrast restoration method. Since the rain streaks in a rainy image exist in the high-frequency parts, many conventional rain removal algorithms first decompose the input rainy image into a base (low-frequency) layer and a detail (high-frequency) layer and then perform rain removal on the detail layer rather than on the entire image. However, the conventional algorithms cannot remove the contrast variation. For this reason, their method also performs contrast enhancement on the base layer: the contrast variation on the base layer is estimated and eliminated by low-pass filtering the approximated rain streaks. Although the low-frequency components contain not only the contrast variation but also the basic information of the image, the method can distinguish the contrast variation within the base layer and remove it automatically. Visual and quantitative experiments demonstrate that the approach is highly efficient and outperforms previous rain removal algorithms.

IrappaBelagali et al. [10] introduced a rain and snow removal method that uses the low-frequency part of a single image. It mainly serves as a platform for future applications such as image identification, where some objects cannot be recognized by radar in heavy rain and snow. The method is based on a key difference between clear background edges and rain streaks or snowflakes: the low-frequency part, which is the non-rain, non-snow component, can distinguish their different properties. This low-frequency part is used as the guidance image and the high-frequency part as the input to a guided filter, yielding the non-rain/non-snow component of the high-frequency part; adding back the low-frequency part then gives the restored image.
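A minimal sketch of such a guided-filter decomposition for a gray image in [0, 1]; the box-filter formulation follows He et al.'s standard guided filter, and the radius and epsilon values are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-2):
    """Standard gray-scale guided filter (He et al.): smooths `src`
    while preserving edges present in `guide`."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = (corr_gs - mean_g * mean_s) / (var_g + eps)   # per-window linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def decompose(img):
    """Split a gray image into a rain-free low-frequency layer and a
    detail layer that carries both image detail and the rain component."""
    low = guided_filter(img, img, radius=8, eps=1e-2)  # self-guided smoothing
    high = img - low
    return low, high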

  3. METHODOLOGY

The pipeline of the proposed rain removal algorithm is shown in Fig. 1.

Fig. 1. Architecture of the proposed system

The raindrop removal problem is intractable since, first, the regions occluded by raindrops are not given and, second, the information about the background scene in the occluded regions is for the most part completely lost. The problem gets worse when the raindrops are relatively large and densely distributed across the input image. To resolve the problem, we can use a generative adversarial network, which generates the outputs. To deal with the complexity of the problem, the generative network first attempts to produce an attention map. This attention map is the most critical part of the network, since it guides the subsequent stages of the generative network to focus on raindrop regions. The map is produced by an attentive recurrent network consisting of deep residual blocks combined with a convolutional LSTM and a few standard convolutional layers.

Overall, besides introducing a novel method of raindrop removal, the other main contribution is the injection of the attention map into the generative network, which works effectively in removing raindrops. Fig. 1 shows the overall architecture of the proposed network. Following the idea of generative adversarial networks, given an input image degraded by raindrops, the generative network attempts to produce an image free from raindrops.
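A minimal sketch of such an attentive generator follows; it is a simplification rather than the exact network, and the step count T, channel widths, and layer choices are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, Model

T = 4  # recurrent refinement steps (assumption)

img = layers.Input(shape=(None, None, 3))          # raindrop-degraded image
# Repeat the image along a time axis so the ConvLSTM can iteratively
# refine its internal state into an attention map.
seq = layers.Lambda(lambda x: tf.stack([x] * T, axis=1))(img)
state = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False)(seq)
attention = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(state)

# Contextual autoencoder conditioned on the image plus its attention map.
x = layers.Concatenate()([img, attention])
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
clean = layers.Conv2D(3, 3, padding="same")(x)     # raindrop-free estimate

generator = Model(img, [attention, clean])         # trained adversarially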

  4. DETAILS

Artificial intelligence has been witnessing monumental growth in bridging the gap between the capabilities of humans and machines. Researchers and enthusiasts alike work on numerous aspects of the field to make amazing things happen. One such area is the domain of computer vision. The agenda of this field is to enable machines to view the world as humans do, perceive it in a similar manner, and use that knowledge for a multitude of tasks such as image and video recognition, image analysis and classification, media recreation, recommendation systems, natural language processing, etc. The advancements in computer vision with deep learning have been constructed and perfected over time, primarily around one particular algorithm: the Convolutional Neural Network.

Fig. 2. Architecture of CNN

A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm that takes in an input image, assigns importance (learnable weights and biases) to various aspects/objects in the image, and is able to differentiate one from the other. Fig. 3 shows the architecture of a CNN, which consists of learning (training) and testing. Convolution, ReLU, and pooling layers (for downsampling) are used to learn features from the dataset, and a classification layer compares these features to classify the input data. Over the last decade, tremendous progress has been made in the field of artificial neural networks. Deep-layered convolutional neural networks (CNNs) have demonstrated state-of-the-art results on many machine learning problems, especially image recognition tasks. A CNN is an artificial neural network with the distinctive architecture shown in Fig. 3; its input data are usually RGB images (3 channels) or gray-scale images (1 channel). Several convolutional or pooling layers (with or without activation functions) follow the input layer. For classification problems, one or more fully connected (FC) layers are often employed. The final layer outputs prediction values (such as posterior probabilities or likelihoods) for the K classes into which the input image can be classified.

Fig. 3. An example of CNN architecture

Each layer of a CNN can have an activation function that controls how much of the output value propagates to the next layer. For the intermediate layers, the rectified linear unit (ReLU) is commonly used:

f(a_i^l) = max(0, a_i^l),

where a_i^l is the sum of the signals received by the i-th unit in the l-th intermediate layer. Meanwhile, for the last layer, the softmax function is often used to obtain probabilistic outputs:

f_k(z) = exp(z_k) / Σ_j exp(z_j).

Note that z is a K-dimensional vector, where z_k is the sum of the signals received by the k-th unit in the last layer. Since the function is non-negative and has the unit-sum property (Σ_k f_k(z) = 1), the value f_k(z) can be read as the class posterior probability that an input belongs to the k-th class. Therefore, by using the softmax function in the output layer, a CNN can act as a probability estimator for object classification problems. As one of the distinctive properties of CNNs, they have consecutive multiple feature representations, which are automatically organized in each convolutional layer through training on the given labeled instances. Despite this, typical dimensionality reduction methods (such as PCA) visualize each feature representation individually, without regard to the relationships between those consecutive features.

The steps used to train the CNN are as follows (a numeric check of the activations above follows this list):

Step 1: Upload the dataset.
Step 2: The input layer.
Step 3: Convolutional layer.
Step 4: Pooling layer.
Step 5: Convolutional layer and pooling layer.
Step 6: Dense layer.
Step 7: Logits layer.
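For concreteness, a small numpy check of the ReLU and softmax definitions given above (illustrative only; the input vector is arbitrary):

import numpy as np

def relu(a):
    return np.maximum(0.0, a)          # f(a) = max(0, a)

def softmax(z):
    e = np.exp(z - z.max())            # shift for numerical stability
    return e / e.sum()                 # f_k(z) = exp(z_k) / sum_j exp(z_j)

z = np.array([2.0, -1.0, 0.5])
print(relu(z))                         # [2.  0.  0.5]
p = softmax(z)
print(p, p.sum())                      # non-negative probabilities summing to 1.0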

Scale the features: finally, we scale the features with the help of MinMaxScaler.
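For instance, a minimal use of scikit-learn's MinMaxScaler (the data below is a placeholder):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 40.0]])  # placeholder features
scaler = MinMaxScaler()                 # maps each column into [0, 1]
X_scaled = scaler.fit_transform(X)
print(X_scaled)                         # column-wise min -> 0.0, max -> 1.0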

import numpy as np
import tensorflow as tf
from sklearn.datasets import fetch_openml  # fetch_mldata is deprecated and was removed from scikit-learn

Defining the CNN: a CNN uses filters on the pixels of an image to learn detailed patterns, compared to the global patterns learned with a traditional neural network. To create a CNN, we have to define:

1. A convolutional layer: applies a number of filters to the feature map. After convolution, we use a ReLU activation function to add non-linearity to the network.

2. Pooling layer: the step after convolution is to downsample the feature map. The objective is to reduce the dimensionality of the feature map, which prevents overfitting and improves computation speed. Max pooling is the traditional technique: it splits the feature maps into subregions and keeps only the maximum values.

3. Fully connected layers: all neurons from the previous layers are connected to the next layers. The CNN classifies the label according to the features from the convolutional layers, as reduced by the pooling layers.

CNN Architecture:

• Convolutional layer: applies 14 5×5 filters (extracting 5×5-pixel sub-regions).
• Pooling layer: performs max pooling with a 2×2 filter and stride of 2 (which specifies that pooled regions do not overlap).
• Convolutional layer: applies 36 5×5 filters, with ReLU activation function.
• Pooling layer: again performs max pooling with a 2×2 filter and stride of 2.
• Dense layer: 1,764 neurons, with a dropout regularization rate of 0.4 (a probability of 0.4 that any given element will be dropped during training).
• Dense layer (logits layer): ten neurons, one for each digit target class (0-9).

Important modules used in creating a CNN (a sketch using their tf.keras equivalents follows this list):

1. Conv2d(): constructs a two-dimensional convolutional layer, taking the number of filters, filter kernel size, padding, and activation function as arguments.

2. max_pooling2d(): constructs a two-dimensional pooling layer using the max-pooling algorithm.

3. Dense(): constructs a dense layer with the hidden layers and units.
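A minimal sketch of the architecture listed above using tf.keras equivalents (the tf.layers module that provided conv2d()/max_pooling2d()/dense() was removed in TensorFlow 2; the 28×28×1 input size is an assumption appropriate for digit images):

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(14, 5, padding="same", activation="relu",
                  input_shape=(28, 28, 1)),           # 14 5x5 filters
    layers.MaxPooling2D(pool_size=2, strides=2),       # 2x2 pool, stride 2
    layers.Conv2D(36, 5, padding="same", activation="relu"),  # 36 5x5 filters, ReLU
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.Flatten(),
    layers.Dense(1764, activation="relu"),             # 1,764-unit dense layer
    layers.Dropout(0.4),                               # drop rate 0.4 during training
    layers.Dense(10),                                  # logits, one per digit class
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])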

5. RESULT

Fig. 4. Input image with rain streaks
Fig. 5. Output image after applying DDN

      5.1. Result Analysis:

Table 1. Quantitative comparison on the basis of PSNR and SSIM values.

Method                  | PSNR (dB), output image | SSIM, ground truth image | SSIM, output image
GAN [1]                 | 39                      | 1                        | 0.78
Image decomposition [3] | 40                      | 1                        | 0.84
Ours                    | 42                      | 1                        | 0.86

The above table shows the PSNR and SSIM values for the different methods. A higher PSNR value indicates better quality of the reconstructed image, and a higher SSIM indicates that the de-rained image is closer to the ground truth image in terms of image structure (SSIM equals 1 for the ground truth).
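Both metrics can be computed with scikit-image; a minimal sketch (requires scikit-image 0.19+ for channel_axis; the random arrays below are placeholders for real output/ground-truth pairs):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder images; in practice load the de-rained output and ground truth.
ground_truth = np.random.rand(128, 128, 3)
derained = np.clip(ground_truth + 0.01 * np.random.randn(128, 128, 3), 0, 1)

psnr = peak_signal_noise_ratio(ground_truth, derained, data_range=1.0)
ssim = structural_similarity(ground_truth, derained, data_range=1.0, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")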

6. CONCLUSION

We have proposed a single-image rain removal method that removes the rain component from an image and produces a clean, clear result. The method utilizes a convolutional neural network (CNN): we presented an end-to-end deep learning framework for removing rain from individual images, and we showed that combining the high-frequency detail-layer content of an image with regressing on the negative residual information benefits de-raining performance, since it makes the training process easier by reducing the mapping range.

REFERENCES

1. Priti S. Gokhale and Prof. P. S. Malge, "Removing Rain Component from a Single Color Image," International Journal of Emerging Trends in Engineering Research, Vol. 9, No. 4, April 2021.

2. Xueyang Fu, Jiabin Huang, Delu Zeng, Yue Huang, Xinghao Ding, and John Paisley, "Removing Rain from Single Images via a Deep Detail Network," in Proc. IEEE CVPR, 2017.

3. Y. Wang, S. Liu, C. Chen, and B. Zeng, "A Hierarchical Approach for Rain or Snow Removing in a Single Color Image," IEEE Transactions on Image Processing, 26(8), 2017.

4. He Zhang, Vishwanath Sindagi, and Vishal M. Patel, "Image De-raining Using a Conditional Generative Adversarial Network."

5. Arti R. Waghchaure, Rupali B. Bachhav, and Shital S. Bhalerao, "A Hierarchical Approach for Rain Removing in a Single Color Image," IJARIIE, Vol. 4, Issue 3, 2018, ISSN(O)-2395-4396.

6. Binju Bentex and Dr. K. S. Angel Viji, "Survey on Removal of Rain or Snow from a Single Color Image," International Journal of Advance Research, Ideas and Innovations in Technology, Vol. 4, Issue 1, 2018.

7. Yinglong Wang, Shuaicheng Liu, and Bing Zeng, "Removing Rain Streaks by a Linear Model," arXiv:1812.07870 [cs.CV], Dec. 2018.

8. Chris H. Bahnsen and Thomas B. Moeslund, "Rain Removal in Traffic Surveillance: Does It Matter?," IEEE Transactions on Intelligent Transportation Systems.

9. Kiwoong Park, Songhyun Yu, and Jechang Jeong, "A Contrast Restoration Method for Effective Single Image Rain Removal Algorithm," IEEE, 2018.

10. Irappa Belagali and Prof. Sumangala N. B., "Rain and Snow Removal Using Multi-Guided Filter from a Single Color Image," International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, NCRACES - 2019 Conference Proceedings.
