A Comprehensive Review on Image Dehazing

DOI : 10.17577/IJERTV9IS060822


Jini Elsa Joseph
M.Tech Image Processing, College of Engineering Chengannur
Kerala, India

Gopakumar G.
Associate Professor, College of Engineering Chengannur
Kerala, India

Abstract – Haze is a challenging problem that degrades the quality of digital images. It affects military and civil systems, surveillance, and many other applications. Image dehazing provides an effective way to enhance such images, and deep learning methods have recently made considerable progress in this field. Various approaches have been introduced for the removal of haze. This paper reviews several works that deal with image dehazing of daytime and nighttime images.

Keywords – Image dehazing, Image restoration, Deep learning

  1. INTRODUCTION

Haze is one of the most critical problems in the areas of image processing and computer vision. Under hazy conditions, the quality of digital images deteriorates: haze shifts colors, reduces contrast, and diminishes the visibility of the scene. It is therefore a threat to the reliability of many applications such as outdoor surveillance, object detection, outdoor photography, automatic image cropping, thumbnailing, content-aware resizing, video compression, graphics rendering, and art. Haze also decreases the clarity of satellite and underwater images.

Haze often arises when dust and smoke particles absorb and scatter light in relatively dry air. When atmospheric conditions prevent suspended smoke and other pollutants from dispersing, they concentrate and form a low-hanging shroud that damages visibility. Suppressing haze is therefore a very challenging task in image processing.

The removal of haze from an image is known as image dehazing. There are two different types of dehazing: daytime and nighttime dehazing. Many dehazing methods have been proposed for daytime images. The daytime haze model is a linear equation involving the transmission map and the atmospheric light, so producing a good daytime dehazing result amounts to estimating these two quantities. Apart from daytime dehazing, nighttime dehazing is also a relevant topic.

In the case of nighttime dehazing, the atmospheric light is not globally uniform. Unlike the daytime case, the illumination comes from multiple light sources such as car lights, street lights, and neon lights.


According to McCartney's classification, haze particles are smaller than fog particles but larger than air molecules. Haze consists of aerosol particles and can extend up to several kilometers. Consequently, the removal of haze, both in daytime and at nighttime, is a difficult problem.

The rest of the paper is organized as follows: Section 1 gives the introduction, Section 2 describes related methods of image dehazing, and Section 3 presents the conclusion, followed by the references.

Fig. 1. Daytime image model.

Fig. 2. Nighttime image model [1].

These figures illustrate the standard daytime and nighttime models. The daytime model assumes a uniform global light source, usually the sun. The camera captures the atmospheric light as well as the light reflected from the object. Since there are no other light sources, no extra glow term is added to the camera image.

In the nighttime haze model, apart from the direct transmission, there are active light sources, so a glow term appears in the camera image. The glow is caused by multiple scattering of light from these sources in irregular directions.
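For reference, the two models are commonly written as follows; the nighttime form with a spatially varying atmospheric light and a glow term follows the formulation in [1].

```latex
% Daytime atmospheric scattering model
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)

% Nighttime haze model with glow, as in [1]
I(x) = J(x)\,t(x) + A(x)\bigl(1 - t(x)\bigr) + A_a(x) * \mathrm{APSF}
```

Here I is the observed image, J the scene radiance, t the transmission, and A the atmospheric light (spatially varying at night); the last term models the glow as the active light sources A_a convolved with an atmospheric point spread function (APSF).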

  2. RELATED WORKS

Yu Li [1] introduced a haze model that accounts for varying light sources and their glow. As mentioned above, this model consists of the atmospheric light, the transmission map, and an additional glow term. The input nighttime image is separated into a glow layer and a glow-free image by solving a quadratic optimization problem, and further processing is carried out on the glow-free image. Estimating the atmospheric light and the transmission map is the main procedure in this method. The method is simple and cost-effective, but it does not always produce the best dehazing results.
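Once the glow has been separated and the local atmospheric light and transmission have been estimated, the scene is recovered by inverting the haze model. A minimal sketch, assuming those three quantities are already available (function and argument names are illustrative, not from [1]):

```python
import numpy as np

def recover_scene(glow_free, atmospheric_light, transmission, t_min=0.1):
    """Invert the haze model J = (I - A(1 - t)) / t on a glow-free image.

    glow_free: HxWx3 float image after glow separation (illustrative input).
    atmospheric_light: HxWx3 (or length-3) estimate of A.
    transmission: HxW map t(x); clipped to avoid division by very small values.
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]      # HxWx1 for broadcasting
    J = (glow_free - atmospheric_light * (1.0 - t)) / t   # per-pixel inversion
    return np.clip(J, 0.0, 1.0)
```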

Cosmin Ancuti [2] contributed a fusion-based single-image approach to enhance nighttime hazy images. It operates on patches of the image rather than on the entire image, and derives several inputs from the original hazy image. The first input is computed using a small patch size, which prevents the airlight estimate from being corrupted by multiple light sources. The second input is computed using larger patches and improves the global contrast, since it removes a significant fraction of the airlight. The third input is the discrete Laplacian of the original image, which is used to reduce the glow effects. Together, these inputs allow the finest details to be transferred to the fused output.

Three weight maps give greater emphasis in the fusion process to regions of high contrast or high saliency. The local contrast weight measures the amount of local variation of each input and is computed by applying a Laplacian filter to the luminance of each processed image; it has been used in applications such as tone mapping and assigns high values to edges and texture variations. The saturation weight map controls the saturation gain in the output image.

The main goal of the fusion process is to produce a better output image by blending the derived inputs with the specified weight maps, which are designed to preserve the most significant features of the image. The advantages of this method are simplicity and computational efficiency. However, a naive fusion leads to annoying halo artifacts, mostly at locations with strong transitions in the weight maps; such artifacts can be overcome by using a multiscale Laplacian decomposition.
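A minimal sketch of the single-scale fusion idea, assuming the derived inputs have already been computed; the contrast and saturation weights follow the description above, but the exact estimators and the multiscale decomposition used in [2] are omitted:

```python
import numpy as np
from scipy.ndimage import laplace

def fuse(inputs):
    """Blend derived inputs with normalized per-pixel weight maps.

    inputs: list of HxWx3 float images derived from the same hazy photo
    (e.g. small-patch dehazed, large-patch dehazed, Laplacian of the original).
    """
    weights = []
    for img in inputs:
        lum = img.mean(axis=2)
        contrast = np.abs(laplace(lum))                 # local contrast weight
        saturation = img.std(axis=2)                    # saturation weight
        weights.append(contrast + saturation + 1e-6)    # avoid all-zero weights
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)       # normalize across inputs
    return sum(w[..., None] * img for w, img in zip(weights, inputs))
```

Performing this blend on a Laplacian pyramid of the inputs, rather than on the full-resolution images directly, is what suppresses the halo artifacts mentioned above.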

Jing Zhang [3] proposed a new imaging model for nighttime haze, together with an efficient dehazing method that includes illumination estimation for nighttime haze conditions.

Based on this imaging model, the dehazing method consists of three steps: light compensation, color correction, and dehazing. The first step (light compensation) estimates the light intensity, and a gamma correction is applied to it to balance the overall illumination of the image. The next step (color correction) estimates the color characteristics of the incident light. Finally, haze is removed in the dehazing step by using the dark channel prior to estimate the pointwise environmental light. Compared with other methods, this method achieves illumination-balanced, haze-free, and low-noise results, and it also renders the colors of objects under the incident light well.
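A hedged sketch of the first two steps; the gamma value and the gray-world style correction below are illustrative simplifications, not the exact estimators of [3]:

```python
import numpy as np

def light_compensation(img, gamma=0.7):
    """Balance overall illumination with a gamma curve (gamma < 1 brightens dark regions)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def color_correction(img, eps=1e-6):
    """Gray-world style correction: scale channels so their means match the overall mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    scale = means.mean() / (means + eps)
    return np.clip(img * scale, 0.0, 1.0)
```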

Jing Zhang [4] introduced the maximum reflectance prior as the core idea of a model proposed to address haze removal from a single nighttime image, even in the presence of multicolored and non-uniform illumination. The standard haze model is appropriate for daytime dehazing, mainly because the global atmospheric light is assumed to be the only light source in a daytime haze environment and the attenuation and scattering characteristics are identical for each channel. However, nighttime scenes usually have multiple colored artificial light sources, resulting in strongly non-uniform and varicolored ambient illumination.

Therefore, the local ambient illumination is added into both the attenuation term and the scattering term of the standard hazy imaging model to obtain the nighttime hazy imaging model. This model is entirely different from the method of Li et al. [1].

The aim is to estimate the ambient illumination and the transmission for each pixel in order to recover the haze-free image at nighttime. The maximum reflectance prior estimates the color map of the ambient illumination and removes its effect from the image being processed. The intensity of the varying illumination and the transmission are then estimated, the haze effect is removed, and finally a color-balanced, haze-free image is obtained. There are, however, some failure cases: color distortions appear in regions of grass and leaves, mainly because the maximum reflectance prior does not hold in these regions.
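The essence of the prior can be sketched as follows: within a local patch, the per-channel maximum is taken to approximate the local illumination color, which is then divided out. Patch size, normalization, and function names here are assumptions for illustration, not the exact procedure of [4]:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def illumination_color_map(img, patch=15, eps=1e-6):
    """Estimate the ambient illumination color per pixel via local per-channel maxima."""
    max_per_channel = np.stack(
        [maximum_filter(img[..., c], size=patch) for c in range(3)], axis=-1)
    # Normalize so the map encodes chromaticity rather than intensity.
    return max_per_channel / (max_per_channel.sum(axis=-1, keepdims=True) + eps)

def remove_color_cast(img, eps=1e-6):
    """Divide out the estimated illumination color to obtain a color-balanced image."""
    color = illumination_color_map(img)
    balanced = img / (3.0 * color + eps)   # factor 3 keeps overall brightness roughly unchanged
    return np.clip(balanced, 0.0, 1.0)
```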

Minmin Yang [5] proposed a superpixel-based method to remove haze from a single nighttime hazy image. This method is a revamped version of [1]. The glow in the input night image is decomposed into glow and glow-free images by solving a quadratic optimization problem, and the superpixel-based algorithm is applied to the glow-free image. Two components are estimated in the proposed algorithm: the atmospheric light and the transmission map. The glow-free nighttime haze image is divided into superpixels using the SLIC algorithm, and the brightest pixel intensity in each superpixel is regarded as that superpixel's atmospheric light. The transmission map is estimated through a dark channel prior: the dark channel of the hazy image is decomposed into a base layer and a detail layer, and the transmission map is computed from the base layer. The weighted guided image filter (WGIF) is used for this decomposition and also for reducing morphological artifacts in the image. To avoid noise in the sky region, a threshold is applied to the resulting transmission map, after which the haze is removed. However, the segmentation of the glow-free nighttime haze image into superpixels increases the algorithm's complexity.
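A minimal sketch of the per-superpixel atmospheric light step using scikit-image's SLIC; the segment count and the use of mean brightness to pick the brightest pixel are assumptions, not the exact settings of [5]:

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_atmospheric_light(img, n_segments=500):
    """Assign each superpixel the color of its brightest pixel as local atmospheric light."""
    labels = slic(img, n_segments=n_segments, compactness=10, start_label=0)
    brightness = img.mean(axis=2)
    A = np.zeros_like(img)
    for lab in np.unique(labels):
        mask = labels == lab
        idx = np.argmax(brightness[mask])    # brightest pixel inside this superpixel
        A[mask] = img[mask][idx]             # broadcast its color to the whole segment
    return A
```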

Pei and Lee [6] introduced a method based on color transfer pre-processing. The color transfer step is applied to the nighttime hazy input image and uniformly shifts the airlight color towards grayish tones, effectively mapping the colors of the nighttime hazy image to those of a daytime hazy image. A refined dark channel prior method is then used to eliminate the haze, and a Bilateral Filter in Local Contrast Correction (BFLCC) is applied to enhance contrast. A post-processing step improves the inadequate brightness and low overall contrast of the haze-free image. Although the method achieves reliable dehazing quality, the color of the whole haze-free image looks unnatural due to the color transfer.
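The color transfer pre-processing is in the spirit of statistics matching in a perceptual color space; a hedged sketch using OpenCV's Lab conversion is given below, where the choice of a daytime-like reference image and the mean/standard-deviation matching are assumptions rather than the exact procedure of [6]:

```python
import cv2
import numpy as np

def color_transfer(source, target):
    """Match the Lab mean/std of `source` (8-bit BGR nighttime hazy image)
    to `target` (8-bit BGR daytime-like reference image)."""
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    out = (src - src_mean) / src_std * tgt_std + tgt_mean   # per-channel statistics matching
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```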

Wencheng Wang [7] introduced a fast single-image dehazing algorithm based on a linear transformation, involving only linear operations. It is assumed that a linear relationship exists between the hazy image and the haze-free image.

The method is divided into three steps. First, the atmospheric light is estimated: the image is converted to grayscale (0-255) and the atmospheric light is obtained with a quad-tree subdivision search guided by the ratio of gray levels and gradients in each region. Second, the transmission map is estimated: a rough transmission map is computed from the minimum color channel using the linear transformation model, which has low computational complexity. Third, Gaussian blurring is used to refine the rough transmission map. Once the atmospheric light and the transmission map are estimated, the scene radiance is recovered through the atmospheric scattering model. Experimental results show that this method avoids saturation and halo effects.
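A minimal sketch of the rough-then-refined transmission estimate described above; omega and the Gaussian width are common defaults, not necessarily the values used in [7]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_transmission(img, A, omega=0.95, sigma=10):
    """Rough transmission from the minimum color channel, refined by Gaussian blurring.

    img: HxWx3 float image in [0, 1]; A: length-3 atmospheric light estimate.
    """
    min_channel = img.min(axis=2)                          # per-pixel minimum over R, G, B
    t_rough = 1.0 - omega * min_channel / (A.max() + 1e-6)
    return np.clip(gaussian_filter(t_rough, sigma=sigma), 0.05, 1.0)
```

With the transmission and atmospheric light in hand, the scene radiance is recovered by inverting the scattering model as in the earlier sketches.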

Zheng Guo Li [8] contributed a globally guided image filter (G-GIF) to overcome the limitations of the guided image filter (GIF) and the weighted guided image filter (WGIF). The G-GIF mainly consists of a global structure transfer filter and a global edge-preserving smoothing filter, and in this work it is applied to nighttime haze removal based on the minimal color channel and the dark channel prior. The dark channel is decomposed into a base layer and a detail layer via the G-GIF, and the structure of the base layer is compared with the structure of the hazy image to avoid morphological artifacts. The atmospheric light is obtained with a hierarchical searching method based on quad-tree subdivision. Once the transmission map is estimated via the G-GIF, the image can be restored. The inputs of the G-GIF are the image to be filtered and a guidance vector field that defines the structure; since the minimal color channel preserves the structure of the hazy image better, it is selected to generate the guidance vector field. Experimental results show that the image dehazed with the G-GIF is sharper than with the GIF or WGIF.

Zhengguo Li [9] proposed a method similar to [8]. An edge-preserving algorithm estimates the transmission map based on the concepts of the minimal color channel and a simplified dark channel [8]. The main difference is that the simplified dark channel is used to reduce the variation of the direct attenuation. The simplified dark channel of the hazy image is decomposed into a base layer and a detail layer via an existing edge-preserving smoothing technique. The method applies to hazy images, underwater images, and normal images without haze.

Yi-Hsuan Lai [10] introduced theoretic and heuristic bounds on the scene transmission to guide its estimation towards the optimum; the theoretic bound also justifies the well-known dark channel prior of haze-free images. Two scene priors, on the scene radiance and the scene transmission, constrain the solution space and are used to formulate a constrained minimization problem, which is solved by quadratic programming.

Kaiming He [11] proposed the dark channel prior for image haze removal. The dark channel prior observes that, in most local regions that do not cover the sky, some pixels (dark pixels) have very low intensity in at least one of the color (RGB) channels.

The intensity of these dark pixels in that channel is mainly contributed by the airlight, so the dark pixels can directly provide an accurate estimate of the haze transmission. The dark channel prior is not a good option for sky regions; fortunately, the color of the sky is usually similar to the atmospheric light in a hazy image, and since the sky is at infinite distance it tends to have zero transmission. A soft matting algorithm is used to refine the transmission. The transmission map and the atmospheric light are then used to recover the scene radiance.
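A minimal sketch of the dark channel prior pipeline (dark channel, atmospheric light from the brightest dark-channel pixels, transmission, and recovery); the soft matting refinement is omitted, and omega, the patch size, and the 0.1% fraction are commonly cited defaults rather than values taken from [11]:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a local minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, dark, top=0.001):
    """Average the image colors of the top 0.1% brightest dark-channel pixels."""
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze_dcp(img, omega=0.95, patch=15, t_min=0.1):
    """Dehaze an HxWx3 float image in [0, 1] with the dark channel prior."""
    dark = dark_channel(img, patch)
    A = estimate_atmospheric_light(img, dark)
    t = 1.0 - omega * dark_channel(img / (A + 1e-6), patch)   # transmission estimate
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A * (1.0 - t)) / t, 0.0, 1.0)
```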

Boyi Li et al. [12] proposed AOD-Net, an end-to-end framework for image dehazing: the input is a hazy image and the output is a dehazed image. Compared with other deep learning methods, AOD-Net is very simple. Its main contribution is that it estimates the atmospheric light and the transmission map jointly in a simplified manner, so it is not at all complex. There are two modules, a K-estimation module and a clean image generation module, and the network is trained by minimizing the MSE between its output and the original haze-free image. However, it has a drawback related to antihalation, i.e., light passing through the emulsion on a film or plate is not reflected back into it but is absorbed by a layer of dye or pigment, usually on the back of the film.
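AOD-Net reformulates the scattering model so that a single variable K(x) absorbs both the transmission and the atmospheric light, and the clean image is generated as J(x) = K(x) I(x) - K(x) + b. A sketch of the clean image generation module under that formulation (the convolutional K-estimation network itself is omitted; b = 1 is the constant bias used in the paper):

```python
import numpy as np

def aodnet_generate(I, K, b=1.0):
    """Clean image generation module: J(x) = K(x) * I(x) - K(x) + b.

    I: HxWx3 hazy image in [0, 1]; K: HxWx3 output of the K-estimation module.
    """
    return np.clip(K * I - K + b, 0.0, 1.0)
```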

Bolun Cai [13] proposed DehazeNet, a trainable end-to-end CNN system. The input is a hazy image and the output is its medium transmission map. Feature extraction, multi-scale mapping, a local extremum layer, and nonlinear regression are used to estimate the medium transmission. The Maxout unit in the feature extraction layer takes the maximum over feature maps to extract haze-relevant features from the hazy image. Instead of ReLU or Sigmoid, BReLU (bilateral rectified linear unit) is used as the activation function; its bilateral restraint and local linearity make it well suited to image restoration and reconstruction. DehazeNet is a lightweight architecture that increases efficiency and restores haze-free images. Sky regions in hazy images are difficult to handle because sky and haze show a similar appearance under the atmospheric scattering model; DehazeNet tries to reduce antihalation, which is an appreciable achievement.
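BReLU simply bounds the activation on both sides, which is what provides the bilateral restraint and keeps the predicted transmission in a valid range; a one-line sketch:

```python
import numpy as np

def brelu(x, t_min=0.0, t_max=1.0):
    """Bilateral ReLU: clip activations to [t_min, t_max] so the predicted transmission stays bounded."""
    return np.clip(x, t_min, t_max)
```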

Jinjiang Li [14] proposed a residual deep CNN for dehazing that offers greater efficiency and is less error-prone. CNNs are nowadays widely and effectively used for image dehazing, and the residual deep CNN continues this line of development. The network is divided into two phases. In the first phase, the transmission map is estimated with a six-layer convolutional network, consisting of a convolution layer, a slice layer, an element-by-element operation layer, a multi-scale convolution layer, a max-pooling layer, and a final convolution layer, trained by minimizing the loss between the reconstructed transmission and the corresponding ground-truth map. In the second phase, a clear image is obtained using the residual network.

Batch normalization is used to increase learning speed, and the network combines convolution with the ReLU activation function, batch normalization, and residual learning. It is a fast and efficient dehazing CNN model based on residual error, but the more layers there are, the higher the computational cost.

Wenqi Ren [15] proposed a multi-scale CNN to learn effective haze-relevant features. There are two networks: a coarse-scale network, which predicts a holistic transmission map from the entire image, and a fine-scale network, which refines the dehazed result locally.

  3. CONCLUSIONS

Haze removal methods have become very useful for many image processing and computer vision applications, including surveillance, remote sensing, underwater imaging, and photography. Most of the methods are based on estimating the atmospheric light and the transmission map. This paper presented a review of several papers related to image dehazing and the haze removal techniques they propose.

ACKNOWLEDGMENT

We would like to thank the Director (IHRD) and the Principal of our institution for providing the facilities to support this work. We also thank Jyothi R.L., Assistant Professor in Computer Science at our institution, for the valuable comments that greatly improved the work.

REFERENCES

1. Y. Li, R. T. Tan, and M. S. Brown, "Nighttime Haze Removal with Glow and Multiple Light Colors," in Proc. IEEE International Conference on Computer Vision (ICCV), 2015.

2. C. Ancuti, C. O. Ancuti, C. De Vleeschouwer, and A. C. Bovik, "Night-time dehazing by fusion," in Proc. IEEE International Conference on Image Processing (ICIP), pp. 2256-2260, 2016.

3. J. Zhang, Y. Cao, and Z. Wang, "Nighttime haze removal based on a new imaging model," in Proc. IEEE International Conference on Image Processing (ICIP), 2014.

4. J. Zhang, Y. Cao, S. Fang, Y. Kang, and C. W. Chen, "Fast Haze Removal for Nighttime Image Using Maximum Reflectance Prior," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

5. M. Yang, J. Liu, and Z. Li, "Super-Pixel Based Single Nighttime Image Haze Removal," IEEE Transactions on Multimedia, 2018.

6. C. Pei and T. Y. Lee, "Nighttime haze removal using color transfer pre-processing and dark channel prior," in Proc. IEEE International Conference on Image Processing (ICIP), pp. 957-960, Oct. 2012.

7. W. Wang, X. Yuan, X. Wu, and Y. Liu, "Fast Image Dehazing Method Based on Linear Transformation," IEEE Transactions on Multimedia, vol. 19, no. 6, pp. 1142-1155, Jun. 2017.

8. F. Kou, W. H. Chen, C. Y. Wen, and Z. G. Li, "Gradient Domain Guided Image Filtering," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 4528-4539, Nov. 2015.

9. Z. G. Li and J. H. Zheng, "Edge-preserving decomposition-based single image haze removal," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5432-5441, Dec. 2015.

10. Z. G. Li, J. H. Zheng, W. Yao, and Z. J. Zhu, "Single Image Dehazing via Optimal Transmission Map under Scene Priors," in Proc. IEEE International Conference on Image Processing (ICIP), Oct. 2012.

11. K. He, J. Sun, and X. Tang, "Single Image Haze Removal Using Dark Channel Prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec. 2011.

12. B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, "AOD-Net: All-in-One Dehazing Network," in Proc. IEEE International Conference on Computer Vision (ICCV), 2017.

13. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: An End-to-End System for Single Image Haze Removal," IEEE Transactions on Image Processing, vol. 25, no. 11, Nov. 2016.

14. S. F. Dodge and L. J. Karam, "Single image dehazing using gradient channel prior," IEEE Transactions on Image Processing, vol. 27, no. 8, pp. 4080-4090, Aug. 2018.

15. W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, "Single Image Dehazing via Multi-scale Convolutional Neural Networks," in Proc. European Conference on Computer Vision (ECCV), Springer, 2016.

Table 1. Reported accuracy of each reviewed method.

Ref. No.   Reported accuracy
[1]        SSIM: 0.9987
[2]        PSNR: 13.69
[3]        SSIM: 0.300
[4]        PSNR: 16.88, SSIM: 0.9950
[5]        PSNR: 15.88, SSIM: 0.750
[6]        PSNR: 16.50, SSIM: 0.672
[7]        PSNR: 17.51, SSIM: 1.000
[8]        PFDA: 0.535
[9]        Running time: 12.61
[10]       Execution time: 0.6794
[11]       PSNR: 13.13, SSIM: 0.66
[12]       PSNR: 21.54, SSIM: 0.92
[13]       PSNR: 20.97, SSIM: 0.9993
[14]       PSNR: 18.50, SSIM: 0.81
[15]       PSNR: 21.25, SSIM: 0.85
