

- Open Access
- Authors : Rukmani Pandey, Prof. Manoj Chaudhary
- Paper ID : IJERTV14IS040397
- Volume & Issue : Volume 14, Issue 04 (April 2025)
- Published (First Online): 02-05-2025
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Single Image Dehazing from Repeated Averaging Filters using Artificial Intelligence Techniques
Rukmani Pandey,
M.Tech Scholar,
Department of Computer Science & Engineering, JBIT, Dehradun, (U.K.) India.
Prof. Manoj Chaudhary,
Guide, Department of Computer Science & Engineering, JBIT, Dehradun, (U.K.) India.
Abstract: Image processing refers to the transformation of an image signal, which may be digital or analogue, into a useful output; that output may be a processed image itself or a set of attributes extracted from the image. As a logical process, image processing involves the detection, recognition, classification, measurement, and evaluation of physical and cultural objects, the patterns such objects create, and the spatial relationships among those patterns.
This work proposes a method based on repeated averaging filters for estimating the atmospheric light from a single hazy image, which in turn improves the recovery of scene radiance. Existing dehazing methods tend to leave halo artefacts in the final output image, which are visually distracting. To address this, an averaged channel is derived from a single image by combining repeated averaging filters, integral images, and a feed-forward neural network. This procedure is both faster and more effective at suppressing halo artefacts than competing techniques. In quantitative and computational analysis, the proposed dehazing method achieves results comparable to, and in many cases better than, state-of-the-art methods.
Index Terms: Image Dehazing; Averaging Filter; Integral Image; Gaussian Smoothing; Feed-Forward Neural Network
- INTRODUCTION
A digital image is a numerical representation of an object that can be viewed on a computer screen. It is made up of very small visual elements called pixels, each of which has a designated position and value; a pixel corresponds to the brightness of the image at a particular location. Every operation performed during image processing is applied to these pixels. Digital image processing (DIP) is the technique of processing digital images with computer algorithms, either to obtain an image of better quality than the original or to extract relevant information from it. This technology offers several benefits, including the adaptability and flexibility of digital processing and the ease of data storage and communication: no hardware changes are required, and the stored information can be moved to a new location without difficulty. Memory requirements and processing speed are the two main obstacles that keep digital image processing from reaching its full potential. Before digital images can be processed they must be saved to a storage medium so that they can be retrieved later; suitable storage media include optical discs, magnetic discs, and floppy discs.
In the fields of video processing, computer vision, and digital photography, dehazing an image, whether a single image or a collection of images, is an important and frequently required task [1]. Dehazing not only improves the visibility of objects but also corrects the colour shift caused by light passing through the atmosphere. Haze removal is essential for several reasons, one being that it aids computer vision algorithms in moving from low-level to high-level image processing and provides information about depth [2].
Image dehazing strategies fall into two broad groups: one is founded on image restoration and the other on image enhancement [3]. Restoration-based approaches first build an atmospheric scattering model and then invert the degradation process to remove the haze [4]. Restoration methods can be divided further: a second category consists of methods that analyse only a single image [5], in contrast to the first category, which considers several different photographs. Along similar lines, a number of additional methodologies have been discussed, including the Retinex [6], homomorphic [7], and wavelet transform [8] approaches.
The ability of previously proposed methods to reduce haze has been evaluated and compared on a variety of photographs. However, multiple-image dehazing algorithms run into significant problems in online dehazing applications, which require very advanced sensors and the ability to process images in real time. As a direct consequence, a sizeable number of studies have focused on dehazing a single image [2, 9-11].
Later, the Dark Channel Prior (DCP) [2] was proposed for single image dehazing, and a significant amount of subsequent work has been based on it [2, 9], largely because the idea of a dark channel is a simple one. Dehazing a single image with the dark channel prior can be broken down into four stages: estimating the air light (also known as atmospheric light), estimating the transmission, refining the estimated transmission map, and recovering the scene radiance. Our proposed solution was influenced by the DCP method and faces the same challenges, such as the need to eliminate halo artefacts from the final recovered scene radiance map. In addition, our technique can thoroughly dehaze single foggy photographs captured in outdoor scenes.
- LITERATURE REVIEW
During the early stages of the development of image processing, linear filters were the most important tools for image enhancement and restoration. They perform poorly in the presence of non-additive noise, as well as in circumstances involving system nonlinearities or non-Gaussian statistics [19].
Images generated today from any source suffer some degree of quality loss during transmission and manipulation. This degradation prevents us from extracting useful information from the images, so a method is needed that can restore the original image from its distorted version. For this reason image restoration plays a very significant part in the field of image processing, and it is becoming increasingly vital. Image restoration is used to recover images degraded by an unknown blur kernel and additive noise.
Techniques for image restoration and enhancement are used either to improve the overall appearance of an image or to retrieve finer details from images that have deteriorated. The goal is to process a picture so that the result is better suited to a particular application than the original was; restoration and enhancement are, however, two distinct processes. These methods are used in a wide range of contexts, including computer vision, video surveillance, satellite image processing and analysis, medical image processing and analysis, and many more. Image restoration, in particular, consists of filtering the observed image in order to minimize the effect of degradations.
Photographs may be degraded by factors such as sensor noise or random atmospheric turbulence, and images are typically corrupted by random noise. Noise can be introduced during the capturing, transmission, or processing stages, and it can be either dependent on or independent of the image content; it is usually characterized by its probabilistic features. According to [Jain, 1989], the extent and precision of one's knowledge of the degradation process, together with the filter design criterion, are the critical factors that determine the effectiveness of image restoration filters. Image restoration frequently makes use of conventional filters such as mean filters and median filters. These traditional filters have their own drawbacks, which eventually led to the development of more sophisticated filters, including decision-based median filters, switching median filters, wavelet filters, and fuzzy filters [Gonzalez and Woods, 2008].
Image enhancement is performed either to improve the interpretability or perception of the information contained within images for human viewers, or to provide better input for other automated image processing techniques [Pratt, 2001]. It modifies images so that they portray nuances and subtleties more accurately. Image contrast enhancement is one such operation: it transforms one image into another so that the look and feel of the image is improved either for machine analysis or for human visual perception [Acharya and Ray, 2005]. Researchers working in a wide variety of sectors, such as medical imaging, forensics, and the atmospheric sciences, rely on this tool to do their jobs effectively.
Aghi and Ajami presented an innovative artificial-neural-network-based method for removing noise from colour images; their primary purpose was to develop an adaptive noise canceller using suitable neural networks. A. De Stefano and colleagues presented an automatic method for minimizing the film grain present in images: the method thresholds the wavelet components of the image using a parameterized family of functions, which reduces the amount of noise. The Vector Rank M-type K-Nearest Neighbor (VRMKNN) filter was developed by Volodymyr P. and Francisco G. F. to eliminate impulsive noise from colour static images and dynamic image sequences; it processes multichannel images by combining the vector approach with the rank M-type K-nearest neighbor algorithm.
Image segmentation is a crucial step in a wide variety of image processing and computer vision applications, and interest in the topic has been sparked by applications across many subject areas. For instance, evaluating the different parts of an aerial photograph can help one better understand the type of vegetation that is present, and scene segmentation is an effective technique when retrieving images from huge image databases using content-based image retrieval. Most segmentation approaches require image attributes that can characterize the regions to be segmented; in particular, both colour and texture have been exploited extensively. Because colour information is a multi-dimensional vector, techniques used to segment grayscale images cannot be applied to it directly. The currently available methods for colour image segmentation can be broadly categorized into eight approaches: edge detection, region growing, clustering, neural networks, fuzzy methods, tree/graph-based methods, probabilistic or Bayesian methods, and histogram thresholding.

To distinguish corrupted from uncorrupted pixels, non-linear filters such as the Adaptive Median Filter (AMF) [Hwang and Haddad, 1995] can be applied before the filtering procedure is carried out: uncorrupted pixels are preserved in their original state while noisy pixels have their values replaced with the median, as sketched in the example below. Since very few erroneous pixels need to be replaced by median values, the AMF works very effectively at modest noise densities. At greater noise densities the window size must be increased to achieve better noise removal, which lowers the correlation between the values of corrupted pixels and those of the replacement median pixels. In decision-based or switching median filtering, the choice is made based on a predefined threshold value; the most significant disadvantage of this approach is the difficulty of specifying a reliable decision measure. Because these filters do not take local characteristics into consideration, edge details may not be recovered properly, particularly when the noise level is high.
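As an illustration of the decision-based filtering idea described above, the following Python/NumPy sketch shows a minimal switching median filter. It is not the AMF of Hwang and Haddad but a simplified variant: a pixel is treated as corrupted when it deviates from the local median by more than a chosen threshold, and only those pixels are replaced. The function name, window size, and threshold are illustrative assumptions.

```python
import numpy as np

def switching_median_filter(img, window=3, threshold=40):
    """Simplified decision-based (switching) median filter.

    img: 2-D grayscale image as a NumPy array.
    window: odd side length of the local neighbourhood.
    threshold: a pixel is declared noisy when |pixel - local median|
               exceeds this value; only noisy pixels are replaced.
    """
    pad = window // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = img.astype(np.float64).copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            patch = padded[i:i + window, j:j + window]
            med = np.median(patch)
            # Decision step: replace only pixels that deviate strongly
            # from the local median; leave the rest untouched.
            if abs(out[i, j] - med) > threshold:
                out[i, j] = med
    return out.astype(img.dtype)
```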
- IMAGE RESTORATION AND ENHANCEMENT
One approach to image dehazing is based on image enhancement, while the other is based on image restoration. Image restoration and enhancement together form one of the most active research subfields of digital image processing. Image restoration uses prior knowledge of the phenomenon that causes image deterioration in order to reconstruct or recover a degraded image. Image enhancement, on the other hand, emphasizes or sharpens image features such as edges, boundaries, or contrast so that a visual display becomes more effective for presentation and analysis, and it can be accomplished with a variety of techniques. Techniques for image restoration and enhancement find widespread application in fields including computer vision, video surveillance, medical image processing, and satellite image processing.
- Image Restoration
Random noise is a common cause of image degradation and can appear at any point in the acquisition, transmission, or processing of an image. The degradations may result from sensor noise, relative motion between the object and the camera, random atmospheric turbulence, and other factors. Noise can be dependent on or independent of the image content, and it is typically characterized by its probabilistic properties. Noise produced during image transmission is usually independent of the image signal itself, and Gaussian noise is a very good approximation of the noise found in many real-world situations. The technique of removing noise that has corrupted an image is commonly referred to as image noise reduction. Image restoration filters the observed image to minimize the influence of degradations, which requires prior knowledge of the form of the degradation; its objective is to recreate an image that is as faithful to the original as practically possible.
Image restoration approaches fall into two primary classes: deterministic and stochastic. Deterministic processes are those in which prior knowledge of the degradation function or point spread function is available, whereas stochastic processes, such as blind de-convolution, are those in which no such prior knowledge exists. Deterministic approaches are further separated into parametric and non-parametric categories. Linear filters need not preserve the non-negativity of the image or handle signal-dependent noise, and non-linear and iterative restoration algorithms came into existence as a result. Image enhancement is distinct from image restoration: enhancement seeks to highlight aspects of an image in order to make it more aesthetically pleasing to the viewer, but does not necessarily produce scientifically realistic data, whereas restoration seeks to return an image to its original state. Image enhancement approaches such as contrast stretching or nearest-neighbour de-blurring do not use a priori knowledge of the process that formed the image.
- Image Enhancement
Image enhancement can involve a wide variety of techniques, such as sharpening, contrast adjustment, filtering, interpolation and magnification, and false colouring. The most challenging aspect of image enhancement is quantitatively measuring how successful an enhancement is; for this reason a significant number of enhancement methods are empirical and involve interactive processes to obtain desirable outcomes. Despite this, image enhancement remains essential because of its applicability in practically every application that deals with image processing. Increasing the quality of a colour photograph may require adjusting the colour contrast or colour balance of the image. [Gonzalez and Woods, 2008] note that enhancing colour photographs is a more challenging task, not only because of the additional dimension of the data but also because of the additional complexity of colour perception.
Image enhancement methods can be used either to improve the overall visual appeal of an image or to extract finer information from images that have been damaged. The primary goal is to process an image so that the result is more suitable for a given application than the original; an approach that works well for one type of image may not be effective for another. Enhancing colour images in the RGB colour space has been found to be inappropriate because it destroys the colour composition of the original image. [Hanmandlu and Jha, 2006] explain that this is the primary reason the HSV colour space is used in the majority of image enhancement techniques, particularly those that boost contrast.
There are two primary categories that can be utilised to classify the many techniques for enhancing images: transform domain approaches and spatial domain methods. The methods that belong to the first category are based on making adjustments to the frequency transform of an image, whereas the methods that belong to the second category act directly on the pixels. However, even with quick transformation algorithms, computing a two-dimensional (2-D) transform for a big array (picture) is a very time-consuming activity that is not appropriate for real-time processing.
Image enhancement is essentially the process of increasing the interpretability or perception of information contained within images for human viewers, as well as giving a “better” input for various forms of automated image processing techniques. The primary purpose of image enhancement is to modify aspects of an image in order to make it more appropriate for a specific observer to use in conjunction with a particular endeavour. During this process, a change may be made to one or more of the characteristics of the image. The selection of qualities and the manner in which they are updated are both unique to the undertaking in question.
- PERFORMANCE PARAMETERS
Because no ground truths are available, evaluating the effectiveness of hazy image improvement and restoration algorithms is a time-consuming and difficult task. We measure the effectiveness of the algorithms in two different ways in order to obtain an accurate picture of the improvement in visibility. First, we conduct a qualitative comparison of our methodology with other modern methodologies; since this assessment is open to interpretation, it cannot provide an accurate quantification. The second strategy is a quantitative comparison using measurements that have previously been employed by other studies. Some researchers have used the mean squared error (MSE) [19] and the structural similarity index metric (SSIM) [20]; these metrics have limitations, in particular because they require reference images to perform an accurate evaluation, and MSE was designed for applications such as image compression. Nevertheless, we have employed them in order to conduct a comparative analysis with the other methods now in use.
There are also other measures, such as counting the number of edges visible before and after restoration, comparing the edges visible in the output images with those absent from the hazy images, and computing the mean ratio of the gradients at the visible edges. This metric was first introduced by Hautiere et al. in [60] and was used for evaluating visibility recovery in [20]. To quantify the level of restoration quality, we have referred to blind contrast enhancement indicators when carrying out the quantitative comparison.
The following sections explain the specifics of these measures and are followed by a qualitative and quantitative comparison applied to one of the haze images.
- Peak signal to noise ratio (PSNR) and Mean Squared Error (MSE)
The peak signal-to-noise ratio (PSNR) is defined as the ratio between the maximum possible value of a signal and the power of the distorting noise that affects the quality of its representation. Both images being compared must have exactly the same dimensions. The PSNR can be expressed mathematically as follows:
PSNR = 20 log10( MAXf / √MSE )   (1)

Where the MSE (Mean Squared Error) is:

MSE = (1 / (m n)) Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} | f(i, j) − g(i, j) |²   (2)
Where f represents the matrix data of the original image, g represents the matrix data of the processed image, m and n are the numbers of rows and columns of pixels in the images, and i and j are the row and column indices respectively. MAXf is the maximum signal value in image f. The primary drawback of the PSNR metric is that it relies on a purely numerical comparison and does not properly take into account the characteristics of the human visual system, unlike the structural similarity index (SSIM).
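As a small illustration of equations (1) and (2), the following Python/NumPy sketch computes MSE and PSNR for two equally sized grayscale images; the function names and the assumption of 8-bit images (MAXf = 255) are ours.

```python
import numpy as np

def mse(f, g):
    """Mean squared error between two equally sized images, eq. (2)."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    return np.mean((f - g) ** 2)

def psnr(f, g, max_f=255.0):
    """Peak signal-to-noise ratio in dB, eq. (1), assuming 8-bit images."""
    err = mse(f, g)
    if err == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(max_f / np.sqrt(err))
```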
- Structural similarity index (SSIM)
Image quality was also evaluated using the SSIM measure established by Wang et al. The structural similarity (SSIM) index determines the degree of similarity between two images and, compared with more traditional measures such as the mean square error (MSE), it is more in line with human perception. Because of this correlation with human visual perception, SSIM has emerged as a universal quality metric for quantitative analysis in image and video applications. For input images O and R, let μ_O and μ_R denote the means, σ_O² and σ_R² the variances, and σ_OR the covariance of O and R; SSIM is then given as

SSIM(O, R) = ( (2 μ_O μ_R + C1)(2 σ_OR + C2) ) / ( (μ_O² + μ_R² + C1)(σ_O² + σ_R² + C2) )   (3)

Where C1 and C2 are constants. This metric has been suggested for quantitative analysis in the haze environment by Lu et al.
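As a rough sketch of equation (3), the following Python function computes a single global SSIM value over two grayscale images (in practice SSIM is usually computed over local windows and averaged, e.g. via skimage.metrics.structural_similarity). The constants follow the common choice C1 = (0.01 L)², C2 = (0.03 L)² with L = 255, which is our assumption.

```python
import numpy as np

def global_ssim(o, r, L=255.0):
    """Global SSIM between images O and R, eq. (3).

    A simplified, single-window version; library implementations
    compute the index over sliding local windows and average it.
    """
    o = o.astype(np.float64)
    r = r.astype(np.float64)
    c1 = (0.01 * L) ** 2
    c2 = (0.03 * L) ** 2
    mu_o, mu_r = o.mean(), r.mean()
    var_o, var_r = o.var(), r.var()
    cov_or = ((o - mu_o) * (r - mu_r)).mean()
    num = (2 * mu_o * mu_r + c1) * (2 * cov_or + c2)
    den = (mu_o ** 2 + mu_r ** 2 + c1) * (var_o + var_r + c2)
    return num / den
```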
- IMAGE DEHAZING METHODS AND MODELS
Our work follows the restoration-based image dehazing procedures, which can be broken down into single-image and multiple-image based dehazing methods. Dehazing techniques that make use of multiple images can be categorized as polarization methods [12-14]. The approach suggested in [12] took scene points into account and located depth discontinuities by analysing how the intensities of the scene structure change under a variety of weather circumstances. Similarly, the approach proposed in [13] observed that a polarizing filter alone is insufficient to dehaze a hazy image and instead used several orientations in conjunction with polarization to obtain more accurate estimates. Another regularization-based method [14] modelled inherent boundary constraints together with contextual regularization, which jointly estimate the scene transmission.
For a single image, it was noticed in [10] that a normal haze-free image has higher contrast, whereas an image containing haze or fog has lower contrast. The method therefore boosts the local contrast; while this improves visibility, it still suffers from halo artefacts in the final output map. The method described in [11] focused on the albedo of the scene and assumed that surface shading and transmission are locally uncorrelated when computing the transmission.
In recent times, the well-known dark channel prior (DCP) [2] has been widely adopted. Extensive testing on outdoor photographs revealed a phenomenon known as dark pixels: in natural outdoor images, at least one colour channel of an RGB image contains very low pixel intensities (ignoring the sky region), and this channel tends to act as a dark channel.
The DCP approach allowed researchers to move in novel directions, but it has a few drawbacks. It requires soft matting to refine the transmission, which is a computationally intensive operation. In addition, it is not well suited to photographs that contain bright objects, since it selects the highest pixel intensities, which can lead to issues in the final output map. For this reason, [9] recommended guided filters, which act as smoothing operators while preserving a significant portion of the edges.
Similarly, [1] introduced an approach to overcome the halos in single image dehazing: fixed points are computed using nearest neighbours (N-N) in order to recover a smooth transmission with a feed-forward neural network. The discussion above motivates our proposal of a new method of repeated averaging filters that takes the issues just described into account.
- Haze Imaging Model
The haze imaging model in [4], [12], which describes hazy image formation and has been widely used so far, is given as
I(x) = J(x) t(x) + A (1 − t(x))   (4)
Where I is the hazy image, J is the haze-free image, x is a pixel location, and A is the air light. I(x) and J(x) are the intensities at pixel location x in I and J respectively, and t is the transmission coefficient, which describes the fraction of light reflected from an object that reaches the camera without being scattered or absorbed by air particles. The transmission map is given as
t(x) = e^(−β d(x))   (5)
Where β is the scattering coefficient and d is the scene depth. In clear weather β ≈ 0 and hence I ≈ J; when β takes a non-zero value, the result is a hazy image. In (4) the first component J(x)t(x) is the direct attenuation, which decreases with scene depth, and the second component A(1 − t(x)) is the air light, which increases with scene depth. Thus dehazing amounts to recovering J from I after estimating A and t from I.
From the haze imaging model (4), the transmission t is the ratio of two line segments, which can be represented mathematically as:

t(x) = (A − I(x)) / (A − J(x))   (6)
Fig 1: The Haze Imaging Model
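To make equation (4) concrete, the sketch below synthesizes a hazy image from a clear image and a depth map using the model I(x) = J(x)t(x) + A(1 − t(x)) with t(x) = exp(−β d(x)); the particular values of β and A are illustrative assumptions.

```python
import numpy as np

def add_haze(J, depth, beta=1.0, A=0.9):
    """Synthesize a hazy image from the haze imaging model, eqs. (4)-(5).

    J:     haze-free RGB image, float array in [0, 1], shape (H, W, 3).
    depth: scene depth map, float array, shape (H, W).
    beta:  scattering coefficient (illustrative value).
    A:     global atmospheric light (illustrative scalar value).
    """
    t = np.exp(-beta * depth)              # transmission map, eq. (5)
    t = t[..., np.newaxis]                 # broadcast over colour channels
    I = J * t + A * (1.0 - t)              # haze imaging model, eq. (4)
    return np.clip(I, 0.0, 1.0), t
```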
- Dark Channel Theory
The dark channel prior [2] observes that most haze-free images have low pixel intensities in at least one colour channel, except in the sky region, due to three factors: 1) shadows of buildings, cars, and cityscapes; 2) other objects in the image, for instance trees and plants; 3) dark surfaces such as dark tree trunks and stones. In the presence of haze, these dark pixel values are altered by the air light, which makes a direct contribution to their values; the dark channel therefore provides a direct clue for estimating the haze transmission. The dark channel is defined as
J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) )   (7)
Where Ω(x) is a local patch centred at x and J^c is a colour channel of J. This scrutiny revealed that J^dark tends to low intensity, close to zero, and it is hence called the dark channel of J. Summarizing our algorithm for recovering J: first a dark channel is derived from the hazy image; then we apply the repeated averaging filters to normalize the dark channel and estimate a better atmospheric light A on the basis of the repeated averaged channel; finally we obtain the haze-free image as an output at low computational cost and with high visual quality.
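A minimal Python sketch of equation (7) follows; the patch size of 15 pixels is the value commonly used with the dark channel prior and is an assumption here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Dark channel of an RGB image, eq. (7).

    I:     RGB image, float array in [0, 1], shape (H, W, 3).
    patch: side length of the local patch Omega(x).
    """
    min_rgb = I.min(axis=2)                      # min over colour channels
    return minimum_filter(min_rgb, size=patch)   # min over the local patch
```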
- Integration of DCP Theory with Repeated Averaged Channel Prior
The method described in [16] closely approximates a Gaussian filter. Computing such Gaussian approximations requires a specialised averaging filter: the goal of the suggested method is to obtain Gaussian approximations using integral images by combining repeated filtering with an averaging filter for given values of sigma and n (where sigma is the desired standard deviation and n is the number of averaging passes). An averaging filter of width w has a variance given by
σ² = (w² − 1) / 12   (8)

The ideal filter width for the averaging filter is defined as

w_ideal = √( 12σ²/n + 1 )   (9)
Following the derivation of (9), we repeatedly apply this filter to the estimated dark channel of the input image using integral images, which yields a new, averaged channel. An integral image is used so that the computations can be done quickly. A summed-area table, also known as an integral image, is a data structure that allows the sum of the values contained within a rectangular grid to be calculated quickly and accurately. The mathematical representation for integral images is as follows:
S = ii(d) − ii(b) − ii(c) + ii(a)   (10)
Where S is the sum of the pixel values contained within an arbitrary rectangle whose corners are the points a, b, c, and d, and ii denotes the integral image. After obtaining the repeated averaged channel of the haze image, we estimated the atmospheric light.
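The sketch below is one possible Python/NumPy realisation of this step, under our reading of equations (8)-(10): the dark channel is box-filtered n times with the width from equation (9), each box filter being evaluated in constant time per pixel from an integral image. The helper names and the choice of n = 3 passes are assumptions, and the scheme is a simplification of [16], which adjusts the widths between passes to match sigma more exactly.

```python
import numpy as np

def box_filter_integral(img, w):
    """Mean filter of odd width w evaluated via an integral image, eq. (10)."""
    r = w // 2
    H, W = img.shape
    padded = np.pad(img.astype(np.float64), r, mode="edge")
    # Summed-area table with a leading row/column of zeros.
    ii = np.zeros((H + 2 * r + 1, W + 2 * r + 1))
    ii[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    # S = ii(d) - ii(b) - ii(c) + ii(a) for every w x w window.
    S = ii[w:, w:] - ii[:-w, w:] - ii[w:, :-w] + ii[:-w, :-w]
    return S / (w * w)

def repeated_average(channel, sigma, n=3):
    """Approximate Gaussian smoothing of std sigma by n box-filter passes."""
    w = int(round(np.sqrt(12.0 * sigma ** 2 / n + 1.0)))  # eq. (9)
    if w % 2 == 0:
        w += 1                      # keep the averaging window odd
    out = channel
    for _ in range(n):
        out = box_filter_integral(out, w)
    return out
```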
- Estimation of the Atmospheric Light from the Repeated Averaged Channel
Estimating the atmospheric light, denoted by A, is a crucial step in image dehazing. The previous method [2] extracted the highest-intensity values from the dark channel in order to estimate the atmospheric light. Selecting the pixels with the highest intensity from the foggy image presents a problem, however, because such pixels may belong to other bright objects in the input image, such as a car or another item.
The method proposed in [2] assessed the atmospheric light directly by picking the 0.1 percent brightest pixels from the dark channel; however, the final output image produced by this way of estimating the ambient light contains some hole-like artefacts. We instead estimate the atmospheric light from the repeated averaged dark channel by picking the 0.2 percent brightest pixels and combining this estimate with the haze imaging model (4), which avoids such artefacts in the final output map.
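A sketch of this estimation step in Python/NumPy, under our reading of the text: the brightest 0.2 percent of pixels in the repeated averaged dark channel are located, and the atmospheric light is taken from the corresponding pixels of the hazy image (here averaged per colour channel, which is one common convention and an assumption on our part).

```python
import numpy as np

def estimate_atmospheric_light(I, averaged_dark, top_fraction=0.002):
    """Estimate the atmospheric light A from the repeated averaged dark channel.

    I:             hazy RGB image, float array in [0, 1], shape (H, W, 3).
    averaged_dark: repeated averaged dark channel, shape (H, W).
    top_fraction:  fraction of brightest dark-channel pixels to use (0.2 %).
    """
    H, W = averaged_dark.shape
    n_top = max(1, int(H * W * top_fraction))
    flat = averaged_dark.ravel()
    idx = np.argpartition(flat, -n_top)[-n_top:]   # brightest 0.2 % positions
    candidates = I.reshape(-1, 3)[idx]             # their colours in the hazy image
    return candidates.mean(axis=0)                 # per-channel atmospheric light A
```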
- Transmission Estimation
We have now estimated the air light A from the repeated averaged dark channel. To estimate the transmission, it is assumed that the transmission within a local patch Ω(x) is constant and can be denoted t̃(x). The minimum operation is applied to all three colour channels of the haze image, so that (4) becomes
min_{y∈Ω(x)} ( min_c ( I^c(y) / A^c ) ) = t̃(x) min_{y∈Ω(x)} ( min_c ( J^c(y) / A^c ) ) + (1 − t̃(x))   (11)
Under the dark channel prior assumption, the dark channel of the haze-free radiance J tends to zero, which is expressed as:
J^dark(x) = min_{y∈Ω(x)} ( min_c J^c(y) ) = 0   (12)
Which leads to the following equation:
min_{y∈Ω(x)} ( min_c ( J^c(y) / A^c ) ) = 0   (13)
Now we can estimate the transmission t̃(x) by inserting (13) into (11), and the final equation for the transmission estimation is written as follows:
t̃(x) = 1 − ω min_{y∈Ω(x)} ( min_c ( I^c(y) / A^c ) )   (14)
Where ω is a parameter that retains a small amount of haze in order to preserve the naturalness of the image and the perception of depth for the human eye.
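The following Python/NumPy sketch implements equation (14) using the dark_channel helper sketched earlier; the value ω = 0.95, taken from the common dark channel prior convention, is an assumption here.

```python
import numpy as np

def estimate_transmission(I, A, patch=15, omega=0.95):
    """Transmission map from eq. (14).

    I:     hazy RGB image, float array in [0, 1], shape (H, W, 3).
    A:     atmospheric light, array of shape (3,).
    omega: amount of haze removed; a value below 1 keeps some haze
           so the image looks natural (assumed value 0.95).
    """
    normalized = I / A.reshape(1, 1, 3)              # I^c(y) / A^c
    return 1.0 - omega * dark_channel(normalized, patch)
```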
- FEED-FORWARD NEURAL NETWORKS
Feedforward neural networks are artificial neural networks where the connections between units do not form a cycle. Feedforward neural networks were the first type of artificial neural network invented and are simpler than their counterpart, recurrent neural networks. They are called feedforward because information only travels forward in the network (no loops), first through the input nodes, then through the hidden nodes (if present), and finally through the output nodes.
Feed-forward neural networks are primarily used for supervised learning in cases where the data to be learned is neither sequential nor time-dependent. That is, a feed-forward neural network computes a function f on a fixed-size input x such that f(x) ≈ y for training pairs (x, y). Recurrent neural networks, on the other hand, learn sequential data, computing a function g on a variable-length input x = {x₁, x₂, …, xₙ} such that g(x₁, …, x_k) ≈ y_k for training pairs (x, y) for all 1 ≤ k ≤ n.
Fig 2: Feed-forward neural network
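As a small, self-contained illustration of the feed-forward idea (information flowing from input to hidden to output with no cycles), the sketch below implements a one-hidden-layer forward pass in NumPy; the layer sizes and the sigmoid activation are arbitrary choices for illustration, not the network used in this work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer feed-forward network.

    x:  input vector, shape (d,).
    W1: hidden-layer weights, shape (h, d);  b1: hidden biases, shape (h,).
    W2: output-layer weights, shape (k, h);  b2: output biases, shape (k,).
    Information flows strictly forward: input -> hidden -> output, no cycles.
    """
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

# Example with arbitrary dimensions: 4 inputs, 8 hidden units, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
y = feed_forward(rng.normal(size=4), W1, b1, W2, b2)
```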
- RESULTS AND DISCUSSION
Our experiments were carried out on an ASUS PC with an Intel Core i7-6700HQ 2.60 GHz CPU and 8.00 GB of installed memory (RAM), running Windows 10 with MATLAB 2016b. Our research used a broad data set consisting of hazy outdoor photographs used in earlier research, including cityscapes, aerial views, and landscapes. The findings of our experiments demonstrate that our method is applicable to any scene or input image that contains some fog, haze, or dust as a form of pollution.
We evaluated the algorithm in terms of both qualitative and quantitative analysis. The earlier dark channel prior technique has a few flaws that needed to be fixed: for instance, it is not effective for pictures that contain very bright, high-intensity objects, because it chooses the pixels with the highest luminance (for example, picking pixels belonging to a car in the input image as the atmospheric light), which can lead to an inaccurate transmission map.
Another limitation is that it uses the soft-matting method to refine the transmission map, which is a time-consuming operation. The method we present instead uses a chosen value of sigma in conjunction with repeated averaging filters and a feed-forward neural network, which allows it to bypass the refinement techniques previously used on the transmission map. As a result, a smooth, filtered transmission map free of the halo artefacts produced by the DCP approach can be recovered.
- Qualitative Evaluation
For the purpose of qualitative evaluation, we compared our findings with the methodologies described in [9], [14-15], and [17]. In terms of qualitative assessment, our newly proposed method outperforms the earlier methods. Figure 3 displays, on a variety of data images, the qualitative outcomes produced by our proposed technique.
- Input Data Image
- By He [9]
- By Zhu [14]
- By Zhu [15]
- By Base Paper [14]
- Proposed Algorithm Result
Fig 3: Qualitative Comparison of different data images
- Quantitative Evaluation
Table 1: Quantitative Comparison of different data images

| Dataset | MSE Meng | SSIM Meng | MSE He | SSIM He | MSE Zhu | SSIM Zhu | MSE Base | SSIM Base | MSE Proposed | SSIM Proposed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1856.9090 | 0.739377 | 1901.4690 | 0.6347 | 1438.6673 | 0.5796 | 5324.4564 | 0.5733 | 373.0746 | 0.8724 |
| 2 | 2569.9689 | 0.824699 | 2578.6230 | 0.8238 | 4250.6577 | 0.5798 | 4476.6502 | 0.7638 | 1644.617 | 0.8406 |
| 3 | 7317.0038 | 0.368855 | 7221.0313 | 0.3516 | 1195.6257 | 0.8609 | 13116.083 | 0.2613 | 609.5407 | 0.9039 |
| 4 | 7724.2213 | 0.558786 | 7724.2213 | 0.5587 | 3237.9608 | 0.4768 | 6932.6322 | 0.5810 | 749.3724 | 0.8391 |
| 5 | 3157.2253 | 0.504933 | 5931.4836 | 0.6743 | 6827.8641 | 0.5941 | 14885.373 | 0.3148 | 723.0444 | 0.8142 |
| Average | 4525.0657 | 0.599330 | 5071.3656 | 0.6086 | 3390.1551 | 0.6182 | 8947.0390 | 0.4988 | 819.9300 | 0.8540 |

For quantitative evaluation, the SSIM and MSE measures are computed and compared with the methods of [9], [14], [15] and [17].
*SSIM is Structural Similarity Index for measuring image quality
*MSE is Mean Square Error
- CONCLUSION
In this work, a method has been developed that successfully dehazes images with dense haze while remaining suitable for real-time systems. The use of integral image operations removes much of the computational complexity. Filtering with repeated averaging led to better estimates of the air light, which in turn further improved the recovered scene radiance. With the proposed method, the transmission map was refined by increasing the sigma value, and the halo artefacts present in earlier approaches were eliminated.
The outcomes have been evaluated both through qualitative visual analysis and through quantitative methods. Mean Squared Error (MSE) and the structural similarity index (SSIM) were the metrics used to provide a quantitative assessment of the haze-free photographs; these measures quantify the signal strength, the extent of feature preservation, and the recovery of structural detail in the haze-free image.
REFERENCES
- J. Y. Chiang and Y. C. Chen, “Underwater image enhancement by wavelength compensation and dehazing,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1756-1769, 2012.
- Zhang, Shengdong, and Jian Yao. “Single image dehazing using fixed points and nearest-neighbor regularization.” Asian Conference on Computer Vision. Springer, Cham, 2016.
- Seow, Ming-Jung, and Vijayan K. Asari. “Ratio rule and homomorphic filter for enhancement of digital colour image.” Neurocomputing 69.7-9 (2006): 954-958.
- Dippel, Sabine, et al. “Multiscale contrast enhancement for radiographies: Laplacian pyramid versus fast wavelet transform.” IEEE Transactions on medical imaging 21.4 (2002): 343-353.
- He, Kaiming, Jian Sun, and Xiaoou Tang. “Guided image filtering.” IEEE transactions on pattern analysis & machine intelligence 6 (2013): 1397-1409.
- Tan, Robby T. “Visibility in bad weather from a single image.” Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE, 2008.
- Fattal, Raanan. “Single image dehazing.” ACM transactions on graphics (TOG) 27.3 (2008): 72.
- He, Kaiming, Jian Sun, and Xiaoou Tang. “Single image haze removal using dark channel prior.” IEEE transactions on pattern analysis and machine intelligence 33.12 (2011): 2341-2353.
- Xu, Yong, et al. “Review of video and image defogging algorithms and related studies on image restoration and enhancement.” IEEE Access 4 (2016): 165-188.
- McCartney, Earl J. “Optics of the atmosphere: scattering by molecules and particles.” New York, John Wiley and Sons, Inc., 1976. 421 p. (1976).
- Zhou, X., L. Bai, and C. Wang. “Single Image Dehazing Algorithm Based on Dark Channel Prior and Inverse Image.” International Journal of Engineering-Transactions A: Basics 30.10 (2017): 1471-1478.
- Kovesi, Peter. “Fast almost-gaussian filtering.” Digital Image Computing: Techniques and Applications (DICTA), 2010 International Conference on. IEEE, 2010.
- Cooper, Ted J., and Farhan A. Baqai. “Analysis and extensions of the Frankle-McCann Retinex algorithm.” Journal of Electronic Imaging 13.1 (2004): 85-93.
- Narasimhan, Srinivasa G., and Shree K. Nayar. “Contrast restoration of weather degraded images.” IEEE transactions on pattern analysis and machine intelligence 25.6 (2003): 713-724.
- Schechner, Yoav Y., Srinivasa G. Narasimhan, and Shree K. Nayar. “Instant dehazing of images using polarization.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2001.
- Meng, Gaofeng, et al. “Efficient image dehazing with boundary constraint and contextual regularization.” Proceedings of the IEEE international conference on computer vision. 2013.
- Zhu, Qingsong, Jiaming Mai, and Ling Shao. “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior.” IEEE Trans. Image Processing 24.11 (2015): 3522-3533.
- Haseeb Hassan, Bin Luo, Qin Xin, Rashid Abbasi, Waqas Ahmad. Single Image Dehazing from Repeated Averaging Filters. 8th International Information Technology and Artificial Intelligence Conference, (ITAIC 2019) on. IEEE, 2019.
- Kovasznay, L.S.G., Joseph H.M., Image Processing, Proceedings of the IRE, vol. 43, issue 5, pp. 560-570, May 1955.
- Zhang, Shengdong, and Jian Yao. “Single image dehazing using fixed points and nearest-neighbor regularization.” Asian Conference on Computer Vision. Springer, Cham, 2016.