DOI : 10.17577/IJERTV9IS080003

Enhancement of Underwater Images

An Advanced Approach based on Multi-Scale Fusion Technique

Vivek Sharma, Rakesh Verma

Department of Electronics and Communication Engineering Hindu College of Engineering

Sonepat, Haryana, India

Abstract Underwater photography has become an important research area in ocean engineering, computer graphics and surveillance. However, underwater images appear blue-green and hazy, since the longer wavelengths of sunlight cannot penetrate deep into the water owing to salinity and the high concentration of dissolved impurities in seawater. This paper proposes a multi-scale fusion technique to remove hazing effects and to enhance the visibility of such images. An input underwater image is processed to derive two images, one from Gamma correction and one from a sharpening filter. The associated weight maps are then computed and merged using Gaussian and Laplacian pyramids. The Patch-based Contrast Quality Index (PCQI) and Underwater Color Image Quality Evaluation (UCIQE) metrics are used to determine the quality of the recovered underwater images. The experimental results indicate that the final image has better visibility and contrast than the original image.

Keywords Underwater photography; multi-scale fusion; white balancing; weight maps; PCQI; UCIQE.

  1. INTRODUCTION

    Nowadays, underwater photography has become important not only for marine enthusiasts but also for various other applications such as ocean engineering, archaeology, surveillance and computer graphics. However, it is always challenging to capture a noise-free image in uneven conditions, especially underwater, where even light cannot penetrate deep into the water. In fact, underwater images taken with a conventional camera appear blue-green and contain many degradations such as haze, blurring, non-uniform lighting, low contrast and a foggy appearance. The primary cause of this degradation is the rapid attenuation of light as it travels through water.

    When light travels from air into water, both refraction and attenuation occur. The refractive index (i.e., the amount of refraction) depends on both the salinity and the temperature of the water: high salinity and low temperature increase the refractive index. The levels of dissolved salts, organic substances and water molecules increase with depth, and these are the primary causes of the reduction of light intensity with depth. The ultraviolet and infrared wavelengths cannot penetrate far into the ocean, while the green and blue wavelengths penetrate much deeper; this is why underwater images appear blue-green. Sunlight can penetrate only up to about 990 meters (about 3260 feet) into the ocean.

    The scattering of light can be defined as the reflection/refraction of a part of the light away from its original direction toward the object. It depends on the wavelength of the light and the dissolved impurities in the water, and it results in the low contrast of underwater images. The overall poor visibility and color cast caused by underwater imaging conditions degrade the ability to fully extract valuable information from underwater images for further processing such as marine research, mine detection and aquatic robot inspection. Hence, it is of great interest to restore degraded underwater images for high-quality underwater imaging [3].

    Numerous image enhancement and restoration techniques have been proposed over the last two decades. J. Y. Chiang and Y. C. Chen [2] proposed the WCID algorithm, which can effectively restore image color balance and remove haze. Amjad Khan et al. [6] proposed a wavelet-based fusion algorithm in which the hazy image is enhanced in terms of color and contrast. G. Suresh, V. Natarajan and A. Shanavas [7] describe how an imaging transducer array combined with an acoustic lens can work as the front end of an underwater acoustic imaging system, which can identify objects with high resolution even in turbid waters. In the approach specified in [8], quality parameters such as entropy, peak signal-to-noise ratio (PSNR) and absolute mean brightness error (AMBE) are measured to justify the efficiency of the work.

    In this paper, the proposed method presents an advanced approach to image enhancement. The algorithm of the underwater dehazing technique is shown in Fig.1. It includes three filters: white balancing, Gamma correction and sharpening. The steps are summarized as follows: (1) the input underwater image passes through the white-balancing filter, which adjusts the color cast of the image so that it looks more natural; (2) the output of the white-balancing filter passes through the Gamma correction filter and the sharpening filter, whose outputs are used for weight-map calculation; (3) Gaussian and Laplacian pyramids are then applied to the weight maps of the two images derived from the above two filters; and (4) the outputs of the above process are merged together using the multi-scale fusion technique to obtain the final output underwater image.

    Fig.1: Algorithm for Underwater Image Dehazing Technique

  2. METHODOLOGY

    1. White-Balancing of Underwater Image (Input Image)

      Sunlight is composed of six basic colors: red, orange, yellow, green, blue and violet. Red has the longest wavelength and the lowest energy, while violet has the shortest wavelength and the highest energy. When sunlight enters seawater, the longest wavelength is absorbed first, followed by the other, progressively shorter wavelengths. The longer wavelengths can reach only about 90-110 ft deep into the water, and therefore objects present at depth appear blue/green. To remove this unrealistic color cast from images taken deep in the water, a white-balancing filter is used. Fig.2 illustrates the white-balanced version of an input underwater image.

      Fig.2: White-balanced image of the input underwater image
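      The paper does not spell out which white-balancing algorithm is applied, so the sketch below (in Python/NumPy; the paper's own simulation is in MATLAB) uses the common gray-world assumption purely for illustration. The function name and the gray-world choice are assumptions, not the authors' method.

```python
import numpy as np

def gray_world_white_balance(img):
    """Illustrative gray-world white balance.

    img: RGB image as a float array in [0, 1] with shape (H, W, 3).
    Each channel is scaled so that its mean matches the global mean,
    which suppresses the blue-green color cast.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean of R, G, B channels
    gray_mean = channel_means.mean()                  # target neutral level
    gains = gray_mean / (channel_means + 1e-6)        # per-channel gain factors
    return np.clip(img * gains, 0.0, 1.0)
```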

    2. Gamma-Correction of Underwater Image (Input Image 1)

      An image consists of small picture elements known as pixels. Each pixel value varies from 0 to 1 (0 represents complete darkness, or black, and 1 represents complete brightness, or white), and the brightness level of the image depends on these values. However, an image captured in water deeper than 30 ft suffers from over-brightness, since some of the colors of sunlight are absorbed and cannot be recovered. Also, when the white-balancing filter is applied to the input image, it may overexpose some of the color content of the image. To correct the uneven brightness of the white-balanced image, a Gamma correction filter is used; it adjusts the individual pixel values of the image non-linearly. The output of the Gamma correction filter is shown in Fig.3.
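      As a concrete illustration of this non-linear adjustment, the sketch below applies a simple power-law Gamma correction. The paper does not report the gamma value used, so the value 2.0 here is only a placeholder assumption.

```python
import numpy as np

def gamma_correction(img, gamma=2.0):
    """Power-law (Gamma) correction of a [0, 1] image.

    gamma > 1 darkens over-bright regions and increases the separation
    between dark and bright areas; gamma < 1 brightens dark regions.
    The default value 2.0 is a placeholder, not taken from the paper.
    """
    return np.clip(img, 0.0, 1.0) ** gamma
```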

    3. Sharpening of Underwater Image (Input Image 2)

      A sharpening filter is used to enhance the edges and fine details of underwater images. These details consist of high-frequency components, and enhancing the high-frequency components of an image improves its visual quality. The sharpened version of the white-balanced input image is shown in Fig.4. An unsharp mask filter blends the blurred underwater image with the white-balanced image in order to obtain a sharper image. The sharpened image Is is expressed as

      Is = Iin + β (Iin − G * Iin)    (1)

      where Iin is the white-balanced image to sharpen, G * Iin is the Gaussian-filtered version of Iin, and the value of β is 0.5.

      Fig.3: Gamma-corrected image of the white-balanced image (Input 1)

      Fig.4: Sharpened image of the white-balanced image (Input 2)

      The value of β defines the sharpness of the input white-balanced image. A small value of β can leave the image unsharp, whereas a large value of β can over-saturate the image. Therefore, a new unsharp masking process, named the Normalized Unsharp Masking process, is used to sharpen the image. It is defined by the following expression

      Is = [Iin + N (Iin − G * Iin)] / 2    (2)

      where N(.) denotes the linear normalization operator, also known as histogram stretching.

      This operator does not require any parameter tuning and results in effective sharpening of the image. It shifts and scales all the color pixel intensities of the image with a unique shifting and scaling factor, defined so that the set of transformed pixel values covers the entire available dynamic range [6].

      The second input mainly helps in reducing the degradation caused by scattering, as shown in Fig.4. Since the difference between the white-balanced image and its Gaussian-filtered version is a high-pass signal that approximates the opposite of the Laplacian, this operation is liable to amplify the high-frequency noise, thereby potentially generating undesired artifacts in the second input [6].
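      A minimal sketch of the normalized unsharp masking of Eq. (2) is given below, with the normalization operator N(.) implemented as a single global stretch to [0, 1]. The Gaussian blur scale (sigma) is an assumed value, as the paper does not specify it.

```python
import numpy as np
import cv2

def normalized_unsharp_mask(img, sigma=5.0):
    """Second input following Eq. (2): Is = [Iin + N(Iin - G*Iin)] / 2.

    img: white-balanced RGB image in [0, 1].
    N(.) is the linear normalization operator (histogram stretching):
    one global shift and scale so the detail signal spans [0, 1].
    """
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)               # G * Iin
    detail = img - blurred                                       # high-pass residual
    stretched = (detail - detail.min()) / (detail.max() - detail.min() + 1e-6)
    return np.clip((img + stretched) / 2.0, 0.0, 1.0)
```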

    4. Weights of the Fusion Process

      The weight maps determine which pixels are given greater emphasis in the final image. The following weight maps are used:

      1. Laplacian Contrast Weight (WL): It roughly estimates the global contrast by evaluating the absolute value of a Laplacian filter applied to each input's luminance channel. For the underwater dehazing task, however, this weight map alone is not adequate to recover the contrast, mainly because it can hardly distinguish between ramp and flat regions. Therefore, additional weight maps are required to solve this problem.

      2. Saliency Weight (WS): This weight map highlights the salient objects that have lost their prominence in the underwater scene. A saliency estimator is employed to evaluate the saliency level. The saliency map has a limitation: it tends to highlight regions with high luminance values, which decreases the saturation of the highlighted regions.

      3. Saturation Weight (WSat): This weight map is used to overcome the above limitation of the saliency map. It is evaluated, for every pixel location, as the deviation between the Rn, Gn and Bn color channels and the luminance Ln of the nth input:

        WSat = √( (1/3) [ (Rn − Ln)² + (Gn − Ln)² + (Bn − Ln)² ] )    (3)

        The normalized weight map combines the three weight maps defined above into a single map. It is evaluated for each input n as

        W̄n = (Wn + δ) / ( Σk Wk + K δ )    (4)

        where δ is a small regularization term that ensures each input contributes to the output, k is the index of the inputs, Wn is the aggregated weight map of WL, WS and WSat, and K is the number of aggregated maps (i.e., the number of inputs). Here, δ = 0.1 and K = 2 in the implementation.

        Fig.5: Normalized weight of input 1

        Fig.6: Normalized weight of input 2

        The normalized weight maps of the two inputs are shown in Fig.5 and Fig.6 respectively.
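        As an illustration of how these weights might be computed, the sketch below evaluates WL, WSat and a stand-in saliency term, and normalizes them as in Eq. (4). The saliency estimator, the blur scale and the function names are assumptions, since the paper does not reproduce them.

```python
import numpy as np
import cv2

def normalized_fusion_weights(inputs, delta=0.1):
    """Aggregated weight maps Wn = WL + WS + WSat, normalized per Eq. (4).

    inputs: list of K RGB images in [0, 1] (here K = 2).
    Returns one normalized weight map of shape (H, W) per input.
    """
    aggregated = []
    for img in inputs:
        lum = img.mean(axis=2)                                 # luminance Ln
        w_l = np.abs(cv2.Laplacian(lum, cv2.CV_64F))           # Laplacian contrast weight WL
        # Saliency weight WS: a stand-in estimator (distance between the blurred
        # image and its mean color); the paper does not reproduce its estimator.
        blurred = cv2.GaussianBlur(img, (0, 0), 3)
        w_s = np.linalg.norm(blurred - img.reshape(-1, 3).mean(axis=0), axis=2)
        # Saturation weight WSat, Eq. (3)
        w_sat = np.sqrt(((img - lum[..., None]) ** 2).mean(axis=2))
        aggregated.append(w_l + w_s + w_sat)                   # aggregated map Wn
    denom = sum(aggregated) + len(aggregated) * delta          # sum_k Wk + K*delta
    return [(w + delta) / denom for w in aggregated]
```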

    5. Multi-Scale Fusion Process

    The reconstructed image R(k) at every pixel location k can be obtained as

    R(k) = Σn W̄n(k) In(k)    (5)

    where In is the nth input image, W̄n is its normalized weight map, and the sum runs over all inputs.
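    Eq. (5) corresponds to the short per-pixel weighted sum sketched below (illustrative names, assuming the normalized weights computed in the previous subsection):

```python
import numpy as np

def naive_fusion(inputs, norm_weights):
    """Single-scale fusion, Eq. (5): R(k) = sum_n Wn(k) * In(k)."""
    fused = sum(w[..., None] * img for w, img in zip(norm_weights, inputs))
    return np.clip(fused, 0.0, 1.0)
```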

    The disadvantage of this naive approach is that it creates halos, i.e., rings of white or colored light, around some edges. This limitation can be reduced by using multi-scale linear or non-linear filters.

    The multi-scale decomposition of an image is based on the Laplacian pyramid, which decomposes the image into a sum of band-pass images. At each level of the pyramid, the input image is filtered with a low-pass Gaussian kernel G and the filtered image is decimated by a factor of 2 in both directions. The up-sampled version of the low-pass image is then subtracted from the input image; this approximates the Laplacian, and the decimated low-pass image is used as the input for the next level of the pyramid. The M-level decomposition can be written as

    I(x) = I(x) − G1{I(x)} + G1{I(x)}
    = L1{I(x)} + G1{I(x)}
    = L1{I(x)} + G1{I(x)} − G2{I(x)} + G2{I(x)}
    = L1{I(x)} + L2{I(x)} + G2{I(x)}
    ⋮
    = Σm Lm{I(x)}    (6)

    Here, Gm denotes a sequence of m low-pass filtering and decimation operations followed by m up-sampling operations. In the above equation, Lm and Gm represent the mth levels of the Laplacian and Gaussian pyramids respectively.

    Each source input In is decomposed into a Laplacian pyramid, whereas the normalized weight maps W̄n are decomposed using Gaussian pyramids. Both pyramids have the same number of levels. At each level m, the fused contribution is

    Rm(x) = Σn Gm{W̄n(x)} Lm{In(x)}    (7)

    Here m denotes the pyramid level and n indexes the input images. The number of levels depends on the image size, and the visual quality of the blended image depends directly on the number of levels. To obtain the dehazed output, the fused contributions of all levels are summed after appropriate up-sampling.
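    The paper leaves the pyramid depth and implementation details open; the sketch below, using OpenCV's pyrDown/pyrUp, illustrates Eqs. (6)-(7) under assumed defaults (five levels, weight maps replicated across the three channels). All function names are illustrative.

```python
import numpy as np
import cv2

def gaussian_pyramid(img, levels):
    """Gaussian pyramid: repeated low-pass filtering and decimation by 2."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    """Laplacian pyramid: band-pass differences between Gaussian levels."""
    gp = gaussian_pyramid(img, levels)
    lp = []
    for m in range(levels - 1):
        rows, cols = gp[m].shape[:2]
        up = cv2.pyrUp(gp[m + 1], dstsize=(cols, rows))  # up-sampled low-pass image
        lp.append(gp[m] - up)                            # Lm = Gm - upsample(Gm+1)
    lp.append(gp[-1])                                    # coarsest residual level
    return lp

def multiscale_fusion(inputs, norm_weights, levels=5):
    """Eq. (7): blend the Laplacian pyramids of the inputs, weighted by the
    Gaussian pyramids of the normalized weight maps, then collapse the result."""
    fused = None
    for img, w in zip(inputs, norm_weights):
        lp = laplacian_pyramid(img, levels)
        gp_w = gaussian_pyramid(np.dstack([w, w, w]), levels)   # weight map per channel
        contrib = [g * l for g, l in zip(gp_w, lp)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    # Collapse the fused pyramid: up-sample and add, from coarse to fine
    out = fused[-1]
    for m in range(levels - 2, -1, -1):
        rows, cols = fused[m].shape[:2]
        out = fused[m] + cv2.pyrUp(out, dstsize=(cols, rows))
    return np.clip(out, 0.0, 1.0)
```

    Under these assumptions, chaining the hypothetical helpers as multiscale_fusion([input1, input2], normalized_fusion_weights([input1, input2])) would reproduce the fusion stage of Fig.1.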

  3. RESULT AND DISCUSSION

    The simulation results of the proposed work show that the multi-scale fusion technique used for underwater image enhancement performs considerably better than several previous methods. The input underwater images were taken from a database of several different underwater images in order to test different types of underwater images and to show that the proposed algorithm can enhance the quality of hazy images taken under several different underwater conditions, independently of camera settings, as stated previously. Fig.7 shows the underwater image taken as input from the database. The corresponding reconstructed output image is shown in Fig.8. MATLAB version 2015a is used for the simulation.

    Fig.7: Underwater Image from the Source (Input Image) [19]

    Fig.8: Reconstructed Output Underwater Image

    PCQI and UCIQE are the two metrics used to evaluate the quality of the recovered output image. The input and reconstructed images with their associated quantitative evaluation are shown in Table I below. PCQI is a general-purpose image contrast metric, while UCIQE is dedicated to underwater image assessment; UCIQE specifically evaluates non-uniform color cast and blurring effects [19].

    TABLE I. QUANTITATIVE EVALUATION OF INPUT UNDERWATER IMAGE

    Image                              PCQI      UCIQE
    Input image from source [19]       1.2834    0.8593
    Reconstructed output image         1.2421    0.9845

  4. CONCLUSION

This paper presents an advanced approach to dehazing underwater images. The principle of multi-scale fusion is used to enhance the quality of the underwater image. A pair of filters, namely the Gamma correction filter and the sharpening filter, is used to enhance the color contrast and edge sharpness of the white-balanced image. Weight maps are defined to protect the original quality of the processed image. Extensive experiments were performed to demonstrate that the proposed dehazing technique is capable of enhancing the quality of underwater images to a great extent.

REFERENCES

  1. M. Bhowmik, D. Ghoshal, and S. Bhowmik, An Improved Method for the Enhancement of Under Ocean Image, 2015 International Conference on Communication and Signal Processing (ICCSP 2015), pp. 1739-1742, 2015.

  2. J. Y. Chiang and Y. C. Chen, Underwater Image Enhancement by Wavelength Compensation and Dehazing, IEEE Trans. Image Process, vol. 21, no. 4, pp. 1756-1769, 2012.

  3. Chongyi Li, Jichang Guo, Shanji Chen, Yibin Tang, Yanwei Pang, and Jian Wang, Underwater Image Restoration based on Minimum Information Loss Principle and Optical Properties of Underwater Imaging, in Proc. IEEE International Conference on Image Processing (ICIP), pp. 1993-1997, 2016.

  4. C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, Enhancing underwater images and videos by fusion, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 81-88.

  5. U. Qidwai and C. H. Chen, Digital Image Processing: An Algorithmic Approach with MATLAB, 1st ed. Chapman & Hall/CRC, 2009.

  6. Amjad Khan, Syed Saad Azhar, Aamir Saeed Malik, Atif Anwar and Fabrice Meriaudeau, Underwater Image Enhancement by Wavelet Based Fusion, in 2016 IEEE 6th Int. Conf. on Underwater System Technology: Theory and Application, 978-1-5090-5798.

  7. G. Suresh, V. Natarajan and A. Shanavas, Underwater Imaging Using Acoustic Lense, 2015 IEEE Underwater Technology (UT).

  8. R. Singh and M. Biswas, Contrast and color improvement based haze removal of underwater images using fusion technique, 2017 4th Int. Conf. on Signal Processing, Computing and Control (ISPCC).

  9. S. K. Nayar and S. G. Narasimhan, Vision in bad weather, in Proc. IEEE ICCV, Sep. 1999, pp. 820-827.

  10. R. Schettini and S. Corchs, Underwater image processing: state of the art of restoration and image enhancement methods, EURASIP J. Adv. Signal Process., vol. 2010, Dec. 2010, Art. no. 746052.

  11. M. D. Kocak, F. R. Dalgleish, M. F. Caimi, and Y. Y. Schechner, A focus on recent developments and trends in underwater imaging, Marine Technol. Soc. J., vol. 42, no. 1, pp. 52-67, 2008.

  12. G. L. Foresti, Visual inspection of sea bottom structures by an autonomous underwater vehicle, IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 31, no. 5, pp. 691-705, Oct. 2001.

  13. E. H. Land, The Retinex theory of color vision, Sci. Amer., vol. 237, no. 6, pp. 108-128, Dec. 1977.

  14. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and A. C. Bovik, Single-scale fusion: An effective approach to merging images, IEEE Trans. Image Process., vol. 26, no. 1, pp. 65-78, Jan. 2017.

  15. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and P. Bekaert, Color balance and fusion for underwater image enhancement, IEEE Trans. Image Process., vol. 27, no. 1, pp. 379-393, Jan. 2018.

  16. S. Wang, K. Ma, H. Yeganeh, Z. Wang, and W. Lin, A patch structure representation method for quality assessment of contrast changed images, IEEE Signal Process. Lett., vol. 22, no. 12, pp. 2387-2390, Dec. 2015.

  17. M. Yang and A. Sowmya, An underwater color image quality evaluation metric, IEEE Trans. Image Process., vol. 24, no. 12, pp. 6062-6071, Dec. 2015.

  18. Vikas Varshney, Savita, Neha Sharma and Manoj Sharma, Underwater image dehazing technique using the principle of fusion, IETET Conference, ISBN: 978-93-5351-529-4, pp. 202-206, April 2019.

  19. Source of Dataset for underwater image processing and quality evaluation: http://puiqe.eecs.qmul.ac.uk/Dataset.
