Underwater Image Enhancement using White Balance and Fusion


Sophiya Philip

Department of Computer Science and Engineering LBS Institute of Technology for Women,

Kerala, India

Gisha G S

Department of Computer Science and Engineering LBS Institute of Technology for Women,

Kerala, India

Abstract: Underwater images have many applications in marine biology, entertainment, archaeology, oceanography, etc. Oceans contain rare attractions such as shipwrecks, fish, striking deep-sea landscapes, and marine animals. Underwater images suffer from noise, low contrast, fogginess, blur, and poor visibility caused by the attenuation of light as it propagates through water. Enhancement of underwater images is therefore needed to improve their visibility. In this paper, we propose a single-image method that enhances underwater images using white balance and fusion. White balancing reduces the greenish cast caused by the absorption of longer wavelengths as light propagates through water. The fusion step removes fogginess and blur to give a clearer view of the underwater scene.

Keywords: Underwater image, gamma correction, white balance, fusion

  1. INTRODUCTION

    Earth is an aquatic planet: water covers about 70% of its surface. Exploring underwater scenes through vision remains difficult because of the poor visibility of the ocean environment. Much current research addresses the enhancement of underwater images, which appear foggy, scattered, blurred, and of low intensity because of how light propagates through water; such enhancement serves applications in archaeology, entertainment, marine biology, and more. Scattering and absorption of light produce the fogginess, low intensity, and blur. Enhancement of underwater images is therefore an important task for obtaining a clear view of the scene in all of these applications.

    For an ideal transmission medium, the received light is determined by the characteristics of the camera lens and the target objects, but this does not hold underwater. Several factors influence the amount of light available underwater. In addition, water contains particles roughly 800 times denser than those in the atmosphere. As a result, when light reaches the water, part of it is reflected back and part passes through [1]. As light travels deeper, the portion that passes through the water is progressively reduced, and some of it is absorbed by water molecules [2], so underwater images become darker as depth increases. The quality of the light also degrades with depth, and colors drop out one by one depending on their wavelength.

    Because the different wavelengths are absorbed gradually in the underwater environment, the longest wavelength is absorbed first: red disappears at 10 to 15 ft, followed by orange at 20 to 25 ft, yellow at 35 to 45 ft, and so on. Images taken at 10 ft depth show a noticeable loss of red; at 25 ft, the loss of both red and orange is evident. Finally, green diminishes at greater depths [3]. Moreover, the refractive index affects distance judgment: objects appear about 25% larger than their actual size.

    Regarding the reflection of light, Church [4] describes how reflection depends on the sea structure. Another water-related problem is that it bends light, diffusing it or producing crinkle patterns. The quality of the water largely controls its filtering properties, for example through dust particles suspended in it [2].

    According to Anthoni [1], part of the light is reflected and becomes partly horizontally polarized, while part passes vertically through the water. The vertically polarized component reduces glare on objects and helps capture colors at depth that could otherwise not be captured.

    The research of Jaffe [5] and McGlamery [6] established that three main components reach the image plane when light is incident on it: the direct component, forward scattering, and back scattering. The direct component is the light reflected directly by the target object onto the image plane. At each image coordinate x, it is expressed as:

    E_D(x) = J(x) e^(-η d(x)) = J(x) t(x) (1)

    where J(x) is the object radiance, d(x) is the distance from the object plane to the observer, and η is the attenuation coefficient. The term e^(-η d(x)) represents the transmission t(x) of light through the underwater medium.

    Apart from absorption, particles present in the underwater medium scatter the light. Forward scattering is the deviation of light on its path to the camera lens; back scattering is artificial light that hits water particles and is reflected back toward the camera. In practice, back scattering is the main cause of contrast loss and color shift in underwater images. Mathematically, it is often expressed as:

    E_BS(x) = B∞(x) (1 - e^(-η d(x))) (2)

    where B∞(x) is the color vector of the back-scattered light.

    By ignoring the forward-scattering component, the simplified underwater optical model becomes:

    I(x) = J(x) e^(-η d(x)) + B∞(x) (1 - e^(-η d(x))) (3)

    This simplified underwater camera model (3) has almost the same form as the model with which Koschmieder [7] characterized light propagation in the atmosphere. Underwater image enhancement is thus a much-needed step before the images can be used for many purposes. Here we propose a method to enhance underwater images and obtain a better result.
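For concreteness, the simplified model (3) can be simulated numerically. The following numpy sketch synthesizes a degraded underwater observation from an ideal scene; the radiance, distance, attenuation, and veiling-light values are illustrative assumptions, not measurements:

```python
import numpy as np

def underwater_model(J, t, B_inf):
    """Simplified underwater image formation, Eq. (3):
    I(x) = J(x)*t(x) + B_inf*(1 - t(x)),
    where t(x) = exp(-eta*d(x)) is the transmission."""
    return J * t + B_inf * (1.0 - t)

# Hypothetical mid-gray scene patch seen through 2 m of water.
J = np.full((4, 4, 3), 0.8)          # object radiance, RGB in [0, 1]
d = np.full((4, 4, 1), 2.0)          # distance to camera (m)
eta = np.array([0.6, 0.2, 0.1])      # per-channel attenuation: red fades fastest
t = np.exp(-eta * d)                 # transmission per channel
B_inf = np.array([0.1, 0.5, 0.6])    # greenish-blue back-scattered (veiling) light
I = underwater_model(J, t, B_inf)    # red channel ends up dimmer than green/blue
```

Because the red attenuation coefficient is largest, the synthesized image shows the greenish-blue cast that the white balance step of Section 4 is designed to undo.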

  2. PROPOSED SYSTEM OVERVIEW

    Browse image

    Calculate RGB

    Find Red Channel

    Normalize Red Channel

    Apply CLAHE

    Gamma Correction

    Sharpening the image

    Fusion Process

    Enhanced Image

    Fig 1: Proposed System Overview
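The flow of Fig. 1 can be sketched as a chain of stages. Every function body below is a deliberately simplified stand-in for the real operation (the actual white balance, CLAHE, sharpening, and fusion algorithms are detailed in Sections 4 and 5); only the wiring of the pipeline is meant to be faithful:

```python
import numpy as np

def white_balance(img):
    # Stand-in for red-channel compensation + Gray-World (Section 4):
    # rescale each channel toward the global mean.
    return img / (img.mean(axis=(0, 1), keepdims=True) + 1e-6) * img.mean()

def clahe_gamma(img, gamma=1.2):
    # Stand-in for the CLAHE + gamma-correction input (first fusion input).
    return np.clip(img, 0.0, 1.0) ** gamma

def sharpen(img):
    # Stand-in for normalized unsharp masking (second fusion input),
    # using a crude one-axis blur in place of a Gaussian filter.
    blur = 0.5 * (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0))
    return np.clip(2 * img - blur, 0.0, 1.0)

def fuse(a, b):
    # Stand-in for multiscale weighted fusion, simplified to averaging.
    return 0.5 * (a + b)

def enhance(img):
    wb = white_balance(img)
    return fuse(clahe_gamma(wb), sharpen(wb))

rng = np.random.default_rng(0)
out = enhance(rng.random((16, 16, 3)))   # same shape as the input image
```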

  3. RELATED WORK

    The existing techniques for underwater image enhancement can be clustered into several classes. An important class corresponds to techniques using specialized hardware [8], [9], [10], such as the divergent-beam underwater LiDAR imaging (UWLI) system [9], an optical/laser sensing method for capturing turbid underwater scenes. These systems are extremely expensive and power-consuming.

    The second class comprises polarization-based methods. In this approach, several images of the same scene with different degrees of polarization are taken by a camera equipped with a filter. For instance, Schechner and Averbuch [11] exploit the polarization of back-scattered light to estimate the transmission map. This is effective for recovering distant regions, but it is not applicable to video acquisition and is of limited help for dynamic scenes.

    The third class of approaches requires multiple images [12], [13], or a rough estimate of the scene model [14]. Narasimhan and Nayar [12] exploited the change in scene point intensities under different weather conditions to detect depth discontinuities in the scene. The Deep Photo system [14] restores images using geo-referenced urban 3D models and digital terrain data. Since this information (images and depth estimates) is generally not available, these methods are impractical for common users.

    The fourth class of methods exploits the similarity between the propagation of light in fog and underwater. The fog model was originally derived under strict assumptions such as homogeneous atmospheric illumination, a single extinction coefficient, and a spatially uniform scattering process [15]; underwater imaging remains a greater challenge because scattering depends on the wavelength of light, i.e., on the color component.

    Recently, many algorithms that specifically enhance underwater images based on the Dark Channel Prior (DCP) [16], [17] have been introduced. The DCP assumes that the brightness of an object in a natural scene is small in at least one color component, and accordingly identifies regions of low transmission as those with a large minimal color value. Among underwater enhancement methods, Chiang and Chen [18] segment foreground and background regions based on the DCP, and use this information to remove fogginess and color variations via color compensation. Drews, Jr., et al. [19] also build on the DCP, assuming that the visual information underwater lies mainly in the blue and green color channels. Their Underwater Dark Channel Prior (UDCP) estimates the transmission of underwater images better than the conventional DCP. Galdran et al. [20] observed that the red component decreases with increasing distance to the camera, and introduced the Red Channel prior to recover colors. Emberton et al. [21] designed a hierarchical rank-based method that uses a set of features to find the most haze-opaque image regions, thereby refining the back-scattered light estimate, which in turn improves the inversion of the light transmission model. Lu et al. [22] use color lines, as in [23], to estimate the ambient light, and apply a variant of the DCP to estimate the transmission. Recently, [22] has been extended to improve the resolution of its de-scattered and color-corrected output [24]. A fusion-based method is then used to blend the two intermediate high-resolution (HR) images. This fusion aims to preserve the edges and fine structures of the noisy HR image while taking advantage of the reduced noise and scatter in the second HR image; it does not affect the colors obtained with [22].
Our method, on the other hand, focuses on restoring the absorbed colors (white balance component) and uses fusion to enhance the edges (sharpening block) and the color contrast (gamma correction and CLAHE). It thus provides an alternative to [22], while the HR fusion method was introduced in [24].

      Compared to other approaches, this is a new white balancing method that performs well under heavy light attenuation. We also propose a definition of the fusion inputs and their associated weight maps.

      To conclude this survey, we have covered the various classes of techniques used for image enhancement. There are also modifications of traditional enhancement methods based on color correction, histogram stretching/equalization, and linear mapping. These methods are effective for well-illuminated regions, but produce halos and color degradation under weak lighting.

  4. WHITE BALANCE METHOD

    This white balance method [25] focuses on restoring the colors that are degraded as white light is absorbed while propagating through water. A main problem of underwater images is their greenish-blue appearance, which worsens with depth. Longer wavelengths are absorbed first, so red disappears first, and so on. The loss of color also depends on the distance between the observer and the object plane. The method comprises two steps: compensating the red channel, and applying the Gray-World algorithm to compute the white-balanced image.

    Compensation of the red channel is based on four observations:

      1. The red channel is attenuated first as light passes through water, while the green channel is relatively well preserved because of its shorter wavelength.

      2. The red channel is compensated by adding a fraction of the green channel to it, restoring the natural appearance of the underwater image.

      3. The compensation of the red channel using the green channel is driven by their mean values: the amount added should be proportional to the difference between the mean of the green channel and the mean of the red channel, so that the output is balanced.

      4. To avoid reintroducing red-channel degradation during the Gray-World step that follows, the compensation should mainly affect pixels with small red values. That is, green channel information is not transferred where the red channel information is already significant, which avoids a reddish appearance in over-exposed regions after the Gray-World algorithm. Thus, a highly attenuated red channel is compensated, while regions close to the observer, where the red channel is less attenuated, need no compensation.

    Expressing the above observations mathematically, the compensated red channel Irc at every pixel location x is:

    Irc(x) = Ir(x) + α (Īg - Īr) (1 - Ir(x)) Ig(x), (4)

    where Ir and Ig are the red and green channels of image I, both in the interval [0, 1], and Īr and Īg denote the mean values of Ir and Ig. In the second term of Equation (4), each factor results from one of the four observations above, and α denotes a constant parameter; the value α = 1 suits a variety of acquisition settings and illumination conditions.
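Equation (4) translates directly into a few lines of numpy. The image values below are an illustrative greenish underwater patch, not real data:

```python
import numpy as np

def compensate_red(I, alpha=1.0):
    """Red-channel compensation, Eq. (4):
    Irc(x) = Ir(x) + alpha*(mean(Ig) - mean(Ir))*(1 - Ir(x))*Ig(x).
    I is an RGB image with values in [0, 1]."""
    Ir, Ig = I[..., 0], I[..., 1]
    Irc = Ir + alpha * (Ig.mean() - Ir.mean()) * (1.0 - Ir) * Ig
    out = I.copy()
    out[..., 0] = np.clip(Irc, 0.0, 1.0)
    return out

# Greenish underwater patch: weak red, strong green, moderate blue.
img = np.stack([np.full((8, 8), 0.1),
                np.full((8, 8), 0.6),
                np.full((8, 8), 0.4)], axis=-1)
corrected = compensate_red(img)
# Red rises from 0.1 to 0.1 + 1.0*(0.6-0.1)*(1-0.1)*0.6 = 0.37;
# green and blue are untouched.
```

Note how the (1 - Ir(x)) factor implements observation 4: pixels whose red value is already large receive almost no transfer from the green channel.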

    When the blue channel is also strongly attenuated and restoring the red channel alone proves insufficient, the blue channel degradation is restored as well, i.e., the compensated blue channel Ibc is computed as:

    Ibc(x) = Ib(x) + α (Īg - Īb) (1 - Ib(x)) Ig(x), (5)

    where Ib and Ig are the blue and green channels of image I, and α is set to one. The remaining steps operate on the red-compensated (and optionally blue-compensated) image: the Gray-World assumption is then used to estimate and remove the illuminant color cast.
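The Gray-World step that follows the channel compensation can be sketched as below. This is one common formulation of the Gray-World algorithm (scale each channel so its mean matches the global gray level); the paper does not spell out its exact variant, so treat this as an assumption:

```python
import numpy as np

def gray_world(I):
    """Gray-World white balance: scale each channel so its mean
    matches the mean gray level of the whole image."""
    means = I.mean(axis=(0, 1))               # per-channel means
    gray = means.mean()                       # target gray level
    return np.clip(I * (gray / (means + 1e-8)), 0.0, 1.0)

# Illustrative color-cast image: channel means 0.2 / 0.4 / 0.6.
img = np.stack([np.full((4, 4), 0.2),
                np.full((4, 4), 0.4),
                np.full((4, 4), 0.6)], axis=-1)
balanced = gray_world(img)   # all three channel means become 0.4
```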

    Although white balancing is important for recovering the colors attenuated as light passes through water, it is not sufficient to restore edges or to resolve the dehazing problem caused by scattering. We therefore introduce an effective fusion scheme relying on CLAHE, gamma correction, and sharpening to reduce the fogginess of the white-balanced image.

  5. FUSION METHOD

    The fusion method enhances the underwater image. Several techniques can be used for the fusion step; using the Laplacian pyramid to reduce back-scattering effects gives a better result for underwater image enhancement. The fusion process takes two input images. Our underwater image enhancement technique consists of three main steps: deriving the inputs from the white-balanced image, defining the weight maps, and fusing the inputs with their weight maps.

    1. Inputs for Fusion Process

      Color correction is very important for underwater images and is done by the white balancing technique. To obtain the first input, we apply CLAHE (Contrast-Limited Adaptive Histogram Equalization) to brighten the image and improve its overall contrast, followed by gamma correction, which targets global contrast.
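The two operations behind the first input can be illustrated with numpy alone. Gamma correction is exact; for the histogram step, a plain global equalization is shown as a stand-in, since CLAHE as used in the paper adds tiling and clip-limiting on top of this idea (in practice one would call an existing CLAHE implementation such as OpenCV's):

```python
import numpy as np

def gamma_correct(img, gamma=1.5):
    """Global contrast adjustment: out = in**gamma for img in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def hist_equalize(channel, bins=256):
    """Plain global histogram equalization of one channel in [0, 1];
    CLAHE refines this with local tiles and a clip limit."""
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                     # CDF normalized to [0, 1]
    idx = np.clip((channel * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]                                    # remap through the CDF

vals = np.linspace(0.0, 1.0, 100)
eq = hist_equalize(vals)        # monotone remapping of the intensities
```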

      The second input is a sharpened version of the white-balanced image. We follow the unsharp masking principle, using a Gaussian filter to produce the blurred version. The sharpened image is classically expressed as S = I + β (I - G∗I), where I is the image to sharpen, G∗I is its Gaussian-filtered version, and β is a parameter whose choice is not trivial: a small β fails to sharpen I, while a very large β produces over-saturated regions with darker shadows and brighter highlights. To avoid this problem, we define the sharpened image S as follows:

      S = (I + N{I - G∗I}) / 2, (6)

      where N{·} denotes the linear normalization operator, also called histogram stretching. This second input mainly reduces the degradation caused by scattering.
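A minimal numpy sketch of Eq. (6), with a hand-rolled separable Gaussian blur so the example needs no external imaging library (kernel size and sigma are illustrative choices):

```python
import numpy as np

def gaussian_blur(img, ksize=5, sigma=1.0):
    """Separable Gaussian blur of a 2-D array, edge-padded."""
    r = ksize // 2
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    H, W = img.shape
    pad = np.pad(img, r, mode="edge")
    tmp = sum(k[i] * pad[:, i:i + W] for i in range(ksize))   # horizontal pass
    return sum(k[i] * tmp[i:i + H, :] for i in range(ksize))  # vertical pass

def normalized_unsharp(I):
    """Eq. (6): S = (I + N{I - G*I}) / 2, where N{} stretches the
    detail signal to the full [0, 1] range (histogram stretching)."""
    detail = I - gaussian_blur(I)
    span = detail.max() - detail.min()
    if span > 0:
        detail = (detail - detail.min()) / span
    return (I + detail) / 2.0

rng = np.random.default_rng(1)
S = normalized_unsharp(rng.random((12, 12)))   # stays within [0, 1]
```

Averaging with the stretched detail signal is what makes the result parameter-free: no β has to be tuned, and the output cannot over-saturate.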

    2. Weights For Fusion Process

      During the blending process, weight maps are used so that pixels with high weights are preserved in the output image. They are defined based on image quality and saliency metrics.

      Laplacian contrast weight (WL): the absolute value of a Laplacian filter applied to the luminance channel of each input estimates the global contrast. This efficiently distinguishes flat regions from ramp regions, but it is not sufficient on its own for underwater image enhancement, so additional weights are used to assess contrast.

      Saliency weight (WS): focuses on regaining the salient objects that are degraded in the underwater image. The saliency level is computed with the saliency estimator of Achantay et al. [26]. Since this weight tends to favor brightened areas, another weight map is used to reduce the saturation in bright regions and counteract that bias.

      Saturation weight (WSat): lets the fusion algorithm adopt chromatic information by favoring highly saturated regions. For each input k, the weight map is computed from the deviation between the luminance Lk of the kth input and its Rk, Gk, Bk color channels:

      WSat = sqrt( (1/3) [ (Rk - Lk)² + (Gk - Lk)² + (Bk - Lk)² ] ) (7)
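Eq. (7) is simply the per-pixel standard deviation of the color channels around the luminance. In the sketch below the luminance is approximated as the mean of R, G, B, which is an assumption (the paper does not fix the luminance formula here):

```python
import numpy as np

def saturation_weight(img):
    """Eq. (7): per-pixel standard deviation of (R, G, B) around the
    luminance L. Gray pixels get weight 0; saturated pixels get
    large weights."""
    L = img.mean(axis=-1)                  # simple luminance estimate (assumed)
    diff = img - L[..., None]
    return np.sqrt((diff ** 2).mean(axis=-1))

pure_red = np.array([[[1.0, 0.0, 0.0]]])   # fully saturated pixel
gray = np.full((2, 2, 3), 0.5)             # achromatic patch
w_red = saturation_weight(pure_red)        # sqrt(2/9), clearly positive
w_gray = saturation_weight(gray)           # exactly zero everywhere
```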

      The aggregate weight map Wk is obtained for every input k by adding the three weight maps WL, WS, and WSat. The K maps are then normalized pixel by pixel as W̄k = (Wk + δ) / (ΣK Wk + K δ), where δ is a small regularization term, set to 0.1, that ensures each input contributes to the output.
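The pixelwise normalization of the K aggregated maps can be written in a few lines; after it, the K maps sum to one at every pixel, and the δ term keeps every input contributing even where its raw weight is zero:

```python
import numpy as np

def normalize_weights(weight_maps, delta=0.1):
    """Normalize K aggregated weight maps per pixel:
    Wbar_k = (W_k + delta) / (sum_k W_k + K*delta)."""
    W = np.stack(weight_maps)                 # shape (K, H, W)
    K = W.shape[0]
    return (W + delta) / (W.sum(axis=0, keepdims=True) + K * delta)

# Two complementary 1x2 weight maps.
W1 = np.array([[1.0, 0.0]])
W2 = np.array([[0.0, 1.0]])
Wn = normalize_weights([W1, W2])
# At each pixel Wn sums to 1; the zero-weight input still gets 0.1/1.2.
```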

      Using only two inputs, and omitting an exposedness weight map, which tends to amplify artifacts, also keeps the overall complexity low.

    3. Fusion Process

    The enhanced image is reconstructed by fusing the input images with their weight maps at every pixel. The reconstructed image R(x) is obtained by fusing the inputs with the weight maps at each pixel location x:

    R(x) = Σk W̄k(x) Ik(x) (8)

    where Ik is the kth input weighted by the normalized weight map W̄k. Applied naively, this approach produces unwanted halos [27]; the Laplacian pyramid method is commonly used to avoid this problem.
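The naive single-scale blend of Eq. (8) is a one-liner; it is shown here mainly as the baseline that the multiscale version improves on:

```python
import numpy as np

def naive_fusion(inputs, norm_weights):
    """Eq. (8): R(x) = sum_k Wbar_k(x) * I_k(x), computed per pixel.
    Applied directly at full resolution this can create halos, which
    is why the multiscale Laplacian-pyramid variant is preferred."""
    return sum(w[..., None] * img for w, img in zip(norm_weights, inputs))

# Two constant RGB inputs blended 25/75.
A = np.zeros((2, 2, 3))
B = np.ones((2, 2, 3))
wA = np.full((2, 2), 0.25)
wB = np.full((2, 2), 0.75)
R = naive_fusion([A, B], [wA, wB])   # every value equals 0.75
```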

    In the multiscale fusion method, each image is decomposed into band-pass images represented as a pyramid. At each level of the pyramid, the input is filtered with a low-pass Gaussian kernel G and the filtered image is decimated by a factor of 2 in both directions. The low-pass image is then subtracted from the input to obtain the band-pass (Laplacian) level, and the decimated low-pass image serves as input for the next level of the pyramid.

    Letting Gl denote a sequence of l low-pass filtering and decimation operations followed by l up-sampling operations, the N levels Ll of the pyramid are obtained as:

    I(x) = I(x) - G1{I(x)} + G1{I(x)}
         = L1{I(x)} + G1{I(x)} - G2{I(x)} + G2{I(x)}
         = L1{I(x)} + L2{I(x)} + G2{I(x)}
         = …
         = Σ(l=1..N) Ll{I(x)} (9)

    Here, Ll and Gl denote the lth level of the Laplacian and Gaussian pyramid, respectively.

    The fusion is then performed independently at each level l, with the Gaussian-smoothed normalized weights:

    Rl(x) = Σk Gl{W̄k(x)} Ll{Ik(x)} (10)

    where l indexes the pyramid levels and k the input images. The number of levels N generally depends on the image size.
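Eqs. (9) and (10) together can be sketched compactly. For simplicity this sketch uses nearest-neighbour decimation and up-sampling in place of the Gaussian low-pass filtering of [25], works on grayscale inputs, and assumes image sides divisible by 2^(levels-1); the pyramid bookkeeping is otherwise faithful:

```python
import numpy as np

def downsample(img):
    return img[::2, ::2]                       # decimate by 2 in both directions

def upsample(img, shape):
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[l] - upsample(gp[l + 1], gp[l].shape) for l in range(levels - 1)]
    lp.append(gp[-1])                          # coarsest level is low-pass residual
    return lp

def multiscale_fuse(inputs, weights, levels=3):
    """Eq. (10): at each level l, sum over inputs k of
    Gl{Wbar_k} * Ll{I_k}; then collapse the fused pyramid (Eq. 9)."""
    fused = None
    for img, w in zip(inputs, weights):
        lp = laplacian_pyramid(img, levels)
        wp = gaussian_pyramid(w, levels)
        terms = [wl * ll for wl, ll in zip(wp, lp)]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):        # coarse-to-fine collapse
        out = upsample(out, fused[l].shape) + fused[l]
    return out

# Sanity check: two identical inputs with constant weights 0.5
# must reconstruct the original image exactly.
rng = np.random.default_rng(2)
img = rng.random((8, 8))
w = np.full((8, 8), 0.5)
out = multiscale_fuse([img, img], [w, w], levels=3)
```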

    Multiscale fusion is very sensitive to edges and sharp regions, which matches human visual perception, and therefore gives a better result for underwater image enhancement.

  6. RESULT AND DISCUSSIONS

    Using this method, an enhanced version of the underwater image is obtained by combining white balance, fusion, and CLAHE.

    Fig-2: Input Image Fig-3: White Balanced image

    Fig-4: CLAHE image Fig-5: Gamma Corrected image

    Fig-6: Sharpened image Fig-7: Enhanced image

  7. CONCLUSION

We have proposed a system for enhancing underwater images that removes the unwanted color cast due to wavelength attenuation, as well as fogginess and blurriness. The method is based on white balance and fusion. The Laplacian fusion used here is inspired by the human visual system and is well suited to underwater images because it suppresses back-scattering effects, yielding a more enhanced output.

REFERENCES

  1. J. Floor Anthoni, 2005.

  2. Luz Abril Torres-Mndez and Gregory Dudek, Color Correction of Underwater Images for Aquatic Robot Inspection Lecture Notes in Computer Science 3757, Springer A. Rangarajan, B.C. Vemuri, A.L. Yuille (Eds.), 2005, pp. 60-73, ISBN:3-540- 30287-5.

  3. Balvant Singh, Ravi Shankar Mishra, and Puran Gour, IJCTEE, vol. 1, issue 2, 2012.

  4. White, E.M., Partridge, U.C., and Church, S.C., Ultraviolet dermal reflection and mate choice in the guppy, pp. 693-700, 2013.

  5. B. L. McGlamery, A computer model for underwater camera systems, Proc. SPIE, vol. 208, pp. 221-231, Oct. 1979.

  6. J. S. Jaffe, Computer modeling and the design of optimal underwater imaging systems, IEEE J. Ocean. Eng., vol. 15, no. 2, pp. 101-111, Apr. 1990.

  7. H. Koschmieder, Theorie der horizontalen Sichtweite, Beitrage Phys. Freien Atmos., vol. 12, pp. 171-181, 1924.

  8. M. D. Kocak, F. R. Dalgleish, M. F. Caimi, and Y. Y. Schechner, A focus on recent developments and trends in underwater imaging, Marine Technol. Soc. J., vol. 42, no. 1, pp. 52-67, 2008.

  9. D.-M. He and G. G. L. Seet, Divergent-beam LiDAR imaging in turbid water, Opt. Lasers Eng., vol. 41, pp. 217-231, Jan. 2004.

  10. M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, Synthetic aperture confocal imaging, in Proc. ACM SIGGRAPH, Aug. 2004, pp. 825-834.

  11. Y. Y. Schechner and Y. Averbuch, Regularized image recovery in scattering media, IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1655-1660, Sep. 2007.

  12. S. G. Narasimhan and S. K. Nayar, Contrast restoration of weather degraded images, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713-724, Jun. 2003.

  13. S. K. Nayar and S. G. Narasimhan, Vision in bad weather, in Proc. IEEE ICCV, Sep. 1999, pp. 820-827.

  14. J. Kopf et al., Deep photo: Model-based photograph enhancement and viewing, ACM Trans. Graph., vol. 27, Dec. 2008, Art. no. 116.

  15. H. Horvath, On the applicability of the Koschmieder visibility formula, Atmos. Environ., vol. 5, no. 3, pp. 177-184, Mar. 1971.

  16. K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, in Proc. IEEE CVPR, Jun. 2009, pp. 1956-1963.

  17. K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341-2353, Dec. 2011.

  18. J. Y. Chiang and Y.-C. Chen, Underwater image enhancement by wavelength compensation and dehazing, IEEE Trans. Image Process., vol. 21, no. 4, pp. 1756-1769, Apr. 2012.

  19. P. Drews, Jr., E. Nascimento, F. Moraes, S. Botelho, and M. Campos, Transmission estimation in underwater single images, in Proc. IEEE ICCV, Dec. 2013, pp. 825-830.

  20. A. Galdran, D. Pardo, A. Picón, and A. Alvarez-Gila, Automatic red-channel underwater image restoration, J. Vis. Commun. Image Represent., vol. 26, pp. 132-145, Jan. 2015.

  21. S. Emberton, L. Chittka, and A. Cavallaro, Hierarchical rank-based veiling light estimation for underwater dehazing, in Proc. BMVC, 2015, pp. 125.1-125.12.

  22. H. Lu, Y. Li, L. Zhang, and S. Serikawa, Contrast enhancement for images in turbid water, J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 32, no. 5, pp. 886-893, May 2015.

  23. R. Fattal, Dehazing using color-lines, ACM Trans. Graph., vol. 34, Nov. 2014, Art. no. 13.

  24. H. Lu, Y. Li, S. Nakashima, H. Kim, and S. Serikawa, Underwater image super-resolution by descattering and fusion, IEEE Access, vol. 5, pp. 670-679, 2017.

  25. Codruta O. Ancuti, Cosmin Ancuti, Christophe De Vleeschouwer, and Philippe Bekaert, Color balance and fusion for underwater image enhancement, IEEE Trans. Image Process., vol. 27, no. 1, January 2018.

  26. R. Achantay, S. Hemamiz, F. Estraday, and S. Susstrunk, Frequency-tuned salient region detection, in Proc. IEEE CVPR, Jun. 2009, pp. 1597-1604.

  27. C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, Enhancing underwater images and videos by fusion, in Proc. IEEE CVPR, Jun. 2012, pp. 81-88.

  28. P. L. J. Drews, Jr., E. R. Nascimento, S. S. C. Botelho, and M. F. M. Campos, Underwater depth estimation and image restoration based on single images, IEEE Comput. Graph. Appl., vol. 36, no. 2, pp. 24-35, Mar./Apr. 2016.

  29. K. B. Gibson, D. T. Vo, and T. Q. Nguyen, An investigation of dehazing effects on image and video coding, IEEE Trans. Image Process., vol. 21, no. 2, pp. 662-673, Feb. 2012.

  30. N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, Initial results in underwater single image dehazing, in Proc. IEEE OCEANS, Sep. 2010, pp. 1-8.

  31. M. Grundland, R. Vohra, G. P. Williams, and N. A. Dodgson, Cross dissolve without cross fade: Preserving contrast, color and salience in image compositing, Comput. Graph. Forum, vol. 25, no. 3, pp. 577-586, 2006.

  32. E. P. Bennett, J. L. Mason, and L. McMillan, Multispectral bilateral video fusion, IEEE Trans. Image Process., vol. 16, no. 5, pp. 1185-1194, May 2007.

  33. C. O. Ancuti, C. Ancuti, and P. Bekaert, Effective single image dehazing by fusion, in Proc. IEEE ICIP, Sep. 2010, pp. 3541-3544.

  34. T. Mertens, J. Kautz, and F. Van Reeth, Exposure fusion: A simple and practical alternative to high dynamic range photography, Comput. Graph. Forum, vol. 28, no. 1, pp. 161-171, 2009.

  35. G. C. Rafael and W. E. Richard, Digital Image Processing. Englewood Cliffs, NJ, USA: Prentice-Hall, 2008.

  36. A. Ortiz, M. Simó, and G. Oliver, A vision system for an underwater cable tracker, Mach. Vis. Appl., vol. 13, pp. 129-140, Jul. 2002.

  37. A. Olmos and E. Trucco, Detecting man-made objects in unconstrained subsea videos, in Proc. BMVC, Sep. 2002, pp. 1-10.
