Feature Enhancement in Visually Impaired Images

DOI : 10.17577/IJERTV9IS090120

V. Sathananthavathi

Student, Department of ECE

Mepco Schlenk Engg. College

R. Raj Pradeep

Student, Department of ECE

Mepco Schlenk Engg. College

D. Kandavel

Student, Department of ECE

Mepco Schlenk Engg. College

Abstract- Improving the features of a visually impaired image is a major open problem in computer vision, and many algorithms are available for this task. This paper deals with two different methodologies for improving visually impaired images. The first methodology, the Phase Stretch Transform (PST), improves the image through edge detection and image enhancement. PST is a recently introduced computational approach for image processing; however, it can be applied only after converting the colour image to a grayscale image. The second methodology is Adaptive Gamma Correction (AGC), which enhances the contrast of the image with parameters set automatically by the method itself. AGC can enhance both colour and grayscale images and requires no conversion of the colour image to grayscale.

Keywords: AGC, PST, Peak Signal to Noise Ratio, Maximum Difference, Normalized Absolute Error.

I. INTRODUCTION

In this digital world, digital cameras are becoming cheap and affordable to people of all backgrounds, but most people are not aware of how to capture a perfect shot with a camera. Captured images are often degraded by atmospheric changes, the poor quality of the image-capturing device, the lack of operator expertise, and similar factors. Since these factors are unpredictable and vary from place to place, the user cannot expect a perfect image. The aim of this paper is to enhance the visibility of the important features of the image after it is captured, using digital image processing.

Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. To process a digital image, the user must have basic knowledge of digital images. A digital image is a 2-D signal defined by a mathematical function f(x, y), where x and y are the horizontal and vertical coordinates. In other words, digital images are made of picture elements called pixels. Typically, pixels are organized in an ordered rectangular array, and the size of an image is determined by the dimensions of this pixel array. The image width is the number of columns and the image height is the number of rows, so the pixel array is a matrix of M columns x N rows. A specific pixel within the image matrix is referred to by its x and y coordinates. The coordinate system of image matrices defines x as increasing from left to right and y as increasing from top to bottom.

Digital images play a major role in medical image processing, satellite image analysis, texture analysis and synthesis, remote sensing, digital photography, surveillance, and video processing applications. A digital image must therefore represent the features of the object accurately, or it may lead to misconceptions. This paper aims at extracting the features from a visually impaired image. One of the important operations in digital image processing is image enhancement, the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, the user can remove noise, sharpen, or brighten an image, making it easier to identify key features. This is achieved by varying properties such as saturation, sharpness, noise, tonal adjustment, tonal balance, and contrast. Some of the commonly used image enhancement techniques are histogram equalization, noise removal using a Wiener filter, linear contrast adjustment, median filtering, unsharp mask filtering, and contrast-limited adaptive histogram equalization (CLAHE).
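As a brief illustration of a few of the techniques listed above, the MATLAB sketch below (assuming the Image Processing Toolbox and a grayscale input image) applies histogram equalization, median filtering, and CLAHE. It is only an example of these general operations, not the method proposed in this paper.

```matlab
% Illustrative use of common enhancement operations mentioned above.
% Assumptions: Image Processing Toolbox is available; input image is grayscale.
I = imread('pout.tif');          % low-contrast sample image shipped with MATLAB

histEq  = histeq(I);             % global histogram equalization
medFilt = medfilt2(I, [3 3]);    % 3x3 median filtering (noise removal)
clahe   = adapthisteq(I);        % contrast-limited adaptive histogram equalization

figure;
subplot(2,2,1), imshow(I),       title('Input');
subplot(2,2,2), imshow(histEq),  title('Histogram equalized');
subplot(2,2,3), imshow(medFilt), title('Median filtered');
subplot(2,2,4), imshow(clahe),   title('CLAHE');
```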

B. Jalali et al. [1] proposed feature enhancement in visually impaired images, in which the Phase Stretch Transform, a physics-inspired algorithm, equalizes the input brightness across a range of intensities, resulting in a high dynamic range in visually impaired images. M. H. Asghari and B. Jalali [2] noted that the motivation behind the algorithm is the observation that salient image feature points are often characterized by changes in local curvature, and that a scale interaction model localizes such image information. M. H. Asghari and B. Jalali [3] showed that the output phase of the transform reveals transitions in image intensity and can be used for edge detection and feature extraction. M. Arockia Helan et al. [4] explained the use of PST and CLAHE for enhancing visually impaired images and compared the performance with the Sobel edge detector.

Shanto Rahman, Md. Mostafijur Rahman, M. Abdullah-Al-Wadud, Golam Dastegir Al-Quaderi and Mohammad Shoyaib [5] proposed AGC, which classifies images according to the contrast level present in the image and further divides them into dark and bright images.

II. PHASE STRETCH TRANSFORM

The Phase Stretch Transform (PST) is a new computational approach for signal and image processing. PST has its roots in the photonic time stretch technique, a powerful method for real-time measurement of ultra-fast events. PST is an optics-inspired algorithm whose properties can be exploited to develop advanced algorithms for feature extraction from digital images. Fig. 1 shows the block diagram of PST.

The blocks in PST processing include:

      • Image Acquisition
      • Image Pre-Processing
      • Construct PST

        FIG 1 BLOCK DIAGRAM OF PST

A. IMAGE ACQUISITION

Digital image acquisition is the process of creating a photographic representation of, for example, a physical scene or the interior structure of an object. The term is often taken to include the processing, compression, storage, printing, and display of such images. Depending on the type of sensor, the resulting image data may be a standard 2-D image, a 3-D volume, or an image sequence. Resampling is performed to ensure that the image coordinate system is correct. Fig. 2 shows the input image.

B. IMAGE PRE-PROCESSING

Image pre-processing refers to operations on images at the lowest level of abstraction. The uigetfile function is used to select the input image from the dataset. After the input image is obtained, the pre-processing step is performed: the input image is resized to 512 x 512 pixels for processing. Fig. 3 shows the grayscale image.
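A minimal sketch of this acquisition and pre-processing step is shown below, assuming a MATLAB environment with the Image Processing Toolbox; the variable names are illustrative and not taken from the authors' code.

```matlab
% Acquisition and pre-processing sketch: select an image, convert to
% grayscale, and resize to 512 x 512 as described above.
% Assumptions: Image Processing Toolbox available; variable names are illustrative.
[fileName, pathName] = uigetfile({'*.jpg;*.png;*.tif', 'Image files'}, ...
                                 'Select the input image');
inputImage = imread(fullfile(pathName, fileName));

if size(inputImage, 3) == 3
    grayImage = rgb2gray(inputImage);   % PST operates on a single channel
else
    grayImage = inputImage;
end

grayImage = imresize(grayImage, [512 512]);   % fixed processing size
grayImage = im2double(grayImage);             % scale to [0, 1] for later steps
imshow(grayImage); title('Pre-processed grayscale image');
```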

C. CONSTRUCT PST

PST is a physics-inspired digital image transformation that emulates the propagation of electromagnetic waves through a diffractive medium whose dielectric function has a warped dispersive property. PST has parameters and a mathematical phase function for detecting the edges in the image.

The parameters that are required to be designed for the proposed edge detection method are:

1. S and W: strength (S) and warp (W) of the applied phase kernel
2. Bandwidth of the localization kernel
3. Threshold value applied to the output phase

The kernel parameters (S and W) control the edge detection process.

PST Calculation:

$$A[n,m] = \angle\left\{\mathrm{IFFT2}\left(\tilde{K}[p,q]\cdot\tilde{L}[p,q]\cdot\mathrm{FFT2}\{B[n,m]\}\right)\right\}$$

where $B[n,m]$ is the input image, $n$ and $m$ are the spatial variables, $p$ and $q$ are the spatial frequency variables, FFT2 and IFFT2 denote the 2-D Fourier transform and its inverse, and $\angle\{\cdot\}$ is the angle (phase) operator. The function $\tilde{L}[p,q]$ is the frequency response of the localization kernel, and the warped phase kernel $\tilde{K}[p,q]$ is described by a nonlinear frequency-dependent phase:

$$\tilde{K}[p,q] = e^{\,j\varphi[p,q]}, \qquad \varphi[p,q] = \varphi_{\mathrm{polar}}[r,\theta] = \varphi_{\mathrm{polar}}[r]$$

$$\varphi_{\mathrm{polar}}[r] = S\cdot\frac{W r \tan^{-1}(W r) - \tfrac{1}{2}\ln\!\left(1+(W r)^{2}\right)}{W r_{\max} \tan^{-1}(W r_{\max}) - \tfrac{1}{2}\ln\!\left(1+(W r_{\max})^{2}\right)}$$

where $r=\sqrt{p^{2}+q^{2}}$, $\theta=\tan^{-1}(q/p)$, $\ln(\cdot)$ is the natural logarithm, and $r_{\max}$ is the maximum frequency. $S$ and $W$ are real-valued numbers related to the strength (S) and warp (W) of the phase profile applied to the image. Fig. 4 shows the edges detected using PST.
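The short MATLAB sketch below illustrates how such a kernel could be constructed and applied. It is our own simplified reading of the equations above; the function name, the Gaussian form chosen for the localization kernel, and the frequency normalization are assumptions for illustration, not the authors' code.

```matlab
function [edgeMap, phi] = pst_sketch(B, S, W, LPF, threshMin, threshMax)
% Simplified PST sketch based on the equations above.
% Assumptions: B is a grayscale image in [0,1]; LPF is treated as the standard
% deviation of a Gaussian localization kernel; parameter names follow the
% values quoted later in the results section.
    [N, M] = size(B);
    u = linspace(-0.5, 0.5, N);            % normalized frequency axes
    v = linspace(-0.5, 0.5, M);
    [P, Q] = meshgrid(v, u);
    r = sqrt(P.^2 + Q.^2);                 % radial frequency
    rMax = max(r(:));

    % Localization kernel L~[p,q] (low-pass, Gaussian form assumed here)
    Lk = exp(-(r.^2) ./ (2 * LPF^2));

    % Warped phase kernel phi_polar[r], normalized so that its peak equals S
    num = W.*r   .* atan(W.*r)   - 0.5*log(1 + (W.*r).^2);
    den = W*rMax *  atan(W*rMax) - 0.5*log(1 + (W*rMax)^2);
    phiKernel = S * num / den;

    % Apply both kernels in the frequency domain and take the output phase
    % (the minus sign is only a phase-sign convention common in PST codes)
    Bf  = fft2(B);
    out = ifft2( fftshift(exp(-1j*phiKernel) .* Lk) .* Bf );
    phi = angle(out);

    % Threshold the output phase to obtain a binary edge map
    edgeMap = (phi > threshMax) | (phi < threshMin);
end
```

In practice the binary edge map is often cleaned up further with simple morphological operations before it is overlaid on the input image.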

FIG 2 INPUT IMAGE

FIG 3 GRAYSCALE IMAGE

FIG 4 EDGE DETECTED IN IMAGE USING PST

FIG 5 OVERLAY IMAGE

After the edges are detected using PST, an overlay is created from the detected edges (Fig. 5 shows the overlay image). The overlay image and the input image are fused together, the fused image is complemented, the haze is reduced, and the result is complemented back. A high-boost filter is then used to further enhance the image.

FIG 6 IMAGE WITH PST AND HIGH BOOST APPLIED

A high-boost filter enhances the high-frequency components while keeping the low-frequency components unchanged. Fig. 6 shows the enhanced image with the high-boost filter applied.
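A possible MATLAB realization of this high-boost step is sketched below; the boost factor and the use of a Gaussian blur as the low-pass stage are assumptions for illustration, since the paper does not specify them.

```matlab
function sharpened = high_boost(img, k)
% High-boost filtering sketch: amplify high frequencies, keep low frequencies.
% Assumptions: image values in [0,1]; Gaussian blur as the low-pass stage;
% k is a user-chosen boost factor (k = 1 reduces to unsharp masking).
    lowPass   = imgaussfilt(img, 2);          % low-frequency component
    highPass  = img - lowPass;                % high-frequency component
    sharpened = img + k * highPass;           % boost the high frequencies
    sharpened = min(max(sharpened, 0), 1);    % clip back to the valid range
end
```

For instance, high_boost(fusedImage, 1.5) could be applied to the fused overlay image described above (fusedImage and the factor 1.5 are illustrative).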

FIG 7 FLOW CHART OF THE AGC

III. ADAPTIVE GAMMA CORRECTION

Adaptive gamma correction (AGC) is a contrast enhancement technique. Contrast is the difference in luminance or colour that makes an object (or its representation in an image or display) distinguishable. The existing contrast enhancement techniques can be categorized into three groups: global, local, and hybrid techniques; hybrid enhancement techniques combine both global and local enhancement.

A. Flow chart of the AGC

The main objective of the AGC is to transform an image into a visually pleasing one by maximizing the detail information. This is done by improving the contrast and brightness without incurring any visual artifact. Fig. 7 shows the flow chart of the model.

B. Color transformation

Several colour models, such as red-green-blue (RGB), Lab, HSV, and YUV, are available in the image processing domain. However, images are usually available in the RGB colour space, where the three channels are highly correlated, and hence intensity transformations done in the RGB space are likely to change the colour of the image. For AGC, this project adopts the HSV colour space, which separates the colour and brightness information of an image into hue (H), saturation (S), and value (V) channels. The HSV colour model provides several advantages, such as representing colour in a way well suited to human perception and separating the colour information completely from the brightness (or lightness) information. Hence, enhancing the V channel does not change the original colour of a pixel.

C. Image classification

Every image has its own characteristics, and the enhancement should be done based on them. To appropriately handle different images, the proposed AGC first classifies an input image I into either the low-contrast class Q1 or the high (or moderate) contrast class Q2, depending on the available contrast of the image, using Eq. (1):

$$\mathrm{class}(I) = \begin{cases} Q_1, & D \le 1/\tau \\ Q_2, & \text{otherwise} \end{cases} \qquad (1)$$

where D = diff((μ + 2σ), (μ - 2σ)) and τ is a parameter used for defining the contrast of an image; σ and μ are the standard deviation and mean of the image intensity, respectively. Equation (1) classifies an image as a low-contrast one when most of the pixel intensities of that image are clustered within a small range. The criterion in Eq. (1) is chosen guided by Chebyshev's inequality, which states that at least 75% of the values of any distribution stay within 2σ around the mean on both sides. This leads to the simpler form of the criterion for an image to be classified as a low-contrast one: 4σ ≤ 1/τ. From our experience, we have found that τ = 1 is a suitable choice for characterizing the contrasts of different images.

Again, depending on the brightness of the image, different image intensities should be modified differently. Hence, each of the Q1 and Q2 classes is divided into two sub-classes, bright and dark, based on whether the image mean intensity μ ≥ 0.5 or not. Thus, AGC makes use of the four classes shown in Fig. 8.

FIG 8 IMAGE CLASSIFICATION

D. Intensity transformation

The transformation function of the proposed AGC is based on the traditional gamma correction given by

$$I_{out} = c \, I_{in}^{\gamma} \qquad (2)$$

where Iin and Iout are the input and output image intensities, respectively, and c and γ are two parameters that control the shape of the transformation curve. In contrast to traditional gamma correction, AGC sets the values of γ and c automatically using image information, making it an adaptive method.

D.1 Enhancement of low-contrast images

According to the classification done in Eq. (1), the images falling into group Q1 have poor contrast. A low σ implies that most of the pixels have similar intensities, so the pixel values should be scattered over a wider range to enhance the contrast. In gamma correction, γ controls the slope of the transformation function: the higher the value of γ, the steeper the transformation curve becomes, and the steeper the curve, the more the corresponding intensities are spread, causing a larger increase of contrast. In AGC, this is done for low-contrast images by choosing the value of γ calculated by

$$\gamma = -\log_{2}(\sigma) \qquad (3)$$

In traditional gamma correction, c is used for brightening or darkening the output image intensities. In AGC, however, c is allowed to have more influence on the transformation. The proposed AGC uses different values of c for different images depending on the nature of the respective image, according to

$$c = \frac{1}{1 + \mathrm{Heaviside}(0.5 - \mu)\,(k - 1)} \qquad (4)$$

where k is defined by

$$k = I_{in}^{\gamma} + \left(1 - I_{in}^{\gamma}\right)\mu^{\gamma} \qquad (5)$$

and the Heaviside function is given by

$$\mathrm{Heaviside}(x) = \begin{cases} 0, & x \le 0 \\ 1, & x > 0 \end{cases} \qquad (6)$$

Such choices of γ and c enable AGC to handle bright and dark images in the Q1 class in different and appropriate manners. The effectiveness of the proposed transformation function is described in the following subsections.

D.1.1 Bright images in Q1

For low-contrast bright images (μ ≥ 0.5), the major concern is to increase the contrast for better distinguishability of the image details that are made up of high intensities. For this, the transformation curve should spread out the bright intensities over a wider range of darker intensities; to achieve this, according to AGC, γ should be larger than 1. In AGC, according to Eq. (4), c becomes 1 for such images, and the transformation function becomes

$$I_{out} = I_{in}^{\gamma} \qquad (7)$$

D.1.2 Dark images in Q1

Most of the intensities of an image in this class are clustered in a small range of dark gray levels around the image mean. For increasing the contrast of such images, the transformation curve needs to spread out the dark intensities towards the higher intensities. This requires a transformation curve that lies above the line Iout = Iin. The transformation function is also desired to spread the clustered intensities more than the other intensities. For a dark image (μ < 0.5) with low contrast, Eqs. (4) and (5) are used and the final transformation function becomes

$$I_{out} = \frac{I_{in}^{\gamma}}{I_{in}^{\gamma} + \left(1 - I_{in}^{\gamma}\right)\mu^{\gamma}} \qquad (8)$$

D.2 Enhancement of high- or moderate-contrast images

An image falls into the Q2 class when the intensities are appreciably scattered over the available dynamic range. Brightness adjustment is usually more important than contrast enhancement in such images. In this case, Iout and c are calculated as in Eqs. (2) and (4), while γ is now calculated differently, so as not to stretch the contrast too much, using

$$\gamma = \exp\!\left[\frac{1 - (\mu + \sigma)}{2}\right] \qquad (8)$$

D.2.1 Dark images in Q2

For images with μ < 0.5, (μ + σ) ≤ 1, since both μ and σ are less than (or equal to) 0.5, which implies γ ≥ 1. Here, the transformation curves pass above the linear curve Iout = Iin, transforming the dark pixels into brighter ones. This increases the visibility of the dark images. For dark images with a larger mean (close to 0.5 but still below 0.5), the transformation curves are very close to the linear curve, i.e., not many changes are made to the intensities.

D.2.2 Bright images in Q2

For this class of images, Iout, c, and γ are calculated using Eqs. (2), (4), and (8), respectively. In this case, the images already have good quality with respect to brightness and contrast, so the main target is to preserve the image quality. The transformation curves lie very close to the line Iout = Iin, causing little change in contrast and ensuring that the intensities do not change much, as expected. Note that for the maximally scattered image, i.e., for σ = μ = 1/2 (when half of the image pixels are at zero intensity and the other half at the maximum intensity 1), AGC does not change the image: it already has the maximum contrast and is already enhanced. Upon the application of AGC, the gray levels of the images are distributed over wider ranges in the histograms, as desired.
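To make the flow above concrete, the MATLAB sketch below strings Eqs. (1)-(8) together on the V channel of an HSV image. It is our own simplified reading of the method; the function name, the default τ value, and the guard against σ = 0 are assumptions, not the authors' implementation.

```matlab
function enhanced = agc_sketch(rgbImage, tau)
% Simplified Adaptive Gamma Correction sketch following Eqs. (1)-(8) above.
% Assumptions: input is an RGB image; tau defaults to 1 as stated in the text;
% only the V channel of HSV is modified, so colours are preserved.
    if nargin < 2, tau = 1; end
    hsv = rgb2hsv(im2double(rgbImage));
    V   = hsv(:, :, 3);                      % brightness channel in [0, 1]

    mu    = mean(V(:));
    sigma = std(V(:));

    if 4*sigma <= 1/tau                      % Eq. (1): low-contrast class Q1
        gamma = -log2(sigma + eps);          % Eq. (3); eps guards against sigma = 0
        if mu >= 0.5                         % bright image in Q1
            Vout = V.^gamma;                 % Eq. (7), c = 1
        else                                 % dark image in Q1
            Vg   = V.^gamma;
            Vout = Vg ./ (Vg + (1 - Vg) .* mu^gamma);        % Eq. (8)
        end
    else                                     % high/moderate-contrast class Q2
        gamma = exp((1 - (mu + sigma)) / 2); % Eq. (8) for gamma
        Vg    = V.^gamma;
        k     = Vg + (1 - Vg) .* mu^gamma;   % Eq. (5)
        c     = 1 ./ (1 + heaviside_agc(0.5 - mu) .* (k - 1));   % Eq. (4)
        Vout  = c .* Vg;                     % Eq. (2)
    end

    hsv(:, :, 3) = min(max(Vout, 0), 1);
    enhanced = hsv2rgb(hsv);
end

function h = heaviside_agc(x)
% Eq. (6): Heaviside step with Heaviside(0) = 0.
    h = double(x > 0);
end
```

Applied to an RGB image, this sketch would produce the kind of V-channel enhancement illustrated later in Figs. 13-15, under the stated assumptions.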

IV. RESULT AND ANALYSIS

A. Comparison Measures

A.1 PSNR

Peak signal-to-noise ratio (PSNR) is the ratio between the maximum possible power of an image and the power of the corrupting noise that affects the quality of its representation. To estimate the PSNR of an image, it is necessary to compare that image to an ideal clean image with the maximum possible power. PSNR is defined as follows:

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{(L-1)^{2}}{\mathrm{MSE}}\right) = 20\log_{10}\!\left(\frac{L-1}{\mathrm{RMSE}}\right) \qquad (9)$$

Here, L is the number of maximum possible intensity levels (the minimum intensity level is assumed to be 0) in an image, and RMSE is the root mean squared error. MSE is the mean squared error, defined as:

$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left(O(i,j) - D(i,j)\right)^{2} \qquad (10)$$

where O represents the matrix data of the original image, D represents the matrix data of the degraded image, m is the number of rows of pixels with i the index of the row, and n is the number of columns of pixels with j the index of the column.

A.2 Maximum Difference

MD (Maximum Difference) provides the maximum of the error signal (i.e., the difference between the processed and reference images). MD is defined as follows:

$$\mathrm{MD} = \max\left(\left|O(i,j) - D(i,j)\right|\right), \quad i = 1,2,\ldots,m, \; j = 1,2,\ldots,n \qquad (11)$$

A.3 Normalised Absolute Error

Normalised absolute error is the total absolute error normalised by the error of simply predicting the average of the actual values. It is defined as follows:

$$\mathrm{NAE} = \frac{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}\left|O(i,j) - D(i,j)\right|}{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n} O(i,j)} \qquad (12)$$
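The three measures can be computed directly from their definitions. The MATLAB sketch below is one straightforward reading of Eqs. (9)-(12), assuming 8-bit images (L = 256) and treating the reference image as O and the processed image as D.

```matlab
function [psnrVal, md, nae] = quality_measures(O, D)
% Quality measures from Eqs. (9)-(12).
% Assumptions: O (reference) and D (processed) are same-sized 8-bit images;
% L = 256 intensity levels.
    O = double(O);
    D = double(D);
    L = 256;

    mse     = mean((O(:) - D(:)).^2);                 % Eq. (10)
    psnrVal = 10 * log10((L - 1)^2 / mse);            % Eq. (9)
    md      = max(abs(O(:) - D(:)));                  % Eq. (11)
    nae     = sum(abs(O(:) - D(:))) / sum(O(:));      % Eq. (12)
end
```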

B. Phase Stretch Transform Result

Various images were analysed by converting the colour image into a grayscale image. Global variables are used for the warp, strength, threshold values, and bandwidth of the localisation kernel. The values of the global variables are: bandwidth of localisation kernel = 0.21, strength = 0.48, warp = 12.14, thresh min = -1, thresh max = 0.0019.
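Using the pst_sketch function outlined earlier in the Construct PST subsection (a hypothetical name, not the authors' code), these global values would be passed as, for example:

```matlab
% Hypothetical call of the earlier PST sketch with the global values quoted above.
[edgeMap, phi] = pst_sketch(grayImage, 0.48, 12.14, 0.21, -1, 0.0019);
imshow(edgeMap); title('PST edge map');
```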

Fig 9 shows the input image, Fig 10 the edge-detected image, Fig 11 the image with PST and high boost applied, and Fig 12 the overlaid image.

Fig 9 Dog 1 input image

Fig 10 Dog 1 Edge image

Fig 11 Dog 2 with PST and High Boost

Fig 12 Dog 2 with overlay

C. Adaptive Gamma Correction Result

In AGC, the mean and standard deviation of the image are calculated. Using the standard deviation and mean, the image is classified as a low-contrast or moderate (or high) contrast image. If the mean value is less than 0.5 the image is dark; if the mean value is greater than or equal to 0.5 the image is bright. Fig 13 shows the input image, Fig 14 shows the low-contrast dark image, and Fig 15 shows the output image. From Table 1 it is evident that AGC provides a better result compared to PST.

Fig 13 Dog 2 input image

Fig 14 Dog 2 moderate dark image

Fig 15 Dog 2 Output image

DOG 2 PSNR MD NAE
PST 21.1806 120.779 0.9743
AGC 21.0076 121.52 0.9961
Table 1 Performance analysis between PST and AGC

V. CONCLUSION

This project proposes two different methodologies for enhancing visually impaired images. The Phase Stretch Transform uses edge detection for enhancing the image and is proposed here to detect the salient features in visually impaired images; however, in PST the colour images need to be converted to grayscale images. Adaptive Gamma Correction is a simple, efficient, and effective technique for contrast enhancement; it has low time complexity and can be applied to both colour and grayscale images. When comparing PST with AGC, AGC has the upper hand because it takes less time to execute and has no requirement to convert the colour image to grayscale, and the analysis of the tables shows its performance to be better than that of PST. As future work, the edge detection of the Phase Stretch Transform needs to be improved.

REFERENCES

  1. Bahram Jalali, Hossein Asghari, and Madhuri Suthar (2017) Feature Enhancement in visually impaired images, IEEE transaction, Volume. 6, pp. 2169-3536.
2. M.H. Asghari and B. Jalali, Physics-inspired image edge detection, in Proc. IEEE Global Conf. Signal Inf. Process., Dec. 2014, pp. 293-296.
3. M.H. Asghari and B. Jalali, Edge detection in digital images using dispersive phase stretch transform, J. Biomed. Imag., vol. 2015, Jan. 2015, Art. no. 6.
  4. M.Arockia Helan, S.Poorna Lekha, Enhancement of Visually Impaired Image Using Phase Stretch Transform, Fifth International Conference on Science Technology Engineering and Mathematics (ICONSTEM)(2019)
5. Shanto Rahman, Md. Mostafijur Rahman, M. Abdullah-Al-Wadud, Golam Dastegir Al-Quaderi and Mohammad Shoyaib, An adaptive gamma correction for image enhancement, EURASIP Journal on Image and Video Processing (2016) 2016:35.
  6. Tarel, J.P., Hautiere, N., Caraffa, L., Cord, A., Halmaoui, H., Gruyer, D.: Vision enhancement in homogeneous and heterogeneous fog. IEEE Intell. Transp. Syst. Mag. 4(2), 620 (2012)
  7. Najmul Hassan, Sami Ullah, Naeem Bhatti, Hasan Mahmood, Muhammad Zia, A cascaded approach for image defogging based on physical and enhancement models, in Signal,Image and Video Processing in Springer Nature 2020 https://doi.org/10.1007/s11760-019- 01618-x.
  8. Christos V Ilioudis, Carmine Clemente, Mohammad H Asghari, Bahram Jalali, John J Soraghan, “Edge detection in SAR images using phase stretch transform”, Proc. 2nd IET International Conference Intelligence Signal Process, vol. 10, pp. 1-5, 2015.
  9. J.-P. Tarel and N. Hautière, Fast visibility restoration from a single color or gray level image, in Proceedings of IEEE International Conference on Computer Vision (ICCV09), Kyoto, Japan, 2009, pp. 2201-2208.
  10. Wang, W., Yuan, X.: Recent advances in image dehazing. IEEE/CAA J. Autom. Sinica 4(39), 410-436 (2017)
  11. Ahmed Zaafouri, Mounir Sayadi, Farhat Fnaiech, “A Developed Unsharp Masking Method for Images Contrast Enhancement”, IEEE Transaction, vol. 26, pp. 978, 2011
  12. B.S.Manjunath, C.Shekhar, R.Chellappa, “A new approach to image feature detection with applications”, Pattern Recognit., vol. 29, pp. 627- 640, Apr. 1996.
13. A.S. Bhushan, F. Coppinger, B. Jalali, “Time-stretched analogue-to-digital conversion”, Electron. Lett., vol. 34, no. 11, pp. 1081-1082, May 1998.
  14. A.Mahjoubfar, C.L.Chen, B.Jalali, “Design of warped stretch transform” in Artificial Intelligence in Label-Free Microscopy, Berlin, Germany:Springer, pp. 101-119, 2017.
  15. T.Ilovitsh, B.Jalali, M.H.Asghari, Z.Zalevsky, “Phase stretch transform for super-resolution localization microscopy”, Biomed. Opt. Exp., vol. 7, no. 10, pp. 4198-4209, Oct. 2016.
  16. M.Pavli, H.Belzner, G.Rigoll, S.Ili, “Image based fog detection in vehicles”, Proc. 4th Intell. Veh. Symp., pp. 1132-1137, Jun. 2012.
  17. Garima Yadav, Saurabh Maheshwari, Anjali Agarwal, Contrast limited adaptive histogram equalization-based enhancement for real time video

    system, International Conference on Advances in Computing, Communications and Informatics (ICACCI) (2014)

  18. Loh, Yuen Peng and Chan, Chee Seng, Getting to Know Low-light Images with The Exclusively Dark Dataset Computer Vision and Image Understanding (2019)

 
