Recovery of Motion-Blurred Video Signal Using Signal Processing and Estimation of Blur

DOI : 10.17577/IJERTCONV2IS04008



Ragini S. Gawande, Assistant Professor (EXTC)
K.C. College of Engineering, Management Studies & Research,
Mumbai University, India
ragini_gawande@yahoo.com

Puja A. Wankhede, Assistant Professor (ETRX)
K.C. College of Engineering, Management Studies & Research,
Mumbai University, India
wankhedepuja@gmail.com

        1. INTRODUCTION

When compared to other display types, including CRT, plasma, and projection displays, LCDs offer high resolution, low cost, a narrow profile, and low power consumption. In addition, many of the original shortcomings of LCD materials and displays have been reduced or eliminated. For instance, LCD viewing angles have been greatly improved, to the point where their performance is comparable to that of CRTs [1].

However, even the most advanced LCD displays available on the market today exhibit motion blur around fast-moving objects in the field of view.


For televisions, the problem manifests itself during scenes containing fast motion, both global and local. On computer monitors, motion blur is most noticeable while scrolling text or watching videos. The goal of this paper is to provide an overview of the causes of LCD motion blur and of solutions to it.

          In particular, we are going to discuss two issues:

          1. How do we model the motion blur caused by LCD?

          2. How do we reduce the motion blur caused by LCD?

            1. What is LCD Motion Blur?

Liquid crystals (LC) are organic fluids that exhibit both liquid and crystalline-like properties. They do not emit light by themselves, but their polarization phase can be changed by applying electric fields. The light sources used in LCDs are typically cold-cathode fluorescent lamps (CCFLs), although LED-based backlights are also becoming more widely available in the market.

Due to the sample-and-hold characteristic of liquid crystals, fast-moving scenes displayed on an LCD are often seen as blurred. This phenomenon is known as LCD motion blur. We emphasize the word motion because if the scene is stationary, an LCD and a CRT give essentially the same level of sharpness.

LCD motion blur is the result of both the slow liquid crystal response time and the inherent sample-and-hold drive nature of the LCD display.

          Figure 1. CRT and LCD rendering.

Figure 1 illustrates the difference in rendering between LCDs and CRTs. On the CRT, the pixel intensity over time consists of a series of pulses, which are much shorter than the frame duration. On an LCD, by contrast, the pixel intensity is sustained for the entire frame period. This hold-type rendering, in combination with the motion-pursuing behavior of the human visual system, then leads to motion blur [2].

          To understand this, consider an observer who tracks a moving edge on a CRT, and compare this to an LCD (Fig. 2). For the CRT, the path of the eye, as indicated by the arrows, when integrated over time (low temporal frequency filtering), does not lead to mixing of black and white in the image, and the moving edge is perceived sharply on the retina.

For the LCD, due to the hold-type rendering, the path of the eye moves through white and black regions. The temporal integration of the eye then leads to a blurred edge on the retina.

Figure 2. Tracking a moving edge on a CRT and an LCD.

Slow-response motion blur

          The second type of motion blur results from the slow response of the LCD. Ideally, when switched, a pixel should reach its target value instantly. In reality, however, the pixel takes a certain amount of time to switch (up to several cycles for older LCDs). This response is illustrated in Fig. 3.

          Figure 3. Slow-response in an LCD.

Slow-response motion blur is commonly reduced through overdrive, demonstrated in Fig. 4. The idea is to apply a larger driving value so that the target value is reached by the end of the frame (a minimal sketch of this idea follows the figure).

          Figure 4. Overdrive.
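To make the overdrive idea concrete, here is a minimal Python sketch; the first-order gain and the 8-bit drive range are illustrative assumptions rather than values from the text.

```python
import numpy as np

def overdrive(target, current, gain=1.5, lo=0, hi=255):
    """Boost the drive value so the pixel reaches `target` within one frame.

    `gain` models how much a plain step undershoots due to the slow LC
    response; the value 1.5 is purely illustrative.
    """
    drive = current + gain * (target - current)   # push past the target value
    return np.clip(drive, lo, hi)                 # stay inside the panel's drive range

# Example: a mid-grey pixel (100) asked to jump to brighter grey (180)
print(overdrive(target=180, current=100))         # -> 220.0, larger than the target itself
```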

        2. MODELING AND ANALYSIS

          A. Display-perception chain

In order to model motion blur, both the LCD and the human visual system need to be accounted for. Figure 5 shows the display-perception chain, which consists of three parts. The first is associated with the display: sample and hold. The second and third are related to the human visual system: motion pursuit and spatio-temporal low-pass filtering [4], [5].

          Figure 5. Display-perception chain.

          Based on this display-perception chain, the LCD motion blur can be modeled as [1]:

$ i_p(x) \;=\; \int i(x - v_x t)\, h_t(t)\, dt \qquad (1) $

          Equation (1) gives the perceived image, compensated for motion with speed vx through eye tracking. It corresponds to the perceived image on an ideal impulse display, convolved with the LCD temporal reconstruction function ht(t). This function includes the LCD temporal response, as well as the hold-type rendering.
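As a rough illustration of Eq. (1), the sketch below simulates the hold-type term for purely horizontal motion on a display with an idealized (instantaneous) LC response, so that h_t(t) reduces to a frame-long box and the eye-tracked image is the ideal frame averaged over |vx| + 1 pixels along the motion; the function name and the frame content are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def hold_type_blur(frame, vx):
    """Simulate Eq. (1) for horizontal motion of vx pixels/frame on an ideal
    sample-and-hold display (zero LC response time): the eye-tracked perceived
    image is the frame convolved with a box of width |vx| + 1."""
    width = abs(int(vx)) + 1
    psf = np.ones((1, width)) / width        # hold-type box PSF along the motion
    return convolve2d(frame, psf, mode='same', boundary='symm')

# A vertical black/white edge moving horizontally at 8 pixels/frame
frame = np.zeros((32, 32)); frame[:, 16:] = 1.0
perceived = hold_type_blur(frame, vx=8)      # the sharp edge becomes a ~9-pixel ramp
```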

        3. PROPOSED METHOD

In order to invert the effects of the motion-dependent spatial sinc, we borrow ideas from the rich deconvolution literature in the areas of spectroscopy and astronomy.

1. Richardson-Lucy Algorithm

  A commonly used iterative method for deconvolving images with an estimated point spread function (PSF) is the Richardson-Lucy (RL) algorithm, whose multiplicative update is

$ o_{k+1}(x) \;=\; o_k(x)\left[\, s(-x, v) * \dfrac{i(x)}{\,(s(x,v) * o_k(x))\,} \right] \qquad (2) $

where o_k is the output image at iteration k, i(x) is the original input image, and s(x, v) is the motion-blur PSF, which operates in the direction of the motion vector v. The blurring PSF is simply an ideal low-pass (box) filter with widths |v_x| + 1 and |v_y| + 1 along the x and y directions, respectively; each of its nonzero elements has value 1/(|v| + 1). For the ratio in (2), we define 0/0 = 1.

The RL algorithm converges to the maximum-likelihood solution under Poisson counting statistics [8]. Even though the pixel values of many natural images and sequences do not follow a Poisson distribution, this assumption gives our converged solution many desirable properties. One key property worth noting is that o_k(x) remains nonnegative as long as s(x, v) and i(x) are nonnegative. Thus, we avoid the uncomfortable situation of the algorithm producing negative pixel values and having to either clip them to zero or map them to some positive value. Another desirable property of the RL iteration is that it does not modify the norm of the image as long as the PSF s(x, v) is properly scaled.
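A minimal Python sketch of the RL iteration with the motion-blur box PSF described above. The flat initial estimate, the small epsilon used in place of the 0/0 = 1 convention, and the restriction to axis-aligned motion are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def motion_psf(vx, vy):
    """Box PSF along the dominant (axis-aligned) motion direction: for purely
    horizontal motion this is a 1 x (|vx|+1) box with taps 1/(|vx|+1).
    Diagonal motion would need a line-shaped PSF (not handled in this sketch)."""
    if abs(vx) >= abs(vy):
        h = np.ones((1, abs(int(vx)) + 1))
    else:
        h = np.ones((abs(int(vy)) + 1, 1))
    return h / h.sum()

def richardson_lucy(i, psf, n_iter=10, eps=1e-12):
    """Multiplicative RL update: o <- o * [ flipped_psf * ( i / (psf * o) ) ].
    eps guards the 0/0 case that the text defines as 1."""
    o = np.full_like(i, i.mean())              # flat, nonnegative initial estimate
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve2d(o, psf, mode='same', boundary='symm')
        ratio = i / np.maximum(blurred, eps)
        o = o * convolve2d(ratio, psf_flipped, mode='same', boundary='symm')
    return o

# Usage on one block with an estimated motion vector (vx, vy) = (6, 0)
block = np.random.rand(16, 16)
estimate = richardson_lucy(block, motion_psf(6, 0), n_iter=10)
```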

2. Comparison to Wiener Filtering

  Despite our best efforts, it is impossible to guarantee that all motion vectors used by the deconvolution procedure are completely accurate. In situations where the obtained motion vector does not accurately represent the true motion in the scene, we will deconvolve with the wrong PSF. Doing so not only limits our ability to reduce the amount of LCD motion blur, but may also introduce artifacts into the scene. Some of the most noticeable artifacts in block-based procedures are discontinuities along block edges and differences in the quality of neighboring blocks (both spatially and temporally). In Fig. 3, we compare the resilience of the RL deconvolution procedure to motion vector errors with that of the Wiener filter under the same conditions. We simulate the artifacts caused by motion vector errors by first deconvolving each block using a PSF created from a noisy translational motion vector and then simulating motion blur using the PSF derived from the true motion vector. For this purpose, we assume translational motion and an additive white Gaussian noise model with independence between the horizontal and vertical components.

The results shown in Fig. 3 were computed by processing each block independently and computing the average PSNR for each block over 50 trials. The mean and variance of the average PSNRs over all the blocks in the image were then computed and plotted. Although both curves maintain a high PSNR, the PSNR of the RL procedure decays much more slowly than that of the Wiener filter as the motion vector noise increases. Furthermore, the variance across blocks is consistently lower, indicating fewer noticeable artifacts.
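The following Python sketch re-creates a single trial of this experiment for one block, assuming horizontal translational motion. The Richardson-Lucy routine is the same minimal one sketched earlier, while the Wiener step uses scikit-image's skimage.restoration.wiener with an arbitrarily chosen balance parameter; neither is the authors' exact implementation, and the block content, noise level, and motion speed are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import wiener

def box_psf(v):                                # horizontal motion-blur PSF, |v|+1 taps
    h = np.ones((1, abs(int(round(v))) + 1)); return h / h.sum()

def rl_deconv(i, psf, n_iter=10, eps=1e-12):   # Richardson-Lucy (see previous sketch)
    o = np.full_like(i, i.mean())
    for _ in range(n_iter):
        ratio = i / np.maximum(convolve2d(o, psf, mode='same', boundary='symm'), eps)
        o = o * convolve2d(ratio, psf[::-1, ::-1], mode='same', boundary='symm')
    return o

def psnr(ref, x, peak=1.0):
    return 10 * np.log10(peak ** 2 / np.mean((ref - x) ** 2))

rng = np.random.default_rng(0)
block = rng.random((16, 16))                   # stand-in for one image block
true_v, mv_noise_sigma = 8, 2.0                # true motion and assumed MV error std-dev
noisy_v = true_v + rng.normal(scale=mv_noise_sigma)

for name, deconv in [("RL", lambda b, p: rl_deconv(b, p)),
                     ("Wiener", lambda b, p: wiener(b, p, balance=0.1))]:
    pre = deconv(block, box_psf(noisy_v))                                   # wrong (noisy-MV) PSF
    shown = convolve2d(pre, box_psf(true_v), mode='same', boundary='symm')  # true motion blur
    print(f"{name}: PSNR = {psnr(block, np.clip(shown, 0, 1)):.2f} dB")
```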

3. Region of Interest Filtering

Using an incorrect PSF in the RL algorithm, as is the case when motion vector errors are present, results in amplified noise, particularly in the smooth regions of the frame. These are also the regions with the least accurate motion vector estimates, since it is difficult to match subtle features in successive frames during motion estimation. In addition, studies have shown that unless objects in the frame have significant detail, the HVS does not track them individually but instead tracks the global motion in the scene [1].

Since we are not very sensitive to motion blur within such regions, we avoid noise amplification by accounting for them in our deconvolution procedure. Completely excluding these regions, as is done in earlier approaches, leads to temporal inconsistencies when regions in successive frames are labeled differently.

In this work, we employ a soft-threshold approach in which we weight the application of the RL procedure by the perceptual significance of the respective region. In order to classify regions by perceptual significance, we define the scaled gradient magnitude (SGM) metric, computed from the gradient operator applied in the respective direction. The logic behind the SGM lies in the fact that perceptually significant regions tend to have strong edges and features that lie perpendicular to the direction of motion and, hence, have a high SGM value.

The SGM value calculated for every block is used to weight the effect of the deconvolution procedure. Letting o(x) be the deconvolved image and i(x) the original frame, we create the compensated image as

$ \hat{i}(x) \;=\; w\, o(x) + (1 - w)\, i(x), \qquad w = \min\left(\mathrm{SGM}/d,\ 1\right) $

where d is a factor set a priori according to the specifications of the LCD. As d increases, the deblurring procedure has a smaller impact on the final image.
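A small sketch of this weighting step. Since the exact SGM formula is not reproduced here, the sgm function below uses an assumed form, the mean gradient component along the motion direction, which matches the stated intuition that edges perpendicular to the motion give high values; the default threshold d = 0.05 is likewise illustrative.

```python
import numpy as np

def sgm(block, vx, vy):
    """Assumed form of the scaled gradient magnitude: mean gradient energy along
    the motion direction (edges perpendicular to the motion give large values).
    The exact definition used by the authors may differ."""
    gy, gx = np.gradient(block.astype(float))
    v = np.hypot(vx, vy) + 1e-12
    return np.mean(np.abs((vx * gx + vy * gy) / v))   # gradient component along motion

def compensate(deconvolved, original, vx, vy, d=0.05):
    """Soft-threshold blend: w = min(SGM / d, 1);  out = w*o + (1 - w)*i.
    d is the display-dependent factor set a priori (value here is illustrative)."""
    w = min(sgm(original, vx, vy) / d, 1.0)
    return w * deconvolved + (1.0 - w) * original

# Flat blocks get w ~ 0 (keep the original); detailed blocks get w ~ 1 (keep the deblurred)
```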

        4. FINAL ALGORITHM

          1. Estimate the motion vector (v) for each block in the frame.

          2. Calculate the SGM for that block.

          3. Apply the RL algorithm with s(x,v).

4. Combine the deconvolved block with the original frame using the weighting equation above (a sketch of the full per-block pipeline follows this list).
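Putting the four steps together, a per-block driver might look like the sketch below. It assumes the motion_psf and richardson_lucy helpers from the RL sketch and the sgm/compensate helpers from the weighting sketch are already defined, and that block motion vectors have been estimated beforehand (step 1 is not implemented here); the 4x4 block size and 10 iterations follow the experimental settings reported later.

```python
import numpy as np
# Assumes motion_psf / richardson_lucy (RL sketch) and sgm / compensate
# (weighting sketch) are already defined in the session.

def deblur_frame(curr, motion_vectors, block=4, d=0.05, n_iter=10):
    """Steps 1-4 per block: motion vector -> SGM -> RL -> weighted combination.
    motion_vectors[by, bx] = (vx, vy) is assumed to come from a block-matching
    motion estimator run against the previous frame (step 1, not shown here)."""
    out = curr.astype(float).copy()
    H, W = curr.shape                                   # grayscale frame assumed
    for y in range(0, H - H % block, block):
        for x in range(0, W - W % block, block):
            vx, vy = motion_vectors[y // block, x // block]
            if vx == 0 and vy == 0:
                continue                                # static block: nothing to invert
            blk = curr[y:y + block, x:x + block].astype(float)
            deconv = richardson_lucy(blk, motion_psf(vx, vy), n_iter=n_iter)    # step 3
            out[y:y + block, x:x + block] = compensate(deconv, blk, vx, vy, d)  # steps 2 + 4
    return out
```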

5. SUBJECTIVE PERCEPTUAL TESTING

One can use conventional image quality metrics such as Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM) to quantify blur, but they are by definition reference-based, which means that the system needs an unblurred image to compare against.

          Structural Similarity Index (SSIM)

Structural similarity [4] compares the structure of two images after subtracting luminance and normalizing variance. It correlates well with mean opinion scores, but it is not reference-free. Using the MATLAB code provided by the authors of [9], we obtained the following results for the output images:

1. Original: 100%

2. Blurred frame: 54%

3. Deblurred frame using RL deconvolution: 76%

4. Deblurred frame using Wiener deconvolution: 67%

5. Retrieved frame by the final approach: 89%

Output images (a)-(e) illustrate the SSIM-estimated blur as a percentage. The metric does a good job of estimating blur levels.

Moreover, it is clear that SSIM works well for estimating blur in images and does justice to the differences in blur level between them.
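For readers who do not have the MATLAB code from [9], a comparable SSIM score can be computed in Python with scikit-image; this is a rough equivalent, not the implementation used above, and the percentage convention simply scales the SSIM score by 100.

```python
from skimage.metrics import structural_similarity

def ssim_percent(reference, test):
    """SSIM between the reference frame and a processed frame, reported as a
    percentage as in the list above."""
    score = structural_similarity(reference, test,
                                  data_range=reference.max() - reference.min())
    return 100.0 * score

# ssim_percent(original, original) == 100; blurred frames score noticeably lower
```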

            1. Experimental results

We tested this method by first pre-processing the original (unblurred) frame with the deconvolution procedure and then simulating the HVS-LCD response on the resulting pre-processed image. The simulation results presented in this paper use 4×4 pixel blocks and 10 iterations.

Looking at the simulated results, we see that the algorithm is indeed effective at reducing the amount of motion blur, particularly around the edges of the buildings and windows.

Sr. No.   Algorithm               PSNR (dB)
1         Blurred                 24.1131
2         RL deconvolution        27.046
3         Wiener deconvolution    25.0039
4         Final algorithm         42.7817
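For reference, the PSNR figures above can be reproduced with a short function like the following, assuming 8-bit frames and using the original unblurred frame as the reference; the helper name and the peak value are illustrative.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, with the original unblurred frame as
    the reference (8-bit peak assumed)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```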

6. CONCLUSION

LCDs have shown great promise in the consumer arena but are unfortunately still plagued by motion blur. Even with a 0 ms response time, LCD motion blur remains a problem due to the inherent sample-and-hold property of the display itself. To reduce the effects of motion blur, we introduced an algorithm that uses motion vector information and leverages the RL algorithm operating on perceptually significant regions. We analyzed the performance of the deconvolution procedure by deriving a lower bound for the expected mean squared error of the image. In addition, we derived some statistical properties of the introduced SGM perceptual significance metric as it relates to the quality of the motion vector estimate. Qualitative perceptual tests indicate that the algorithm in its current form reduces the amount of perceptible motion blur.

REFERENCES

  1. M. Biswas, Content Adaptive Video Processing Algorithms for Digital TV, PhD thesis, University of California, San Diego, 2005.

  2. X. Feng, H. Pan, and S. Daly, "Comparisons of motion-blur assessment strategies for newly emergent LCD and backlight driving technologies," J. Soc. Inf. Display, vol. 16, pp. 981-988, 2008.

  3. M. Klompenhouwer and L. J. Velthoven, "Motion blur reduction for liquid crystal displays: motion compensated inverse filtering," in Proc. SPIE-IS&T Electronic Imaging, 2004.

  4. H. Pan, X.-F. Feng, and S. Daly, "LCD motion blur modeling and analysis," in Proc. IEEE Int. Conf. Image Processing (ICIP), 2005, vol. 2, pp. II-21-24.

  5. M. Klompenhouwer and L. J. Velthoven, "Motion blur reduction for liquid crystal displays: motion compensated inverse filtering," presented at SPIE-IS&T Electronic Imaging, 2004.

  6. H. Pan, X.-F. Feng, and S. Daly, "Quantitative analysis of LCD motion blur and performance of existing approaches," in Proc. SID Symp. Dig. Tech. Papers, May 2005, vol. 36, pp. 1590-1593.

  7. L. A. Shepp and Y. Vardi, "Maximum likelihood reconstruction for emission tomography," IEEE Trans. Med. Imag., vol. MI-1, no. 2, pp. 113-122, Oct. 1982.

  8. N. Narvekar and L. Karam, "A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection," in Proc. Int. Workshop on Quality of Multimedia Experience, 2009, pp. 87-91.
