Synthesis Of High Dynamic Range Image Using Principal Component Analysis Method

DOI : 10.17577/IJERTV2IS70203


S. ASIF

SJBIT college, VTU, India

ABSTRACT: Multiple-exposure fusion, which enhances the dynamic range of an image, is used here as a compensation method for image deblurring. In this process, high dynamic range images (HDRIs) are constructed by combining multiple images taken with different exposures and estimating the irradiance value for each pixel; this is the common approach to HDRI acquisition. However, if there is any movement in the scene while the exposures are being captured, the moving objects will appear in different locations in these exposures, so merging corresponding pixel values from different exposures to produce an HDR image will cause a ghosting effect. To address this problem, this project presents an efficient and accurate Principal Component Analysis method, built on dynamic fusion and supported by eigenvectors, to obtain an enhanced, deblurred image.

Keywords: PCA, Fusion, Multiple Exposure, Minimum Direction and Motion Search Algorithm.

  1. INTRODUCTION

MOTION blur is the result of relative motion between the camera and the scene during the integration time of the image. Motion blur can be used for aesthetic purposes, such as emphasizing the dynamic nature of a scene, and it has also been exploited to recover motion and scene structure information. In computer graphics, motion blur is added to create more realistic images that are pleasing to the eye. On the other hand, real-world images often suffer from very strong motion blur. The typical causes are camera motion and the motion of objects in the scene. Motion blur caused by relative motion between a camera and a scene is inevitable, because a camera sensor accumulates incoming light over a certain period of time. Many computer vision algorithms rely on the assumption that a scene is captured without such motion blur. However, this assumption generally does not hold unless both the scene and the camera are static. It is therefore important to remove motion blur from images correctly, so that subsequent algorithms can neglect its effect.

The illumination of typical real-world scenes varies over several orders of magnitude, while conventional sensors in image capture devices can record only a limited part of this range. The spectrally weighted radiance of a scene may be captured more accurately by spatially varying pixel exposures, by using multiple imaging devices, or by devices with special sensors, but such devices are expensive and will not be affordable for the average consumer for some years to come. Meanwhile, there exist methods of obtaining high dynamic range (HDR) images using conventional devices. Such techniques require the user to take several images of the same scene at different exposures and to apply a weighted average over these to compute radiance values of the scene. Multiple-exposure techniques have several disadvantages. For instance, if there is any movement in the scene while the exposures are being captured, the moving objects will appear in different locations in these exposures, and merging corresponding pixel values from different exposures to produce an HDR image will cause a ghosting effect. As such, existing techniques are only useful for creating HDR images of scenes that are completely still, which is rather restrictive, as most scenes contain motion. A weighted merge of this kind is sketched below.
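As a rough illustration of the weighted-average merge described above, the following sketch combines aligned, linearized exposures into a radiance map. It is a minimal sketch, not the paper's method: the function name, the hat-shaped weighting, and the assumption of known exposure times and images normalized to [0, 1] are all illustrative choices.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge aligned LDR exposures into a radiance map (hedged sketch).

    Assumes linearized images in [0, 1] and known exposure times;
    real pipelines also estimate the camera response curve.
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weight: trust mid-tone pixels, down-weight blackout/whiteout.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t            # per-exposure irradiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Note that this weighted average is exactly where ghosting arises: if an object moves between exposures, inconsistent pixel values are averaged together.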

    Figure 1.1: Scene Blurred By Linear Horizontal Motion of Camera

In the last decade, many techniques have been proposed for capturing HDRIs based on the multiple-exposure principle, in which the HDRI is constructed by merging several photographs shot with different exposures, as described above. Many of these techniques assume that the scene is static while the photographs are being taken; the motion of objects causes motion blur and ghosting artifacts. Although many displacement (or motion) estimation methods have been proposed in fields such as video coding and stereo vision, simply applying them to multiple-exposure fusion often fails: the intensity levels of the images differ significantly because the camera response curve cannot be estimated reliably, and, more importantly, low and high exposures cause blackout and whiteout in some regions of the images, in which correspondence between the source and blurred images is hard to find because pixels are displaced from their initial positions. A motion-blurred image with its pixels displaced in this way is shown in Figure 1.1. Moreover, in the case of low exposure, noise such as thermal noise and dark current sometimes makes displacement estimation difficult.

  2. DESCRIPTION

This section reviews the existing methods, notes their drawbacks, and describes the direction taken by this algorithm.

    1. Block matching

The most popular and to some extent the most robust technique to date for motion estimation is Block Matching (BM) [7, 8, 9]. Two basic assumptions are made in this technique.

1. Constant translational motion over small blocks (say 8*8 or 16*16) in the image. This is the same as saying that there is a minimum object size that is larger than the chosen block size.

      2. There is a maximum (pre-determined) range for the horizontal and vertical components of the motion vector at each pixel site. This is the same as assuming a maximum velocity for the objects in the sequence. This restricts the range of vectors to be considered and thus reduces the cost of the algorithm.

The image in frame n is divided into blocks, usually of the same size, N*N. Each block is considered in turn and a motion vector is assigned to each. The motion vector is chosen by matching the block in frame n with a set of blocks of the same size at locations defined by some search pattern in the previous frame.

      Figure 2.1 Motion estimation via Block Matching

The Mean Absolute Error (MAE) of the displaced frame difference is used as the matching criterion; the Mean Squared Error (MSE) could be used as well, but the MAE is more robust to noise. The block matching algorithm then proceeds as follows at each image block.

      1. Pre-determine a set of candidate vectors v to be tested as the motion vector for the current block

      2. For each v calculate the MAE

3. Choose the motion vector for the block as the v which yields the minimum MAE. The set of vectors v in effect yields a set of candidate motion-compensated blocks in the previous frame for evaluation.

The precision of the search determines the smallest vector that can be estimated. For integer-accurate motion estimation, the position of each block coincides with the image grid. For fractional accuracy [10], blocks need to be extracted between locations on the image grid. This requires some interpolation; in most cases bilinear interpolation is sufficient, as sketched below.
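For illustration, a minimal bilinear sampler might look as follows. This is a sketch under stated assumptions: the function name, the clamping at the image border, and the requirement that (y, x) lie inside the image are choices made here, not details from the paper.

```python
import numpy as np

def sample_bilinear(img, y, x):
    """Sample img at fractional coordinates (y, x) by bilinear interpolation.

    Minimal sketch for extracting blocks between grid locations during
    fractional-accuracy block matching; assumes (y, x) is inside the image.
    """
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    y1 = min(y0 + 1, img.shape[0] - 1)   # clamp at the bottom/right border
    x1 = min(x0 + 1, img.shape[1] - 1)
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x1]
            + dy * (1 - dx) * img[y1, x0] + dy * dx * img[y1, x1])
```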

Figure 2.2 shows the search space used in a full-motion-search technique. The current block is compared to every block of the same size in an area of size (2w+N)*(2w+N). The search space is chosen by deciding on the maximum displacement allowed: here the maximum displacement estimated is +w or -w for both the horizontal and vertical components.

The technique arises from a direct solution of equation 4. The BM solution can be seen to minimize the Mean Absolute DFD (or Mean Squared DFD) with respect to v over the N*N block. The chosen displacement d satisfies the model equation 4 in some average sense.
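The full-search procedure above can be sketched as follows. The block size, search range, and MAE criterion follow the description in the text; the function name, array layout, and boundary handling are illustrative assumptions.

```python
import numpy as np

def full_search_bm(prev, curr, block=16, w=4):
    """Full-search block matching: one motion vector per block of `curr`.

    Sketch of the scheme described above: every candidate displacement
    in [-w, w]^2 is tested and the one with minimum MAE wins.
    """
    H, W = curr.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = curr[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-w, w + 1):
                for dx in range(-w, w + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate block falls outside the frame
                    cand = prev[y:y + block, x:x + block]
                    mae = np.mean(np.abs(target.astype(float) - cand))
                    if mae < best:
                        best, best_v = mae, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors
```

The nested loops over (dy, dx) make the cost of this exhaustive search explicit, which motivates the reduced-search techniques discussed next.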

    2. Computation

The full motion search is computationally demanding. Given a maximum expected displacement of +w or -w pels, there are $(2w+1)^2$ blocks to search, i.e., on the order of $N^2(2w+1)^2$ operations per block, for an integer-accurate motion estimate. Several reduced-search techniques have been introduced which lessen this burden by reducing the number of operations required.


Figure 2.2 Top Row: Frames 2, 3, 4 of the Mobile and Calendar Sequence; Middle Row: Frame difference without motion compensation; Last Row: DFD after integer BM with +4 or -4 pixel search

  3. IMPLEMENTATION

Principal component analysis is a variable reduction procedure. It is useful when you have obtained data on a number of variables (possibly a large number of variables) and believe that there is some redundancy among them. Because of this redundancy, it should be possible to reduce the observed variables to a smaller number of principal components (artificial variables) that account for most of the variance in the observed variables.

The steps followed when conducting a principal component analysis are virtually identical to those followed when conducting an exploratory factor analysis. However, there are significant conceptual differences between the two procedures, and it is important not to mistakenly claim to be performing factor analysis when actually performing principal component analysis. The differences between these two procedures are described in greater detail in a later section titled Principal Component Analysis Is Not Factor Analysis.

A reference-standard algorithm for preprocessing is provided by the Image Processing Toolbox. Feature extraction is done by the PCA algorithm between the training set from the database and the acquired image. In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to. A linear classifier achieves this by making a classification decision based on the value of a linear combination of the characteristics. An object's characteristics are also known as feature values and are typically presented to the machine in a vector called a feature vector.

    The Principal Component Analysis (PCA) is one of the most successful techniques that have been used in image recognition and compression. PCA is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of feature space (independent variables), which are needed to describe the data economically. This is the case when there is a strong correlation between observed variables.

The main idea of using PCA for face matching is to express the large 1-D vector of pixels constructed from a 2-D facial image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors). The details are described in the following section.

  4. ALGORITHM

The block diagram of the overall work is explained here, together with the flow of the blurring and deblurring methods.

Select Multiple Images → Image Preprocessing → Find Mean Image → Motion Search → PCA Fusing → HDR Image

    Fig 4.1 Top Level Block Diagram

    1. Mathematics of PCA Algorithm

Step 1: Suppose we have $M$ vectors of size $N$ (= rows of image × columns of image) representing a set of sampled images, where the $s_j$ represent the pixel values: $\mathbf{x}_i = [s_1, s_2, \dots, s_N]^T$, $i = 1, \dots, M$.

Step 2: Calculate the mean image from the image vectors: $\mathbf{m} = \frac{1}{M} \sum_{i=1}^{M} \mathbf{x}_i$.

Step 3: Define the mean-centered image for each vector: $\mathbf{w}_i = \mathbf{x}_i - \mathbf{m}$.

Step 4: Find a set of $M$ orthonormal vectors $\mathbf{e}_k$ for which the quantity $\lambda_k = \frac{1}{M} \sum_{i=1}^{M} (\mathbf{e}_k^T \mathbf{w}_i)^2$ is maximized, subject to the orthonormality constraint $\mathbf{e}_k^T \mathbf{e}_l = \delta_{kl}$.

Step 5: The vectors $\mathbf{e}_k$ and scalars $\lambda_k$ are the eigenvectors and eigenvalues of the covariance matrix $C = \frac{1}{M} \sum_{i=1}^{M} \mathbf{w}_i \mathbf{w}_i^T$.

Step 6: A facial image can then be projected onto $M'$ ($\ll M$) of these eigenvectors, as in the sketch below.
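Steps 1 to 6 can be sketched as follows. This is a minimal sketch, not the paper's implementation: it assumes `images` is an (M, N) array of M flattened images, and the small M×M Gram-matrix trick for obtaining the eigenvectors when M ≪ N is a standard efficiency device assumed here, not a detail from this paper.

```python
import numpy as np

def pca_projection(images, num_components):
    """Project images onto the leading eigenvectors of their covariance.

    `images` is an (M, N) array of M flattened images; returns the mean
    image, the top eigenvectors, and the projections of the data.
    """
    mean = images.mean(axis=0)                  # Step 2: mean image
    W = images - mean                           # Step 3: mean-centred data
    # Steps 4-5: eigenvectors of C via the small M x M Gram matrix W W^T / M.
    gram = W @ W.T / len(images)
    vals, vecs = np.linalg.eigh(gram)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:num_components]
    eigvecs = W.T @ vecs[:, order]              # lift back to image space
    eigvecs /= np.linalg.norm(eigvecs, axis=0)  # orthonormalise (Step 4)
    return mean, eigvecs, W @ eigvecs           # Step 6: M' projections
```

Keeping only the `num_components` eigenvectors with the largest eigenvalues is what reduces the dimensionality while retaining most of the variance.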

2. Motion Estimation by the Block Matching Algorithm

Step 1: The image in frame $n$ is divided into blocks, usually of the same size, $N \times N$.

Step 2: Each block in frame $n$ is matched against a set of blocks of the same size at candidate locations in the previous frame.

Step 3: Define the displaced frame difference (DFD) between a pixel in the current frame and its motion-compensated pixel in the previous frame: $\mathrm{DFD}(\mathbf{x}, \mathbf{v}) = I_n(\mathbf{x}) - I_{n-1}(\mathbf{x} + \mathbf{v})$.

Step 4: Define the Mean Absolute Error of the DFD between the block $B$ in the current frame and that in the previous frame, $\mathrm{MAE}(\mathbf{v}) = \frac{1}{N^2} \sum_{\mathbf{x} \in B} \left| \mathrm{DFD}(\mathbf{x}, \mathbf{v}) \right|$, and choose the $\mathbf{v}$ that minimizes it.


    3. Three Step Search

The simplest mechanism for reducing the computational burden of full-search BM is to reduce the number of motion vectors that are evaluated. The three-step search is a hierarchical search strategy that evaluates first 9, then 8, and finally 8 more motion vectors to refine the motion estimate in three successive steps. At each step the distance between the evaluated blocks is reduced, and the next search is centered on the position of the best matching block in the previous search. The scheme can be generalized to more steps to refine the motion estimate further. Figure 4.4 shows the searched blocks in frame n-1 for this process; a sketch follows below.
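A minimal three-step search sketch is given below. The MAE helper, the boundary handling, and the function signature are illustrative assumptions; only the search pattern (9 candidates, then the 8 neighbours of the best match at half the step size) follows the description above.

```python
import numpy as np

def three_step_search(prev, curr, by, bx, block=16, step=4):
    """Three-step search for the block at (by, bx) of `curr` (sketch)."""

    def mae(dy, dx):
        # MAE between the current block and its displaced candidate.
        y, x = by + dy, bx + dx
        if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
            return np.inf  # candidate outside the frame
        return np.mean(np.abs(curr[by:by + block, bx:bx + block].astype(float)
                              - prev[y:y + block, x:x + block]))

    best = (0, 0)
    while step >= 1:
        # Evaluate the current centre and its 8 neighbours at this step size;
        # the centre repeats after the first step, so each later step adds 8.
        candidates = [(best[0] + sy * step, best[1] + sx * step)
                      for sy in (-1, 0, 1) for sx in (-1, 0, 1)]
        best = min(candidates, key=lambda v: mae(*v))
        step //= 2  # halve the spacing and recentre on the best match
    return best
```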

    4. Cross Search

The cross search is another variant of the subsampled motion-vector visiting strategy. It changes the geometry of the search pattern to a + or × pattern. Figure 4.5 shows the searched blocks in frame n-1 for this process. If the best match is found at the centre of the search pattern or at the boundary of the search window, the search step size is reduced.

Figure 4.4 Illustration of searched locations (the central pixel of each searched block is shown) in three-step BM (left) and cross-search BM (right). The search window extent is shown in red for cross-search. The best matches at each search level are circled in blue.

    5. Advantages and Features of PCA

PCA benefits entrance control in buildings, access control for computers in general and for automatic teller machines in particular, day-to-day affairs like withdrawing money from a bank account, dealing with the post office, passport verification, and identifying faces in a given database.

1. Smaller representation of the database, because we store only the training images in the form of their projections onto the reduced basis.

2. Noise is reduced, because we choose the maximum-variation basis, so features like a background with small variation are automatically ignored.

3. The basic benefit of PCA is to reduce the dimension of the data.

4. There is no data redundancy, as the components are orthogonal.

5. With the help of PCA, the complexity of grouping images can be reduced.

6. Application of PCA in the prominent field of criminal investigation is beneficial.

    6. PCA Features

      PCA computes means, variances, covariances, and correlations of large data sets.

      PCA computes and ranks principal components and their variances.

PCA automatically transforms data sets.

PCA can analyze datasets of up to 50,000 rows and columns.

In the light of these two concepts, the discussion does not complete the system without mentioning the following procedure. After the initial fundamental image-processing steps, the target is to obtain the enhanced image by light-intensity compensation. Each individual RGB plane is extracted, which initializes the matrix; in post-processing, any zero pixels that are identified are replaced with 1. The process then finds the non-zero pixels, which separates the high-intensity and low-intensity pixels, and finally integrates the images to form an input for finding the minimum direction. The minimum-direction process proceeds by subdividing the source image into blocks and applying the same procedure to the geometrically transformed image, which allows the two images to be compared for accuracy. The process then takes the differences between the two images, selects the minimum-difference block as the result, and feeds it as the input to principal component analysis to fuse the images, with the final step of converting the fused result to raw data format and adding it back to obtain an enhanced image. Taken together, these procedures are called dynamic fusion. A block-difference sketch of the minimum-direction step appears below.
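Since the minimum-direction step is described only in prose, the following is a loose sketch of a per-block minimum-difference comparison between two images. All names, the block size, and the exact selection rule are assumptions made for illustration, not details from this paper.

```python
import numpy as np

def minimum_difference_block(source, reference, block=16):
    """Find the block position with minimum absolute difference (sketch).

    Assumes both images are grayscale arrays of the same size; the block
    with the smallest mean difference is taken as the most reliable
    region to feed into the PCA fusion stage.
    """
    H, W = source.shape
    best_score, best_block = np.inf, None
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            diff = np.mean(np.abs(source[y:y + block, x:x + block].astype(float)
                                  - reference[y:y + block, x:x + block]))
            if diff < best_score:
                best_score, best_block = diff, (y, x)
    return best_block, best_score
```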

Figure 5.5 Principal component analysis

  5. RESULTS

The results of the algorithm are shown below.

  1. Three Images are taken which have Low, Medium and High Exposures

    Figure 5.1 Low Exposure Image

    Figure 5.2 Medium Exposure Image

    Figure 5.3 High Exposure Image

2. We fuse the above images to obtain the mean image, against which each of the three images is compared. The image below is the mean image.

    Figure 5.4 Mean Image

3. After taking the mean image, light compensation is applied to all four images; Figure 5.5 below shows an image that has been light-compensated successfully.

    Figure 5.5 Light Compensated Image

4. The high dynamic range image with deblurring is shown in Figure 5.6 below.

Figure 5.6 High Dynamic Range Image with Deblurring

6. CONCLUSION

In this project, instead of removing the motion blur as a spatial blur, I proposed deblurring with principal component analysis combined with photometric calibration for acquiring a high dynamic range image. The results showed that segmenting images based on local motions could be avoided, and that temporal deblurring effectively removed motion blur even in the presence of motion occlusions. For all the experiments, I assumed that the exposure time was unknown. In future work, the proposed method can be extended to the case where the light intensity varies.

7. REFERENCES

[1] T. Jinno, "Multiple Exposure Fusion for High Dynamic Range Image Acquisition," IEEE, 2012.

[2] E. A. Khan, A. O. Akyüz, and E. Reinhard, "Ghost Removal in High Dynamic Range Images," IEEE, 2006.

[3] X. Liu, "Synthesis of High Dynamic Range Motion Blur Free Image From Multiple Captures," IEEE.

[4] L. Edwards and T. Fong, "Autonomous Robotic Inspection for Lunar Surface Operations," 2007.

[5] Q. Shan, J. Jia, and A. Agarwala, "High-quality motion deblurring from a single image," ACM Trans. Graph., vol. 27, pp. 73:1–73:10, 2008.

[6] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, "Removing camera shake from a single photograph," ACM Trans. Graph., vol. 25, pp. 787–794, 2006.

[7] P. Milanfar, "Two-dimensional matched filtering for motion estimation," IEEE Trans. Image Process., vol. 8, no. 3, pp. 438–444, Mar. 1999.

[8] H. Takeda, P. Milanfar, M. Protter, and M. Elad, "Super-resolution without explicit subpixel motion estimation," IEEE Trans. Image Process.

[9] M. Ben-Ezra and S. K. Nayar, "Motion-based motion deblurring," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 6, pp. 689–698, Jun. 2004.

[10] S. Cho, Y. Matsushita, and S. Lee, "Removing non-uniform motion blur from images," in Proc. IEEE 11th Int. Conf. Computer Vision, Brazil, Oct. 2007, pp. 1–8.
