 Open Access
 Authors : Patel Shreyas, Baxi Aatha
 Paper ID : IJERTV1IS10412
 Volume & Issue : Volume 01, Issue 10 (December 2012)
Published (First Online): 28-12-2012
ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
Single Image Super Resolution
Patel Shreyas #1, Baxi Aatha #2
#1 Master in Computer Science & Engineering, Parul Institute of Technology, Vadodara, Gujarat, India.
#2 Department of Computer Science & Engineering,
Parul Institute of Engineering & Technology, Vadodara, Gujarat, India
Abstract: Super-resolution (SR) approaches reconstruct a single higher-resolution image from a set of given lower-resolution images. Such methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. The computationally inexpensive method discussed here is robust to errors in motion and blur estimation and results in images with sharp edges; simulation results confirm its effectiveness and demonstrate its superiority to other super-resolution methods. For the reconstruction stage, an SR reconstruction model composed of an L1-norm data fidelity term and total variation (TV) regularization is defined, with its reconstruction objective function efficiently solved by the steepest descent method. Other SR methods can be easily incorporated in the proposed framework as well. Specifically, the SR computations for multiview images and computation in the temporal domain are discussed.
Keywords: Super-resolution imaging, resolution enhancement, regularization, robust estimation, total variation (TV).

INTRODUCTION
Super-resolution is the process of combining a sequence of low-resolution (LR) noisy blurred images to produce a higher-resolution image or sequence. The multiframe super-resolution problem was first addressed in [1], where a frequency-domain approach was proposed, later extended by others, such as [2]. Although the frequency-domain methods are intuitively simple and computationally cheap, they are extremely sensitive to model errors [3], limiting their use. Also, by definition, only pure translational motion can be treated with such tools, and even small deviations from translational motion significantly degrade performance.
Another popular class of methods solves the problem of resolution enhancement in the spatial domain. Non-iterative spatial-domain data fusion approaches were proposed in [4]-[6]. The iterative back-projection method was developed in papers such as [7] and [8]. In [9], the authors suggested a method based on the multichannel sampling theorem. In [10], a hybrid method combining the simplicity of ML with proper prior information was suggested. The spatial-domain methods discussed so far are generally computationally expensive. The authors in [11] introduced a block-circulant preconditioner for solving the Tikhonov-regularized super-resolution problem formulated in [10] and addressed the calculation of the regularization factor for the underdetermined case by generalized cross-validation in [12]. Later, a very fast super-resolution algorithm for pure translational motion and common space-invariant blur was developed in [5]. Another fast spatial-domain method was recently suggested in [13], where LR images are registered with respect to a reference frame, defining a non-uniformly spaced high-resolution (HR) grid. Then, an interpolation method called Delaunay triangulation is used for creating a noisy and blurred HR image, which is subsequently deblurred. All of the above methods assumed the additive Gaussian noise model.
This paper is organized as follows. Section II explains the observation model underlying image reconstruction. Section III describes the various reconstruction methods. Section IV relates the discussed approach to other reconstruction methods. Section V concludes this paper.


OBSERVATION MODEL FOR SUPER-RESOLUTION IMAGING
As depicted in Fig. 1, the image acquisition process is modeled by the following four operations: (i) geometric transformation, (ii) blurring, (iii) downsampling by a factor of q1 × q2, and (iv) addition of white Gaussian noise. Note that the geometric transformation includes translation, rotation, and scaling. Various blurs (such as motion blur and out-of-focus blur) are usually modeled by convolving the image with a low-pass filter, which is modeled by a point spread function (PSF). The given image (say, with a size of M1 × M2) is considered the high-resolution ground truth, which is to be compared with the high-resolution image reconstructed from a set of low-resolution images (say, with a size of L1 × L2 each; that is, L1 = M1/q1 and L2 = M2/q2) for conducting performance evaluation. To summarize mathematically,
y(k) = D(k)P(k)W(k)X + V(k),   (1)
     = H(k)X + V(k),           (2)
where y(k) and X denote the k-th L1 × L2 low-resolution image and the original M1 × M2 high-resolution image, respectively, with k = 1, 2, …, K. Furthermore, both y(k) and X are represented in lexicographically ordered vector form, with sizes of L1L2 × 1 and M1M2 × 1, respectively; each L1 × L2 image can be transformed (i.e., lexicographically ordered) into an L1L2 × 1 column vector by ordering the image row by row. D(k) is the decimation matrix with a size of L1L2 × M1M2, P(k) is the blurring matrix of size M1M2 × M1M2, and W(k) is the warping matrix of size M1M2 × M1M2. Consequently, the three operations can be combined into one transform matrix H(k) = D(k)P(k)W(k) with a size of L1L2 × M1M2. Lastly, V(k) is an L1L2 × 1 vector representing the white Gaussian noise encountered during the image acquisition process. Note that V(k) is assumed to be independent of X. Over a period of time, one can capture a set of (say, K) observations. With such an establishment, the goal of SR image reconstruction is to produce one high-resolution image X based on these observations. It is important to note that there is another observation model commonly used in the literature (e.g., [34]-[37]). The only difference is that the order of the warping and blurring operations is reversed; that is, y(k) = D(k)W(k)P(k)X + V(k). When the imaging blur is spatiotemporally invariant and only global translational motion is involved among the multiple observed low-resolution images, the blur matrix P(k) and the motion matrix W(k) are commutable; consequently, these two models coincide. However, when the imaging blur is spatiotemporally variant, it is more appropriate to use the second model. The determination of the mathematical model for formulating the SR computation should coincide with the imaging physics (i.e., the physical process by which the low-resolution images are captured from the original high-resolution scene).

Fig. 1 The observation model, establishing the relationship between the original high-resolution image and the observed low-resolution images. The observed low-resolution images are warped, blurred, downsampled, and noisy versions of the original high-resolution image.
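The forward model (1)-(2) can be sketched numerically. The snippet below is a minimal, hypothetical illustration rather than any paper's implementation: the warp W(k) is restricted to an integer translational shift, the PSF P(k) is a uniform box blur, and D(k) keeps every q-th sample; all function names are our own.

```python
import numpy as np

def shift(img, dy, dx):
    # Warp W(k): integer translational shift with zero padding at the borders
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = max(dy, 0), max(dx, 0)
    ye, xe = h + min(dy, 0), w + min(dx, 0)
    out[ys:ye, xs:xe] = img[ys - dy:ye - dy, xs - dx:xe - dx]
    return out

def box_blur(img, r=1):
    # PSF P(k): (2r+1) x (2r+1) uniform blur (borders darken slightly
    # because shift() zero-pads; adequate for a sketch)
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += shift(img, dy, dx)
    return acc / (2 * r + 1) ** 2

def observe(x, dy, dx, q=2, sigma=1.0, rng=None):
    # y(k) = D(k) P(k) W(k) X + V(k): warp, blur, decimate by q, add noise
    rng = rng or np.random.default_rng(0)
    lr = box_blur(shift(x, dy, dx))[::q, ::q]
    return lr + sigma * rng.standard_normal(lr.shape)
```

For an M1 × M2 input this returns an (M1/q) × (M2/q) frame, matching L1 = M1/q1 and L2 = M2/q2 above.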

SUPER-RESOLUTION IMAGE RECONSTRUCTION
The generation of a low-resolution image can be modeled as a combination of smoothing and downsampling operations applied to natural scenes by low-quality sensors. Super-resolution is the inverse of this generation process. One criterion for solving this inverse problem is minimizing the reconstruction error. Various methods have been proposed in the literature to deal with the inverse problem. The following sections categorize the different SR methods available in existing papers.

Interpolation Methods
Image interpolation is the process of converting an image from one resolution to another. This process is performed on a one-dimensional basis, row by row and then column by column. Image interpolation estimates the intermediate pixels between the known pixels by using different interpolation kernels.

Nearest Neighbor Interpolation
Nearest-neighbor interpolation is the simplest interpolation from the computational point of view. Here, each output interpolated pixel is assigned the value of the nearest sample point in the input image [2]. This process just displaces intensities from the reference image to the interpolated one, so it does not change the histogram. It preserves sharpness and does not produce a blurring effect, but it does produce aliasing.

Bilinear Interpolation
In bilinear interpolation, the intensity at a point is determined from a weighted sum of the intensities at the four pixels closest to it. It changes intensities, so the histogram also changes. It slightly smooths the image but does not create an aliasing effect.

Bicubic Interpolation
In bicubic interpolation, the intensity at a point is estimated from the intensities of the 16 pixels closest to it. The bicubic basis function gives a smooth image but is computationally demanding.
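The kernels above differ mainly in how many input samples feed each output pixel. A minimal pure-NumPy sketch of the nearest-neighbor and bilinear cases (function names are illustrative) makes the contrast concrete: nearest-neighbor copies a single sample per output pixel and so preserves the histogram, while bilinear blends the four closest samples.

```python
import numpy as np

def upsample_nearest(img, s):
    # Each output pixel copies the nearest input sample: sharp but aliased
    h, w = img.shape
    ys = (np.arange(h * s) / s).round().astype(int).clip(0, h - 1)
    xs = (np.arange(w * s) / s).round().astype(int).clip(0, w - 1)
    return img[np.ix_(ys, xs)]

def upsample_bilinear(img, s):
    # Each output pixel is a weighted sum of the four closest input samples
    h, w = img.shape
    ys = np.arange(h * s) / s
    xs = np.arange(w * s) / s
    y0 = np.floor(ys).astype(int).clip(0, h - 2)
    x0 = np.floor(xs).astype(int).clip(0, w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]      # top-left neighbours
    b = img[np.ix_(y0, x0 + 1)]  # top-right
    c = img[np.ix_(y0 + 1, x0)]  # bottom-left
    d = img[np.ix_(y0 + 1, x0 + 1)]
    return (1 - wy) * ((1 - wx) * a + wx * b) + wy * ((1 - wx) * c + wx * d)
```

Bicubic follows the same pattern with a 4 × 4 neighbourhood and a cubic weighting kernel, which is why it is smoother but more expensive.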

B-spline Interpolation
Spline interpolation is a form of interpolation where the interpolant is a special piecewise polynomial called a spline. There is a whole family of basis functions used in this interpolation, which is given in [2]. Higher-order interpolation is most useful when an image requires many rotations and distortions in separate steps; for a single-step enhancement, however, it only increases processing time.

Hybrid Approach of Interpolation
In 2008, H. Aftab et al. [3] proposed a new hybrid interpolation method in which interpolation at edges is carried out using a covariance-based method and interpolation in smooth areas is done using an iterative curvature-based method. After finding edges and smooth areas using information from the neighborhood pixels, each edge is interpolated using the covariance-based method; the covariance coefficients of the HR image are obtained from the covariance parameters of the LR image. In smooth areas, curvature interpolation is carried out by first performing bilinear interpolation along the direction where the second derivative is lower; in the diagonal case, the difference between diagonals is calculated and bilinear interpolation is used where the intensity difference is less. This method has a significant advantage in terms of processing time, peak signal-to-noise ratio, and visual quality compared to bilinear, bicubic, and nearest-neighbor interpolation.


Iterative Back-Projection Algorithm
In this algorithm [1]-[3], the back-projection error is used to construct the super-resolution image. The HR image is estimated by back-projecting the error between the simulated LR images and the captured LR images. This process is repeated several times to minimize the cost function, with each step re-estimating the HR image by back-projecting the error. The main advantages of this method are that it converges rapidly, has low complexity, and requires few iterations. Recently, a number of improvements have been used with this approach, notably different edge-preserving mechanisms.
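The back-projection loop described above can be sketched as follows. `observe` and `back_project` are caller-supplied placeholders for the LR-simulation and error-spreading operators (the edge-preserving refinements mentioned above are omitted):

```python
import numpy as np

def ibp(lr_images, observe, back_project, n_iter=20, lam=0.1):
    # Iterative back-projection, sketched after the scheme in [1]-[3]:
    #   observe(x)      simulates one LR frame from the current HR estimate
    #   back_project(e) spreads an LR error image back onto the HR grid
    x = back_project(np.mean(lr_images, axis=0))   # crude initial HR guess
    for _ in range(n_iter):
        for y in lr_images:
            err = y - observe(x)              # simulation error on the LR grid
            x = x + lam * back_project(err)   # correct the HR estimate
    return x
```

The choice of `back_project` plays the role of the interpolation and re-blurring operators discussed later in the relation-to-IBP section, and it is what determines the convergence behavior of the scheme.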

Robust Learning-Based Super-Resolution
This algorithm [5] synthesizes a high-resolution image based on learned patch pairs of low- and high-resolution images. However, since a low-resolution patch is usually mapped to multiple high-resolution patches, unwanted artifacts or blurring can appear in super-resolved images. The authors propose an approach to generate a high-quality, high-resolution image without introducing noticeable artifacts: by introducing robust statistics into learning-based super-resolution, outliers which cause artifacts are efficiently rejected. Global and local constraints are also applied to produce a more reliable high-resolution image. Learning-based super-resolution algorithms are generally known to provide HR images of high quality; their practical problem, however, is the one-to-many mapping of an LR patch to HR patches, which results in image quality degradation.

An Efficient Example-Based Approach for Image Super-Resolution
This algorithm [6], [7] uses a learning method to construct the super-resolution image. The main contributions of these algorithms are: (1) a class-specific predictor is designed for each class in the example-based super-resolution algorithm, which improves performance in terms of visual quality and computational cost; and (2) different types of training sets are investigated so that a more effective training set can be obtained. The classification is performed based on vector quantization (VQ), and then a simple and accurate predictor for each category, i.e., a class-specific predictor, can be trained easily using the example patch pairs of that particular category. These class-specific predictors are used to estimate, and then to reconstruct, the high-frequency components of an HR image. Hence, having classified an LR patch into one of the categories, the high-frequency content can be predicted without searching a large set of LR-HR patch pairs.
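A drastically simplified stand-in for this family of methods replaces the trained class-specific predictors with a direct nearest-neighbor lookup over the example patch pairs. The class and method names below are our own; this only illustrates the patch-pair data flow, not the VQ classification or high-frequency prediction of [6], [7]:

```python
import numpy as np

class PatchSR:
    # Nearest-neighbour stand-in for the class-specific predictors of [6], [7]:
    # for each input LR patch, return the HR patch paired with the closest LR
    # patch in the training set. A real system would first classify patches by
    # VQ and predict only the high-frequency content.
    def __init__(self, lr_patches, hr_patches):
        self.lr = np.stack([np.asarray(p).ravel() for p in lr_patches])
        self.hr = np.stack([np.asarray(p) for p in hr_patches])

    def predict(self, lr_patch):
        # Squared Euclidean distance to every example LR patch
        d = ((self.lr - np.asarray(lr_patch).ravel()) ** 2).sum(axis=1)
        return self.hr[int(np.argmin(d))]
```

The VQ step in the cited work exists precisely to avoid this exhaustive distance computation over a large example set.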

Learning-Based Super-Resolution Using Directionlets
In this algorithm [9], an example-based method using directionlets (a skewed anisotropic wavelet transform) is used to generate the high-resolution image. It performs scaling and filtering along a selected pair of directions, not necessarily horizontal and vertical as in the wavelet transform. In this approach, the training set is generated by subdividing HR images and LR images into patches of size 8×8 and 4×4, respectively. The best pair of directions is then assigned to each patch pair from five sets of directions [(0°, 90°), (0°, 45°), (0°, −45°), (90°, 45°), (90°, −45°)], and the patches are grouped according to direction, which reduces the searching time. The input LR image is contrast-normalized and then subdivided into 4×4 patches. Each patch is decomposed into eight subbands using directionlets. The directional coefficients of six bands (HL, HH, VL, VH, DL, DH) are learned from the training set, with the minimum absolute difference (MAD) criterion used to select the directionlet coefficients. For AL and AH, the cubic-interpolated LR image is used. These learned coefficients are used to obtain the SR image by taking the inverse directionlet transform; at the end, the contrast normalization is undone. A simple wavelet, which is isotropic and does not follow the edges, produces artifacts that are avoided in this case.


RELATION TO OTHER METHODS
Since this survey discusses approaches to the super-resolution restoration problem, it is appropriate to relate the new approach to the methods already known in the literature. In the sequel, we present a brief description of each of the existing methods in light of the new results. The main known methods for super-resolution restoration are the IBP method [31]-[33], the frequency-domain approach [24]-[26], the POCS approach [34]-[35], and the MAP approach [37]. This section also motivates a novel approach to single-image super-resolution with edge preservation.

The IBP Method
The IBP method [31]-[33] is an iterative algorithm that projects the temporary result onto the measurements, simulating them this way. The simulation error is used to update the temporary result. Taking this exact reasoning and applying it to the proposed model in (2.1), with the temporary result at each step used to generate the simulated measurements, the update equation of the IBP method [31]-[33], originally given in scalar form, can be put in matrix notation with error-relaxation matrices Q(k) to be chosen. The configuration obtained is a simple error-relaxation algorithm (such as steepest descent, Gauss-Seidel, or other algorithms), which minimizes a quadratic error as defined in (2.4). This analogy means that the IBP method is none other than the ML (or least-squares) method proposed here without regularization. In the IBP method presented in [31]-[33], the matrices Q(k) were chosen as Q(k) = {[1/f(k)] · [1/C(k)] · [D(k)]}, where C(k) is a re-blurring operator and D(k) is an interpolation operator to be determined [31]-[33]. If the simple SD algorithm is chosen for the solution of (2.5), then choosing the transpose of the blur matrix as the re-blurring operator and zero padding as the interpolation operator gives almost the same result as the IBP method. The only difference is the choice of the warp matrix in the above two configurations: the IBP method applies the additional positive-definite inverse of the matrices on top of the error-relaxation matrices proposed by the SD algorithm. These additional terms may compromise the convergence properties of the IBP algorithm, whereas the SD (and similar) approaches performed directly on the ML optimization problem assure convergence.
According to the above discussion, the new approach thus has several benefits when compared to the IBP method, as follows.

There is freedom to choose faster iterative algorithms (such as CG) for the quadratic optimization problem.

Convergence is assured for arbitrary motion characteristics, linear space-variant blur, different decimation factors for the measurements, and different additive noise statistics.

Locally adaptive regularization can be added in a simple fashion, with improved overall performance.
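As a concrete (and deliberately hypothetical) sketch of the point above, the SD iteration acting directly on a regularized least-squares cost looks as follows. The operators H_k are written as dense matrices for clarity; in a real system they are the implicit warp/blur/decimate operators, and H_kᵀ is the back-projection. Simple Tikhonov regularization stands in for whatever locally adaptive term is used:

```python
import numpy as np

def sd_restore(H_list, y_list, x0, lam=0.1, mu=0.1, n_iter=200):
    # Steepest descent on the (Tikhonov-)regularized least-squares cost
    #   f(x) = 1/2 * sum_k ||H_k x - y_k||^2 + lam/2 * ||x||^2.
    x = x0.copy()
    for _ in range(n_iter):
        g = lam * x                         # gradient of the regularizer
        for H, y in zip(H_list, y_list):
            g = g + H.T @ (H @ x - y)       # back-projected residual
        x = x - mu * g                      # SD update with fixed step mu
    return x
```

With lam = 0 this reduces to the unregularized ML iteration that the IBP update approximates; replacing the fixed-step loop with conjugate gradients gives the faster option mentioned above.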


The POCS Method
The approach taken in [34]-[36] is a direct application of the POCS method to the restoration of super-resolution images. The suggested approach did not use the smoothness constraint proposed here, choosing instead a distance measure that yields simpler projection operators. In the sequel, we presented the bounding-ellipsoid method as a tool to relate the POCS results to the stochastic estimation methods, and saw that applying only ellipsoids as constraints gives a result very similar to the ML and MAP methods [33]. In [34]-[36], it is suggested to add only the amplitude constraint to the trivial ellipsoid constraints. We have shown that, instead, a hybrid method can be suggested that has a unique solution and yet is very simple to implement.

Nonsubsampled Contourlet Transform Based Learning
Efficient representation of visual information lies at the heart of many image processing tasks, including compression, denoising, feature extraction, and inverse problems. Efficiency of a representation refers to the ability to capture significant information about an object of interest using a small description.
For image compression or content-based image retrieval, the use of an efficient representation implies the compactness of the compressed file or the index entry for each image in the database. For practical applications, such an efficient representation has to be obtained by structured transforms and fast algorithms. For one-dimensional piecewise smooth signals, like scanlines of an image, wavelets have been established as the right tool, because they provide an optimal representation for these signals in a certain sense. In addition, the wavelet representation is amenable to efficient algorithms; in particular, it leads to fast transforms and convenient tree data structures. These are the key reasons for the success of wavelets in many signal processing and communication applications; for example, the wavelet transform was adopted as the transform for the new image-compression standard, JPEG2000 [20].
However, natural images are not simply stacks of 1D piecewise smooth scanlines; discontinuity points (i.e., edges) are typically located along smooth curves (i.e., contours) owing to smooth boundaries of physical objects. Thus, natural images contain intrinsic geometrical structures that are key features in visual information. As a result of a separable extension from 1D bases, wavelets in 2D are good at isolating the discontinuities at edge points, but will not see the smoothness along the contours. In addition, separable wavelets can capture only limited directional information, an important and unique feature of multidimensional signals. These disappointing behaviors indicate that more powerful representations are needed in higher dimensions.
To see how one can improve the 2D separable wavelet transform for representing images with smooth contours, consider the following scenario. Imagine that there are two painters, one with a waveletstyle and the other with a new style, both wishing to paint a natural scene. Both painters apply a refinement technique to increase resolution from coarse to fine. Here, efficiency is measured by how quickly, that is with how few brush strokes, one can faithfully reproduce the scene.


PROPOSED SCHEME
Super-resolution is the problem of regenerating a high-resolution image from one or multiple low-resolution images of the same scene. Most of the methods reviewed are based on multiple low-resolution images and are mathematically complex. The objective here is therefore to generate a high-resolution image from a single low-resolution image, known as single-image super-resolution. Such single-image super-resolution problems arise in a number of real-world applications. A common application is online image exchange: to save storage space and communication bandwidth, it is desirable that a low-resolution image is downloaded and enlarged by the user with an appropriate super-resolution technique. In super-resolution there is always one aim: to restore the high-frequency components, which lie at the edges in the image. To take care of all these considerations, the contribution of this work is to propose a novel approach to single-image super-resolution with edge preservation.

CONCLUSION
SR imaging has been one of the fundamental image processing research areas. It can overcome or compensate for the inherent hardware limitations of an imaging system to provide a clearer image with richer and more informative content, and it can also serve as an appreciable front-end pre-processing stage that helps various image processing applications improve their targeted terminal performance. In this survey paper, our goal is to offer new perspectives and outlooks on SR imaging research, besides giving an updated overview of existing SR algorithms. It is our hope that this work can inspire more image processing researchers to pursue this fascinating topic and to develop more novel SR techniques along the way.

REFERENCES

Baikun Wan and Lin Meng, "Video Image Super-Resolution Restoration Based on Iterative Back-Projection Algorithm", CIMSA, Hong Kong, China, 2009, pp. 46-49.

Chen-Chiung Hsieh and Yo-Ping Huang, "Video Super-Resolution by Motion Compensated Iterative Back-Projection Approach", Journal of Information Science and Engineering, vol. 27, no. 3, 2011, pp. 1107-1122.

S. Dai, M. Han, Y. Wu, and Y. Gong, "Bilateral Back-Projection for Single Image Super Resolution", IEEE Conference on Multimedia and Expo (ICME), 2007, pp. 1039-1042.

Vaishali B. Patel, Chintan K. Modi, Chirag N. Paunwala, and Suprava Patnaik, "Hybrid Approach for Single Image Super Resolution Using ISEF and IBP: Specific Reference to License Plate", Proceedings of the IASTED, Canada, June 2011, pp. 152-157.

Changhyun Kim and Kyuha Choi, "Robust Learning-Based Super-Resolution", Proceedings of the IEEE 17th International Conference on Image Processing, 2010, pp. 2017-2020.

Xiaoguang Li and Kin-Man Lam, "An Efficient Example-Based Approach for Image Super-Resolution", IEEE Int. Conference on Neural Networks & Signal Processing, Zhenjiang, China, June 2008, pp. 575-580.

W. T. Freeman, T. R. Jones, and E. C. Pasztor, "Example-Based Super-Resolution", IEEE Computer Graphics and Applications, vol. 22, no. 2, 2002, pp. 56-65.

R. Gonzalez and R. Woods, Digital Image Processing, 3rd Edition, Pearson Education, Inc., publishing as Prentice Hall, pp. 714-715.

M. Irani and S. Peleg, "Motion Analysis for Image Enhancement: Resolution, Occlusion and Transparency", Journal of Visual Communication and Image Representation, vol. 4, no. 4, 1993, pp. 324-335.

C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms, Prentice Hall, New Jersey, 1998.

M. Beaulieu, S. Foucher, and L. Gagnon, "Multispectral image resolution refinement using stationary wavelet transform", Geoscience and Remote Sensing Symposium, vol. 6, 1989, pp. 4032-4034.

Nunez J., Otazu X., Fors O., Prades A., Pala V., and Arbiol R., "Multiresolution-based image fusion with additive wavelet decomposition", IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, 1999, pp. 1204-1211.

M. V. Joshi and S. Chaudhuri, "A learning based method for image super-resolution from zoomed observations", Proc. of 5th Int. Conf. on Advances in Pattern Recognition (ICAPR-03), Calcutta, India, Dec. 2003, pp. 179-182.

J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3445-3462, 1993.

C. V. Jiji, M. V. Joshi, and S. Chaudhuri, "Single frame image super-resolution using learned wavelet coefficients", International Journal of Imaging Systems and Technology, vol. 14, no. 3, pp. 105-112, 2004.

Video lectures on Advanced Digital Signal Processing: Wavelet and Multirate by B. H. Gadre.

Do M. N. and Vetterli M., "The contourlet transform: an efficient directional multiresolution image representation", IEEE Transactions on Image Processing, vol. 14, no. 12, 2005, pp. 2091-2106.

Bamberger R. H. and Smith M. J. T., "A filter bank for the directional decomposition of images: theory and design", IEEE Transactions on Signal Processing, vol. 40, no. 4, 1992, pp. 882-893.

J. P. Zhou, Arthur L. Cunha, and Minh N. Do, "Nonsubsampled contourlet transform: construction and application in enhancement", IEEE ICIP, 2005, pp. 469-472.

Da Cunha A. L., Zhou J. P., and Do M. N., "The nonsubsampled contourlet transform: theory, design and applications", IEEE Transactions on Image Processing, vol. 15, no. 10, 2006, pp. 3089-3101.

D. P. Capel, "Image mosaicing and super resolution", Ph.D. dissertation, Univ. of Oxford, Oxford, U.K., 2001.

M. Gevrekci and B. K. Gunturk, "Super resolution under photometric diversity of images", EURASIP J. Adv. Signal Process., vol. 2007, 2007, Article ID 36076.

J. Ma, J. C.-W. Chan, and F. Canters, "Fully automatic subpixel image registration of multiangle CHRIS/Proba data", IEEE Trans. Geosci. Remote Sens., vol. 48, no. 7, pp. 2829-2839, July 2010.

W. Y. Zhao and S. Sawhney, "Is super resolution with optical flow feasible?", in Proc. ECCV, LNCS 2350, 2002, pp. 599-613.

G. Yang, C. V. Stewart, M. Sofka, and C.-L. Tsai, "Registration of challenging image pairs: Initialization, estimation, and decision", IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 11, pp. 1973-1989, Nov. 2007.

H. Trussel and B. Hunt, "Sectioned methods for image restoration", IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-26, no. 2, pp. 157-164, 1978.

Q. Tian and M. N. Huhns, "Algorithms for subpixel registration", Comput. Vision, Graphics, Image Process., vol. 35, pp. 220-233, Aug. 1986.

S. Periaswamy and H. Farid, "Medical image registration with partial data", Med. Image Anal., vol. 10, no. 3, pp. 452-464, Jun. 2006.

X. Feng, "Analysis and approaches to image local orientation estimation", M.S. thesis, Dept. Comput. Eng., Univ. California, Santa Cruz, Mar. 2002.

M. A. Martín-Fernández, M. Martín-Fernández, and C. Alberola-López, "A log-Euclidean polyaffine registration for articulated structures in medical images", in Proc. MICCAI, 2009, pp. 156-164.

A. Mohammad-Djafari, "Super-resolution: A short review, a new method based on hidden Markov modeling of HR image and future challenges", Comput. J., vol. 52, no. 1, pp. 126-141, 2009.

F. Chen, J. Ma, J. C.-W. Chan, and D. Yan, "Quantitative measurement of the homogeneity and contrast of step edges in the estimation of the point spread function of a satellite image", Int. J. Remote Sens., vol. 32, no. 22, pp. 7179-7201, 2011.

S. Baker and T. Kanade, "Limits on super resolution and how to break them", IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 9, pp. 1167-1183, Sep. 2002.

L. C. Pickup, "Machine learning in multiframe image super-resolution", Ph.D. dissertation, Univ. Oxford, Oxford, U.K., Feb. 2008.