Sparse Image Reconstruction Using Nonlinear Filtering for Compressed Sensing

DOI : 10.17577/IJERTV1IS4195


Joshna Bikki

Department of ECE, JNTU College of Engineering, Anantapur, Andhra Pradesh, India

B. Doss

Faculty, Department of ECE, JNTU College of Engineering, Anantapur, Andhra Pradesh, India

Abstract

Compressed sensing is a new paradigm for signal sampling and recovery. It states that a relatively small number of linear measurements of a sparse signal can contain most of its salient information and that the signal can be exactly reconstructed from these highly incomplete observations. The major challenge in practical applications of compressed sensing consists in providing efficient, stable and fast recovery algorithms which, in a few seconds, evaluate a good approximation of a compressible image from highly incomplete and noisy samples. In this paper, we propose to approach the compressed sensing image recovery problem using adaptive nonlinear filtering strategies in an iterative framework, and we prove the convergence of the resulting two-step iterative scheme. The results of several numerical experiments confirm that the corresponding algorithm possesses the required properties of efficiency, stability and low computational cost, and that its performance is competitive with that of state-of-the-art algorithms.

  1. Introduction

    In most image reconstruction problems, the images are not directly observable. Instead, one observes a transformed version of the image, possibly corrupted by noise. In the general case, the estimation of the image can be regarded as a simultaneous deconvolution and denoising problem. Intuitively, a better reconstruction can be obtained by incorporating knowledge of the image into the reconstruction algorithm. Compressed sensing, also known as compressive sensing, is the process of acquiring and reconstructing a signal that is supposed to be sparse or compressible. It states that a relatively small number of linear measurements of a sparse signal can contain most of its salient information. It follows that signals that have a sparse representation in a transform domain can be exactly recovered from these measurements by solving an optimization problem. The number of salient features hidden in massive data is usually much smaller than the data size; hence, such data are compressible. In data processing, the traditional practice is to measure (sense) data in full length and then compress the resulting measurements before storage or transmission.

    Section 2 reviews the existing algorithms we consider. In Section 3 we present the proposed reconstruction approach. Section 4 describes the quality measures and the setup of our numerical experiments. Section 5 presents simulation results and discussion. Concluding remarks are given in Section 6.

  2. Related Work

    A large amount of research has been aimed at finding fast algorithms for the numerical solution of these problems. In fact, although they are convex optimization problems that can be solved using several standard methods, such as interior point methods, when the problems are large, as in the case of real images, their practical use is limited by the extensive computation required to generate a solution. Recently, efficient computational methods for problems of this form have been developed.

    2.1. Existing Algorithms

    They include iterated shrinkage methods, Bregman iterative algorithms, gradient projection algorithms [2], fixed-point continuation algorithms, iteratively reweighted algorithms, and specialized interior point methods for solving the equivalent quadratic problem with linear inequality constraints. An algorithmic framework that generalizes the iterated shrinkage methods has also been proposed for problems of this form. Other kinds of approaches for solving these problems are the iterative greedy algorithms, which include OMP, ROMP and CoSaMP, and the combinatorial algorithms.

    While detailed studies and many comparisons of the performance and efficiency of the previously mentioned algorithms are given in the literature for the reconstruction of sparse 1-D signals [1], it is not completely clear how they work for large 2-D and 3-D problems, especially from the computational speed point of view. From the results reported in the literature, it seems that the run time necessary to reconstruct 256×256 and 512×512 images ranges from a few minutes to one hour. These values are still too high to allow for their widespread use in several 3-D practical applications such as, for example, medical imaging. The only algorithm that, up to the present, seems to be really effective for large scale problems is the split-Bregman algorithm [2]. In that paper it is shown that, by minimizing energies involving the reconstruction total variation and using a split formulation combined with Bregman iteration, it is possible to obtain sufficiently accurate reconstructions of 256×256 sparse gradient images in a few tens of iterations. Recently, a new nonlinear filtering strategy has been proposed in the context of a penalized approach to the compressed sensing signal recovery problem, where a suitable filter is chosen according to the considered minimization problem and a fast, flexible algorithm has been realized for its solution. The empirical results presented in [1] show that this nonlinear filtering approach succeeds in the perfect recovery of sparse and sparse gradient 1-D signals from highly incomplete data, and that its computing time is competitive with the most efficient state-of-the-art algorithms.

    In this paper we consider an extension of this nonlinear filtering approach to the multidimensional case. We focus on a recovery problem where the optimal solution, in addition to satisfying the acquisition constraints, has minimal bounded variation norm, namely, it minimizes the total variation of the image. The optimal reconstruction is evaluated by solving a sequence of total variation regularized unconstrained subproblems, where both isotropic and anisotropic TV estimates have been considered. For each value of the penalization parameter, the unconstrained subproblems are approached by means of a two-step iterative procedure based upon the forward-backward splitting method. Interestingly, in the compressed sensing context, where the acquisition matrix is obtained as randomly chosen rows of an orthogonal transform, the two steps of the iterative procedure become an enforcing of the current iterate to be consistent with the given measurements, and a total variation filtering step.

  3. Reconstruction Approach

    We first set our notation. Let S ∈ R^{N1×N2} be a randomly generated binary mask, such that the point-to-point product with any v ∈ R^{N1×N2}, denoted by S ⊙ v, represents a random selection of the elements of v, namely

    \[ (S \odot v)_{ij} = \begin{cases} v_{ij}, & \text{if } s_{ij} = 1, \\ 0, & \text{if } s_{ij} = 0. \end{cases} \]

    Let T be an orthogonal transform acting on an image x. We denote by

    \[ T_S x = S \odot (T x) \]

    the randomly subsampled orthogonal transform of x, so that the acquired data can be represented as

    \[ y = S \odot (T x) = T_S x . \]

    We want to find u ∈ R^{N1×N2} that solves

    \[ \min_{u \in \mathbb{R}^{N_1 \times N_2}} F(u) \quad \text{subject to} \quad T_S u = y . \]

    In the case of input data perturbed by additive white Gaussian noise with standard deviation σ, the acquired data become

    \[ y = S \odot (T x + e) = T_S x + e_S , \]

    and the problem can be cast as

    \[ \min_{u \in \mathbb{R}^{N_1 \times N_2}} F(u) \quad \text{subject to} \quad \| T_S u - y \|_2^2 \le \varepsilon^2 , \qquad \varepsilon^2 = \sigma^2 \big( M + 2\sqrt{2M} \big), \]

    where M is the number of acquired measurements.
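    As a concrete illustration of the measurement model above, the following Python sketch builds a random binary mask S and applies the subsampled orthogonal transform T_S and its adjoint. It assumes T is the orthonormal 2-D FFT (consistent with the subsampled frequency acquisitions used in the experiments); the function names and the uniformly random mask are illustrative choices, not the authors' implementation.

import numpy as np

# Illustrative sketch of the measurement model y = S ⊙ (T x) = T_S x,
# assuming T is the orthonormal 2-D FFT and S a uniformly random binary mask.

def make_mask(shape, sampling_ratio, seed=0):
    """Random binary mask S keeping roughly a fraction of the transform coefficients."""
    rng = np.random.default_rng(seed)
    return (rng.random(shape) < sampling_ratio).astype(float)

def T_S(x, S):
    """Forward operator: masked orthonormal 2-D FFT of the image x."""
    return S * np.fft.fft2(x, norm="ortho")

def T_S_adjoint(w, S):
    """Adjoint operator (restricted to real images): zero-fill the unobserved coefficients and invert."""
    return np.fft.ifft2(S * w, norm="ortho").real

# Example usage: acquire about 30% of the Fourier coefficients of a test image.
# x = np.random.rand(256, 256); S = make_mask(x.shape, 0.3); y = T_S(x, S)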

    To overcome the difficulty of solving the constrained problem directly, we use the well-known penalization approach, which considers a sequence of unconstrained minimization subproblems of the form

    \[ \min_{u \in \mathbb{R}^{N_1 \times N_2}} \lambda_k F(u) + \frac{1}{2} \| y - T_S u \|_2^2 . \]

    The convergence of the penalization method to the solution of the original constrained problem has been established under very mild conditions. Unfortunately, in general, using very small penalization parameter values makes the unconstrained subproblems very ill-conditioned and difficult to solve. In the present context we do not have this limitation, since we approach these problems implicitly, thus avoiding the need to deal with ill-conditioned linear systems. This is obtained by evaluating an approximation of the solution of each subproblem iteratively, using an operator splitting strategy (frequently considered in the literature to solve L1-regularized problems) and taking advantage of the particular structure of the resulting problems. For a fixed value of the penalization parameter λ, the resulting two-step scheme is

    \[ v_n = u_n + T_S^T ( y - T_S u_n ), \]

    \[ u_{n+1} = \arg\min_{u \in C} \; \lambda F(u) + \frac{1}{2} \| u - v_n \|_2^2 , \]

    where C denotes the set of images satisfying the bound constraints on the pixel values.

    The proposed penalized splitting approach therefore corresponds to an algorithm whose structure is characterized by a two-level iteration: an outer loop, which progressively diminishes the penalization parameter in order to obtain convergence to the global minimum, and an inner loop which, using the two-step approach, iteratively minimizes the penalization functional for the given value of λ. The general scheme of the bound constrained algorithm is the following.

    Step A-0: Initialization.
      Given F(·), y, T_S, τ > 0, 0 < r < 1, Toll ≥ 0, and λ_min, λ_0 such that 0 < λ_min ≤ λ_0.
      Set k = 0, u_{0,0} = 0 and λ_{0,0} = λ_0.

    Step A-1: Outer iterations.
      While (λ_{k,0} > λ_min and ||T_S u_{k,0} - y||_2 > Toll)

        Step B-0: Start the inner iterations: i = 0.
        Step B-1:
          Updating step:
            v_{k,i} = u_{k,i} + T_S^T ( y - T_S u_{k,i} )
          Constrained nonlinear filtering step:
            u_{k,i+1} = arg min_{u ∈ C} λ_{k,i} F(u) + (1/2) ||u - v_{k,i}||_2^2
          Convergence test:
            If |F(u_{k,i+1}) - F(u_{k,i})| / F(u_{k,i+1}) > τ λ_{k,i}, set i = i + 1, λ_{k,i} = λ_{k,i-1}, and go to Step B-1;
            otherwise go to Step A-2.

        Step A-2: Outer iteration updating.
          k = k + 1, λ_{k,0} = r λ_{k-1,i}, u_{k,0} = u_{k-1,i+1}.
      endwhile

      Terminate with u_{k,0} as an approximation of x.
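    To make the two-level structure explicit, the following Python sketch mirrors Steps A-0 to A-2, assuming, as in the earlier operator sketch, orthonormal 2-D FFT measurements of a real image and a box constraint set C on the pixel values. The constrained nonlinear filtering step is delegated to a generic denoise(v, lam) callback standing in for the nonlinear filtering described above; all parameter defaults and names are illustrative, not taken from the paper.

import numpy as np

def total_variation(u):
    """Anisotropic discrete TV, playing the role of the objective F(u)."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def nfcs2d_sketch(y, S, denoise, lam0=1.0, lam_min=1e-4, r=0.5, tau=1e-3,
                  toll=1e-6, bounds=(0.0, 1.0)):
    """Illustrative two-level penalized splitting scheme (Steps A-0 to A-2).

    y, S    : masked frequency-domain measurements and binary sampling mask
    denoise : callable u = denoise(v, lam), standing in for the constrained
              nonlinear filtering step argmin_u lam*F(u) + 0.5*||u - v||_2^2
    """
    fwd = lambda u: S * np.fft.fft2(u, norm="ortho")         # T_S u
    adj = lambda w: np.fft.ifft2(S * w, norm="ortho").real   # T_S^T w (real images)

    u, lam = np.zeros(S.shape), lam0                          # Step A-0
    while lam > lam_min and np.linalg.norm(fwd(u) - y) > toll:    # Step A-1
        F_old = np.inf
        while True:                                           # Step B-1 (inner loop)
            v = u + adj(y - fwd(u))                           # updating step
            u = np.clip(denoise(v, lam), *bounds)             # filtering + bound constraint
            F_new = total_variation(u)
            if abs(F_new - F_old) / max(F_new, 1e-12) <= tau * lam:
                break                                         # convergence test satisfied
            F_old = F_new
        lam *= r                                              # Step A-2: shrink lambda
    return u

    In the noisy setting, the tolerance Toll passed to such a scheme would naturally be related to the bound ε introduced above rather than set to a small constant.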

  4. Numerical Experiment

    In this section we demonstrate the effectiveness of the proposed image reconstruction algorithm by reporting several numerical experiments that highlight its reconstruction capabilities, stability, and speed. In all the experiments we have used simulated data. This choice is motivated by the need to give an objective, quantitative evaluation of the effectiveness of the proposed algorithm in terms of reconstructed image quality; in fact, visual inspection of the reconstructed images alone is not really enough to compare the performance of different reconstruction algorithms.

    The images used in our experimentation, shown in Fig. 1, are the Shepp-Logan Phantom, the typical test image of the computed tomography literature and an example of a sparse gradient image, and the Head image, a compressible image widely used for comparisons between different methods.

    Fig. 1: Test images: (a) Phantom; (b) Head image (size 256×256).

    All the experiments are performed using subsampled frequency acquisitions but, in order to demonstrate the capabilities of our nonlinear filtering method, we have tested it using two different acquisition strategies, corresponding to the masks given in Fig. 2.

    Fig. 2: Acquisition masks: Mask 1 and Mask 2.

    The image quality has been evaluated using the PSNR value, defined as

    \[ \mathrm{PSNR} = 20 \log_{10} \frac{R}{\mathrm{RMSE}}, \qquad \mathrm{RMSE} = \frac{\mathrm{Error}(u, x)}{\sqrt{N_1 N_2}}, \]

    where R > 0 is the maximum value of the image gray level range and

    \[ \mathrm{Error}(u, x) = \Bigg( \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \left( u_{i,j} - x_{i,j} \right)^2 \Bigg)^{1/2} . \]
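    The quality measures above can be computed with a few lines of code; the following helper is an illustrative sketch (the default gray level range R is an assumption, not a value fixed by the paper).

import numpy as np

def psnr(u, x, R=255.0):
    """PSNR = 20*log10(R / RMSE), with RMSE = Error(u, x) / sqrt(N1*N2)."""
    error = np.linalg.norm(u - x)          # Error(u, x) = sqrt(sum_ij (u_ij - x_ij)^2)
    rmse = error / np.sqrt(u.size)         # u.size = N1 * N2
    return 20.0 * np.log10(R / rmse)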

  5. Experimental Results

    The PSNR values that we give in the different experiments refer to the first, minimum energy reconstruction. As already mentioned, we have considered only the sparse gradient case, and in all our experiments we have used both the anisotropic and the isotropic discrete approximations of the total variation. Since we have experimentally seen that it is not important to find a very accurate solution of the total variation problem, for all the experiments we have fixed to four the number of iterations of the digital total variation filter that yields the isotropic estimate. This choice represents a good compromise between accuracy and efficiency: a higher iteration number increases the computing time without a real improvement in the reconstruction. The anisotropic estimate is obtained using just one iteration of the recursive median filter. Since both the considered filtering algorithms deal only with unconstrained minimization, in all the experiments we have used the bound constrained version of NFCS-2D. We remark that, while the two methods substantially perform very well in all the experiments, we have noted some differences in their behavior that justify the use of both estimates.
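    The two estimates can be plugged into the earlier nfcs2d_sketch as denoising callbacks. The sketch below uses off-the-shelf stand-ins (Chambolle's TV denoiser from scikit-image for the isotropic estimate and a plain 3×3 median filter from SciPy for the anisotropic one); these are not the digital total variation filter and recursive median filter used in the paper, only readily available substitutes for experimentation.

import numpy as np
from scipy.ndimage import median_filter
from skimage.restoration import denoise_tv_chambolle

def isotropic_step(v, lam):
    """Stand-in for the isotropic TV filtering step: Chambolle's TV denoiser."""
    return denoise_tv_chambolle(v, weight=lam)

def median_step(v, lam):
    """Stand-in for the anisotropic estimate: one 3x3 median filter pass (lam unused)."""
    return median_filter(v, size=3)

# Usage with the earlier sketch (x a test image in [0, 1], S a binary mask):
# y = S * np.fft.fft2(x, norm="ortho")
# u_iso = nfcs2d_sketch(y, S, isotropic_step)
# u_ani = nfcs2d_sketch(y, S, median_step)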

    Simulations are performed using MATLAB, which possesses excellent graphics and matrix handling capabilities and provides a dedicated image processing toolbox that simplified many of the tasks encountered in this work. The results shown in Tables 1 and 2 are obtained by running a MATLAB implementation of the bound constrained version of NFCS-2D on a PC with an Intel Pentium 4 HT 3.4 GHz processor and 3 GB of RAM under Windows XP.

    Phantom image reconstruction (panels: original image, generated mask, sparse image, reconstructed image, filtered image (NFCSI), filtered image (NFCSA)).

    Table 1: Comparison of PSNR, error rate, and CPU time for the Shepp-Logan Phantom with Mask 1

    Method      PSNR      Error Rate   CPU Time (sec)
    NFCS-2D     63.7432   32708.24     12.59
    NFCS-2DI    59.0103   18729.18     9.869
    NFCS-2DA    137.3919  2.2564       7.925

    Head image reconstruction (panels: original image, generated mask, sparse image, reconstructed image, filtered image (NFCSI), filtered image (NFCSA)).

    Table 2: Comparison of PSNR, error rate, and CPU time for the Head image with Mask 2

    Method      PSNR      Error Rate   CPU Time (sec)
    NFCS-2D     62.035    39662.2      11.44
    NFCS-2DI    57.42     22479.2      24.007
    NFCS-2DA    129.395   12.67        5.66

  6. Conclusion

For the solution of the compressed sensing reconstruction problem we have proposed an efficient iterative algorithm, based upon a penalized splitting approach and an adaptive nonlinear filtering strategy, and its convergence property has been established. The capabilities of NFCS-2D, in terms of accuracy, stability, and speed, are illustrated by the results of several numerical experiments and comparisons with a state-of-the-art algorithm. We remark that, even if we have analyzed the sparse gradient case with undersampled frequency acquisitions, our approach is completely general and works for different kinds of measurements and different choices of the function F. As future work, the approach can also be extended to 3-D reconstruction, i.e., color images.

REFERENCES

[1] L. Montefusco, D. Lazzaro, and S. Papi, "Nonlinear filtering for sparse signal recovery from incomplete measurements," IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2494-2502, Jul. 2009.

[2] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for l1-minimization with applications to compressed sensing," SIAM J. Imag. Sci., vol. 1, pp. 143-168, 2008.

[3] T. F. Chan, S. Osher, and J. Shen, "The digital TV filter and nonlinear denoising," IEEE Trans. Image Process., vol. 10, no. 2, pp. 231-241, Feb. 2001.

[4] N. C. Gallagher, Jr. and G. W. Wise, "A theoretical analysis of the properties of median filters," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-29, no. 6, pp. 1136-1141, Dec. 1981.

[5] J. Darbon and M. Sigelle, "A fast and exact algorithm for total variation minimization," in Proc. 2nd Iberian Conf. Pattern Recognit. Image Anal., 2005, vol. 3522, pp. 351-359.

[6] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D, vol. 60, pp. 259-268, 1992.

[7] S. Kim, K. Koh, M. Lustig, and S. Boyd, "An efficient method for compressed sensing," in Proc. IEEE Int. Conf. Image Process., 2007, vol. 3, pp. 117-120.

[8] J. Bioucas-Dias and M. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Process., vol. 16, no. 12, pp. 2992-3004, Dec. 2007.

[9] J. Cai, S. Osher, and Z. Shen, "Linearized Bregman iterations for compressed sensing," Math. Comput., to be published.

[10] E. J. Candès, "Compressive sampling," in Proc. Int. Congr. Math., Madrid, Spain, 2006, vol. 3, pp. 1433-1452.

[11] E. J. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math., vol. 59, no. 8, pp. 1207-1223, 2006.
