- Open Access
- Authors : Madhura Bhat, Jayashubha J
- Paper ID : IJERTCONV3IS19161
- Volume & Issue : ICESMART – 2015 (Volume 3 – Issue 19)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
An Efficient Method of Fingerprint Compression based on Sparse Algorithm
1Department of CSE, AMC Engineering College,
2Department of CSE, AMC Engineering College, Bangalore
Abstract: A new fingerprint compression technique based on sparse representation is introduced. An overcomplete dictionary is constructed from a set of fingerprint patches, allowing each patch to be represented as a sparse linear combination of dictionary atoms. The first step of the algorithm is to construct a dictionary from predefined fingerprint image patches. For any new fingerprint image, its patches are represented over the dictionary by computing an l1-minimization, then quantized using the Lloyd algorithm and encoded with static arithmetic coding. The effect of various factors on the compression results is also considered. The algorithm was tested on a variety of fingerprint images. The experimental results demonstrate that the proposed algorithm compares favorably with several existing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios, and that it is accurate in preserving the minutiae.
Keywords: fingerprint compression, sparse representation, l0/l1-minimization, JPEG 2000, JPEG, WSQ, PSNR
Identification of persons by biometric characteristics is an important technology because biometric identifiers cannot be shared and represent an individual's unique bodily identity. Among the many biometric recognition technologies, fingerprint recognition is very popular for personal identification due to its uniqueness, universality, collectability and invariance. Large volumes of fingerprint images are collected and stored every day in a wide range of applications such as forensics and access control. In 2005, the FBI fingerprint card archive contained over 900 million items, and the archive was growing at a rate of 20,000 to 40,000 new cards per day. Such large volumes of data consume a large amount of memory, and fingerprint image compression is a necessary technique for addressing this issue. In general, image compression technologies can be classed as either lossless or lossy.
Lossless compression techniques allow the original images to be reconstructed exactly from the compressed data. They are used in cases where it is important that the original and the decompressed data be identical. Avoiding distortion limits their compression efficiency, so when slight distortion is acceptable, lossless techniques are often applied only to the output coefficients of a lossy compression stage. Lossy compression technologies usually transform a given image into another domain, then quantize and encode the result. Over the last few decades, transform-based image compression has been developed and extensively researched, and several standards have emerged. Two of the most common transforms are the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The DCT-based encoder can be viewed as compression of a stream of 8 × 8 image blocks; this transform was adopted in JPEG. The JPEG lossy compression scheme has many advantages, such as simplicity, universality and availability. However, it performs poorly at very low bit-rates, mainly due to the block-based DCT scheme.
For this reason, the JPEG expert group developed a new wavelet-based compression algorithm for digital still images, namely JPEG 2000. DWT-based algorithms include three steps: a DWT computation of the normalized image, quantization of the DWT coefficients, and lossless coding of the quantized coefficients. Compared to JPEG, JPEG 2000 provides many features that support scalable and interactive access to large-sized images; it also allows extraction of different resolutions, pixel fidelities, regions of interest, and components. There are several other DWT-based algorithms, such as the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. The above algorithms are suited to general image compression, but there are also compression algorithms targeted specifically at fingerprint images. The most common is Wavelet Scalar Quantization (WSQ), which became the FBI standard for the compression of 500 dpi fingerprint images. Inspired by the WSQ algorithm, several wavelet-packet-based fingerprint compression algorithms have been developed. In addition to WSQ, there are other algorithms for fingerprint compression, such as those based on the Contourlet Transform (CT). These algorithms share a common shortcoming: they lack the ability to learn. Fingerprint images that are not compressed well now will not be compressed well later either. In this proposal, a new approach based on sparse representation is given; the proposed method gains this ability by updating the dictionary. The process is as follows: construct a matrix whose columns represent features of the fingerprints, referred to as the dictionary, whose columns are called atoms; for a given fingerprint, divide it into small equal-sized blocks, called patches, whose number of pixels equals the size of an atom; use sparse representation to obtain the coefficients; quantize the coefficients; finally, encode the coefficients and other related information using lossless coding techniques.
In most of the literature, the evaluation of compression performance is limited to the Peak Signal-to-Noise Ratio (PSNR); the effect on actual fingerprint matching or recognition is not investigated. In this proposal, we take it into consideration. In most Automatic Fingerprint Identification Systems (AFIS), the main features used to match two fingerprint images are the minutiae (ridge endings and bifurcations). Hence, this paper focuses on the difference in minutiae between the pre- and post-compression images.
Early signs of the core ideas of sparse representation appeared in a pioneering work that introduced the concept of dictionaries and put forward core ideas, such as greedy pursuit, which later became essential to the field. Later, S. S. Chen, D. Donoho and M. Saunders introduced a similar pursuit technique that uses the l1-norm for sparsity. Surprisingly, the proper solution can often be obtained by solving a convex linear programming task. Since these two seminal works, researchers have contributed a great deal to the field, with activity spread over various disciplines. There are already many successful applications in areas such as face recognition, image denoising, object detection and super-resolution image reconstruction. In the work on robust face recognition, the authors proposed a general classification algorithm for object recognition based on a sparse representation computed by l1-minimization. Their algorithm performs better than alternatives such as nearest neighbor, nearest subspace and linear SVM; moreover, the new framework provided fresh insights into face recognition: with sparsity properly harnessed, the choice of features becomes less important than the number of features. This phenomenon is common in sparse representation; it appears not only in face recognition but in various other situations. Using features based on sparse and redundant representations over an over-complete dictionary, the authors of the denoising work designed an algorithm that removes zero-mean white homogeneous Gaussian additive noise from a given image. That work shows that the content of the dictionary is of extreme importance, in two respects: the dictionary must correctly reflect the content of the images, and it must be large enough that the given image can be represented sparsely. These two points are vital for methods based on sparse representation.
Sparse representation has already seen some applications in image compression. In one such work, the experimental results show good performance, but the compression efficiency is consistently lower than that of JPEG 2000; when more natural images are tested, the gap to state-of-the-art compression technologies becomes even more obvious. Another work reports success compared with several well-known compression techniques; however, the authors emphasize that an essential pre-processing step for the method is an image alignment procedure, which is hard to carry out in practical applications. There are other algorithms for fingerprint image compression under a linear-model assumption. Two papers showed how to exploit the data-dependent nature of Independent Component Analysis (ICA) to compress special classes of images (face and fingerprint images). Their experimental results suggested that, for a special image class, it is not worthwhile to use over-complete dictionaries. In this paper, we show that fingerprints can be compressed better under a properly constructed over-complete dictionary. Another work put forward a fingerprint compression algorithm based on Nonnegative Matrix Factorization (NMF). Although NMF has some successful applications, it also has shortcomings: in some cases, non-negativity is not necessary. For example, in an image compression algorithm, what matters is reducing the minutiae difference between pre- and post-compression rather than preserving nonnegativity.
Besides, methods based on sparse representation do not work very well for general image compression. Some of the reasons are as follows: the contents of general images vary so greatly that there is no dictionary under which every image can be represented sparsely; and even if there were one, its size would be too large to compute with efficiently. For instance, deformation, rotation, translation, cropping and noise can all make the required dictionary excessively large. Therefore, sparse representation should be employed in special image compression fields where these shortcomings do not arise. Fingerprint image compression is one of them. Below, we give the reasons for, and an account of, the feasibility of fingerprint compression based on sparse representation.
MODEL OF SPARSE REPRESENTATION
Given a dictionary A = [a1, a2, . . . , aN] ∈ R^(M×N), any new sample y ∈ R^(M×1) is assumed to be representable as a linear combination of a few columns of A:

y = Ax, where y ∈ R^(M×1), A ∈ R^(M×N) and x = [x1, x2, . . . , xN]^T ∈ R^(N×1).

This is the only prior knowledge assumed about the dictionary in the sparse algorithm; later we will see that this property can be ensured by constructing the dictionary properly.
Obviously, the system y = Ax is underdetermined when M < N, so its solution is not unique. According to the assumption, however, the representation x is sparse. A proper solution can be obtained by solving the following optimization problem:
(l0): min ||x||_0 s.t. Ax = y.
The solution of this optimization problem is expected to be very sparse, namely ||x||_0 << N. The notation ||x||_0 counts the nonzero entries of x; it is not actually a norm, but without ambiguity we still call it the l0-norm. The compression of y can then be achieved by compressing x: first, record the locations of its non-zero entries and their magnitudes; second, quantize and encode them. Next, techniques for solving the optimization problem are given.
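The two-step compression of x just described (record the non-zero locations and magnitudes, then quantize and encode them) can be sketched in a few lines; the vector `x` below is a made-up example, not data from the paper:

```python
import numpy as np

# A coefficient vector that is mostly zero, as sparse representation assumes.
x = np.array([0.0, 1.3, 0.0, 0.0, -0.7, 0.0, 2.1, 0.0])

# Step 1: record the locations of the non-zero entries and their magnitudes.
locations = np.flatnonzero(x)   # indices of non-zero coefficients
magnitudes = x[locations]       # their values

# Step 2 (quantization and entropy coding) would operate on these two
# arrays; here we only verify that they suffice to reconstruct x exactly.
x_rec = np.zeros_like(x)
x_rec[locations] = magnitudes
```

Only the two short arrays need to be stored, which is where the compression gain comes from when x is sparse.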
It is a natural idea that the l0 problem can be approximated by solving the following optimization problem:
(lp): min ||x||_p s.t. Ax = y,
where p > 0 and ||x||_p = (Σ_i |x_i|^p)^(1/p).
Obviously, the smaller p is, the closer the solutions of the lp problem are to those of the l0 problem. This is because the magnitude of each entry of x matters less and less as p becomes small; what matters is whether the entry is zero or not. Therefore, p should be chosen as small as possible. However, the optimization problem is not convex if 0 < p < 1, which makes p = 1 the ideal choice, namely the following problem:
(l1): min ||x||_1 s.t. Ax = y.
Recent developments in sparse coding and compressed sensing reveal that the solution of the l1 problem equals the solution of the l0 problem when the optimal solution is sparse enough. The l1 problem can be solved by linear programming methods.
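The l1 problem can indeed be cast as a linear program via the standard split x = u − v with u, v ≥ 0. The sketch below is an illustrative reformulation (not the authors' implementation), solved with SciPy's `linprog`; the matrix A and the sparse signal are synthetic:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N = 10, 20
A = rng.standard_normal((M, N))

# Build a synthetic 2-sparse signal and its measurements y = A x.
x_true = np.zeros(N)
x_true[[3, 11]] = [1.5, -2.0]
y = A @ x_true

# (l1): min ||x||_1 s.t. Ax = y, rewritten with x = u - v, u, v >= 0:
#   min 1^T u + 1^T v   s.t.  [A, -A][u; v] = y,  u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
```

Because x_true is itself feasible, the LP optimum can never have a larger l1-norm than x_true, and when the signal is sparse enough the LP recovers it.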
In this section, we give the details of how sparse representation is used to compress fingerprints: how to construct the dictionary, how to compress a given fingerprint, and the quantization and coding steps, together with an analysis of the algorithm's complexity.
As mentioned above, the size of the dictionary would be too large if it had to contain as much information as possible. Therefore, to obtain a dictionary of modest size, preprocessing is required. Under translation, rotation and noise, fingerprints of the same finger may look very different. A first thought is to pre-align each fingerprint image independently of the others. The most common pre-alignment technique is to translate and rotate the fingerprint according to the position of its core point. Unfortunately, reliable detection of the core is difficult in fingerprint images of poor quality, and even if the core is correctly detected, the dictionary may still be too large because a whole fingerprint image is itself large. Compared with ordinary natural images, fingerprint images have a much simpler structure: they are composed only of ridges and valleys, and in local regions they look similar. Hence, to solve both problems, the whole fingerprint image is sliced into square, equal-sized, non-overlapping patches. For these small patches there are no problems of translation and rotation, and the size of the dictionary stays moderate because the patches themselves are small.
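Slicing an image into non-overlapping square patches, with each patch flattened into one column as the dictionary layout requires, can be sketched as follows (the helper name and the toy 8 × 8 image are ours, for illustration only):

```python
import numpy as np

def to_patches(img, p):
    """Slice a grayscale image into non-overlapping p x p patches.

    The image sides are assumed divisible by p (pad beforehand otherwise);
    each patch becomes one column of the returned matrix, matching the
    'one patch per dictionary atom' layout described in the text.
    """
    h, w = img.shape
    patches = (img.reshape(h // p, p, w // p, p)
                  .transpose(0, 2, 1, 3)   # group block indices together
                  .reshape(-1, p * p))     # one flattened patch per row
    return patches.T                       # shape (p*p, number_of_patches)

img = np.arange(64, dtype=float).reshape(8, 8)
P = to_patches(img, 4)   # 4 patches of 16 pixels each
```

The first column of `P` is exactly the top-left 4 × 4 block of the image, read row by row.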
The dictionary is obtained from a training dataset: choose whole fingerprint images and cut them into fixed-size square patches.
The first patch is added to the dictionary, which is initially empty.
Then we check whether the next patch is sufficiently similar to any patch already in the dictionary. If it is, the next patch is tested; otherwise, the patch is added to the dictionary. Here, the similarity between two patches is measured by solving the optimization problem:
min over t of ||P1 − tP2||_F, where ||·||_F is the Frobenius norm, P1 and P2 are the corresponding matrices of the two patches p1 and p2, and t, the variable of the optimization problem, is a scaling factor.
Repeat step 2 until all patches have been tested.
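The greedy construction above can be sketched as follows. The scaling problem min_t ||p1 − t·p2|| has the closed-form solution t* = ⟨p1, p2⟩/⟨p2, p2⟩, which is used here as the similarity measure; the threshold `tau` and the toy patches are illustrative choices, not values from the paper:

```python
import numpy as np

def scaled_distance(p1, p2):
    # Solve min_t ||p1 - t*p2|| in closed form: t* = <p1,p2> / <p2,p2>.
    t = np.dot(p1, p2) / np.dot(p2, p2)
    return np.linalg.norm(p1 - t * p2)

def build_dictionary(patches, tau):
    """Greedy construction: a patch joins the dictionary only if no
    existing atom approximates it (up to scaling) within tolerance tau."""
    atoms = [patches[0]]                       # first patch always enters
    for p in patches[1:]:
        if all(scaled_distance(p, a) > tau for a in atoms):
            atoms.append(p)                    # sufficiently novel patch
    return np.array(atoms).T                   # atoms as columns

rng = np.random.default_rng(1)
base = rng.standard_normal(16)
# Three flattened 4x4 patches: one base, one scaled copy, one unrelated.
patches = [base, 2.0 * base, rng.standard_normal(16)]
D = build_dictionary(patches, tau=1e-6)
```

The scaled copy of the first patch is rejected (its scaled distance is zero), so only two atoms survive.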
Given a new fingerprint, slice it into square patches of the same size as the atoms. The patch size directly affects the compression efficiency.
The algorithm becomes more efficient as the patch size increases; however, the computational complexity and the size of the dictionary also increase rapidly, so a proper size must be chosen. In addition, to fit the patches to the dictionary better, the mean of each patch is calculated and subtracted from the patch. After that, the sparse representation of each patch is computed by solving the l1 problem; coefficients whose absolute values are below a given threshold are treated as zero. For each patch, four kinds of information need to be recorded: the mean value, the number of atoms used, the coefficients, and their locations. Tests show that many image patches require few coefficients.
Consequently, compared with using a fixed number of coefficients per patch, this method reduces the coding complexity and improves the compression ratio.
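The per-patch bookkeeping described above (subtract the mean, sparse-code the residual, threshold small coefficients, record four items) can be sketched as follows; `solve_l1` is a hypothetical stand-in for the l1 solver discussed earlier, and the toy patch and threshold are our own illustrative values:

```python
import numpy as np

def encode_patch(patch, solve_l1, threshold=0.05):
    """Record the four items the text lists for one patch: its mean,
    the number of atoms used, the coefficients, and their locations.
    `solve_l1` is a placeholder for the l1 solver over the dictionary."""
    mean = patch.mean()
    x = solve_l1(patch - mean)        # sparse code of the zero-mean patch
    x[np.abs(x) < threshold] = 0.0    # small values are treated as zero
    locations = np.flatnonzero(x)
    return {"mean": mean,
            "n_atoms": len(locations),
            "coefficients": x[locations],
            "locations": locations}

# Toy check with a hand-made "solver" returning a fixed coefficient vector.
record = encode_patch(np.array([1.0, 2.0, 3.0, 2.0]),
                      solve_l1=lambda r: np.array([0.9, 0.01, 0.0, -0.4]))
```

After thresholding, only two coefficients survive, so only two values and two indexes need to be entropy-coded for this patch.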
Encoding and Quantization
Entropy coding of the atom number of each patch, the mean value of each patch, the coefficients and the indexes is carried out by static arithmetic coders. The atom number of each patch is coded separately, as is the mean value. The coefficients are quantized using the Lloyd algorithm, learnt off-line from coefficients obtained by coding the training set over the dictionary. The first coefficient of each block is quantized with a larger number of bits than the other coefficients and entropy-coded using a separate arithmetic coder. The model for the indexes is estimated from source statistics obtained off-line from the training set; the first index and the remaining indexes are coded by the same arithmetic encoder. In the following experiments, the first coefficient is quantized with 6 bits and the other coefficients with 4 bits.
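Training a scalar Lloyd quantizer off-line on a pool of coefficients, as described above, can be sketched as follows; the Gaussian training samples and the 16-level (4-bit) codebook size are illustrative assumptions, not the paper's data:

```python
import numpy as np

def lloyd_quantizer(samples, n_levels, iters=50):
    """Train a scalar Lloyd (Lloyd-Max) quantizer on coefficients gathered
    off-line; returns the codebook of reconstruction levels."""
    # Initialize the levels spread evenly over the sample range.
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        # Assign each sample to its nearest level...
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        # ...then move each level to the centroid of its assigned samples.
        for k in range(n_levels):
            if np.any(idx == k):
                levels[k] = samples[idx == k].mean()
    return levels

def quantize(value, levels):
    # Map a coefficient to its nearest codebook level.
    return levels[np.argmin(np.abs(levels - value))]

rng = np.random.default_rng(2)
training = rng.standard_normal(1000)                # off-line samples
codebook = lloyd_quantizer(training, n_levels=16)   # 4-bit quantizer
q = quantize(0.37, codebook)
```

Only the 4-bit index of the chosen level is entropy-coded; the decoder looks up the same off-line codebook to reconstruct the coefficient.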
The system architecture is given below in Figure 1:
The sparse coding algorithm is implemented in Matlab and tested on a dataset of 80 images of bad, average and good quality. The experimental results show that the compressed images retain most of the minutiae for all quality levels. The highest compression ratio achieved is 36.4%.
The selected fingerprint image is shown in Figure 2. This fingerprint image is divided into 64 patches as shown in Figure 3. The original and the compressed fingerprint image are shown in Figure 4.
Fig.2 Fingerprint Image
Fig. 3 Divided into 64 equal sized square patches
Fig.4 Original and compressed fingerprint images
A new compression algorithm adapted to fingerprint images has been introduced and developed. Despite its complexity, the proposed algorithm compares favorably with existing more sophisticated algorithms such as WSQ, especially at high compression ratios. Due to its patch-by-patch processing mechanism, however, the algorithm has a higher implementation complexity.
The experimental results show that the block effect of the algorithm is much less serious than that of JPEG or K-SVD. One of the main challenges in developing compression algorithms for fingerprints is the need to preserve the minutiae used in identification. The experimental results show that the algorithm holds most of the minutiae robustly during both compression and reconstruction, achieving a highest compression ratio of 36.9% with an accuracy of 92%.
The authors would like to thank the Department of CSE, AMC Engineering College, and the anonymous reviewers for their valuable comments and suggestions, which helped greatly improve the quality of this paper.
D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd ed. London, U.K.: Springer-Verlag, 2009.
N. Ahmed, T. Natarajan, and K. R. Rao, Discrete cosine transform, IEEE Trans. Comput., vol. C-23, no. 1, pp. 90–93, Jan. 1974.
C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer. Upper Saddle River, NJ, USA: Prentice- Hall, 1998.
W. Pennebaker and J. Mitchell, JPEGStill Image Compression Standard. New York, NY, USA: Van Nostrand Reinhold, 1993.
M. W. Marcellin, M. J. Gormish, A. Bilgin, and M. P. Boliek, An overview of JPEG-2000, in Proc. IEEE Data Compress. Conf., Mar. 2000, pp. 523–541.
A. Skodras, C. Christopoulos, and T. Ebrahimi, The JPEG 2000 still image compression standard, IEEE Signal Process. Mag., vol. 18, no. 5, pp. 36–58, Sep. 2001.
T. Hopper, C. Brislawn, and J. Bradley, WSQ gray-scale fingerprint image compression specification, Federal Bureau of Investigation, Criminal Justice Information Services, Washington, DC, USA, Tech. Rep. IAFIS-IC-0110-V2, Feb. 1993.
C. M. Brislawn, J. N. Bradley, R. J. Onyshczak, and T. Hopper, FBI compression standard for digitized fingerprint images, Proc. SPIE, vol. 2847, pp. 344–355, Aug. 1996.
A. Said and W. A. Pearlman, A new, fast, and efficient image codec based on set partitioning in hierarchical trees, IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 3, pp. 243–250, Jun. 1996.
R. Sudhakar, R. Karthiga, and S. Jayaraman, Fingerprint compression using contourlet transform with modified SPIHT algorithm, IJECE Iranian J. Electr. Comput. Eng., vol. 5, no. 1, pp. 3–10, 2005.
S. G. Mallat and Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415, Dec. 1993.
S. S. Chen, D. Donoho, and M. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., vol. 43, no. 1, pp. 129–159, 2001.
J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210–227, Feb. 2009.
M. Elad and M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736–3745, Dec. 2006.
S. Agarwal and D. Roth, Learning a sparse representation for object detection, in Proc. Eur. Conf. Comput. Vis., 2002, pp. 113–127.
J. Yang, J. Wright, T. Huang, and Y. Ma, Image super-resolution as sparse representation of raw image patches, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1–8.
K. Skretting and K. Engan, Image compression using learned dictionaries by RLS-DLA and compared with K-SVD, in Proc. IEEE ICASSP, May 2011, pp. 1517–1520.
O. Bryt and M. Elad, Compression of facial images using the K-SVD algorithm, J. Vis. Commun. Image Represent., vol. 19, no. 4, pp. 270–282, 2008.
Y. Y. Zhou, T. D. Guo, and M. Wu, Fingerprint image compression algorithm based on matrix optimization, in Proc. 6th Int. Conf. Digital Content, Multimedia Technol. Appl., 2010, pp. 14–19.
A. J. Ferreira and M. A. T. Figueiredo, Class-adapted image compression using independent component analysis, in Proc. Int. Conf. Image Process., vol. 1, 2003, pp. 625–628.
A. J. Ferreira and M. A. T. Figueiredo, On the use of independent component analysis for image compression, Signal Process. Image Commun., vol. 21, no. 5, pp. 378–389, 2006.
P. Paatero and U. Tapper, Positive matrix factorization: A nonnegative factor model with optimal utilization of error estimates of data values, Environmetrics, vol. 5, no. 1, pp. 111–126, 1994.
D. D. Lee and H. S. Seung, Learning the parts of objects by nonnegative matrix factorization, Nature, vol. 401, pp. 788–791, Oct. 1999.
E. Amaldi and V. Kann, On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems, Theoretical Comput. Sci., vol. 209, nos. 1–2, pp. 237–260, 1998.
G. Shao, Y. Wu, Y. A, and X. Liu, Fingerprint compression based on sparse representation, IEEE Trans. Image Process., vol. 23, no. 2, Feb. 2014.