Comparative Study of Different Low Level Feature Extraction Techniques

DOI: 10.17577/IJERTV3IS041589


Rupali Sharma1, Er. Navdeep Singh2

1Research Scholar, Master of Technology, Department of Computer Engineering, Punjabi University, Patiala

2Assistant Professor, Department of Computer Engineering, Punjabi University, Patiala

Abstract: Feature extraction detects and isolates the fundamental portions of a digital image, simplifying a complex input data set into a reduced set of features. It is one of the most crucial steps in multimedia processing and has practical applications in virtually all areas of pattern recognition and image processing. In computer vision, how to extract ideal features that reflect the intrinsic content of an image as completely as possible is still a challenging problem, yet over the last decades comparatively little research has paid attention to it. Dimension reduction, automatic exploratory data analysis and data visualization are three reasons why feature extraction is an important problem in predictive modeling and modern data analysis. Feature extraction or selection is also an important step in pattern classification, pattern recognition, data mining, machine learning, etc. In this paper, we review various developments in image feature extraction and provide a comparative study of different low-level feature extraction techniques based on color, texture and shape features.

Keywords: Feature Extraction, Spectral texture, Spatial texture, Color Correlogram, Contourlet Transform, Wavelet Transform.

I. INTRODUCTION

The proverb "A picture is worth a thousand words" is attributed to the Chinese philosopher Confucius, about 2,500 years ago; the essence of these words is now completely understood [2]. Based on what we see and our background knowledge, we are able to tell a story from a picture. A basic question follows: can a computer program discover semantic concepts from images? The answer is yes. The first step for a computer program in semantic understanding is to extract efficient and productive features and to develop models from them, instead of relying on human background knowledge. In image processing, therefore, how to extract low-level image features, and what kind of features to extract, play a crucial role. The three common visual features are color, texture and shape, and most image retrieval and annotation systems have been built on them; the performance of these systems depends heavily on the image features used [1].

Nowadays, the number of stored images is growing at an enormous rate, and searching for and retrieving the images in which we are interested is a time-consuming task: hence the need for image retrieval systems. It is known that the visual features of images provide a good description of their content. Based on this, content-based image retrieval (CBIR) has emerged as a promising approach for retrieving images and browsing large image databases, and it has been a topic of exhaustive research in recent years. CBIR is, basically, the process of retrieving images from a collection based on extracted features. Feature extraction also plays an important part in pattern recognition and data mining: it extracts a meaningful feature subset from the original set by some rules, in order to reduce the space complexity and machine training time, and thus to achieve the goal of dimensionality reduction [3].

Feature extraction techniques are used to represent patterns with minimal loss of important information.

These techniques can be divided into the following four categories:

  1. Non transformed structural characteristics: It includes moments, model parameters, and power and phase information.

  2. Transformed structural characteristics: It includes frequency spectra and subspace mapping methods.

  3. Structural descriptions: It includes parsing techniques, formal languages and their grammars, and string matching techniques.

  4. Graph descriptors: It includes semantic networks, attributed graphs and relational graphs [4].

This paper is organized as follows: Section II focuses on the significance of feature extraction techniques, explaining why feature extraction is needed and listing its various applications. Section III surveys the related work. Section IV discusses various image feature extraction techniques based on the low-level features color, texture and shape; their methods are explained, and the advantages and disadvantages of each method are given.

II. SIGNIFICANCE OF FEATURE EXTRACTION

There are generally three reasons why feature extraction is an important problem in predictive modeling as well as in modern data analysis.

    1. Dimension Reduction:

Almost all prediction models suffer from the curse of dimensionality, because some problems have a large number of variables. Feature extraction acts as a powerful dimension reduction agent. The curse of dimensionality is easy to understand in simple terms: it is naturally much harder to make a good decision when a person or a machine learning program is given many variables to consider, a large number of which are irrelevant or non-informative. It is therefore necessary to select a much smaller number of important and relevant features. High dimensionality also causes problems for computation. Sometimes two variables may be equally informative but highly correlated with each other; this often causes bad behavior in numerical computation, for example the problem of multicollinearity in ordinary least squares. Feature extraction therefore plays a very important part in dimensionality reduction, and thus in reducing computational time [5].

    2. Automatic Exploratory Data Analysis:

In various classical applications, informative features are often selected a priori by field experts, i.e., investigators themselves select what they think are the important variables for the model. In modern data-mining applications, however, there is a growing need for fully automated, black-box prediction models which can themselves identify the important features. There are two main reasons why such automated systems are needed. First, there are economic needs, for example to process large amounts of data in a short time with little manual supervision. Second, the problem and the data may be so new that no field expert understands the data well enough to select the important variables prior to the analysis. In both cases, automatic exploratory data analysis becomes the key: there is a need, as well as an interest, to let the data speak for itself instead of relying on preconceived ideas [5].

    3. Data Visualization:

Data visualization is another application of feature extraction which shares some of the flavor of exploratory data analysis. The human eye has an amazing ability to recognize systematic patterns in data, but at the same time it cannot make good sense of data of more than three dimensions. So, to maximize the use of this highly developed human faculty for visual identification, one always wishes to identify the two or three most informative features in the data so that the data can be plotted in a reduced space. Feature extraction is the crucial analytic step in producing such plots.

Feature extraction is usually not the final goal of the analysis in automatic exploratory data analysis, dimension reduction, and data visualization. It is an exercise which provides help in model building and in computation as well. But on its own, feature extraction can also be an important scientific problem [5].

III. RELATED WORK

A number of previous works address different feature extraction techniques for different applications. In 2007, Chiang et al. [6] proposed a CBIR system based on the color histogram and grid-based indexing in order to improve retrieval performance. Experiments were performed on a database of 1,000 images. The results show that, with the proposed grid-based indexing, the number of candidates can be greatly reduced while retrieval accuracy is maintained.

In 2007, Choraś [7] described a possible approach to map image content onto different low-level features, investigating various color, texture and shape features for feature extraction in CBIR and in biometrics systems.

In 2008, Kachouri et al. [8] proposed a new hierarchical feature model to replace the classical use of aggregated features for heterogeneous image database recognition. This model combined color, texture and shape features to obtain better query retrieval results. Precision-recall graphs were used to compare the retrieval effectiveness of individual features against combined features.

In 2008, Ahmad et al. [9] explored several feature extraction techniques to assess their effectiveness in retrieving medical images. The techniques implemented and compared were the Gray Level Histogram, Gray Level Coherence Vector, Discrete Wavelet Frame, Gabor Transform, Fourier Descriptor and Hu Moment Invariants. Experiments were performed on 3,032 CT images of the human brain, and promising results were obtained. Further, a block-based algorithm was developed, based on a simple gray-level histogram combined with an image-partitioning algorithm.

In 2009, Verma et al. [10] presented a comparison of state-of-the-art low-level color and texture feature extraction methods. A new image retrieval approach was proposed to improve retrieval performance and to reduce extraction and search times. The techniques were tested both generally, on multi-component images, and particularly, on isolated color and texture.

In 2012, Chadha et al. [11] compared various feature-extraction techniques for content-based image retrieval: Color Moments, Local Color Histogram, Average RGB, Co-occurrence, Geometric Moment and Global Color Histogram. Individually, these techniques result in poor performance, so combinations of techniques were used, and results for these combinations were presented and optimized for each class of image query. By optimizing the techniques for each class of images, resulting in an adaptive retrieval system, they proposed a solution with better performance in terms of accuracy, image retrieval time and redundancy factor.

IV. IMAGE FEATURE EXTRACTION

A feature is described as a function of one or more measurements, each of which specifies some quantifiable property of an object, and is computed such that it quantifies some significant characteristic of the object.

Features can be classified as follows:

  • General features: These are application independent features for example color, texture, and shape. Also, according to the abstraction level, they can be further divided into three types:

    • Pixel-level features: Features that are calculated at each pixel, for example: color, location.

• Local features: Features that are calculated over the results of a subdivision of the image, based on image segmentation or edge detection.

    • Global features: Features that are calculated over the whole image or just some regular sub-area of an image.

  • Domain-specific features: These are application dependent features such as human faces, fingerprints, and various conceptual features. These features are generally a synthesis of low-level features for a specific domain.

Also, all features can be widely classified into high-level features and low-level features. Low-level features can be extracted directly from the original images, and high-level feature extraction is based on low-level features [7].

These low-level features are discussed below:

  1. Color Feature

Color is one of the important features that make possible the recognition of images by humans. It is a property which depends on the reflection of light to the eye and the processing of that information in the brain. We generally use color to tell the difference between objects, places, and times of day. Colors are usually defined in three-dimensional color spaces. There are three color models: RGB (Red, Green, Blue), HSB (Hue, Saturation, Brightness) and HSV (Hue, Saturation, Value); the last two depend on the human perception of hue, saturation, and brightness [12].

Once a color space is specified, color features can be extracted from images or regions. A number of important color features have been proposed in the literature, including the color histogram, color moments (CM), color coherence vector (CCV) and color correlogram.

    Various Color Feature Methods are described below:

1. Color Histogram: The color histogram of an image is constructed by quantizing the colors within the image and counting the number of pixels of each color. In more detail: given a color space (e.g. YUV), an image can be projected onto three color channels (Y, U, and V), so it can be divided into three color components, each of which can be regarded as a gray-level image in some color channel. The feature vector of the image is then derived from the histograms of its color components. Before generating a histogram for a gray-level image, the number of histogram bins, N, must be given in advance; each pixel is grouped into the bin whose center is nearest to the value of the pixel. The number of pixels in a bin gives the height of the corresponding bar in the histogram, and the feature vector is constructed so that the bar heights correspond to its coefficients. In fact, the number of bins can be set to obtain a feature vector of the desired size [6]. A minimal sketch is given below.
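As an illustration only (ours, not taken from [6]), a minimal NumPy sketch of a per-channel quantized histogram; the bin count of 16 is an arbitrary choice:

```python
# Minimal sketch: quantized per-channel color histogram, concatenated into
# one feature vector. The bin count (16) is an illustrative choice.
import numpy as np

def color_histogram(img, bins=16):
    # img: H x W x 3 array of 8-bit channel values (e.g. Y, U, V)
    feats = []
    for c in range(3):
        hist, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())       # normalize by pixel count
    return np.concatenate(feats)              # feature vector of length 3 * bins

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(color_histogram(img).shape)             # (48,)
```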

2. Color Moments: Color moments are used as feature vectors for image retrieval to overcome the quantization effects of the color histogram. Any color distribution can be characterized by its moments, and since most of the information is concentrated in the low-order moments, only the first moment (mean), the second moment (variance) and the third moment (skewness) are taken as the feature vector. Due to the very reasonable size of this feature vector, the computation is not expensive. The basic concept behind color moments is the assumption that the distribution of color in an image can be interpreted as a probability distribution. An advantage is that the skewness can be used as a measure of the degree of asymmetry in the distribution [11]. A sketch follows.
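A minimal sketch of the three moments per channel; taking the cube root of the third central moment (to keep pixel units) is our illustrative convention:

```python
# Minimal sketch: mean, standard deviation and (cube-rooted) skewness per
# channel, giving a compact 9-dimensional color-moment feature.
import numpy as np

def color_moments(img):
    feats = []
    for c in range(3):
        ch = img[:, :, c].astype(np.float64).ravel()
        mean, std = ch.mean(), ch.std()
        third = ((ch - mean) ** 3).mean()
        skew = np.sign(third) * np.abs(third) ** (1.0 / 3.0)  # keep pixel units
        feats.extend([mean, std, skew])
    return np.array(feats)                     # (9,)
```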

3. Color Coherence Vector: The CCV adds spatial information to the basic color histogram. It divides each histogram bin into two components: a coherent component containing the pixels that are spatially connected, and a non-coherent component containing the pixels that are isolated. As the CCV incorporates spatial information, it usually performs better than a color histogram; however, the dimension of a CCV is large compared with a conventional histogram (generally twice as large) [13]. A sketch of the idea follows.
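A rough sketch using SciPy's connected-component labeling (assumed available); the coherence threshold tau is an illustrative parameter:

```python
# Rough sketch: pixels in connected regions of size >= tau are "coherent",
# the rest "incoherent"; the CCV concatenates the two per-color counts.
import numpy as np
from scipy import ndimage

def ccv(quantized, n_colors, tau=25):
    # quantized: H x W array of color-bin indices in [0, n_colors)
    coherent = np.zeros(n_colors)
    incoherent = np.zeros(n_colors)
    for color in range(n_colors):
        labels, _ = ndimage.label(quantized == color)
        sizes = np.bincount(labels.ravel())[1:]   # region sizes, background dropped
        coherent[color] += sizes[sizes >= tau].sum()
        incoherent[color] += sizes[sizes < tau].sum()
    return np.concatenate([coherent, incoherent])  # twice the histogram length
```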

4. Color Correlogram: The color correlogram (CC) expresses how the spatial correlation of pairs of colors changes with distance. The CC of an image is a table indexed by color pairs, where the dth entry at location (i, j) is calculated by counting the number of pixels of color j at distance d from a pixel of color i, divided by the total number of pixels in the image [10]. The autocorrelogram captures the spatial correlation between identical colors only; it is a subset of the correlogram [8]. A sketch of the autocorrelogram follows.
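A rough sketch of the autocorrelogram variant; sampling only the four axis-aligned neighbors at each distance is our simplification, and the distance set is illustrative:

```python
# Rough sketch: for each color c and distance d, estimate the probability
# that a pixel d steps away (along the four axis directions) also has color c.
import numpy as np

def autocorrelogram(quantized, n_colors, distances=(1, 3, 5, 7)):
    H, W = quantized.shape
    feats = np.zeros((n_colors, len(distances)))
    for di, d in enumerate(distances):
        matches = np.zeros(n_colors)
        counts = np.zeros(n_colors)
        for dy, dx in ((0, d), (0, -d), (d, 0), (-d, 0)):
            # Overlapping windows so that a pixel and its (dy, dx) neighbor both exist.
            a = quantized[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
            b = quantized[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
            same = (a == b)
            for c in range(n_colors):
                mask = (a == c)
                matches[c] += (same & mask).sum()
                counts[c] += mask.sum()
        feats[:, di] = matches / np.maximum(counts, 1)
    return feats.ravel()
```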

5. Average RGB: The color average in the RGB color space can be described by the vector

X = (R_avg, G_avg, B_avg)^T

where R_avg, G_avg and B_avg are the average values of the red, green and blue channels respectively [8].

The objective of using this feature is to filter out images with a larger distance at the first stage when multiple-feature queries are involved. Another reason for selecting this feature is that it uses a small number of values to represent the feature vector and requires less computation than other features. However, the accuracy of the results can be significantly impacted if this feature is not combined with other features [11]. The computation itself is a one-liner, as sketched below.
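```python
# Minimal sketch: the average-RGB feature is the per-channel mean.
import numpy as np

def average_rgb(img):
    return img.reshape(-1, 3).mean(axis=0)     # (R_avg, G_avg, B_avg)
```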

6. Scalable Color Descriptor (SCD): This is a histogram-based descriptor; basically, the SCD is a histogram in HSV color space. It differs from the conventional histogram in its scalability, which can be achieved in two ways: first, by reducing the number of color bins with the Haar transform, and second, by removing some least significant bits from the quantized (integer) representations of the bin values. The descriptor does not include any spatial information, and for this reason it has problems similar to the conventional histogram [13].

7. Color Structure Descriptor (CSD): This is also a histogram-based descriptor. The CSD histogram is generated by moving a structuring element (e.g. a square) throughout the image; bin i of the histogram records how many times the structuring element contains at least one pixel with color i. If the window is of size 1 pixel, the CSD reduces to an ordinary histogram. The performance of the CSD depends on the size and structure of the window, both of which are difficult to determine, and it is computationally more expensive than the SCD [13].

8. Dominant Color Descriptor (DCD): This descriptor is also based on the histogram. The DCD chooses a small number of colors from the highest bins of a histogram; the number of colors (bins) chosen depends on a threshold on bin height. MPEG-7 suggests that 18 colors are sufficient to represent a region or image. The selected colors in the DCD are fitted to the region instead of being fixed in the color space, which is why the color representation of the DCD is more valid and compact than the conventional histogram. However, many-to-many matching is needed to calculate the similarity or distance of two DCDs [13].

TABLE I: Comparison of different color methods

| Color Method | Advantages | Disadvantages |
| --- | --- | --- |
| Color Histogram | Simple to compute, intuitive. | High dimension, no spatial information, sensitive to noise. |
| Color Moments | Compact, robust. | Not enough to describe all colors, no spatial information. |
| Color Coherence Vector | Spatial information. | High dimension, high computation cost. |
| Color Correlogram | Spatial information. | Very high computation cost; sensitive to noise, rotation and scale. |
| Average RGB | Low computation cost. | Less accurate if not combined with other features. |
| Scalable Color Descriptor | Compact, robust, perceptual meaning. | Needs post-processing for spatial information. |
| Color Structure Descriptor | Spatial information. | Sensitive to noise, rotation and scale. |
| Dominant Color Descriptor | Compact on demand, scalable. | No spatial information, less accurate if compact. |

  2. Texture Feature

Texture is a useful characterization for a wide variety of images, and it is generally believed that human visual systems use texture for recognition and interpretation. Whereas color is usually a pixel property, texture can only be calculated from a group of pixels. A large number of techniques have been proposed to extract texture features; they can be broadly classified into two types, based on the domain from which the texture feature is extracted: spatial texture feature extraction methods and spectral texture feature extraction methods. In spatial methods, texture features are extracted by computing pixel statistics or finding local pixel structures in the original image domain. In spectral methods, the image is first transformed into the frequency domain and the features are then calculated from the transformed image. Both kinds of features have their own advantages and disadvantages [1].

TABLE II: Comparison of spatial and spectral texture

| Texture Method | Advantages | Disadvantages |
| --- | --- | --- |
| Spatial Texture | Meaningful, easy to understand, can be extracted from any shape without losing information. | Sensitive to noise and distortions. |
| Spectral Texture | Robust, needs less computation. | No semantic meaning, needs square image regions of sufficient size. |

Spectral texture features are a desirable choice for images or regions of sufficient size, whereas for small images or regions, especially when the regions are irregular, spatial features are considered [13].

    Various Texture Feature Methods are described below:

1. Gray Level Co-occurrence Matrix: The Gray Level Co-occurrence Matrix (GLCM) is one of the most widely accepted representations of texture in images. It records a count of the number of times a given feature (for example, a given gray level) occurs in a particular spatial relation to another given feature. The GLCM is one of the most popular texture analysis methods, estimating image properties related to second-order statistics. The process is as follows (a sketch in code is given after the list):

1. First, co-occurrence matrices are computed for the images in the database and for the query image; four matrices are generated for each image.

2. Next, a 4×4 feature set is built from these co-occurrence matrices. The four main features used in feature extraction are energy, entropy, contrast and homogeneity [11].
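A sketch using scikit-image (assumed available); graycoprops supplies energy, contrast and homogeneity, while entropy is computed directly from the normalized matrices. The quantization to 8 levels is an illustrative choice:

```python
# Sketch: four distance-1 co-occurrence matrices (0, 45, 90, 135 degrees),
# then energy, contrast, homogeneity and entropy per matrix (4 x 4 features).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, levels=8):
    # Quantize to a few gray levels to keep the matrices compact.
    q = (gray.astype(np.uint32) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    feats = []
    for prop in ("energy", "contrast", "homogeneity"):
        feats.extend(graycoprops(glcm, prop).ravel())
    p = glcm.reshape(levels * levels, -1)                 # one column per angle
    feats.extend((-p * np.log2(p + 1e-12)).sum(axis=0))   # entropy per matrix
    return np.array(feats)                                # (16,)
```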

2. Steerable Pyramid: This pyramid recursively splits an image into a set of oriented sub-bands and a low-pass residual: the image is decomposed into a decimated low-pass sub-band and a set of undecimated directional sub-bands. It is a linear multi-orientation, multi-scale image decomposition that provides a useful front-end. The basis functions of the steerable pyramid are Kth-order directional derivative operators (K chosen arbitrarily), which occur at different sizes and in K+1 orientations. As directional derivatives they span a rotation-invariant subspace, and they are designed and sampled in such a way that the whole transform forms a tight frame [10].

3. Contourlet Transform: This is a combination of a Directional Filter Bank (DFB) and a Laplacian pyramid (LP): the Laplacian pyramid provides the multiscale decomposition and the directional filter bank provides the multidirectional decomposition. The contourlet transform can thus be considered a double filter bank structure; it is usually implemented by the pyramidal directional filter bank (PDFB), which divides images into directional sub-bands at different scales. The LP decomposes the original image into a hierarchy of images, each level corresponding to a different band of image frequencies. The contourlet transform yields a sparse representation of two-dimensional piecewise smooth signals that resemble images [10].
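There is no standard contourlet implementation in the common Python imaging libraries, so the sketch below illustrates only the Laplacian pyramid stage, using OpenCV (assumed available); the directional filter bank stage is omitted:

```python
# Partial sketch (LP stage of the contourlet transform only): each level
# stores the band-pass residual between the image and its blurred upsampling.
import cv2

def laplacian_pyramid(img, levels=3):
    pyramid, current = [], img.astype("float32")
    for _ in range(levels):
        down = cv2.pyrDown(current)                        # low-pass + decimate
        up = cv2.pyrUp(down, dstsize=current.shape[1::-1])
        pyramid.append(current - up)                       # band-pass residual
        current = down
    pyramid.append(current)                                # coarsest low-pass
    return pyramid
```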

4. Gabor Wavelet Transform: This transform dilates and rotates the two-dimensional Gabor function, and the image is then convolved with each of the obtained Gabor functions [10]. A Gabor filter is typically designed to sample the entire frequency domain of an image by characterizing its center frequency and orientation parameters: the image is filtered with a bank of Gabor filters or Gabor wavelets of various preferred spatial frequencies and orientations. Each wavelet captures energy at a specific frequency and direction, providing a localized frequency as a feature vector; texture features can then be extracted from this group of energy distributions. Given an input image I(x,y), the Gabor wavelet transform convolves I(x,y) with a set of Gabor filters of different spatial frequencies and orientations. The two-dimensional Gabor function g(x,y) is defined as

g(x,y) = (1 / (2π σx σy)) exp[ -(1/2)(x²/σx² + y²/σy²) + 2πjWx ],

where W is the center frequency, σx and σy are the scaling parameters of the filter (the standard deviations of the Gaussian envelopes), and the orientation of the filter is obtained by rotating the coordinates (x,y) [1]. A sketch in code follows.
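A sketch using scikit-image's gabor filter (assumed available); the frequency set, orientation count and mean/standard-deviation statistics are illustrative choices:

```python
# Sketch: Gabor texture features as the mean and standard deviation of the
# filter-response magnitude over a small bank of frequencies and orientations.
import numpy as np
from skimage.filters import gabor

def gabor_features(gray, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(gray, frequency=f, theta=theta)
            mag = np.hypot(real, imag)         # response energy at (f, theta)
            feats.extend([mag.mean(), mag.std()])
    return np.array(feats)                      # 3 freqs x 4 angles x 2 = (24,)
```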

TABLE III: Comparison of different texture methods

| Texture Method | Advantages | Disadvantages |
| --- | --- | --- |
| GLCM-based method | Intuitive, compact, robust. | High computation cost; not enough to describe all textures. |
| Steerable Pyramid | Supports any number of orientations. | Sub-bands are undecimated, hence more computation and storage. |
| Contourlet Transform | Multi-resolution, multi-orientation, robust. | Needs rotation normalization. |
| Gabor Wavelet Transform | Multi-scale, multi-orientation, robust. | Needs rotation normalization; loses spectral information due to incomplete coverage of the spectrum plane. |

3. Shape Feature

Shape is an important visual feature and is considered a primitive feature for image content description. Because measuring the similarity between shapes is difficult, shape content description is hard to define. Shape descriptors are generally divided into two main categories: region-based and contour-based methods. Region-based methods use the whole area of an object for shape description, whereas contour-based methods use only the information present in the contour of an object.

Various Shape Feature Methods are described below:

1. Geometric Moments: In computer vision, image processing and related fields, an image moment is a particular weighted average (moment) of the intensities of the image pixels, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful for describing objects after segmentation; properties of the image found via image moments include its area (or total intensity), its centroid and its orientation. The moment feature uses only one value per element of the feature vector, and the current implementation does not scale well: computing the feature vector takes a large amount of time when the image becomes relatively large. This feature is therefore best combined with other features, so that the combination provides a better result to the user [11]. A sketch follows.
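A minimal NumPy sketch of raw and central moments, with the centroid and principal orientation derived from them:

```python
# Minimal sketch: raw moments m_pq, centroid, second-order central moments,
# and the principal orientation of an intensity (or binary) image.
import numpy as np

def raw_moment(img, p, q):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return (img * x**p * y**q).sum()

def geometric_moments(img):
    m00 = raw_moment(img, 0, 0)                      # area / total intensity
    cx = raw_moment(img, 1, 0) / m00
    cy = raw_moment(img, 0, 1) / m00                 # centroid (cx, cy)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    mu20 = (img * (x - cx) ** 2).sum()
    mu02 = (img * (y - cy) ** 2).sum()
    mu11 = (img * (x - cx) * (y - cy)).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal orientation
    return m00, (cx, cy), theta
```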

2. Algebraic Moment Invariants: The algebraic moment invariants are computed from the first m central moments and are obtained as the eigenvalues of predefined matrices M[j,k], whose elements are scaled factors of those m central moments. Algebraic moment invariants are invariant to affine transformations and can be constructed up to arbitrary order. On objects with different configurations of outlines, algebraic moment invariants perform either very well or very poorly [9].

3. Zernike Moments: The crux of Zernike moments is a set of orthogonal Zernike polynomials defined over the polar coordinate space inside a unit circle. The Zernike moment descriptor is considered one of the most suitable for shape-similarity retrieval in terms of robustness, computation complexity, retrieval performance and compactness of representation. Orthogonal moments also have the property of being more robust in the presence of image noise. Zernike moments have the following benefits (a sketch in code follows the list):

• Robustness: they are robust to minor variations in shape and to noise.

• Rotation invariance: the magnitude of the Zernike moments is rotation invariant.

• Effectiveness: an image can be described better by a small set of its Zernike moments than by other types of moments, e.g. geometric moments.

• Expressiveness: because the basis is orthogonal, they have minimum information redundancy.

• Multilevel representation: a relatively small set of Zernike moments can specify the global shape of a pattern; the lower-order moments represent the global shape while the higher-order moments represent the detail [7].
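A sketch using the mahotas library (assumed available), whose zernike_moments call returns the moment magnitudes up to a given degree over a disc of the given radius; the toy shape and parameters are illustrative:

```python
# Sketch: rotation-invariant Zernike moment magnitudes of a binary shape.
import numpy as np
import mahotas

shape = np.zeros((128, 128), dtype=np.uint8)
shape[32:96, 40:88] = 1                            # a toy rectangular shape
feats = mahotas.features.zernike_moments(shape, radius=64, degree=8)
print(feats.shape)                                  # 25 magnitudes for degree 8
```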

4. Fourier Descriptor: Fourier Descriptors (FDs) are a robust feature for representing boundaries and objects. Consider an N-point digital boundary: starting from an arbitrary point (x0, y0) and following the boundary in a steady counter-clockwise direction, a set of coordinate pairs (x0, y0), (x1, y1), …, (x(N-1), y(N-1)) is produced. These coordinates can be written in complex form as z(n) = x(n) + jy(n), n = 0, 1, 2, …, N-1.

The discrete Fourier transform (DFT) of z(n) is given by

a(k) = (1/N) Σ_{n=0}^{N-1} z(n) exp(-j2πkn/N), k = 0, 1, …, N-1,

where the complex coefficients a(k) are called the Fourier descriptors of the boundary. Generally, a 64-point DFT is used, resulting in a 64-dimensional feature vector [9]. A sketch in code follows.
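A sketch using OpenCV for boundary extraction (assumed available); the resampling to 64 points and the normalization steps are illustrative:

```python
# Sketch: boundary -> z(n) = x(n) + j y(n) -> FFT -> Fourier descriptors.
# Dropping a(0) removes translation; dividing by |a(1)| removes scale; using
# magnitudes removes rotation and the choice of starting point.
import cv2
import numpy as np

def fourier_descriptors(binary_img, n_points=64):
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).squeeze()  # (N, 2) points
    idx = np.linspace(0, len(boundary) - 1, n_points).astype(int)
    z = boundary[idx, 0] + 1j * boundary[idx, 1]   # z(n) = x(n) + j y(n)
    a = np.fft.fft(z) / n_points
    return np.abs(a[1:]) / np.abs(a[1])            # 63 invariant magnitudes
```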

5. Region-based Fourier Descriptor: The region-based FD is called the generic FD (GFD) and can be used in various applications. The GFD is generated by applying a modified polar Fourier transform (MPFT) to the shape image, treating the polar shape image as a normal rectangular image. The steps followed are (a sketch is given after the list):

1. The approximated normalized image is rotated counter-clockwise by a sufficiently small angular step.

2. The pixel values along the positive x-direction, starting from the center of the image, are copied and pasted into a new matrix as row elements.

3. Steps 1 and 2 are repeated until the image has been rotated by 360° [2].
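A rough sketch of the idea: the rotate-and-copy procedure above amounts to resampling the image on a polar grid and applying a 2-D Fourier transform. The grid sizes and normalization are illustrative choices, not the exact MPFT of [2]:

```python
# Rough sketch: polar resampling (rows = radius, cols = angle) followed by
# a 2-D FFT, in the spirit of the modified polar Fourier transform.
import numpy as np

def generic_fd(img, n_radii=32, n_angles=64):
    cy, cx = img.shape[0] / 2.0, img.shape[1] / 2.0
    r = np.linspace(0, min(cx, cy) - 1, n_radii)
    t = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    R, T = np.meshgrid(r, t, indexing="ij")
    ys = np.clip((cy + R * np.sin(T)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip((cx + R * np.cos(T)).astype(int), 0, img.shape[1] - 1)
    polar = img[ys, xs]
    spec = np.abs(np.fft.fft2(polar)).ravel()
    return spec / (spec[0] + 1e-12)                # scale-normalized spectrum
```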

6. Wavelet Transform: A hierarchical planar curve descriptor can be developed using the wavelet transform. This descriptor decomposes a curve into components at different scales, such that the coarsest-scale components carry the global approximation information while the finer-scale components contain the local detail. The wavelet descriptor has many properties: invariance, multi-resolution representation, stability, spatial localization and uniqueness. It is invariant to rotation, translation and scaling, and the matching process of the wavelet descriptor can be performed cheaply [2]. A sketch follows.
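A sketch using PyWavelets (assumed available); representing each scale by its coefficient energy, and the wavelet and level choices, are illustrative:

```python
# Sketch: multi-scale wavelet decomposition of a closed contour; coarse
# coefficients capture the global outline, finer ones the local detail.
import numpy as np
import pywt

def wavelet_shape_descriptor(boundary_xy, wavelet="db2", levels=3):
    # boundary_xy: (N, 2) contour points, e.g. from cv2.findContours
    coeffs_x = pywt.wavedec(boundary_xy[:, 0].astype(float), wavelet, level=levels)
    coeffs_y = pywt.wavedec(boundary_xy[:, 1].astype(float), wavelet, level=levels)
    feats = [np.sum(c ** 2) for c in coeffs_x + coeffs_y]  # energy per scale
    return np.array(feats)                                  # 2 * (levels + 1)
```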

TABLE IV: Comparison of different shape methods

| Shape Method | Advantages | Disadvantages |
| --- | --- | --- |
| Geometric Moments | Translation, scale and rotation invariant; low computational complexity. | Poor affine-transform and noise resistance. |
| Algebraic Moment Invariants | Translation, scale and rotation invariant; good under affine transform; low computational complexity. | Low occultation resistance. |
| Zernike Moments | Translation, scale and rotation invariant; good noise resistance. | High computational complexity; poor under affine transform. |
| Fourier Descriptor | Translation, scale and rotation invariant; low computational complexity. | Poor affine-transform and noise resistance. |
| Region-based Fourier Descriptor | Translation, scale and rotation invariant; good affine-transform and noise resistance. | High computational complexity. |
| Wavelet Transform | Translation, scale and rotation invariant; good under affine transform; low computational complexity. | Average occultation resistance. |

V. CONCLUSION

In this paper, a survey of low-level feature extraction techniques covering color, texture and shape is presented. Among the color features, color moments are not considered sufficient to represent regions, while histogram-based descriptors are either too high-dimensional or too expensive to compute; the DCD is a good balance between these two extremes. The DCD is sufficient to represent the color information of a region, its computation is relatively inexpensive, and its feature dimension is low. The CCV, CSD and color correlogram features are useful for whole-image representation, but all involve complex computation. Among texture features, spatial features usually have semantic meaning understandable by humans and can be extracted from regions of any shape without loss of information; however, it is difficult to obtain a sufficient number of spatial features for image representation, and spatial features are sensitive to noise. When the regions are irregular, spatial features are the desirable choice. Spectral texture features are more robust and take less computation, because convolution in the spatial domain is implemented as a product in the frequency domain using the Fast Fourier Transform; when images or regions are of sufficient size, spectral texture features are the better choice. Among shape-based methods, the survey shows that moment-based shape descriptors are usually concise, robust and easy to compute, and that they are invariant to rotation, scaling and translation of the object. Due to their global nature, the disadvantage of moment-based methods is that it is very difficult to correlate high-order moments with a shape's salient features. Fourier descriptors, whether of contours or regions, are robust to noise, simple to compute and compact; FDs also offer simple normalization, simple derivation and simple matching. The wavelet transform feature is likewise invariant to rotation, translation and scaling, and the matching process of the wavelet descriptor can be done cheaply. For shape description there is always a trade-off between accuracy and efficiency. In future work, we will implement a novel feature extraction technique in the field of biometric systems.

REFERENCES

  1. D. Tian, "A Review on Image Feature Extraction and Representation Techniques," International Journal of Multimedia and Ubiquitous Engineering, Vol. 8, No. 4, July 2013.

  2. Y. Mingqiang, K. Kidiyo and R. Joseph, "A survey of shape feature extraction techniques," in Pattern Recognition, Peng-Yeng Yin (Ed.), pp. 43-90, 2008.

  3. S. Liang, Z. Zhang and L. Cui, "Feature Extraction Method Based on PCA and KICA," Second International Conference on Computational Intelligence and Natural Computing (CINC), 2012.

  4. E.J. Ciaccio, S.M. Dunn and M. Akay, "Biosignal Pattern Recognition and Interpretation Systems," IEEE Engineering in Medicine and Biology, 1993.

  5. M. Zhu, "Feature Extraction and Dimension Reduction with Applications to Classification and the Analysis of Co-occurrence Data," http://www.stanford.edu/~hastie/THESES/mu_zhu.pdf.

  6. T.W. Chiang, T. Tsai and Y.P. Huang, "An Efficient Indexing Method for Content-Based Image Retrieval," IEEE, 2007.

  7. R.S. Choraś, "Image Feature Extraction Techniques and Their Applications for CBIR and Biometrics Systems," International Journal of Biology and Biomedical Engineering, Vol. 1, Issue 1, 2007.

  8. R. Kachouri, K. Djemal, H. Maaref, D. Sellami Masmoudi and N. Derbel, "Feature extraction and relevance evaluation for heterogeneous image database recognition," Image Processing Theory, Tools & Applications, IEEE, 2008.

  9. W.S.H.M. Wan Ahmad and M.F.A. Fauzi, "Comparison of Different Feature Extraction Techniques in Content-Based Image Retrieval for CT Brain Images," IEEE, 2008.

  10. N. Gupta and N. Bhargava, "A New Approach for CBIR Using Coefficient of Correlation," International Conference on Advances in Computing, Control, and Telecommunication Technologies, IEEE, 2009.

  11. A. Chadha, S. Mallik and R. Johar, "Comparative Study and Optimization of Feature-Extraction Techniques for Content based Image Retrieval," International Journal of Computer Applications, Vol. 52, No. 20, August 2012.

  12. S.V. Sakhare and V.G. Nasre, "Design of Feature Extraction in Content Based Image Retrieval (CBIR) using Color and Texture," International Journal of Computer Science & Informatics, Vol. I, Issue II, 2011.

  13. D. Zhang, M.M. Islam and G. Lu, "A review on automatic image annotation techniques," Pattern Recognition, Vol. 45, 2012.

  14. N. Gupta and V.A. Athavale, "Comparative Study of Different Low Level Feature Extraction Techniques for Content Based Image Retrieval," International Journal of Computer Technology and Electronics Engineering (IJCTEE), Vol. 1, Issue 1, August 2011.
