Nonsubsampled Contourlet Transform and Local Directional Binary Patterns for Texture Image Classification Using Support Vector Machine

DOI : 10.17577/IJERTV2IS101002


P.S. Hiremath and Rohini A. Bhusnurmath

Dept. of P.G. Studies and Research in Computer Science, Gulbarga University, Gulbarga, Karnataka, India.

Abstract

Texture is a surface characteristic property of an object. Texture analysis is an important field of investigation that has received a great deal of interest from the computer vision community. In this paper, a translation and rotation invariant texture classification method based on a support vector machine is proposed. Texture features are extracted using the nonsubsampled contourlet transform and local directional binary patterns. Co-occurrence features are extracted from the three level nonsubsampled contourlet subbands. Principal component analysis (PCA) is used to reduce the dimensionality of the feature set, and class separability is enhanced using linear discriminant analysis (LDA). A support vector machine is used as the classifier. The classification performance of the proposed method is tested on a set of sixteen Brodatz textures. Experimental results indicate that the proposed approach yields higher classification accuracy.

Keywords-NSCT, Principal component analysis, LDBP, Linear discriminant analysis, SVM, Texture classification.

  1. Introduction

Texture is an inherent property of most natural images. It contains information about the structural arrangement of surfaces and their relationship to the surrounding environment. Texture characteristics play a very important role in texture analysis. Texture classification is one of the fundamental problems in computer vision and has a wide variety of potential applications. Weszka et al. [1] compared the classification performance of the Fourier power spectrum, the second order gray level co-occurrence matrix (GLCM), and first order statistics of gray level differences for terrain samples, and observed that the Fourier methods performed poorly. Haralick et al. [2] suggested the use of GLCM texture features to analyze remotely sensed images. Wan et al. [3] presented a comparative study of four texture analysis methods, namely the gray level run length method, the co-occurrence matrix method, the histogram method and the autocorrelation method, wherein the co-occurrence method is found to be superior. The wavelet transform [4, 5] provides a multiresolution approach to the problem. Smith and Chang [6] used the mean and variance extracted from wavelet subband coefficients as the texture representation.

Classification methods can be divided into four categories, namely: (i) parametric, (ii) non-parametric, (iii) stochastic and (iv) non-metric methods [7]. The classification task involves classifying images based on the feature vectors provided by the feature extraction methods. If there is no prior parameterized knowledge about the probability structure, then classification is based on non-parametric techniques, and the classification so performed relies on the information provided by the training samples alone. These techniques include fuzzy classification, the neural network approach, etc. Engin Avci [8] used a multilayer perceptron neural network classifier to classify selected texture images. Turkoglu and Avci [9] presented a comparison of wavelet support vector machine and wavelet-adaptive network based fuzzy inference system approaches for texture image classification; both methods are used for the classification of 22 texture images. Schaefer et al. [10] used fuzzy classification for thermography based breast cancer analysis using statistical features. Mukane et al. [11, 12, 13] carried out scale invariant, size invariant and rotation invariant classification with wavelet and co-occurrence matrix based features using a fuzzy logic classifier. Laine and Fan [14] implemented the standard wavelet packet energy signature for texture


classification. Pun and Lee [15] used log-polar wavelet signatures with a Mahalanobis classifier for scale and rotation invariant texture classification. Cui et al. [16] performed experiments on rotation invariant texture classification based on the Radon transform and multiscale analysis with a Mahalanobis classifier. Hiremath and Shivashankar

[17] proposed wavelet based co-occurrence features for texture classification with a k-NN classifier. Arivazhagan et al. [18] used Gabor features for rotation invariant texture classification with a minimum distance classifier. The wavelet transform offers a multiscale and time-frequency-localized representation. However, the 2-D separable wavelet basis has limited directional information and cannot describe the multiple orientations of various textures. This gave rise to several successful joint statistical models such as the steerable pyramid, brushlets, the curvelet transform and the contourlet transform. In particular, the contourlet transform proves to be optimal in dealing with images having smooth contours. Due to the upsamplers and downsamplers present in the Laplacian pyramid and the directional filter bank (DFB), the contourlet transform is not shift invariant [19]. Cunha et al. [20] developed the nonsubsampled contourlet transform (NSCT), which is a shift-invariant, multiscale and multidirection expansion. The NSCT has proven to be very efficient in image processing applications such as image denoising and image enhancement. Zhao et al. [21] presented an approach to texture image classification based on the nonsubsampled contourlet transform, local binary patterns and a support vector machine for classification. In [22], an approach is proposed which mainly consists of two learning steps: first, features are extracted using a three level NSCT, and second, features are extracted using local directional binary patterns (LDBP); classification is then performed using a k-NN classifier. In general, the optimal parameters can vary depending on the intrinsic scale and complexity of the texture patterns.

This paper focuses on the problem of texture classification. In [22], a texture image classification problem using a k-NN classifier is investigated. The aim of this paper is to improve the classification accuracy using a support vector machine as the classifier. A translation and rotation invariant texture classification method is proposed to extract the textural features: the NSCT provides translation invariance, while the LDBP provides rotation invariance. To reduce the dimensionality of the feature set and to enhance class discrimination, principal component analysis (PCA) and linear discriminant analysis (LDA) are applied. The classification is performed using a support vector machine. The experimentation is done using Brodatz [23] images. The experimental results show that the proposed method exhibits optimal performance.

  2. Nonsubsampled contourlet transform (NSCT)

In the nonsubsampled contourlet transform (NSCT) [20] method, the main focus is to avoid the frequency aliasing problem and to enhance directional selectivity and shift invariance. The NSCT is constructed as a double filter bank, which combines a nonsubsampled pyramid for multiscale decomposition and a nonsubsampled directional filter bank structure for directional decomposition, as shown in the Figure 1. Initially, the nonsubsampled pyramid splits the input image into a lowpass and a highpass subband. Then a nonsubsampled DFB is applied to decompose the highpass subband into several directional subbands, increasing the number of directions with frequency. This step is iterated on the lowpass subband. In the NSCT, the multiresolution decomposition is done by shift invariant filter banks which satisfy the Bezout identity. The lowpass subband has no frequency aliasing effect because there is no downsampling at the pyramidal decomposition level; hence, the bandwidth of the lowpass filter is larger than π/2. The NSCT has better frequency characteristics than the contourlet transform. The perfect reconstruction condition is given in the Eq. (1). The nonsubsampled pyramid and the nonsubsampled DFB are depicted in the Figure 1.(a) and Figure 1.(b), respectively.

H_0(z) G_0(z) + H_1(z) G_1(z) = 1    (1)

Since this condition is more easily satisfied than the perfect reconstruction condition for critically sampled filter banks, it is possible to design better filters.
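As an illustration of the Eq. (1), the following minimal NumPy sketch checks the perfect reconstruction identity for a simple, hypothetical filter set (a Haar-like analysis pair with trivial synthesis filters). These are not the NSCT filters used in the paper; they are only an example of filters satisfying the condition.

```python
import numpy as np

# Numerical check of Eq. (1): H0(z)G0(z) + H1(z)G1(z) = 1 for an illustrative filter set,
# H0(z) = (1 + z^-1)/2, H1(z) = (1 - z^-1)/2, with trivial synthesis filters G0 = G1 = 1.
h0 = np.array([0.5, 0.5])   # analysis lowpass
h1 = np.array([0.5, -0.5])  # analysis highpass
g0 = np.array([1.0])        # synthesis lowpass
g1 = np.array([1.0])        # synthesis highpass

# Products of filter transfer functions correspond to convolutions of coefficient sequences.
total = np.convolve(h0, g0) + np.convolve(h1, g1)
print(total)  # -> [1. 0.], i.e. the identity filter, so Eq. (1) holds for this pair
```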

    Figure 1. (a) nonsubsampled pyramid, (b) nonsubsampled DFB.

The Figure 2. shows the two-level decomposition of the NSCT, which provides a multiscale, multidirection and shift invariant image decomposition. The frequency division of a nonsubsampled pyramid is shown in the Figure 2.(a) and that of a nonsubsampled DFB in the Figure 2.(b). The NSCT is a nonseparable two-channel filter bank composed of basis functions oriented at various directions in multiple scales, with different aspect ratios. With this rich set of basis functions, it effectively captures the smooth contours that are the dominant features in textures. Since the NSCT has the desirable property of shift invariance, it is used to extract features from the texture images.

Figure 2. Frequency divisions of: (a) a nonsubsampled pyramid, (b) a nonsubsampled DFB.
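The sketch below illustrates the key consequence of the nonsubsampled (undecimated) construction: every subband retains the size of the input. A crude averaging filter stands in for the actual NSCT pyramid filters, and SciPy is assumed; this is only a conceptual illustration, not the paper's implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def nonsubsampled_split(image):
    """One pyramid level without downsampling: a lowpass approximation plus a highpass
    detail, both the same size as the input (a simplified stand-in for the NSCT pyramid,
    used only to show shift-invariant, full-size subbands)."""
    lp_kernel = np.ones((3, 3)) / 9.0                                # crude lowpass filter
    lowpass = convolve2d(image, lp_kernel, mode='same', boundary='symm')
    highpass = image - lowpass                                       # complementary detail
    return lowpass, highpass

block = np.random.rand(64, 64)          # a 64x64 texture block, as used in the paper
low, high = nonsubsampled_split(block)
print(low.shape, high.shape)            # (64, 64) (64, 64): no subsampling
```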

The input texture image is decomposed into subbands by the nonsubsampled contourlet transform at four different resolution levels, as shown in the Figure 3. The block diagram and the resulting frequency division of the NSCT are shown in the Figure 3.(a) and Figure 3.(b). At each resolution level, the image is decomposed into 2^n subbands, where n = 0, 1, 2, 3, 4, … is the order of the directional filter. As the transform is nonsubsampled, each resolution level corresponds to the actual size of the input block, i.e. 64×64. Since the features generated are large in number, involving all the coefficients in classification is more time consuming.

Figure 3. The NSCT: (a) block diagram, (b) resulting frequency division.

  3. Local directional binary patterns (LDBP)

A method based on local directional binary patterns (LDBP) is a theoretically and computationally simple approach which is robust to gray scale variations. It is shown to discriminate a large range of rotated textures efficiently. A gray-scale and rotation invariant texture operator based on the LDBP is described as follows. Starting from the joint distribution of the gray values of a circularly symmetric neighbour set of pixels in a local neighbourhood, an operator is derived which is, by definition, invariant against any monotonic transformation of the gray scale. Rotation invariance is achieved by recognizing that this gray-scale invariant operator incorporates a fixed set of rotation invariant patterns. The main contribution lies in recognizing that certain local binary texture patterns are fundamental properties of local image texture. There are a limited number of transitions or discontinuities in the circular representation of the pattern. The most frequent binary patterns correspond to primitive micro features, such as edges, corners and spots; hence, they can be regarded as feature detectors that are triggered by the best matching pattern. The texture operator allows for detecting local directional binary patterns at circular neighbourhoods of any quantization of the angular space and at any spatial resolution, as shown in the Figure 4.

The derivation of the gray scale and rotation invariant texture operator starts by defining texture T in a local neighbourhood of a monochrome texture image as the joint distribution of the gray levels of p (p > 1) image pixels, as represented by the Eq. (2):

T = t(f_c, f_0, …, f_{p−1}),    (2)

where f_c corresponds to the gray value of the center pixel of the local neighbourhood and f_p (p = 0, …, p−1) correspond to the gray values of p equally spaced pixels on a circle of radius R (R > 0) that form a circularly symmetric neighbour set.

The first step toward gray-scale invariance is to subtract, without losing information, the gray value of the center pixel (f_c) from the gray values of the circularly symmetric neighbourhood, and to assume that the difference f_p − f_c is independent of f_c, which yields the Eq. (3):

T ≈ t(f_c) t(f_0 − f_c, f_1 − f_c, ···, f_7 − f_c)    (3)

In practice, an exact independence is not warranted; hence, the above distribution is only an approximation of the joint distribution. However, it is tolerable to accept the possible small loss in information, as it achieves invariance with respect to shifts in gray scale. The distribution t(f_c) in the Eq. (3) describes the overall luminance of the image, which is unrelated to local image texture and, consequently, does not provide useful information for texture analysis. Hence, much of the information in the original joint gray level distribution is retained in the joint difference distribution shown in the Eq. (4):

T ≈ t(f_0 − f_c, f_1 − f_c, ···, f_7 − f_c)    (4)

Figure 4. Transformation of neighborhood pixels to calculate the central pixel weight in LDBP: (a) a sample neighborhood, (b) the resulting binary thresholded pattern, (c) the LDBP mask of cosine factors, (d) the resultant weights after multiplying corresponding elements of (b) and (c).
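The following is a worked version of the Figure 4 example, anticipating the cosine weighting of the Eq. (7) below. The assignment of the 0° direction to the right-hand neighbour, with k increasing counter-clockwise, is an assumption about the layout shown in the figure.

```python
import numpy as np

# Worked version of the Figure 4 example: threshold the 3x3 neighbourhood at the center
# value, weight each sign by cos(k*45 degrees), and sum (Eq. 7). The neighbour-to-angle
# assignment (0 degrees = east, counter-clockwise) is assumed.
patch = np.array([[ 83,  83,  98],
                  [ 40,  80,  84],
                  [126,  94, 130]])
fc = patch[1, 1]                                   # center pixel (80)

# Neighbours f_k for k = 0..7: E, NE, N, NW, W, SW, S, SE
neighbours = np.array([patch[1, 2], patch[0, 2], patch[0, 1], patch[0, 0],
                       patch[1, 0], patch[2, 0], patch[2, 1], patch[2, 2]])

v = (neighbours >= fc).astype(float)               # thresholded binary pattern (Eq. 6)
cosines = np.cos(np.deg2rad(45.0 * np.arange(8)))  # directional weights cos(k*45)
ldbp_weight = np.sum(v * cosines)                  # central pixel weight (Eq. 7)
print(v)            # [1. 1. 1. 1. 0. 1. 1. 1.]  (only the 40 falls below the center 80)
print(ldbp_weight)  # approximately 1.0 for this sample neighbourhood
```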

This is a highly discriminative texture operator. It records the occurrences of various patterns in the neighbourhood of each pixel in a P-dimensional histogram. For constant regions, the differences are zero in all directions. On a slowly sloped edge, the operator records the highest difference in the gradient direction and zero values along the edge. For a spot, the differences are high in all directions. The signed differences f_p − f_c are not affected by changes in mean luminance; hence, the joint difference distribution is invariant against gray-scale shifts. Invariance with respect to the scaling of the gray scale is achieved by considering just the signs of the differences instead of their exact values, as in the Eq. (5):

T ≈ t(v(f_0 − f_c), v(f_1 − f_c), ···, v(f_7 − f_c))    (5)

where the function v(x) is given by the Eq. (6):

v(x) = 1 for x ≥ 0,  v(x) = 0 for x < 0.    (6)

By assigning a cosine factor cos(k·45°) to each sign v(f_k − f_c), the Eq. (5) is transformed into a unique LDBP number that characterizes the spatial structure of the local image texture, as given in the Eq. (7):

LDBP(x_c, y_c) = Σ_{k=0}^{7} v(f_k − f_c) cos(k·45°)    (7)

A local neighborhood is thresholded at the gray value of the center pixel into a binary pattern. The LDBP operator is, by definition, invariant against any monotonic transformation of the gray scale, i.e., as long as the order of the gray values in the image stays the same, the output of the LDBP operator remains constant.

In practice, dimensionality reduction is important in handling high dimensional data, since it mitigates the curse of dimensionality and other undesirable properties of high dimensional spaces. The most widely used method is principal component analysis (PCA). Class separability is enhanced by linear discriminant analysis (LDA). The PCA is used to find a subspace whose basis vectors correspond to the directions of maximum variance in the original space. The LDA searches for the vectors in the underlying space that best discriminate among classes; it creates a linear combination of the features which gives the largest mean difference between the desired classes. For all classes, the following two measures are calculated using the Eqs. (8) and (9):

• Within class scatter matrix:

S_w = Σ_{j=1}^{C} Σ_{i=1}^{N_j} (y_i^j − μ_j)(y_i^j − μ_j)^T    (8)

where y_i^j is the ith sample of class j, μ_j is the mean of class j, C is the number of classes, and N_j is the number of samples in class j.

• Between class scatter matrix:

S_b = Σ_{j=1}^{C} (μ_j − μ)(μ_j − μ)^T    (9)

where μ represents the mean of all classes. The objective is to maximize the between class measure while minimizing the within class measure.
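A toy NumPy sketch of the two scatter measures in the Eqs. (8) and (9), together with the eigenvalue problem that LDA solves, is given below; the data matrix is illustrative only.

```python
import numpy as np

# Class scatter measures of Eqs. (8) and (9) for a toy feature matrix.
# X holds one feature vector per row, y holds the class label of each row.
X = np.array([[1.0, 2.0], [1.2, 1.8], [4.0, 4.5], [3.8, 4.2], [7.0, 1.0], [6.5, 1.2]])
y = np.array([0, 0, 1, 1, 2, 2])

d = X.shape[1]
mu = X.mean(axis=0)                          # mean of all samples
Sw = np.zeros((d, d))                        # within-class scatter, Eq. (8)
Sb = np.zeros((d, d))                        # between-class scatter, Eq. (9)
for j in np.unique(y):
    Xj = X[y == j]
    mu_j = Xj.mean(axis=0)                   # mean of class j
    diff = Xj - mu_j
    Sw += diff.T @ diff                      # sum over the N_j samples of class j
    Sb += np.outer(mu_j - mu, mu_j - mu)     # contribution of class j to Eq. (9)

# LDA seeks directions that maximize between-class scatter relative to within-class
# scatter, i.e. the leading eigenvectors of inv(Sw) @ Sb.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
print(Sw, Sb, eigvals, sep="\n")
```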

The support vector machine (SVM) [24] is designed to work with two classes by determining the hyperplane that separates them. The samples closest to the margin that are selected to determine the hyperplane are known as support vectors. The basic principle of support vector machines is that the samples of the input space are first mapped into a high dimensional space by a nonlinear transform, in which an optimal linear classification surface can then be computed [25]. The nonlinear transform is realized by an appropriate inner product function.
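A small two-class illustration of these ideas is sketched below, using scikit-learn (an assumption; the paper's experiments were run in MATLAB): a kernel SVM separates two toy clusters, and only the samples nearest the margin are retained as support vectors.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class example: the kernel SVM finds a separating hyperplane in the implicitly
# transformed space; only the support vectors determine it.
rng = np.random.default_rng(0)
class0 = rng.normal(loc=[0, 0], scale=0.5, size=(20, 2))
class1 = rng.normal(loc=[2, 2], scale=0.5, size=(20, 2))
X = np.vstack([class0, class1])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel='poly', degree=3, coef0=1)   # polynomial inner-product (kernel) function
clf.fit(X, y)
print(clf.support_vectors_.shape)             # the few samples that define the hyperplane
print(clf.predict([[1.8, 2.1], [0.1, -0.2]])) # expected [1 0] for these well-separated clusters
```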

Different kernel functions yield different types of support vector machines. At present, there are three main kinds of kernel functions, as follows:

  1. The kernel function using a polynomial is defined by the Eq. (10):

K(x, x_i) = (x · x_i + 1)^q    (10)

The corresponding SVM is a polynomial classifier of order q.

  2. The kernel function using the Gaussian radial basis function is represented as in the Eq. (11):

K(x, x_i) = exp(−|x − x_i|^2 / σ^2)    (11)

The corresponding SVM is a radial basis function classifier.

  3. The kernel function using the sigmoid function is given by the Eq. (12):

K(x, x_i) = tanh(v(x · x_i) + c)    (12)

Each kernel function has parameters whose values have to be tuned according to the data set. A polynomial kernel function produces a polynomial separating hyperplane, whereas a Gaussian kernel function produces a Gaussian separating hyperplane. So, depending on the level of non-separability of the data set, the kernel function is chosen.

The proposed method for texture image classification consists of two modules, namely, the texture training module and the classification module. The Figure 5. shows the block diagram of the proposed method. In the experimentation, sixteen texture images [22] from the Brodatz album [23] are used for classification. Each image represents one texture class. The texture images are sampled to 256×256 size. Each texture image is divided into 16 equal sized nonoverlapping blocks of size 64×64, out of which 8 randomly chosen blocks are used as training samples and the remaining blocks are used as test samples for each texture class.

Figure 5. Block diagram of the proposed method (read the image; apply NSCT and LDBP; apply PCA; apply LDA; train the SVM; classify using SVM).

  1. Texture training module

The feature database is created using the nonsubsampled contourlet transform up to the third level of decomposition, which gives fifteen subbands for level 3 decomposition of the NSCT. The Haralick features, namely, contrast, energy, entropy, homogeneity, maximum probability, cluster shade and cluster prominence, are calculated from the coefficients of each subband to obtain the feature vector F1. The LDBP weights of each block are calculated and used as the feature vector F2, containing 3844 features (=62*62, since the image edges are excluded). The steps of the proposed training module are given in the Algorithm 1.

Algorithm 1: Training algorithm

Step 1: Input the training image block I of size 64×64.
Step 2: Apply the NSCT method to the image I.
Step 3: Compute the Haralick features (7 numbers) from each of the NSCT subbands (15 numbers) to obtain the feature vector F1 with 105 (=7*15) features.
Step 4: Compute the LDBP weights for the image I to obtain the feature vector F2 with 3844 features (=62*62, since the image edges are excluded).
Step 5: Form the feature vector F = (F1, F2), which contains 3949 (=105+3844) features, and store F in the feature database.
Step 6: Repeat the Steps 1-5 for all the training blocks of all the texture class images and obtain the training set (TF) of feature vectors.
Step 7: Apply PCA on the training feature set (TF) of Step 6 to obtain the reduced feature set (TFPCA).
Step 8: Apply LDA on the reduced feature set (TFPCA) of Step 7 to obtain the discriminant feature set (TFLDA). Store TFLDA in the feature library, which is to be used for training the SVM.
Step 9: The SVM is trained with a polynomial kernel of order 9 using TFLDA to obtain the SVM structure TFSVM, which is to be used for texture classification.
Step 10: Stop.

In Step 9, the SVM with a polynomial kernel of order 9 is used, since it is observed to yield optimal results as compared to the Gaussian radial basis kernel and the sigmoid kernel function, as indicated by the experimental results.

  2. Texture classification module

The support vector machine (SVM) [24] classifier is used to realize the automatic texture classification, with the polynomial function as the kernel function, as given by the Eq. (10). The steps of the testing algorithm are given in the Algorithm 2.

Algorithm 2: Testing algorithm (classification of test images)

Step 1: Input the testing image block Itest of size 64×64.
Step 2: Apply the NSCT method to the image Itest.
Step 3: Compute the Haralick features (7 numbers) from each of the NSCT subbands (15 numbers) to obtain the feature vector F1test with 105 (=7*15) features.
Step 4: Compute the LDBP weights for the image Itest to obtain the feature vector F2test with 3844 features (=62*62, since the image edges are excluded).
Step 5: Form the feature vector Ftest = (F1test, F2test), which contains 3949 (=105+3844) features, and store Ftest in the feature database.
Step 6: Project Ftest on the TFPCA components and obtain the weights FtestPCA, which are considered as the test image features.
Step 7: Project FtestPCA on the TFLDA components and obtain the weights FtestLDA, which are considered as the reduced test image features. Denote FtestLDA as ftest.
Step 8: (Classification) Apply the SVM classifier (with the polynomial kernel of order 9) to classify the test image Itest as belonging to class m.
Step 9: Stop.
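A compact sketch of the training and classification flow of the Algorithms 1 and 2 is given below, using scikit-learn as a stand-in for the MATLAB implementation. The 3949-dimensional NSCT+LDBP feature vectors are replaced by random data, and the number of retained PCA components (50) is an assumption; the LDA step reduces to 15 discriminant features, matching the 15 features reported for the proposed method in Table 3.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Training (Algorithm 1): 16 classes x 8 training blocks, 3949 features per block.
n_classes, n_train_per_class, n_features = 16, 8, 3949
rng = np.random.default_rng(1)
TF = rng.normal(size=(n_classes * n_train_per_class, n_features))   # stands in for NSCT+LDBP features
labels = np.repeat(np.arange(n_classes), n_train_per_class)

pca = PCA(n_components=50).fit(TF)                                   # Step 7: reduce dimensionality
TF_pca = pca.transform(TF)
lda = LinearDiscriminantAnalysis(n_components=15).fit(TF_pca, labels)  # Step 8: class separability
TF_lda = lda.transform(TF_pca)
svm = SVC(kernel='poly', degree=9, coef0=1).fit(TF_lda, labels)      # Step 9: train the SVM

# Classification (Algorithm 2): project a test block onto the same PCA/LDA components.
f_test = rng.normal(size=(1, n_features))
f_test_lda = lda.transform(pca.transform(f_test))
print(svm.predict(f_test_lda))                                       # predicted texture class
```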

        1. Image database

For the experimentation, sixteen different texture classes from the Brodatz album [23] are used, which are shown in the Figure 6. Each 256×256 texture class image is divided into 16 nonoverlapping blocks of 64×64 pixels. Thus, there are 256 blocks in the experimental database. 50% of the blocks of each texture image are used as training samples (128 blocks in all), and the remaining 50% are used as test samples (the other 128 blocks). In order to estimate the classification performance, the training and testing sets should be independent and randomly divided. Good features should not be wasted on a poor classifier, so the SVM classifier is used to perform texture classification. The inputs to the system are the digitized images from one of the texture classes. When the sub-images of a given class are trained, the sub-images of that class are treated as positive samples and the sub-images of the other classes as negative samples.
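The sampling protocol can be summarized by the short sketch below: a 256×256 texture is cut into sixteen 64×64 blocks and randomly split half and half into training and test samples (the image data here is a random stand-in for a Brodatz texture).

```python
import numpy as np

# Split a 256x256 texture image into sixteen 64x64 non-overlapping blocks and randomly
# assign half of them to training and half to testing, as described above.
rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(256, 256))        # stand-in for one Brodatz texture

blocks = [image[r:r + 64, c:c + 64]                  # 16 blocks of 64x64 pixels
          for r in range(0, 256, 64)
          for c in range(0, 256, 64)]

order = rng.permutation(len(blocks))                 # random, independent split
train_blocks = [blocks[i] for i in order[:8]]
test_blocks = [blocks[i] for i in order[8:]]
print(len(train_blocks), len(test_blocks))           # 8 8 per texture class
```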

Figure 6. Texture images from the Brodatz album: D3, D4, D6, D11, D16, D21, D24, D29, D36, D51, D52, D68, D71, D75, D82, D104.

        2. Experimental results

The experimentation of the proposed method is carried out on an Intel® Core i3-2330M @ 2.20 GHz machine with 4 GB RAM using MATLAB 7.9 software. The texture image is decomposed into subbands up to three levels; at each resolution level, it is decomposed into 2^n subbands. Thus, the input image is decomposed into fifteen subbands. As the transform is nonsubsampled, each subband corresponds to the actual size of the texture image. The features for each level are derived using the gray level co-occurrence matrix (GLCM) for the distance vector d(i, j) with offset d = (0, 1). From the GLCM, the Haralick features, namely, contrast, energy, entropy, homogeneity, maximum probability, cluster shade and cluster prominence, are calculated for each subband of the decomposed image.
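A minimal NumPy sketch of this step is shown below: a GLCM is accumulated for the (0, 1) offset and the seven Haralick-type features listed above are computed from it. The number of quantization levels (32) and the 0-255 range of the stand-in subband are assumptions; the actual NSCT subband coefficients would be rescaled before quantization.

```python
import numpy as np

def glcm_features(block, levels=32):
    """GLCM for offset d = (0, 1) and the seven Haralick-type features used in the paper.
    The quantization to 32 gray levels is an assumption."""
    q = (block.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):   # horizontally adjacent pairs
        glcm[a, b] += 1
    p = glcm / glcm.sum()                                    # normalized co-occurrence matrix

    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    nz = p > 0
    return {
        'contrast': ((i - j) ** 2 * p).sum(),
        'energy': (p ** 2).sum(),
        'entropy': -(p[nz] * np.log2(p[nz])).sum(),
        'homogeneity': (p / (1.0 + np.abs(i - j))).sum(),
        'maximum probability': p.max(),
        'cluster shade': (((i + j - mu_i - mu_j) ** 3) * p).sum(),
        'cluster prominence': (((i + j - mu_i - mu_j) ** 4) * p).sum(),
    }

subband = np.random.randint(0, 256, size=(64, 64))   # stand-in for one NSCT subband
print(glcm_features(subband))                        # 7 features per subband, 15 subbands -> 105
```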

The implementation of the NSCT is based on pyramidal filtering and directional filtering. Experiments are carried out using different Laplacian pyramid (LP) filters for each of the different directional filters (DFB). Four categories of pyramidal filters, namely, 9-7, maxflat, pyr and pyrexc, are considered, while fifteen categories of directional filters, namely, haar, dmaxflat4, dmaxflat5, dmaxflat6, dmaxflat7, qmf2, qmf, lax, pkva, ko, sinc, sk, vk, cd and dvmlp, are considered. All pairs of pyramidal filter and directional filter are investigated for level 3. The results of this extensive experimentation are summarised in the Table 1. The proposed method improves the average classification accuracy to 100% on the image database and achieves promising results in texture classification as compared to the classification technique discussed in [22]. The Table 1. shows the average classification accuracy for the 16 texture categories of Brodatz [23] for level 3 decomposition of the NSCT for all possible combinations of filters.
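The experimental grid over filter choices amounts to the enumeration sketched below. The function evaluate_classification_accuracy is a hypothetical placeholder for the full NSCT + LDBP + PCA + LDA + SVM pipeline described above; only the 4 × 15 = 60 filter pairs reported in Table 1 are enumerated.

```python
# Enumerate the filter pairs evaluated in Table 1; the accuracy evaluation itself is a
# hypothetical placeholder for the full classification pipeline.
pyramidal_filters = ['pyr', 'maxflat', '9-7', 'pyrexc']
directional_filters = ['haar', 'dmaxflat4', 'dmaxflat5', 'dmaxflat6', 'dmaxflat7',
                       'qmf2', 'qmf', 'lax', 'pkva', 'ko', 'sinc', 'sk', 'vk', 'cd', 'dvmlp']

results = {}
for pyr in pyramidal_filters:
    for dfb in directional_filters:
        # accuracy = evaluate_classification_accuracy(pyr, dfb, level=3)  # placeholder
        accuracy = None
        results[(dfb, pyr)] = accuracy
print(len(results))   # 60 filter combinations, as in Table 1
```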

The LDBP coefficients are used to represent the different textures and do not require additional complex computation for feature extraction. In the LDBP transformation, an image is represented as a sum of sinusoids of varying magnitudes and frequencies. The LDBP approach is used to extract the rotation invariant coefficients of the image (which produces 62*62 = 3844 features). The operator labels the pixels of an image by thresholding the 3×3 neighborhood of each pixel with the center value and considering the result as a binary number. The 3844 labels computed over a region are then used as a texture descriptor. The derived numbers (called local directional binary patterns or LDBP codes) codify local primitives, including different types of curved edges, spots, flat areas, etc. The feature set obtained from the NSCT co-occurrence features and the LDBP thus has 3949 features for the proposed method. To reduce the redundant information (i.e. the information contained in some highly correlated features) and to improve the class separability, two statistical analysis techniques, PCA and LDA, are used in the experimentation. Thus, the vast number of features is reduced to a much smaller dimension and used for training the support vector machine.
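Extending the per-pixel computation illustrated earlier to a whole block gives the 3844-element LDBP descriptor; a vectorized sketch is shown below, again assuming the 0° direction is the right-hand neighbour with angles increasing counter-clockwise.

```python
import numpy as np

def ldbp_map(block):
    """LDBP weights (Eq. 7) for every interior pixel of a block: a 64x64 input gives a
    62x62 map, i.e. the 3844 features used per block (edge pixels are excluded)."""
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]  # k = 0..7
    cosines = np.cos(np.deg2rad(45.0 * np.arange(8)))
    center = block[1:-1, 1:-1].astype(float)
    weights = np.zeros_like(center)
    for k, (dr, dc) in enumerate(offsets):
        neighbour = block[1 + dr:block.shape[0] - 1 + dr, 1 + dc:block.shape[1] - 1 + dc]
        weights += (neighbour >= center) * cosines[k]     # v(f_k - f_c) * cos(k*45)
    return weights

block = np.random.randint(0, 256, size=(64, 64))   # stand-in for one 64x64 texture block
features = ldbp_map(block).ravel()
print(features.shape)                              # (3844,)
```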

The support vector machine is employed to perform texture classification using the features extracted by the proposed method. The support vector machine is a theoretically superior learning methodology with increased classification accuracy for high dimensional datasets and has been found competitive with the best machine learning algorithms. The performance of the SVM classifier depends on the type of kernel function and the SVM parameters. SVMs have mostly been tested and evaluated as pixel based image classifiers. The SVM method was designed to be applied only to two class problems [24]. For applying the SVM to multiclass classification, the basic idea is to reduce the multiclass problem to a set of binary problems so that the SVM approach can be used. There is no fixed rule for the choice of the kernel function, but it is seen that the polynomial kernel function generally works well with non-separable data sets; by increasing the degree of the polynomial, one can obtain zero misclassification at least on the training set. In the proposed study, the polynomial kernel of order 9 is implemented.

Table 1. Average classification accuracy (%) of the proposed method using different directional filters and pyramidal filters of the NSCT (level 3) for the 16 texture categories of Brodatz [23].

Sl. No. | Directional filter | pyr | maxflat | 9-7 | pyrexc
1 | haar | 93.750 | 85.156 | 100 | 100
2 | dmaxflat4 | 90.625 | 100 | 88.281 | 100
3 | dmaxflat5 | 82.813 | 100 | 88.281 | 93.750
4 | dmaxflat6 | 100 | 100 | 83.594 | 100
5 | dmaxflat7 | 87.500 | 100 | 88.281 | 83.594
6 | qmf2 | 96.094 | 70.313 | 100 | 100
7 | qmf | 100 | 93.750 | 94.531 | 94.531
8 | lax | 100 | 96.094 | 93.750 | 64.063
9 | pkva | 87.500 | 93.750 | 94.531 | 92.969
10 | ko | 93.750 | 96.094 | 100 | 100
11 | sinc | 79.688 | 94.531 | 95.313 | 82.813
12 | sk | 77.344 | 100 | 76.563 | 100
13 | vk | 100 | 100 | 88.281 | 100
14 | cd | 86.719 | 100 | 93.750 | 93.750
15 | dvmlp | 100 | 100 | 100 | 83.594

The Table 2. shows the pairs of 2-D directional filter and pyramidal filter used in the proposed method which yielded the optimal result (100%).

Table 2. The different pairs of 2-D directional filter and pyramidal filter which yield the optimal result (100%), with the corresponding training and testing times (in seconds).

Sl. No. | Pair of 2-D directional filter and pyramidal filter | Training time (sec.) | Testing time (sec.)
1 | haar, 9-7 | 258.5018 | 10.4278
2 | haar, pyrexc | 261.8971 | 10.1643
3 | dmaxflat4, maxflat | 301.3827 | 12.7742
4 | dmaxflat4, pyrexc | 303.0670 | 12.9953
5 | dmaxflat5, maxflat | 329.2040 | 15.405
6 | dmaxflat6, pyr | 350.5524 | 16.3876
7 | dmaxflat6, maxflat | 351.6857 | 15.8956
8 | dmaxflat6, pyrexc | 352.2076 | 15.415
9 | dmaxflat7, maxflat | 384.4168 | 17.9869
10 | qmf2, 9-7 | 262.6677 | 10.6049
11 | qmf2, pyrexc | 261.9525 | 10.3589
12 | qmf, pyr | 265.6508 | 10.5738
13 | lax, pyr | 282.8375 | 11.6374
14 | ko, 9-7 | 259.1459 | 10.2003
15 | ko, pyrexc | 259.8482 | 10.2113
16 | sk, maxflat | 259.9656 | 10.8193
17 | sk, pyrexc | 261.0526 | 10.4756
18 | vk, pyr | 259.9656 | 10.2229
19 | vk, maxflat | 261.0526 | 10.3571
20 | vk, pyrexc | 260.7513 | 10.1785
21 | cd, maxflat | 267.4015 | 10.8224
22 | dvmlp, pyr | 267.4773 | 10.7117
23 | dvmlp, maxflat | 267.2677 | 10.7314
24 | dvmlp, 9-7 | 266.229 | 10.6799

The Table 3. shows the comparison of the classification accuracies for each texture class obtained by the proposed method, using haar and 9-7 as the optimal pair of 2-D directional filter and pyramidal filter, and by other methods in the literature implemented on the experimental database.

Table 3. Comparison of classification accuracies (%) by different methods for the 16 texture categories.

Sl. No. | Image (Brodatz) | Hiremath and Shivashankar [17], k-NN (105 features) | Zhao et al. [21], k-NN (288 features) | Hiremath and Rohini [22], k-NN (15 features) | Zhao et al. [21], SVM (288 features) | Proposed method, SVM (15 features)
1 | D104 | 93.47 | 100 | 100 | 100 | 100
2 | D11 | 84.38 | 25 | 100 | 100 | 100
3 | D16 | 93.48 | 100 | 100 | 100 | 100
4 | D21 | 100 | 100 | 100 | 100 | 100
5 | D24 | 79.63 | 50 | 100 | 100 | 100
6 | D29 | 84.92 | 37.5 | 100 | 100 | 100
7 | D3 | 86.9 | 75 | 100 | 100 | 100
8 | D36 | 72.34 | 75 | 87.5 | 87.5 | 100
9 | D4 | 76.19 | 100 | 100 | 100 | 100
10 | D51 | 59.81 | 50 | 100 | 100 | 100
11 | D52 | 59.57 | 75 | 100 | 100 | 100
12 | D6 | 91.67 | 87.5 | 100 | 100 | 100
13 | D68 | 82.13 | 100 | 100 | 100 | 100
14 | D71 | 100 | 100 | 100 | 100 | 100
15 | D75 | 77.81 | 100 | 100 | 100 | 100
16 | D82 | 56.23 | 87.5 | 87.5 | 100 | 100
Mean classification rate | | 81.158 | 78.906 | 98.437 | 99.210 | 100.0


The systematic comparison of the experimental results demonstrates that the proposed algorithm yields better results. The SVM classifier helps to improve the classification accuracy. The proposed system performs better than the other approaches in the literature, yielding an accuracy of 100%.

Conclusion

In this paper, a novel algorithm for texture image classification using a support vector machine is proposed. Features are extracted using the nonsubsampled contourlet transform and local directional binary patterns. To decrease the dimensionality of the feature vector and to enhance the discriminability of the classes, the PCA and LDA techniques are used. A support vector machine is used to classify the textured images. The classification performance is tested on sixteen Brodatz textures. The SVM classifier is found to give high classification accuracy and a smaller misclassification rate as compared to the other classifier techniques. Experimental results show that the proposed approach enhances the average precision of texture image classification.

References

[1] J. Weszka, C. Dyer, and A. Rosenfeld, A comparative study of texture measures for terrain classification, IEEE Trans. on Systems, Man, and Cybernetics, vol. 6, no. 4, 1976.
[2] R. M. Haralick, K. Shanmugam, and I. Dinstein, Textural features for image classification, IEEE Trans. on Systems, Man, and Cybernetics, 3, pp. 610-621, 1973.
[3] Y. Wan, J. Du, D. Huang, Z. Chi, Y. Cheung, X. Wang, and G. Zhang, Bark texture feature extraction based on statistical texture analysis, In Proceedings of the 2004 Int. Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 2004.
[4] I. Daubechies, The wavelet transform, time-frequency localization and signal analysis, IEEE Trans. on Information Theory, 36, pp. 961-1005, 1990.
[5] S. G. Mallat, A theory for multi-resolution signal decomposition: the wavelet representation, IEEE Trans. on PAMI, 11, pp. 674-693, 1989.
[6] J. R. Smith and S. F. Chang, Transform features for texture classification and discrimination in large image databases, In Proceedings of the IEEE Int. Conference on Image Processing, 1994.
[7] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, John Wiley and Sons, 2006.
[8] E. Avci, An expert system based on Wavelet Neural Network-Adaptive Norm Entropy for scale invariant texture classification, Journal on Expert Systems with Applications, 32, pp. 919-926, 2007.
[9] I. Turkoglu and E. Avci, Comparison of wavelet-SVM and wavelet-adaptive network based fuzzy inference system for texture classification, Journal on Digital Signal Processing, 18, pp. 15-24, 2008.
[10] G. Schaefer, M. Zavisek, and T. Nakashima, Thermography based breast cancer analysis using statistical features and fuzzy classification, Journal of Pattern Recognition, 47, pp. 1133-1137, 2009.
[11] S. M. Mukane, S. R. Gengaje, and D. S. Bormane, On scale invariance texture image retrieval using fuzzy logic and wavelet co-occurrence based features, Int. Journal of Computer Applications, vol. 18, no. 3, pp. 10-17, 2011.
[12] S. M. Mukane, D. S. Bormane, and S. R. Gengaje, On size invariance texture image retrieval using fuzzy logic and wavelet based features, Int. Journal of Applied Engineering Research, vol. 6, no. 6, pp. 1297-1310, 2011.
[13] S. M. Mukane, D. S. Bormane, and S. R. Gengaje, Wavelet and co-occurrence matrix based rotation invariant features for texture image retrieval using fuzzy logic, Int. Journal of Computer Applications, vol. 24, no. 7, pp. 15, 2011.
[14] A. Laine and J. Fan, Texture classification by wavelet packet signatures, IEEE Trans. on PAMI, vol. 15, no. 11, pp. 1186-1191, 1993.
[15] C. Pun and M. Lee, Log-polar wavelet energy signatures for rotation and scale invariant texture classification, IEEE Trans. on PAMI, vol. 25, no. 5, pp. 590-603, 2003.
[16] P. Cui, J. Li, Q. Pan, and H. Zhang, Rotation and scaling invariant texture classification based on Radon transform and multi-scale analysis, Pattern Recognition Letters, 27, pp. 408-413, 2006.
[17] P. S. Hiremath and S. Shivashankar, Texture classification using wavelet packet decomposition, ICGST Int. Journal on Graphics, Vision and Image Processing, vol. 6, no. 2, pp. 77-80, Sept. 2006.
[18] S. Arivazhagan, L. Ganesan, and S. Padam Priyal, Texture classification using Gabor wavelets based rotation invariant features, Pattern Recognition Letters, 27, pp. 1976-1982, 2006.
[19] M. N. Do and M. Vetterli, The contourlet transform: an efficient directional multiresolution image representation, IEEE Trans. on Image Processing, vol. 14, no. 12, pp. 2091-2106, Dec. 2005.
[20] A. L. Cunha, J. P. Zhou, and M. N. Do, The nonsubsampled contourlet transform: theory, design and applications, IEEE Trans. on Image Processing, vol. 15, no. 10, pp. 3089-3101, 2006.
[21] Zhengli Zhu, Chunxia Zhao, and Yingkun Hou, Texture image classification based on nonsubsampled contourlet transform and local binary patterns, Int. Journal of Digital Content Technology and Its Applications, vol. 4, no. 9, pp. 186-193, 2010.
[22] P. S. Hiremath and Rohini A. Bhusnurmath, Texture image classification using nonsubsampled contourlet transform and local directional binary patterns, Int. Journal of Applied Research in Computer Science and Software Engineering, vol. 3, no. 7, pp. 819-827, July 2013.
[23] P. Brodatz, Textures: A Photographic Album of Artists and Designers, Dover Publications, New York, 1966.
[24] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[25] Ratnanjali Sood and Satish Kumar, The effect of kernel function on classification, XXXII National Systems Conference, NSC 2008, pp. 369-373, 2008.
