Efficient Content-Based Image Search and Retrieval (CBIR) Using SIFT Based on Multi-Sort Indexing

DOI: 10.17577/IJERTCONV2IS05025


NCICCT'14 Conference Proceedings

A. Rajasri,
Department of CSE,
University College of Engineering (BIT Campus), Anna University Chennai,
Trichirappalli-650624
shrirajasri17@gmail.com

B. Shriram,
Department of CSE,
University College of Engineering (BIT Campus),
Anna University Chennai, Trichirappalli-650624
shriram.jb@gmail.com

Abstract- Content-based image search and retrieval is a widely used approach for retrieving a desired image from a large collection on the basis of features such as color, texture and shape. Many methods have been proposed for efficient content search and retrieval, but accuracy and search time have been limiting factors in those methods, such as hashing- and tree-based methods. To overcome these drawbacks, the proposed system extracts SIFT features and indexes them with multi-sort indexing. The value cardinalities of the high-dimensional descriptor dimensions are analyzed, and quantization and normalization are applied in the extraction process. Since dimensions with high value cardinality have more discriminative power, a multiple-sort algorithm is used. The multi-sort indexing algorithm reorders the dimensions based on cardinality so that two similar images lie within a close constant range of storage positions, as confirmed by experiments conducted with the INRIA Holidays dataset.

Keywords: SIFT, CBIR, multi-sort indexing

I. Introduction

The growing accessibility of images over the internet has made the challenge of content-based searching obvious: finding visually similar content. Content-based image retrieval (CBIR) is a technique for retrieving images on the basis of automatically derived features such as color, texture and shape. CBIR operates on a totally different principle from keyword indexing. Primitive features characterizing image content, such as color, textures, and shape, are computed for both stored and query images, and are used to identify the stored images most closely matching the query.

Content-based means the contents of the image rather than the metadata such as tags, keywords, or descriptions associated with the image. The term content in this context might refer to textures, shapes, colors, or any other information that can be derived from the image. The process of digitization does not in itself make image collections easier to manage. Some form of classification and indexing is still essential; the only difference is that much of the required information can now potentially be derived automatically from the images themselves. The related tasks can be grouped under three captions: image compression, query specification and metadata description. CBIR is desirable because most web-based image search engines depend purely on metadata, and this produces a lot of garbage in the results.

Since keywords manually entered by humans for images in a large database may be inefficient and expensive, and sometimes do not capture every keyword that describes the image [1], a system that can filter images based on their content would provide better indexing and return more accurate results. Access to a desired image from a repository might thus involve a search for images depicting specific types of object or scene, evoking a particular mood, or simply containing a specific texture or pattern. Suppose a user wants to search for, say, many rose images. He or she submits an existing rose picture, or a hand-drawn sketch of a rose, as the query. The system extracts image features for this query, compares these features with those of the other images in the database, and displays the relevant results to the user.

II. Related Work

In this section we present the literature on several content-based image retrieval techniques and similarity-searching mechanisms.

Renato O. Stehling et al. [10] proposed BIC (Border/Interior pixel Classification), a compact and efficient CBIR approach suitable for a broad image realm. It has three main components: (1) a simple and powerful image analysis algorithm that classifies image pixels as either border or interior, (2) a new logarithmic distance (dLog) for comparing histograms, and (3) a compact representation for the visual features extracted from images. After the quantization step, image pixels are classified as border or interior pixels, and one color histogram is computed for the border pixels and another for the interior pixels. A pixel is classified as border if it is at the border of the image itself or if at least one of its 4-neighbors (top, bottom, left and right) has a different quantized color. A limitation of pixel classification is that the obtained regions are sometimes only part of a real object, i.e., of what a user would likely identify by looking at the image; the criterion of homogeneous visual properties usually leads to an over-segmentation of the image.
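As a rough illustration of the border/interior idea, the following NumPy sketch classifies the pixels of an already quantized image and builds the two histograms; the 64-color assumption, the helper names and the simplified log-scale distance are illustrative choices, not the exact BIC implementation.

```python
import numpy as np

def classify_border_interior(quantized, n_colors=64):
    """Split a quantized image into border/interior pixels and build one histogram per class.

    A pixel is 'border' if it lies on the image boundary or if any of its
    4-neighbors (top, bottom, left, right) has a different quantized color.
    """
    h, w = quantized.shape
    border = np.zeros((h, w), dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    core = quantized[1:-1, 1:-1]
    border[1:-1, 1:-1] |= (core != quantized[:-2, 1:-1])   # top neighbor differs
    border[1:-1, 1:-1] |= (core != quantized[2:, 1:-1])    # bottom neighbor differs
    border[1:-1, 1:-1] |= (core != quantized[1:-1, :-2])   # left neighbor differs
    border[1:-1, 1:-1] |= (core != quantized[1:-1, 2:])    # right neighbor differs

    hist_border = np.bincount(quantized[border], minlength=n_colors)
    hist_interior = np.bincount(quantized[~border], minlength=n_colors)
    return hist_border, hist_interior

def dlog_distance(h1, h2):
    """Compare histograms on a log scale (a simplified stand-in for BIC's dLog)."""
    def f(x):
        out = np.zeros_like(x, dtype=float)
        nz = x > 0
        out[nz] = np.log2(x[nz]) + 1
        return out
    return np.abs(f(np.asarray(h1)) - f(np.asarray(h2))).sum()
```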

Van De Sande et al. [5] present image category recognition, which is important for accessing visual information at the level of objects and scene types. Both image retrieval and video retrieval use machine learning based on image descriptions to distinguish object and scene categories. Changes in the illumination of a scene can greatly affect the performance of object recognition if the descriptors used are not robust to these changes. The invariance properties and the distinctiveness of color descriptors are studied in a structured way. The taxonomy is derived from the diagonal model of illumination change. The distinctiveness of color descriptors is then analyzed experimentally using two benchmarks, one from the image domain and one from the video domain. Limitations of color descriptors: the moments and histograms are not very distinct when compared to SIFT-based descriptors; they contain too little relevant information to be competitive with SIFT. Performance also suffers when no prior knowledge about the dataset and the object and scene categories is available.

Mikolajczyk et al. [6] propose a novel approach for detecting interest points invariant to scale and affine transformations. Scale- and affine-invariant detectors are based on the following results: interest points extracted with the Harris detector can be adapted to affine transformations; the characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives; and the affine shape of a point neighborhood is estimated based on the second moment matrix. Limitations of the detector are that the invariance to geometric and photometric affine transformations removes some of the information, and that an unreasonably high number of points increases the probability of mismatches and the complexity.

Pauleve et al. [7] show that hashing methods are suitable for approximate similarity search, since they support efficient indexing and data compression. The basic idea of the hashing methods is (a) to encode the distances between the data into the form of compressed sequences of bits by using hash functions, and (b) to store the encodings into buckets, in order to ensure that the probability of collision is much higher for data that are close to each other than for those that are far apart. Exact similarity measures are then approximated by comparing hash codes, using a Hamming distance on binary codes or other measures. Diverse strategies are followed during the preprocessing for the generation of the binary codes.

The existing hashing methods can be broadly categorized as data-independent; examples are Locality-Sensitive Hashing (LSH), Spherical Hashing, Multi-Probe LSH, A Posteriori Multi-Probe LSH and Shift-Invariant Kernels Hashing. Limitations of hashing are that accuracy is hard to achieve when the hashing functions are drawn independently from the data, that additional memory is required, and that more processing time is needed for long binary code lengths.
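The following sketch illustrates the data-independent hashing idea with random hyperplanes (sign-of-projection bits), one of the simplest LSH variants; the class name, the bit count and the single-table layout are illustrative assumptions, not the specific schemes evaluated in [7].

```python
import numpy as np
from collections import defaultdict

class RandomHyperplaneLSH:
    """Minimal locality-sensitive hashing with random hyperplanes."""

    def __init__(self, dim, n_bits=32, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _code(self, v):
        # Each bit records on which side of a random hyperplane the vector falls.
        bits = (self.planes @ v) > 0
        return bits.tobytes()

    def insert(self, key, v):
        self.buckets[self._code(v)].append((key, v))

    def query(self, v):
        # Candidates share the exact hash code; close vectors collide more often.
        return self.buckets.get(self._code(v), [])
```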

Song et al. [4]: instead of indexing the data in the original high-dimensional space, dimensionality reduction methods aim at mapping the data into a lower-dimensional subspace. The main idea is to make such a transformation without losing much information and to build an index on the subspace. Global dimensionality reduction methods map the whole dataset into a much lower-dimensional subspace. For example, the Locally Linear Embedding method projects the data to a low-dimensional space while preserving local geometric properties.

Dimensionality reduction methods can be used either for exact or for approximate similarity search. In the first case, the similarity search is performed only in the transformed subspace. In the second case, the similarity search is first performed in the transformed space, where lower bounds on the distances are used for filtering, then a resulting set of candidates is returned, and finally the candidates are refined in the original space with exact search. Limitations of dimensionality reduction methods: the scatter gets maximized both by the between-class scatter, which is useful for classification, and by the within-class scatter, which is not. Much of the variation from one image to the next is due to illumination changes; consequently, the points in the projected space will not be well clustered and, worse, the classes may be smeared together. The preprocessing cost of the transformation is also high.
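As a minimal illustration of mapping descriptors to a lower-dimensional subspace, the sketch below applies scikit-learn's LocallyLinearEmbedding to a toy descriptor matrix; the use of scikit-learn, the descriptor size, the neighbor count and the target dimensionality are illustrative assumptions, not the setup of the cited work.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Toy descriptor matrix: one 128-D row per image (illustrative data only).
rng = np.random.default_rng(0)
descriptors = rng.random((500, 128))

# Project to a 16-D subspace while preserving local neighborhood geometry.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=16)
reduced = lle.fit_transform(descriptors)

# An index would be built on `reduced`; for approximate search, candidates found
# in the subspace would then be re-checked in the original 128-D space.
print(reduced.shape)  # (500, 16)
```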

The main strategy for all tree-based indexing methods is to prune tree branches based on the established bounding distances in order to reduce node accesses. Tree-based indexing organizes the terms into a single tree. Each path in the tree represents general properties of the indexed terms, similar to classification trees or decision trees. The basic tree-based indexing method is discrimination tree indexing, where the tree reflects precisely the structure of the terms. A more complex tree-based method is abstraction tree indexing, whose nodes are tagged with lists of terms in a manner that reveals the substitution of variables from one term to another: the domain of the variable substitutions in a node is the co-domain of the substitutions in a sub-node. A related tree-based method is substitution tree indexing, which achieves faster retrieval and deletion times than other tree-based indexing methods. Limitations of tree-based indexing methods: in high-dimensional spaces they become inefficient, additional time is required for insertion and deletion, and sometimes more substitutions are required, which takes additional processing time.

To overcome these drawbacks, we designed an efficient content-based image search and retrieval method using the scale-invariant feature transform combined with multi-sort indexing.

III. Proposed System

The main objective of our proposed system is to provide efficient content-based image search and retrieval, which has become a difficult task due to limiting factors such as search time and accuracy. To overcome these drawbacks, the proposed system combines scale-invariant feature extraction with multi-sort indexing. SIFT is a method for detecting and extracting local feature descriptors. Multi-sort indexing rearranges the dimensions of the descriptor vectors according to their cardinalities. By combining the results of these two approaches, efficient search time and accurate retrieval are achieved.

1. SIFT Feature Extraction

SIFT is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of a scene or object. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in illumination, change in 3D viewpoint and addition of noise. The features are highly distinctive, so that a single feature can be correctly matched with high probability against a large database of features from many images. SIFT also supports an approach to object recognition: identification proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally verification through a least-squares solution for consistent pose parameters. In short, the scale-invariant feature transform detects and extracts local feature descriptors which are invariant to changes in image noise, illumination, scaling, rotation, and small changes in viewpoint.

The stages of SIFT feature detection are: scale-space extrema detection, keypoint localization, orientation assignment, and generation of keypoint descriptors.
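A minimal usage sketch of these stages via OpenCV's SIFT implementation is shown below; it assumes an OpenCV build that ships SIFT (cv2.SIFT_create, available in opencv-python 4.4 and later) and a placeholder image path, and is not necessarily the implementation used in the experiments.

```python
import cv2

# 'query.jpg' is a placeholder path for any grayscale-readable image.
img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries position, scale and orientation; each descriptor is a
# 128-dimensional vector built from 4x4 orientation histograms.
print(len(keypoints), descriptors.shape)
```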

2. Scale-Space Extrema Detection

An important issue is to determine the frequency of sampling in the image and scale domains that is needed to reliably detect the extrema. Unfortunately, there is no minimum spacing of samples that will detect all extrema, as the extrema can be arbitrarily close together. This can be seen by considering a white circle on a black background, which has a single scale-space maximum where the circular positive central region of the difference-of-Gaussian function matches the size and location of the circle. Interest points for SIFT features correspond to local extrema of difference-of-Gaussian filters at different scales. Given a Gaussian-blurred image

L(x, y, σ) = G(x, y, σ) * I(x, y),

where

G(x, y, σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))

is a variable-scale Gaussian, the image is convolved with a difference-of-Gaussian filter.

The first step toward the detection of interest points is the convolution of the image with Gaussian filters at different scales, in which the difference-of-Gaussian images are obtained from the difference of adjacent blurred images. The convolved images are grouped by octave (an octave corresponds to doubling the value of σ), and the value of k is selected so that we obtain a fixed number of blurred images per octave. This also ensures that we obtain the same number of difference-of-Gaussian images per octave.
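The construction of one octave of difference-of-Gaussian images can be sketched as follows; the values of σ and k and the number of blurred images per octave are common SIFT defaults assumed here for illustration, not necessarily the settings used in this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(image, sigma=1.6, k=2 ** (1 / 3), n_scales=5):
    """Build one octave of difference-of-Gaussian images.

    Blurs the image at successive scales sigma, k*sigma, k^2*sigma, ... and
    subtracts adjacent blurred images.
    """
    blurred = [gaussian_filter(image, sigma * (k ** i)) for i in range(n_scales)]
    return [blurred[i + 1] - blurred[i] for i in range(n_scales - 1)]

# Usage: the next octave would be built from a blurred image downsampled by 2.
octave = dog_octave(np.random.rand(256, 256).astype(np.float32))
print(len(octave))  # n_scales - 1 DoG images
```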

3. Keypoint Localization

Interest points (called keypoints in the SIFT framework) are identified as local maxima or minima of the DoG images across scales.

Fig.: Local extrema detection

Each pixel in the DoG images is compared to its 8 neighbors at the same scale, plus the 9 corresponding neighbors in each of the neighboring scales. If the pixel is a local minimum or maximum, it is selected as a candidate keypoint (see the sketch after the list below).

For each candidate keypoint:

1. Interpolation of nearby data is used to accurately determine its position.

2. Keypoints with low contrast are removed.

3. Responses along edges are eliminated.
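A straightforward sketch of the candidate detection step, assuming three adjacent DoG images and an illustrative contrast threshold, might look like this (the edge-response elimination is omitted):

```python
import numpy as np

def local_extrema(dog_below, dog_mid, dog_above, contrast_thresh=0.03):
    """Flag pixels of the middle DoG image that are extrema over their 26 neighbors.

    Each pixel is compared to its 8 neighbors at the same scale and the 9
    neighbors in each adjacent scale; low-contrast responses are discarded.
    The contrast threshold is an illustrative value, not the paper's setting.
    """
    h, w = dog_mid.shape
    is_extremum = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = dog_mid[y, x]
            if abs(v) < contrast_thresh:
                continue  # low contrast: rejected
            patch = np.stack([dog_below[y-1:y+2, x-1:x+2],
                              dog_mid[y-1:y+2, x-1:x+2],
                              dog_above[y-1:y+2, x-1:x+2]])
            if v >= patch.max() or v <= patch.min():
                is_extremum[y, x] = True
    return is_extremum
```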

      4. Orientation Assignment

A gradient orientation histogram is computed in the neighborhood of the keypoint (using the Gaussian image at the scale closest to the keypoint's scale) to determine the keypoint orientation. Peaks in the histogram correspond to dominant orientations. A keypoint is created for the direction corresponding to the histogram maximum, and for any other direction within 80% of the maximum value. All the properties of the keypoint are computed relative to the keypoint orientation, which gives invariance to rotation.
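A simplified sketch of this orientation assignment is given below; the 36-bin histogram and the 80% peak ratio follow the description above, while the Gaussian weighting of the window is omitted for brevity.

```python
import numpy as np

def dominant_orientations(gauss_patch, n_bins=36, peak_ratio=0.8):
    """Return dominant gradient orientations (degrees) for a keypoint neighborhood.

    Builds a gradient orientation histogram weighted by gradient magnitude and
    keeps every bin within 80% of the highest peak.
    """
    dy, dx = np.gradient(gauss_patch.astype(np.float32))
    magnitude = np.hypot(dx, dy)
    orientation = np.degrees(np.arctan2(dy, dx)) % 360.0

    hist, _ = np.histogram(orientation, bins=n_bins, range=(0, 360), weights=magnitude)
    peak = hist.max()
    bin_width = 360.0 / n_bins
    return [(i + 0.5) * bin_width for i, h in enumerate(hist) if h >= peak_ratio * peak]
```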

      5. Feature Descriptor

Once a keypoint orientation has been selected, the feature descriptor is computed as a set of orientation histograms over 4×4 pixel neighborhoods. The orientation histograms are relative to the keypoint orientation, and the orientation information comes from the Gaussian image closest in scale to the keypoint's scale. The contribution of each pixel is weighted by the gradient magnitude, and by a Gaussian with 1.5 times the scale of the keypoint.
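The 4×4 histogram layout can be sketched as follows for a 16×16 patch already rotated to the keypoint orientation; trilinear interpolation, Gaussian weighting and the usual clipping step are omitted, so this is only an approximation of the real descriptor.

```python
import numpy as np

def sift_like_descriptor(patch):
    """Build a 4x4x8 = 128-D orientation-histogram descriptor from a 16x16 patch."""
    assert patch.shape == (16, 16)
    dy, dx = np.gradient(patch.astype(np.float32))
    mag = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0

    descriptor = []
    for by in range(4):          # 4x4 grid of cells ...
        for bx in range(4):
            sl = (slice(4 * by, 4 * by + 4), slice(4 * bx, 4 * bx + 4))
            # ... each cell contributes an 8-bin, magnitude-weighted histogram.
            hist, _ = np.histogram(ang[sl], bins=8, range=(0, 360), weights=mag[sl])
            descriptor.extend(hist)
    descriptor = np.asarray(descriptor, dtype=np.float32)
    return descriptor / (np.linalg.norm(descriptor) + 1e-12)  # unit-normalize
```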

      6. Feature Matching

The nearest neighbor is found in a database of SIFT features from training images. For robustness, the ratio of the distance to the nearest neighbor over the distance to the second nearest neighbor is used. The matched image is the neighbor with the minimum Euclidean distance.
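A common way to realize this ratio test with OpenCV is sketched below; the 0.75 ratio is a widely used default assumed here, not a value taken from this paper.

```python
import cv2

def ratio_test_matches(des_query, des_train, ratio=0.75):
    """Match SIFT descriptors and keep only matches that pass the ratio test.

    A match is kept only if the distance to the nearest neighbor is clearly
    smaller than the distance to the second nearest neighbor.
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des_query, des_train, k=2):
        if m.distance < ratio * n.distance:
            good.append(m)
    return good
```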

7. Multi-Sort Indexing

Multi-sort indexing examines the value cardinalities of the SIFT descriptor dimensions, reorders the dimensions of the descriptor vectors according to their value cardinalities, and supports dynamic indexing and storing of new image content. It is an indexing method that reorders the storage positions of image descriptors according to the value cardinalities of their dimensions by performing a multiple-sort algorithm, in order to increase the probability that two similar images occupy storage positions that do not differ by more than a specific global constant range, denoted by a parameter.

The steps in multi-sort indexing are preprocessing, insertion, query processing and deletion.

          1. Preprocessing

Preprocessing performs the indexing and storing of a pre-existing image dataset in the form of high-dimensional descriptor vectors. Initially, cardinality values are computed from the descriptor values for each dimension, where three cases are identified based on the descriptor value type: integer values, normalized real values and real values. Since dimensions with high value cardinalities correspond to dimensions with high discriminative power, the value cardinalities are sorted in descending order. Finally, a priority index is created based on the cardinality values.
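A minimal sketch of this preprocessing step, assuming the descriptors are given as a NumPy matrix with one row per image, could look like the following; the separate handling of integer and normalized real values is simplified away.

```python
import numpy as np

def cardinality_priority(descriptors):
    """Compute per-dimension value cardinalities and a descending priority order.

    Dimensions with more distinct values are treated as more discriminative and
    are placed first, mirroring the preprocessing step described above.
    """
    cardinalities = np.array([len(np.unique(descriptors[:, d]))
                              for d in range(descriptors.shape[1])])
    priority = np.argsort(-cardinalities)  # high cardinality first
    return cardinalities, priority

def reorder(descriptors, priority):
    """Rearrange descriptor dimensions according to the priority index."""
    return descriptors[:, priority]

# The reordered descriptors would then be multi-sorted so that similar images
# end up within a bounded range of storage positions.
```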

          2. Insertion

Insertion provides real-time indexing and storing of a new (non-existing) image in the form of a descriptor vector. A storage position is allocated for the new image's D-dimensional descriptor vector by comparing the dimension values of the new image (vi) with those of the existing images (yi).

          3. Query Processing

Query processing provides searching for the top-k images most similar to a posed query image. The query image is inserted into the dataset and its position is allocated. The distance between the query image and each candidate database image is computed using a distance measure d. The calculated distances are inserted into a minimum heap structure; finally, the top-k results are removed and retrieved from the heap, constituting the result set R.
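A simplified sketch of the top-k step using a heap is given below; it scans a whole toy database instead of only the bounded range of storage positions that the full method would examine, and the dictionary layout and Euclidean distance are assumptions for illustration.

```python
import heapq
import numpy as np

def top_k_similar(query_vec, database, k=10):
    """Return the k database entries closest to the query under Euclidean distance.

    Distances are pushed into a min-heap and the k smallest are popped off,
    mirroring the heap-based retrieval step described above.
    """
    heap = []
    for image_id, vec in database.items():
        d = float(np.linalg.norm(query_vec - vec))
        heapq.heappush(heap, (d, image_id))
    return [heapq.heappop(heap) for _ in range(min(k, len(heap)))]

# Usage with a toy database of 128-D descriptors:
rng = np.random.default_rng(0)
db = {f"img_{i}": rng.random(128) for i in range(100)}
print(top_k_similar(rng.random(128), db, k=5))
```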

          4. Deletion

Deletion of an image is performed by retrieving its physical memory address and clearing its content; it removes already indexed images.

IV. Conclusion

The proposed system provides a method for efficient content-based image search and retrieval using scale-invariant feature descriptors and multi-sort indexing. The content of the image is analyzed according to the value cardinalities that appear in the dimensions of the descriptor vectors. The system is able to perform content-based search and retrieval in low time and handles the dynamic operations of insertion and deletion. Experimental results show that the efficiency and effectiveness of the proposed method are improved in terms of search time and retrieval accuracy. To further improve retrieval accuracy and efficiency, image search and retrieval could be performed using various types of feature descriptors in a reduced-dimensional subspace.

References

1. E. Tiakas, D. Rafailidis, A. Dimou, and P. Daras, MSIDX: Multi-sort indexing for efficient content-based image search and retrieval, IEEE Trans. Multimedia, vol. 15, no. 6, Oct. 2013.

2. D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vision, vol. 60, pp. 91-110, 2004.

3. G. Pass, R. Zabih, and J. Miller, Comparing images using color coherence vectors, in Proc. ACM Multimedia, 1996.

  4. J. Song, Y. Yang, Z. Huang, H. T. Shen, and R. Hong, Multiple feature hashing for real-time large scale near-duplicate video retrieval, in Proc. of ACM Multimedia, pp. 423-432, 2011.

5. K. E. A. Van De Sande, T. Gevers, and C. G. M. Snoek, Evaluating color descriptors for object and scene recognition, IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1582-1596, 2010.

6. K. Mikolajczyk and C. Schmid, Scale and affine invariant interest point detectors, Int. J. Comput. Vision, vol. 60, no. 1, pp. 63-86, 2004.

7. L. Pauleve, H. Jegou, and L. Amsaleg, Locality sensitive hashing: A comparison of hash function types and querying mechanisms, Pattern Recognit. Lett., vol. 31, no. 11, pp. 1348-1358, 2010.

  8. O. Chum, J. Philbin, and A. Zisserman, Near duplicate image detection: Min-hash and tf-idf weighting, in Proc. British Machine Vision Conf., 2008.

9. Q. Lv, W. Josephson, Z. Wang, M. Charikar, and K. Li, Multi-probe LSH: Efficient indexing for high-dimensional similarity search, in Proc. Int. Conf. Very Large Data Bases (VLDB), 2007, pp. 950-961.

10. R. O. Stehling, M. A. Nascimento, and A. X. Falcao, A compact and efficient image retrieval approach based on border/interior pixel classification, in Proc. CIKM, 2002.

11. T. Chiueh, Content-based image indexing, in Proc. Int. Conf. Very Large Databases (VLDB), 1994, pp. 582-593.

12. Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin, Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval, IEEE Trans. Pattern Anal. Mach. Intell.

13. Y. Chen, J. Z. Wang, and R. Krovetz, Content-based image retrieval by clustering, in Proc. MIR'03, November 7, 2003, Berkeley, California, USA.
