Content Based Image Retrieval Based on Spatial Constraints Using LabVIEW

DOI: 10.17577/IJERTV2IS1232


T. Prathiba, N. M. Mary Sindhuja, S. Nisharani

Assistant Professor, Department of ECE, Kamaraj College of Engineering and Technology, Virudhunagar, Tamilnadu, India.

Abstract – Searching for relevant images in a large database has been a serious problem in the field of data management. Text-based search methods do not meet user requirements in most cases. Content-based image retrieval (CBIR) involves searching for relevant images based on the features extracted from a query image. The process involves extraction of image features such as color, texture, shape, or spatial information. For better image retrieval, spatial information may be considered as a feature to be extracted. Spatial relationships between the images are compared and a corresponding match score is generated using the Similarity Match (SIM) algorithm. The proposed algorithm provides both scale and rotation invariance.

INDEX TERMS: Feature descriptor, edge list, spatial orientation graph, symbolic representation, SIM algorithm.

  1. INTRODUCTION

    Content-based image retrieval (CBIR), also known as Query By Image Content (QBIC) and Content-Based Visual Information Retrieval (CBVIR), is the application of computer vision techniques to the image retrieval problem. A content-based image retrieval system retrieves images from a database based on the similarity measured between the database images and a query image. The features can be keywords that describe the image, or visual features such as color, texture, and shape.

    In crime prevention, the police use visual information to identify people or to record the scenes of crime for evidence. Over the course of time, these photographic records become a valuable archive. Whenever a serious crime is committed, evidence from the scene can be compared for its similarity to records in the archive. This is an example of identity matching rather than similarity matching, since all such images vary over time. CBIR is capable of searching an entire database to find the closest matching records. Results for a crime prevention application are also discussed in this paper.

  2. PROBLEM STATEMENT

    Retrieval based on color, texture, and shape does not provide any information about the spatial location of the features, and in many cases the retrieval becomes irrelevant. To overcome this, features are retrieved based on their spatial location.

  3. WORK FLOW

    The workflow of the proposed system is shown in figure 1. Images fetched from the database are resized, converted to grayscale, and edge detected; the objects are then detected and their spatial features are compared against the query image features to retrieve the matching images.

    Figure 1: Block Diagram (Database -> Resizing -> Grayscaling -> Edge detection -> Object detection -> Comparison with query image features -> Retrieved image)

  4. STEPS INVOLVED

    1. Fetching Image From Database

      A sample of 70 images is used to test the algorithm. The database images are shown in figure 2.

      Figure 2: Database Images

      Images stored in the database must be transferred to LabVIEW for processing. Initially, LabVIEW and MS Access are connected, after which the images are sent to LabVIEW for further processing, as shown in figure 3.

      Figure 3: Block Diagram Data insertion into database
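
      The paper implements this step with the LabVIEW Database Connectivity Toolkit and MS Access. As a rough Python sketch of the same step, the snippet below reads image file paths from an Access database and loads the images; the driver string, the table name "Images", and the columns "ImageID" and "FilePath" are illustrative assumptions, not taken from the paper.

          # Hypothetical sketch: fetch image records from an MS Access database.
          # The paper does this with the LabVIEW Database Connectivity Toolkit;
          # the table and column names below are illustrative assumptions.
          import cv2
          import pyodbc

          ACCESS_CONN = (
              r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
              r"DBQ=C:\cbir\image_db.accdb;"
          )

          def fetch_database_images():
              """Read stored file paths from the database and load each image."""
              conn = pyodbc.connect(ACCESS_CONN)
              cursor = conn.cursor()
              cursor.execute("SELECT ImageID, FilePath FROM Images")
              images = {}
              for image_id, path in cursor.fetchall():
                  img = cv2.imread(path)        # loads as a BGR NumPy array
                  if img is not None:           # skip missing or unreadable files
                      images[image_id] = img
              cursor.close()
              conn.close()
              return images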

      2. Resizing

        Images consume a large amount of memory space. Since the database stores many images, a huge amount of memory is occupied. To save memory, the images are resized using down-sampling; the method preferred for this process is bilinear interpolation. This reduces the processing time, thereby increasing the speed of operation, and is done in the LabVIEW environment as shown in figure 4. The result obtained after resizing is shown in figure 5.

        Figure 4: Block Diagram Resizing

        Original image (280 KB) and resized image (189 KB)

        Figure 5: Original and resized images

      3. Grayscaling

        The spatial features of edges in the images are considered. Hence a grayscale image is preferred over a color image, to reduce processing complexity. The block diagram of grayscaling using LabVIEW is shown in figure 6. The result of converting the original image to a grayscale image is shown in figure 7.

        Figure 6: Block Diagram Grayscaling

        Original images and their grayscaled versions

        Figure 7: Grayscale conversion
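
        As a hedged Python/OpenCV sketch of the resizing and grayscaling stages described above (the paper implements them as LabVIEW block diagrams), the snippet below down-samples an image with bilinear interpolation and then converts it to grayscale; the 0.5 scale factor is an assumed value, since the paper does not state the resize ratio.

            import cv2

            def preprocess(img, scale=0.5):
                """Down-sample with bilinear interpolation, then convert to grayscale.

                'scale' is illustrative; the paper only specifies that bilinear
                down-sampling is used to save memory and processing time.
                """
                resized = cv2.resize(img, None, fx=scale, fy=scale,
                                     interpolation=cv2.INTER_LINEAR)  # bilinear
                gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)      # single channel
                return gray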

      4. Edge Detection

        Edge detection is used to detect discontinuities in gray level, which helps to find the number of objects present and facilitates calculating the centroid of each object. For edge detection, the Canny edge operator is used. The block diagram of edge detection using LabVIEW is shown in figure 8. The edge-detected images obtained with the Canny detector are shown in figure 9.

        Figure 8: Block Diagram Edge detection

        Figure 9: Canny edge detected images (original images and their edge-detected versions)
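
        A minimal Python/OpenCV sketch of this edge-detection step is shown below; the Gaussian blur kernel and the 50/150 hysteresis thresholds are assumed values, as the paper does not report the Canny parameters it uses.

            import cv2

            def detect_edges(gray, low=50, high=150):
                """Canny edge detection on a grayscale image.

                The blur kernel and thresholds are illustrative assumptions.
                """
                blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress noise first
                return cv2.Canny(blurred, low, high)         # binary edge map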

      5. SIM: Similarity Function for Symbolic Images (An Algorithm for Retrieval by Spatial Constraint)

        After applying the preprocessing steps of resizing, grayscaling, and edge detection to the images, the SIM algorithm is implemented.

        1. Object detection and Centroid calculation

          To obtain the objects in the database and query images, edge detection is performed. For each edge-detected image, the number of objects present is determined and the centroid of each detected object is calculated. The block diagram of object detection and centroid calculation using LabVIEW is shown in figure 10.

          Figure 10: Block diagram Object detection
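
          A hedged Python/OpenCV sketch of object detection and centroid calculation from the edge map follows; it treats each external contour as one object, which is an assumption about how the LabVIEW implementation counts objects.

              import cv2

              def object_centroids(edge_img):
                  """Detect object contours in the edge image and return one centroid
                  (cx, cy) per object, computed from the contour's image moments."""
                  contours, _ = cv2.findContours(edge_img, cv2.RETR_EXTERNAL,
                                                 cv2.CHAIN_APPROX_SIMPLE)
                  centroids = []
                  for contour in contours:
                      m = cv2.moments(contour)
                      if m["m00"] > 0:                      # skip degenerate contours
                          centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
                  return centroids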

        2. Spatial Orientation Graph

          The spatial orientation graph is a technique for representing spatial relationships among the objects present in an image.

          An edge in the spatial orientation graph is a line connecting two centroids of the objects and the weight associated with the edge is the distance between the centroids. The collection of all such possible edges for an image constitutes the edge list for that image.
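
          The construction of the spatial orientation graph can be sketched in Python as below: each pair of object centroids becomes an edge carrying its length (the distance weight) and its orientation. This is a minimal sketch of the construction described above, not the paper's LabVIEW implementation.

              import math
              from itertools import combinations

              def build_edge_list(centroids):
                  """Edge list of the spatial orientation graph: one entry per pair of
                  object centroids, holding the two endpoints, the distance between
                  them (the edge weight), and the edge orientation."""
                  edges = []
                  for (x1, y1), (x2, y2) in combinations(centroids, 2):
                      length = math.hypot(x2 - x1, y2 - y1)   # weight = centroid distance
                      angle = math.atan2(y2 - y1, x2 - x1)    # slope / orientation
                      edges.append(((x1, y1), (x2, y2), length, angle))
                  return edges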

        3. Forming The Edge List

          The number of edges in the edge list for any image is

          (n (n-1))/2

          where n is the number of objects in the image.

          Consider two images S1 and S2, and suppose we compute the similarity of S2 with respect to S1. The image S1 is referred to as the query image and the image S2 as the database image. Let Eqr and Edb denote the edge lists of S1 and S2, respectively.

          If all the edges of Eqr are present in Edb, then the maximum possible similarity is assigned to S2. Assuming a maximum possible similarity of 100.00, each edge in Edb that is also present in Eqr contributes a value of 200 / (n(n-1)) towards the similarity. The fewer the edges contributing to the similarity, the lower the similarity value obtained.
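
          As a worked example, an image containing n = 4 objects has an edge list of 4 x 3 / 2 = 6 edges; each edge of Edb that is also present in Eqr then contributes 200 / (4 x 3), approximately 16.67, so a full match over all six edges yields the maximum similarity of 100.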

        4. Effects of angle between the edges

          The angle θ is defined as the smaller of the two angles between the corresponding line segments.

          SIM: (Eqr, Edb) -> R

          Assume the similarity is initialized to 0.0 and let n1 be the number of objects in the query image. For each edge ei in Eqr, find the corresponding edge ej in Edb. If the corresponding edge is detected, calculate the angle θ between ei and ej. The similarity score is then updated as

          Similarity = Similarity + (100.0 / (n1(n1-1)/2)) * (1 + cos θ) / 2

          The spatial orientation graph, shown in figure 11, depends on the orientation, vertices, and edges formed by the object centroids.

          Figure 11: Orientation Graph

          The edges common to Eqr and Edb need not have the same slope or orientation. Depending on the degree by which the corresponding edge orientations differ, the contributing factor from an edge toward the similarity value has to be modified. The greater the difference in edge orientations, the greater the reduction in the contributing factor. If the angle between two corresponding edges in Eqr and Edb is θ, then the contributing factor from this edge pair is

          100(1 + cos θ) / (n(n-1))

          When θ = 0, the contributing factor is 200 / (n(n-1)); when θ = 180°, the contributing factor is 0.
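
          The following Python sketch puts the above formulas together. It assumes that corresponding edges are identified by the index pair of the objects they connect (the paper does not spell out how edge correspondence is established), and it returns a score on the 0-100 scale used by the formulas; the results in figure 13 range from 0 to 1000, which suggests the implemented score is this quantity scaled by a factor of 10.

              import math
              from itertools import combinations

              def edge_orientations(centroids):
                  """Map each object index pair to the orientation of the edge joining
                  their centroids in the spatial orientation graph."""
                  angles = {}
                  for (i, (x1, y1)), (j, (x2, y2)) in combinations(enumerate(centroids), 2):
                      angles[(i, j)] = math.atan2(y2 - y1, x2 - x1)
                  return angles

              def sim_score(query_centroids, db_centroids):
                  """SIM similarity of a database image with respect to a query image,
                  on a 0-100 scale. Edge correspondence by object index is an assumed
                  simplification of the paper's matching step."""
                  n1 = len(query_centroids)
                  if n1 < 2:
                      return 0.0
                  q_edges = edge_orientations(query_centroids)
                  d_edges = edge_orientations(db_centroids)

                  similarity = 0.0
                  per_edge = 100.0 / (n1 * (n1 - 1) / 2.0)      # max contribution per edge
                  for pair, q_angle in q_edges.items():
                      if pair in d_edges:                       # corresponding edge found
                          theta = abs(q_angle - d_edges[pair])  # orientation difference
                          theta = min(theta, 2 * math.pi - theta)
                          similarity += per_edge * (1 + math.cos(theta)) / 2.0
                  return similarity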

        5. Similarity Match

          The SIM function [2] returns a real number R based on the match between the images.

          The similarity score between the images is sent to the corresponding column in the database.

      6. Matching Results

        After the match scores are sorted in descending order, the corresponding images are displayed in LabVIEW. The block diagram of the matching process is shown in figure 12.

        Figure 12: Block diagram Matching process
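
        A short sketch of this ranking step, reusing the sim_score function from the previous sketch: scores for all database images are computed, sorted in descending order, and the top matches are returned. The top_k parameter and the dictionary layout are illustrative assumptions.

            def retrieve(query_centroids, database_centroids, top_k=3):
                """Rank database images by SIM score (descending) and return the best
                matches as (image_id, score) pairs; 'database_centroids' maps image
                IDs to their object-centroid lists."""
                scores = {img_id: sim_score(query_centroids, cents)
                          for img_id, cents in database_centroids.items()}
                ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
                return ranked[:top_k]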

        Figure 13 shows the matching score results for the test images. The score value varies between 0 and 1000. For a perfect match, a value of 1000 is returned; for no match between images, 0 is returned. Intermediate values are returned according to the number of matches between the images.

        Figure 13: Results of test images (match scores of 1000, 944.3, 943.6, 817.9, and 0.00 between the query image and the database images)

        The block diagram of image retrieval using LabVIEW is shown in figure 14. Figure 15 shows the images retrieved from the database for the query image.

        Figure 14: Block diagram Image Retrieval

        Figure 15: Retrieved images from the database (for the query image, the first three matches, the second three matches, and the next three matches are displayed)

      7. Application – Crime Prevention

        The CBIR technique using spatial information is tested for a specific application of crime prevention, with a database containing 50 human face images in Portable Network Graphics (PNG) format; the results obtained are shown in figure 16. Grayscaling is not needed for binary images. The images retrieved from the database for the query image are shown in figure 17.

        Figure 16: Results of test images (match scores of 987.67, 882.00, 632.47, and 0.00 between the query image and the test images)

        Figure 17: Retrieved images (next three matches for the query image)

  5. CONCLUSION

The spatial similarity computation is based on exact matching. The SIM algorithm provides effective retrieval of images using spatial constraints.

This work can be further extended by including additional features, along with the features used in the proposed system, to describe the image. Adding more features is expected to improve the performance of the system. Improving the accuracy of retrieval from a large database or from the web would further increase the retrieval efficiency of the system.

6. REFERENCES

  1. Saroj Shambharkar and Shubhangi C. Tirpude, "Content Based Image Retrieval Using Texture and Color Extraction and Binary Tree Structure", International Journal of Computer Technology and Electronics Engineering (IJCTEE), ISSN 2249-6343, pp. 51-56, 2011.

  2. S. Nandagopalan, B. S. Adiga, and N. Deepak, "A Universal Model for Content-Based Image Retrieval", International Journal of Computer Science, 2009.

  3. P. S. Suhasini, K. Sri Rama Krishna, and I. V. Murali Krishna, "CBIR Using Color Histogram Processing", Journal of Theoretical and Applied Information Technology, 2005-2009.

  4. Ritendra Datta, Dhiraj Joshi, Jia Li, and James Z. Wang, "Image Retrieval: Ideas, Influences, and Trends of the New Age", ACM Computing Surveys, vol. 40, no. 2, article 5, pp. 1-60, 2008.


  5. D. N. F. Awang Iskandar, James A. Thom, and S. M. M. Tahaghoghi, "Content-based Image Retrieval Using Image Regions as Query Examples", IEEE, 2008.

  6. Venkat N. Gudivada and Vijay V. Raghavan, "Design and Evaluation of Algorithms for Image Retrieval Using Spatial Similarity", IEEE Trans. on Pattern Recognition, vol. 22, no. 7, pp. 575-600, 2001.

  7. J. Smith and S. Chang, "Tools and Techniques for Color Image Retrieval", in Proc. of Storage and Retrieval for Image and Video Databases, vol. 4, pp. 426-437, 1996.

  8. Fuhui Long, Hongjiang Zhang, and David Dagan Feng, "Fundamentals of Content-Based Image Retrieval", International Journal of Computer Vision, Journal of the ACM, vol. 45, pp. 891-923, 1995.

  9. LabVIEW Database Connectivity Toolkit User Manual.
