Reverse Image Search using Discrete Wavelet Transform, Local Histogram and Canny Edge Detector


Vrujal Gandhi, Jimit Vaidya, Nikunj Rana, Denish Jariwala

Department of Electronics and Communication Engineering Uka Tarsadia University


Surat, India

Abstract— Using images as queries in the field of image retrieval is a growing area of interest, and the technique is being developed with ever more advanced methods. The growth of the Internet has not only caused a rapidly growing volume of digital images but has also given people more ways to obtain those images. The Reverse Image Search technique was developed for this purpose. It retrieves images on the basis of derived features such as color, texture and shape, offering a helpful and easy way to retrieve an image from a huge database. To find an image in a huge database, it uses features extracted from the database images: with matching and comparison algorithms, the color, texture and shape features of one image are compared and matched to the corresponding features of the other images in the database. Discrete Wavelet Transform, Local Histogram and Canny Edge Detector are combined to make the Reverse Image Search algorithm possible.

Fig. 1 Block Diagram of Reverse Image Search (Image Collection → Query Image and Image Database → Feature Extraction with DWT, Local Histogram and Canny → Mean and Normalization → Similarity Measurement with Euclidean Distance → Images Sorting and Indexing → Image Search Results)

Index Terms— Local Histogram, Discrete Wavelet Transform, Canny Edge Detector, Reverse Image Search Algorithm

INTRODUCTION

Images are a treasure of information; they can explain more than words. Major services on the web, as well as in the real world, use images and graphics to describe objects and events and to express and communicate.

The digital revolution and its innovations have made image processing tasks much faster and easier. Images contain a vast amount of information, and Reverse Image Search uses this information as features, making image-based queries easy to answer: an image is taken as a query and relevant images are searched for across the whole image database. The Reverse Image Search algorithm has three stages, namely feature extraction, similarity and performance measurement, and image indexing. In feature extraction, the Local Histogram is used for color, the Discrete Wavelet Transform for texture and the Canny Edge Detector for shape.

Features are extracted and their mean is taken to create the features database; the Euclidean Distance is then measured for similarity, and the resulting images are sorted and indexed accordingly. [6][9][10]

Here, appropriate weightage is given to the three feature vectors, i.e. 0.5 for DWT, 0.2 for Local Histogram and 0.3 for Canny. [8] This balances the effect of the individual features on the results. As three different feature extraction techniques work together in this algorithm, it is a kind of fusion.
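The weighted fusion can be sketched as below. The weights 0.5/0.2/0.3 come from the text; combining the three per-feature distances as a weighted sum (and the name `combined_distance`) is our own assumption about how the weightage is applied, since the paper does not spell this out.

```python
# Weightage from the paper: 0.5 for DWT, 0.2 for Local Histogram, 0.3 for Canny.
W_DWT, W_HIST, W_CANNY = 0.5, 0.2, 0.3

def combined_distance(d_dwt, d_hist, d_canny):
    """Fuse the three per-feature distances into one similarity score
    (smaller = more similar). Weighted-sum fusion is an assumption."""
    return W_DWT * d_dwt + W_HIST * d_hist + W_CANNY * d_canny

# Example: a candidate whose texture matches well (small DWT distance)
# but whose color differs still gets a moderate overall score.
score = combined_distance(0.1, 0.8, 0.4)
```

With this scheme a strong texture match can outweigh a weaker color match, which reflects the heavier DWT weight described above.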

FEATURE EXTRACTION

    1. Discrete Wavelet Transform

The Wavelet Transform is a method of transformation that adapts the approach of the Fourier Transform and the Short Time Fourier Transform (STFT). Like the STFT, the Wavelet Transform converts a time-domain signal into a representation in both time and frequency. Translation is a transformation in the time domain, associated with the location of the window function as the window moves along the incoming signal. Scale is a form of frequency transformation, where the scale value is inversely proportional to the frequency value. The Discrete Wavelet Transform is used here to analyze the texture feature.

The DWT is chosen for several reasons, namely:

1. Distortion caused in the wavelet domain at a high compression ratio is less intrusive than in other domains at the same bit rate.

2. The bit-error rate is low. The bit-error rate is the ratio of wrongly extracted bits to the total bits inserted.

Here is a graphic illustration of the image decomposition process:

Fig. 2 Discrete wavelet decomposition of an image at levels 1, 2 and 3

As seen in the figure, when an image is processed with a one-level discrete wavelet transform, it is divided into four sub-bands, namely:

1. The approximation coefficients, also called the LL sub-band.

2. The horizontal detail coefficients, also called the HL sub-band.

3. The vertical detail coefficients, also called the LH sub-band.

4. The diagonal detail coefficients, also called the HH sub-band.

In this partition of the frequency domain for a one-level 2D wavelet decomposition, LH, HL and HH belong to decomposition level 1. LL is not shown in the figure because it is directly decomposed again into LL1, LH1, HL1 and HH1.


    2. Color Histogram

Any color image can be represented by the three primary colors Red, Green and Blue, i.e. as an m x n x p array, where m is the number of rows, n the number of columns and p = 3 for the color representation. The RGB color space is used in our approach. It does not correspond to the human way of perceiving colors, but it is still used here so that results can be matched according to color.

RGB and indexed images have high dimensionality and require more calculation time, so the images are converted to the grayscale level. Color images are 3D arrays while grayscale images are 2D arrays with values between 0 and 255. This conversion reduces both the calculation time and the processing power required for extracting features from an image, as the example below shows.
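The conversion step can be sketched as follows. The common luminosity weights (0.299, 0.587, 0.114) are an assumption, since the paper does not state which grayscale formula it uses; the image is represented here as nested lists of (R, G, B) tuples for illustration.

```python
# Grayscale conversion sketch: 3D color image -> 2D gray image in [0, 255].
# Luminosity weights are assumed; the paper does not specify the formula.

def to_grayscale(rgb_image):
    """Convert an m x n image of (r, g, b) tuples to an m x n gray image."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row]
            for row in rgb_image]

# A 1 x 2 test image: a pure red pixel and a pure white pixel.
gray = to_grayscale([[(255, 0, 0), (255, 255, 255)]])  # -> [[76, 255]]
```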


Fig. 4 Color Image to Grayscale Image Conversion

  1. Global Color Histogram

The Global Color Histogram deals with the whole image, i.e. it represents color in three different planes: the R-plane, G-plane and B-plane. The Global Color Histogram is calculated in the following steps:

Step 1: Separate the color image into its R, G and B planes.

Step 2: Take the mean of each plane.

Step 3: Make a vector of these three values.
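The three steps above can be sketched directly. The nested-list image representation and the function name are illustrative choices, not the paper's implementation.

```python
# Global Color Histogram sketch: split into R, G, B planes, take the
# mean of each plane, and return the three means as a feature vector.

def global_color_histogram(rgb_image):
    """rgb_image: nested lists of (r, g, b) tuples -> [meanR, meanG, meanB]."""
    pixels = [px for row in rgb_image for px in row]
    n = len(pixels)
    return [sum(px[c] for px in pixels) / n for c in range(3)]

vec = global_color_histogram([[(10, 20, 30), (30, 40, 50)]])  # -> [20.0, 30.0, 40.0]
```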

Fig. 3 One stage in a multi-resolution wavelet decomposition of an image

A 2D analysis and synthesis filter bank is used to implement the wavelet transform. The 1D analysis filter bank is first applied to the columns of the image and then to the rows. Suppose the image has A1 rows and A2 columns. After applying the 1D analysis bank to each column, we get two sub-band images, each with A1/2 rows and A2 columns. After applying the 1D analysis bank to each row of both sub-band images, we get four sub-band images, each with A1/2 rows and A2/2 columns. The original image of size A1 by A2 is recovered by combining the four sub-bands through the 2D synthesis filter bank.
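The analysis scheme above can be sketched with a one-level Haar decomposition (the Haar wavelet is an assumption; the paper does not name the wavelet used). The 1D analysis step here averages and differences adjacent pairs, applied to the columns first and then to the rows, yielding the four sub-bands.

```python
# One-level 2D Haar DWT sketch of the filter-bank scheme above.
# Haar is assumed; rows/columns are plain nested lists with even sizes.

def haar_1d(signal):
    """1D analysis step: low-pass (pair averages) and high-pass (pair differences)."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def transpose(block):
    return [list(row) for row in zip(*block)]

def haar_2d(image):
    # 1D analysis bank on each column: two sub-band images with A1/2 rows.
    low_cols, high_cols = zip(*(haar_1d(c) for c in transpose(image)))
    top, bottom = transpose(low_cols), transpose(high_cols)
    # 1D analysis bank on each row of both: four sub-band images,
    # each with A1/2 rows and A2/2 columns.
    def filter_rows(block):
        lows, highs = zip(*(haar_1d(r) for r in block))
        return [list(l) for l in lows], [list(h) for h in highs]
    LL, LH = filter_rows(top)     # approximation and one detail sub-band
    HL, HH = filter_rows(bottom)  # remaining detail sub-bands
    return LL, LH, HL, HH
```

Repeating `haar_2d` on the LL block gives the level-2 decomposition (LL1, LH1, HL1, HH1) described earlier.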

A drawback of the Global Color Histogram is that if the objects are too small and the background is white, it will capture the white color and neglect the other colors of the objects; the spatial distribution of color information will be lost. To overcome this issue, the Local Color Histogram is used. [8]

  2. Local Color Histogram

For the Local Color Histogram, an image is divided into n x n blocks, each block having three plane values, so n x n blocks give a vector of size n x n x 3. If the number of blocks increases, the processing time increases. Therefore, 4 x 4 blocks are used, yielding a vector of 4 x 4 x 3 = 48 values per image. The values of each block are normalized so that the features can be compared easily.

Fig. 5 Division of Image with Local Histogram

Each block of the Local Color Histogram is calculated in the same way as the Global Color Histogram. [8]
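The block division can be sketched as below: the image is split into 4 x 4 blocks and the per-plane mean of each block is concatenated into the 48-element vector. The per-block normalization mentioned above is omitted here for brevity, and the data layout is an illustrative assumption.

```python
# Local Color Histogram sketch: 4 x 4 blocks, per-block R/G/B means,
# concatenated into a 4 x 4 x 3 = 48-element feature vector.

def local_color_histogram(rgb_image, n=4):
    """rgb_image: nested lists of (r, g, b) tuples; dimensions divisible by n."""
    h, w = len(rgb_image), len(rgb_image[0])
    bh, bw = h // n, w // n
    vector = []
    for bi in range(n):
        for bj in range(n):
            block = [rgb_image[i][j]
                     for i in range(bi * bh, (bi + 1) * bh)
                     for j in range(bj * bw, (bj + 1) * bw)]
            for c in range(3):  # mean of R, G and B inside this block
                vector.append(sum(px[c] for px in block) / len(block))
    return vector

# An 8 x 8 uniform gray image yields 48 identical entries.
vec = local_color_histogram([[(100, 100, 100)] * 8 for _ in range(8)])
```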

2.3 Canny Edge Detector

The Canny edge detector is used for accuracy in edge detection. It has a lower error rate and reduces the amount of data to be processed. With its use of the calculus of variations and characteristics like reliability, higher accuracy and easy implementation, it stands out as a unique algorithm.

The Canny edge detection algorithm is a 5-step process:

1. Finding the right edges of the image by smoothing it. Applying a Gaussian filter also filters out noise.

2. In an image, an edge may point in various directions such as vertical, horizontal or diagonal. To find the direction, Canny uses a variety of filters and operators, e.g. Prewitt, Roberts or Sobel, and the intensity gradient of the image is calculated. These operators return the first derivative in the vertical direction (Gy) and the horizontal direction (Gx), from which the gradient magnitude (G) and direction (θ) are found:

G = √(Gx² + Gy²) (1)

θ = atan2(Gy, Gx) (2)

3. Removal of spurious responses by applying a method called non-maximum suppression, a technique to thin up edges. It suppresses all gradient values to 0 except the local maxima, which indicate the locations of the sharpest change of intensity value.

4. Double thresholding is applied to detect the potential edges. It helps to remove spurious responses caused by color variations and noise: it cancels out edges with weak gradient values and maintains those with strong gradient values. As the name suggests, two threshold values are defined, high and low. If the gradient value of an edge pixel is higher than the high threshold, it is a strong edge pixel; if lower than the low threshold, it is a weak edge pixel. Weak edge pixels get suppressed.

5. The final step is Edge Tracking by Hysteresis. As the strong edge pixels belong to true edges, they are preserved. The weak edge pixels, however, can belong either to a true edge or to color variations/noise; those due to the latter are removed. If a weak edge pixel belongs to a true edge, it will be connected to a strong edge pixel. The connection is tracked by Binary Large Object (BLOB) analysis, applied to a weak edge pixel and its 8-connected neighborhood: if a strong edge pixel is involved in the BLOB, that weak edge is preserved.

SIMILARITY AND PERFORMANCE MEASUREMENT PARAMETERS

3.1 Similarity Measurement Parameters

The Euclidean Distance is used here as the similarity measurement parameter in Reverse Image Search. The algorithm measures the Euclidean Distance between the feature vector made from the query image and each entry of the features database made from the image database, one by one.

Dab = √( Σᵢ₌₁ⁿ (aᵢ − bᵢ)² ) (3)

This is the formula to calculate the Euclidean Distance, where a and b are the values from the feature vector of the query image and from the features database respectively. [5]

3.2 Performance Measurement Parameters

For the performance measurement of the results retrieved for a given query image, the Precision Rate and the Recall Rate are calculated. [5]

3.2.1 Precision Rate

The Precision Rate is the ratio of the total number of relevant images similar to the query image in the results to the total number of retrieved images displayed in the results.

Precision Rate = NA / NR (4)

Where,

NA = total number of relevant images similar to the query image,

NR = total number of retrieved images displayed in the results.

3.2.2 Recall Rate

The Recall Rate is the ratio of the total number of relevant images similar to the query image in the results to the total number of relevant images available in the image database.

Recall Rate = NA / NT (5)

Where,

NA = total number of relevant images similar to the query image,

NT = total number of relevant images available in the database.
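Equations (3)–(5) can be sketched directly; the function names are our own illustrative choices. Precision and recall are expressed as percentages here, matching how the Results section reports them.

```python
import math

def euclidean_distance(a, b):
    """Equation (3): D_ab = sqrt(sum_i (a_i - b_i)^2)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def precision_rate(n_relevant_retrieved, n_retrieved):
    """Equation (4): NA / NR, as a percentage."""
    return 100.0 * n_relevant_retrieved / n_retrieved

def recall_rate(n_relevant_retrieved, n_relevant_total):
    """Equation (5): NA / NT, as a percentage."""
    return 100.0 * n_relevant_retrieved / n_relevant_total

d = euclidean_distance([1.0, 2.0], [4.0, 6.0])  # -> 5.0
# The Red Rose example from the Results section: 15 of 16 retrieved
# images are relevant, giving a precision of 93.75%.
p = precision_rate(15, 16)                      # -> 93.75
```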

RESULTS

4.1 Arrangement of the results

Here are some sample examples of how images are retrieved from the image database. The database has several categories such as Rose, Flower, Bus, Horses, Elephant, Food, Buildings, Nature, Beach, People, Dinosaurs and Mountains, and contains a total of 930 images.

Reverse Image Search algorithm searches out the most relevant images from the database. The most relevant image with the matching color, texture and shape will be given more priority and will be listed first and then the rest of the images. This indexing works on the basis of the similarity and performance measurement parameters.

Euclidean distances are calculated between the query image and all the images in the image database. A lower Euclidean Distance indicates higher relevancy, so the images are sorted in ascending order: the image with the lowest score is displayed first, followed by the rest in order of decreasing relevancy.
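The sorting step above can be sketched as follows; the database structure (name, feature-vector pairs) and the feature values are illustrative placeholders.

```python
import math

def rank_images(query_vec, database):
    """database: list of (name, feature_vector) pairs.
    Returns image names sorted by ascending Euclidean distance,
    so the most relevant (smallest-distance) image comes first."""
    def dist(item):
        _, vec = item
        return math.sqrt(sum((q - v) ** 2 for q, v in zip(query_vec, vec)))
    return [name for name, _ in sorted(database, key=dist)]

ranked = rank_images([0.0, 0.0],
                     [("far", [3.0, 4.0]), ("near", [1.0, 0.0])])
# -> ["near", "far"]
```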

In terms of performance, the efficiency is calculated with the performance measurement parameters; the higher the efficiency, the more relevant the results. With the Reverse Image Search algorithm, we are able to obtain Precision Rates from 80% to 100%. The performance may vary depending on the quality of the image database.

  1. Red Rose:

Here, the main highlights of a rose are its red color and the texture of its petals.

    Relevant images in the result = 15

Total number of images displayed in the result = 16

Precision Rate = (15/16) × 100 = 93.75%

  2. Flower:

Here, flowers are available in different colors. Priority is given to the yellow color and texture, and all the retrieved images are flowers.


    Fig. 6 Red Rose result


    Fig. 7 Flower result

    Relevant images in the result = 16

Total number of images displayed in the result = 16

Precision Rate = (16/16) × 100 = 100%

  3. Horse:

Here, we get the most accurate results for horses, as can be seen below.


    Fig. 8 Horse result

    Relevant images in the result = 16

Total number of images displayed in the result = 16

Precision Rate = (16/16) × 100 = 100%

  4. Bus:

Here, a complex image of two red buses is given as the query, and all the red bus images are displayed with high priority in the results.


    Fig. 9 Bus result

    Relevant images in the result = 12

Total number of images displayed in the result = 16

Precision Rate = (12/16) × 100 = 81.25%

    REFERENCES

1. Amandeep Khokher and Rajneesh Talwar, "Content-based Image Retrieval: Feature Extraction Techniques and Applications", International Conference on Recent Advances and Future Trends in Information Technology (iRAFIT 2012).

2. Amanbir Sandhu and Aarti Kochha, "Content Based Image Retrieval using Texture, Color and Shape for Image Analysis", International Journal of Computers & Technology, ISSN 2277-3061, Vol. 3, No. 1, Aug. 2012.

3. K. Arthi and J. Vijayaraghavan, "Content Based Image Retrieval Algorithm Using Colour Models", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 3, March 2013.

4. Ahmed J. Afifi and Wesam M. Ashour, "Content-Based Image Retrieval Using Invariant Color and Texture Features", International Conference on Digital Image Computing: Techniques and Applications (DICTA), Fremantle, Western Australia, 2012.

5. S. S. Dandge and A. P. Bodkhe, "Content Based Image Retrieval System", Journal of Signal and Image Processing, Vol. 3, Issue 2, 2012.

6. Jun Yue, Zhenbo Li, Lu Liu and Zetian Fu, "Content-based image retrieval using color and texture fused features", Mathematical and Computer Modelling, Vol. 54, Issues 3–4, August 2011.

7. Nidhi Singh, Kanchan Singh and Ashok K. Sinha, "A Novel Approach for Content Based Image Retrieval", 2nd International Conference on Computer, Communication, Control and Information Technology (C3IT-2012), February 25–26, 2012.

8. Shankar M. Patil, "A Content Based Image Retrieval using color, texture and shape", International Journal of Computer Science and Engineering Technology, Vol. 3, No. 9, pp. 404–410, 2012.

9. Irena Valova and Boris Rachev, "A Content Based Image Retrieval System Based on Color Features", CODATA, Berlin, Germany, November 2004.

10. Yong Rui, Thomas S. Huang and Shih-Fu Chang, "Image Retrieval: Current Techniques, Promising Directions and Open Issues", Journal of Visual Communication and Image Representation, Vol. 10, pp. 39–62, 1999.
