Re-Ranked Image Search using Query Keyword

DOI : 10.17577/IJERTCONV3IS27091


Siddanna S. R.
Assistant Professor, Department of CSE, SJBIT, Bangalore

Abhijeet Kumar, Aditya Biswas, Amitesh Kumar, Anand Jaiswal
B.E. Students, Department of CSE, SJBIT, Bangalore

Abstract- To obtain more accurate image search results, a semantic search method is used. Images are stored in a database, from which they can be retrieved by a text-based query keyword. The user provides the query keyword as input, and images are retrieved and ranked according to their visual properties. At the online stage, images are re-ranked by comparing their semantic signatures, obtained from the visual-semantic space specified by the query keyword. The query-specific semantic signatures significantly improve both the accuracy and the efficiency of image re-ranking.

Keywords: query keyword, image re-ranking, semantic signature


INTRODUCTION

As an image database grows larger and more diverse, searching it becomes less accurate, and retrieving the desired image in only a few attempts is nearly impossible. The semantic search method helps solve this problem: it retrieves images based on the user's query keyword and the visual properties of an image the user selects. This not only improves the results but also increases efficiency by reducing the number of attempts.

The user provides a query keyword and selects an image. The system then finds images that are "synonyms" of the query keyword. The similarity criteria used for the search can include the colour distribution, the regions, and the shapes of the images. As the scale of the image retrieval problem becomes more widely recognized, the search for solutions is an increasingly active area of research and development.

The user gives a query keyword related to the image he wants to find. From the pile of images the search returns, the user selects one image based on its visual properties. The selected image is then compared with all other images in the database, and the images whose properties are similar to the selected image are displayed. Moreover, the images are re-ranked according to the number of properties they share with the selection. This not only returns the desired image but also broadens the options while reducing the time consumed.

For example, suppose the user wants an image of a green apple; the query keyword may be "apple". The search returns a pile of images related to this keyword, showing different types of apples, and the user selects the desired one. The selected image has visual properties such as shape, colour, design, and pixel values, and these are compared with the properties of the other images. Every image that shares at least one similar property is selected, for instance by an RGB-based algorithm. Once all the images are selected and grouped, they are re-ranked, and the list is divided into relevant and irrelevant images.
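The RGB-based comparison step above can be sketched as follows. This is a minimal illustration, assuming a 4-bins-per-channel RGB histogram and histogram intersection as the similarity measure; the function names and the choice of measure are ours, not the paper's.

```python
def rgb_histogram(pixels, bins=4):
    """Summarize an image, given as (r, g, b) tuples, by a normalized RGB histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def rerank(query_pixels, candidates):
    """Sort candidate (name, pixels) images by colour similarity to the clicked image."""
    qh = rgb_histogram(query_pixels)
    scored = [(name, similarity(qh, rgb_histogram(px))) for name, px in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Toy data: a green query image ranks the green candidate first.
green = [(10, 200, 10)] * 50
red = [(200, 10, 10)] * 50
ranked = rerank(green, [("red_apple", red), ("green_apple", green)])
```

A real system would compare further properties (shape, texture, and so on) in the same way and merge the scores before re-ranking.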

Image search engines have adopted this method; its diagram is shown in Fig. 1.


The existing system does not maximize the relevancy of image results.

Most of the time, the user cannot get search images of his interest.

If the user finds an interesting image and wants more images of a similar type, it is difficult or impossible to obtain them without providing the correct keywords.


Images are relevant everywhere and needed by everyone. Large collections of images are being formed in many areas, such as business, management, universities, and hospitals, and those images need to be accessed frequently, easily, and accurately.

Previously, users needed to provide input a number of times, which not only consumed time but was also less accurate. Our contribution is to capture the user's search intention from a single one-click query image: fewer words, more accurate results. One click yields the images the user desires, all similar to each other, in a re-ranked list. Image-based content retrieval and automatic image annotation are becoming more and more relevant to the ways in which large databases of digital media are stored and accessed. The common ground for search-engine systems is to extract a signature for every image based on its pixel values and to define a rule for comparing these signatures in content-based image retrieval.

Re-ranking is not just a way of dividing the list. The model receives a picture corpus P and a text query q, and it should rank the pictures of P so that those relevant to q appear above the others. Contrary to previous approaches that generally rely on an image auto-annotation framework, our learning procedure aims at selecting the model parameters likely to yield a high ranking performance. The images for testing the re-ranking performance and the images of the reference classes can be collected at different times and from different search engines. Given a query keyword, 1000 images are retrieved from the web using a certain search engine.
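The ranking goal stated above, placing the pictures of P relevant to q above the others, is commonly measured with precision at k over the retrieved pool. The sketch below is our illustration, with made-up image ids and relevance judgements:

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k ranked images that are actually relevant."""
    top = ranked_ids[:k]
    return sum(1 for image_id in top if image_id in relevant_ids) / k

# Toy re-ranked list of image ids and a hypothetical relevance judgement:
ranked = ["img3", "img7", "img1", "img9", "img2"]
relevant = {"img3", "img1", "img2"}
p_at_3 = precision_at_k(ranked, relevant, k=3)  # 2 of the top 3 are relevant
```

A better re-ranking pushes relevant ids toward the front, raising precision at small k over the 1000-image pool.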


EXISTING SYSTEM

The existing system performs one-way text-based keyword expansion, making the textual description of the query more detailed. The entered text must describe all the desired properties; only then is the desired image obtained. Otherwise the desired image may not be found: for example, for "red apples from Kashmir", only generic images of apples will be shown. In addition, the query keyword given by the user may have many synonyms.

Existing linguistically related methods find either synonyms or other linguistically related words from a thesaurus, or find words that frequently co-occur with the query keywords.

Disadvantages of the existing system:

Most of the time, the user cannot get search images of his interest.

If the user finds an interesting image and wants more images of a similar type, it is difficult or impossible to obtain them without providing the correct keywords.


PROPOSED SYSTEM

We propose a novel image re-ranking framework that automatically learns different semantic spaces for different query keywords offline. The visual features of images are projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword.
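A minimal sketch of this online stage, assuming the offline step has already produced a projection matrix W for the query keyword; W, the feature vectors, and the choice of Euclidean distance here are illustrative assumptions, not the learned model:

```python
def project(features, W):
    """Project a visual feature vector into the semantic space: signature = W @ f."""
    return [sum(w * f for w, f in zip(row, features)) for row in W]

def l2_distance(a, b):
    """Euclidean distance between two semantic signatures."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rerank_by_signature(query_features, pool, W):
    """Sort pool entries (name, features) by signature distance to the query image."""
    q_sig = project(query_features, W)
    return sorted(pool, key=lambda item: l2_distance(q_sig, project(item[1], W)))

# Toy usage: a 2-D semantic space that keeps only the first two feature
# dimensions (the third dimension is ignored by this particular W).
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
pool = [("img_b", [0.0, 1.0, 0.0]), ("img_a", [1.0, 0.0, 5.0])]
ranked = rerank_by_signature([1.0, 0.0, 0.0], pool, W)
```

Because the signatures are short vectors, the online comparison is cheap even over a large pool, which is where the efficiency gain comes from.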

Advantages of the proposed work:

    It learns query-specific semantic spaces to significantly improve the effectiveness and efficiency of online image re-ranking.

The visual features of images are projected into their related semantic spaces, which are automatically learned through keyword expansions offline.

This approach uses text-based keywords and visual feature extraction together and yields accurate results according to the user's interest.


SYSTEM ARCHITECTURE

The architectures for searching and re-ranking the images are given in the figures. The images to be searched should first be uploaded to the database with tag names.


CONCLUSION

In this paper, we propose a novel Internet image search approach that requires only one-click user feedback. An intention-specific weight schema is proposed to combine visual features and to compute a visual similarity adaptive to the query image. Without additional human feedback, textual and visual expansions are integrated to capture user intention. Expanded keywords are used to extend the positive example images and to enlarge the image pool so that it includes more relevant images. This framework makes industrial-scale image search by both text and visual content possible. The proposed image re-ranking framework consists of multiple steps, which can be improved separately or replaced by equally effective techniques.
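The intention-specific weight schema can be illustrated as a weighted combination of per-feature similarities; the feature names and weight values below are hypothetical stand-ins, not the learned schema itself.

```python
def combined_similarity(per_feature_sims, weights):
    """Weighted sum of per-feature similarities, normalized by the total weight."""
    total = sum(weights.values()) or 1.0
    return sum(weights[name] * per_feature_sims[name] for name in weights) / total

# Hypothetical per-feature similarities between a candidate and the query image:
sims = {"colour": 0.9, "shape": 0.4, "texture": 0.7}
# A colour-dominated query image would up-weight colour similarity:
weights = {"colour": 0.6, "shape": 0.2, "texture": 0.2}
score = combined_similarity(sims, weights)
```

Adapting the weights to each query image is what makes the similarity "intention specific": the same candidate scores differently depending on which properties of the clicked image dominate.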


