Survey of Semantic based & Adaptive weighting Web Image Ranking Approaches

DOI : 10.17577/IJERTV2IS100331


Sayali Baxi1, Sheetal V. Dabhade2

1 Computer Engineering Department, SKNCOE, Vadgaon

2 Electronics & Telecommunication, SKNCOE, Vadgaon


Most web-scale image search engines rely on the text surrounding an image. It is difficult to interpret a user's search intention from query keywords alone, which leads to ambiguous and noisy search results that are far from the user's satisfaction [2]. In the approaches surveyed here, the user is asked to select a query image from the retrieved pool, and the remaining images are then re-ranked based on their similarity to this query image. This paper also describes a novel re-ranking approach in which different semantic spaces for different query keywords are automatically learnt offline. We show how semantic signatures overcome the problem that similarities of visual features do not correlate well with images' semantic meanings, which interpret the user's search intention.

Keywords: Image Search, Image re-ranking, Semantic space, Semantic signature, Keyword expansion.

  1. Introduction

Web search engines mostly use keywords as queries, so they suffer from keyword ambiguity. For example, the query "apple" could belong to several categories such as green apple, red apple, the Apple logo, Apple iPods, etc. To resolve this ambiguity, additional information has to be used to capture the user's search intention. One way is text-based keyword expansion. Another is content-based image retrieval with relevance feedback, in which the user labels multiple positive and negative example images; a query-specific visual similarity metric is learned from the selected examples and used to rank images. There are two basic re-ranking approaches: 1) the adaptive weighting technique and 2) the semantic-based technique, both used in one-click image re-ranking. The main online computational cost lies in comparing visual features. Images can be captured from various viewpoints, under different lightings, or even with different compression artifacts. By modelling the semantic spaces more accurately, we can reduce such noise and improve both the computational cost and the accuracy of re-ranking [6].
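The relevance-feedback idea mentioned above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the method of any cited paper: it weights each feature dimension by the inverse variance of the user-labelled positive examples (dimensions on which the relevant images agree count more), then re-ranks the pool with the resulting weighted distance. Feature vectors and the weighting rule are illustrative assumptions.

```python
# Sketch: learn a query-specific weighted distance from user-labelled
# positive example images (relevance feedback), then re-rank by it.

def learn_weights(positives):
    """Weight each dimension by the inverse variance of the positive
    examples (a classic relevance-feedback heuristic, assumed here)."""
    dims = len(positives[0])
    weights = []
    for d in range(dims):
        vals = [p[d] for p in positives]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        weights.append(1.0 / (var + 1e-6))   # small epsilon avoids div-by-zero
    total = sum(weights)
    return [w / total for w in weights]      # normalise to sum to 1

def weighted_dist(w, a, b):
    return sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b))

def rerank_weighted(query, pool, weights):
    return sorted(pool, key=lambda img: weighted_dist(weights, query, img))
```

With positives that agree on dimension 0 but not on dimension 1, the learned metric ranks pool images mainly by dimension 0.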

  2. Existing Approach

In the existing system, one way is text-based keyword expansion, making the textual description of the query more detailed. Existing linguistically related methods either find synonyms or other linguistically related words from a thesaurus, or find words frequently co-occurring with the query keywords.

For example, Google image search provides the "Related Searches" feature to suggest likely keyword expansions. However, even with the same query keywords, the intentions of users can be highly diverse and cannot be accurately captured by these expansions. "Search by Image" is optimized to work well for content that is reasonably well described on the web. For this reason, you will likely get more relevant results for famous landmarks or paintings than for more personal images such as your toddler's latest finger painting.
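A minimal co-occurrence-based expansion, as described above, can be sketched as follows. This is an illustrative toy, not Google's implementation: it counts words appearing alongside the query keyword in image descriptions and suggests the most frequent ones.

```python
from collections import Counter

# Toy sketch: suggest keyword expansions by counting words that
# frequently co-occur with the query keyword in image descriptions.
# The corpus format is an assumption for demonstration.

def expand(query, descriptions, top_k=2):
    counts = Counter()
    for text in descriptions:
        words = text.lower().split()
        if query in words:
            counts.update(w for w in words if w != query)
    return [w for w, _ in counts.most_common(top_k)]
```

For instance, with descriptions "red apple fruit", "red apple tree", and "apple macbook", the top expansion for "apple" is "red".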

  3. Traditional Re-ranking Framework


1. Given a query keyword input by the user, a pool of images related to the keyword is retrieved.

    2. The word-image index file and visual features of images are pre-computed offline and stored.

    3. To achieve high efficiency, the visual feature vectors need to be short and their matching needs to be fast.
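The three steps above can be sketched as follows. The index, feature vectors, and distance function below are invented toy values, assumed only for illustration: the word-image index and short feature vectors are precomputed "offline", and the online step retrieves a pool by keyword and re-ranks it by visual distance to the clicked query image.

```python
# Minimal sketch of the traditional re-ranking framework.

# Offline: word -> image ids, and image id -> short visual feature vector
index = {"apple": ["img1", "img2", "img3"]}
features = {"img1": [0.9, 0.1], "img2": [0.2, 0.8], "img3": [0.85, 0.2]}

def l2(a, b):
    """Euclidean distance between two short feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rerank_pool(keyword, query_image_id):
    """Online: fetch the keyword's pool, sort by distance to the query image."""
    pool = index.get(keyword, [])
    q = features[query_image_id]
    return sorted(pool, key=lambda i: l2(features[i], q))
```

Because the vectors are short and the index is precomputed, the online cost is just one distance per pool image.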


4. One-click Re-ranking based on Adaptive Weighting [6]

This system is a novel Internet image search approach. It requires the user to give only one click on a query image, and images from a pool retrieved by text-based search are re-ranked based on their visual and textual similarities to the query image. We believe users will tolerate one-click interaction, which has been used by many popular text-based search engines; for example, Google requires a user to select a suggested textual query expansion by one click to get additional results. The key problem to be solved is how to capture user intention from this one-click query image.


    The major one click approach includes following steps:-

1. The query image is categorized into one of the pre-defined adaptive weight categories, which reflect user intention.

2. Inside each category, a specific weight schema is used to combine visual features adapted to this kind of image, to better re-rank images.

3. On the basis of the visual content of the query image selected by the user, image clustering is performed so that the query keywords are expanded to capture user intention.

4. New query-specific visual and textual similarity metrics are learned by expanding the query keywords, to improve content-based image re-ranking. [6]
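Steps 1 and 2 above can be sketched as follows. The categories, weight schemas, and toy feature values below are all invented for illustration (the real system in [6] learns them from data): the query image is assigned to an intention category, and that category's weights combine the per-feature distances.

```python
# Hedged sketch of adaptive weighting with made-up categories and weights.

CATEGORY_WEIGHTS = {            # weights over (color, texture, shape) distances
    "scenery": (0.6, 0.3, 0.1),
    "object":  (0.2, 0.2, 0.6),
}

FEATURES = ("color", "texture", "shape")

def categorize(query):
    # toy rule standing in for the real query-image category classifier
    return "scenery" if query["color"] > 0.5 else "object"

def combined_dist(weights, q, img):
    return sum(w * abs(q[f] - img[f]) for w, f in zip(weights, FEATURES))

def rerank_adaptive(query, pool):
    """pool: {image_id: feature dict}; returns ids sorted by weighted distance."""
    w = CATEGORY_WEIGHTS[categorize(query)]
    return sorted(pool, key=lambda i: combined_dist(w, query, pool[i]))
```

For a "scenery" query the color distance dominates the ranking; for an "object" query the shape distance does.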


5. One-click Re-ranking based on the Semantic-Based Approach


1. Our second approach has two stages, offline and online. In the offline stage, reference classes are automatically discovered; e.g., for the keyword apple, keyword expansions such as red apple and apple macbook are found.

2. These keyword expansions are used to find different reference classes. [2]

3. Images obtained from keyword expansions are less diverse than those obtained from the query keyword alone, and are used to find the reference classes. The reference classes for the keyword apple can be, e.g., apple macbook and apple fruit (i.e., red apple).

4. To improve re-ranking, redundant reference classes need to be removed. For each query keyword, its reference classes form the basis of its semantic space. A multi-class classifier is trained using the reference classes.

5. If there are K types of visual/textual features, such as color, texture, and shape, one can combine them to train a single classifier, which extracts one semantic signature for an image.
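The semantic signature idea in steps 4-5 can be sketched as follows. As an assumption for illustration, a nearest-centroid classifier with a softmax stands in for the real multi-class classifier of [2], and the centroids are toy values: the signature of an image is its vector of probabilities over the reference classes.

```python
import math

# Sketch of a semantic signature under a toy nearest-centroid classifier.

centroids = {                    # reference classes for "apple" (made-up values)
    "red apple":     [0.9, 0.1],
    "apple macbook": [0.1, 0.9],
}

def semantic_signature(feat):
    """Softmax over negative squared distances to each reference-class
    centroid; returns {class: probability}."""
    scores = {c: -sum((f - m) ** 2 for f, m in zip(feat, mu))
              for c, mu in centroids.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / z for c, s in scores.items()}
```

An image whose visual features lie near the "red apple" centroid gets a signature dominated by that class, which is what makes the signature interpretable as semantic meaning.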

6. Discovering Reference Classes

    Keyword Expansion

For a keyword q, we define its reference classes by finding a set of keyword expansions E(q) most relevant to q. Keyword expansions are found among the words extracted from images in the retrieved pool S(q). For each image I ∈ S(q), all the images in S(q) are re-ranked according to their visual similarities to I, and the T most frequent words of the top-ranked images form the set WI = {w1, w2, …, wT}.


Online Stage

1. The user inserts a query keyword. [2, 6]

2. The query keyword and the query image clicked by the user are checked against the database.

3. Semantic search is used to match the features of the image against the reference classes; e.g., for apple these can be apple fruit tree, apple iPod, apple macbook, etc.

4. Reference classes are classified on the basis of different features; e.g., color, texture, or shape.

5. Finally, semantic signatures are computed over these.

6. Based on the semantic signatures, keyword expansion, visual query expansion, and image expansion are done.

7. Finally, the re-ranking result is obtained.
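The online steps above boil down to comparing signatures rather than raw features. The sketch below assumes signatures are dicts over the same reference classes and uses an L1 distance; both choices are illustrative, not prescribed by the cited papers.

```python
# Illustrative sketch of the online stage: images are compared by the
# distance between their precomputed semantic signatures, so the online
# matching stays cheap.

def sig_dist(a, b):
    # L1 distance between two signatures (dicts over the same classes)
    return sum(abs(a[c] - b[c]) for c in a)

def rerank_online(query_sig, pool_sigs):
    """pool_sigs: {image_id: signature dict}; returns ids by similarity."""
    return sorted(pool_sigs, key=lambda i: sig_dist(query_sig, pool_sigs[i]))
```

Because the signatures are short probability vectors (one entry per reference class), this comparison is much cheaper than matching high-dimensional visual features online.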


Each selected image I is used as a query image, and the top D re-ranked images are found. If a word w is among the T most frequent words of these top-ranked images, it has a ranking score I(w) according to its ranking order; otherwise I(w) = 0 [2]:

I(w) = T − j,  if w = wj ∈ WI

I(w) = 0,  otherwise
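The word ranking score above can be computed directly: the j-th of the T most frequent words scores T − j, and any other word scores 0. The function below is a toy restatement of that rule.

```python
# Toy sketch of the word ranking score from the keyword-expansion step.

def word_score(word, top_words):
    """top_words: the T most frequent words w1..wT for image I, ordered
    by frequency (index 0 is w1). Returns T - j for the j-th word, else 0."""
    T = len(top_words)
    if word in top_words:
        j = top_words.index(word) + 1   # 1-based rank j
        return T - j
    return 0
```

So with T = 3 top words, the most frequent word scores 2 and an absent word scores 0.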

7. Proposed Method

The re-ranking results of Adaptive Weighting and of our approach are shown to the user, who is required to indicate whether our re-ranking result is "Much Better", "Better", "Similar", "Worse", or "Much Worse" than that of Adaptive Weighting. The evaluation criteria are: (1) the top-ranked images belong to the same semantic category as the query image, and (2) candidate images that are more visually similar to the query image have higher ranks. In over 55% of cases our approach delivers better results. The semantic space approach is worse in fewer than 18% of cases, which are often noisy cases with few images relevant to the query image.

Expected Results:-

Fig.: Comparative results between the Adaptive Weighting & Semantic Space approaches.

Fig.: Histogram of top-10 precision comparing QSVSS Multiple with Adaptive Weighting.

9. Conclusion & Future Work

We propose a novel framework that learns query-specific semantic spaces to significantly improve the effectiveness and efficiency of online image re-ranking. The visual features of images are projected into related semantic spaces that are automatically learned through keyword expansions offline. We have also compared the adaptive weighting and semantic space based approaches and shown that the semantic space based approach is more efficient than adaptive weighting in many cases, improving re-ranking precision over state-of-the-art methods. In future work, our framework can be improved along several directions. Finding the keyword expansions used to define reference classes can incorporate other metadata and log data besides the textual and visual features; for example, the co-occurrence information of keywords in user queries is useful and can be obtained from log data. Although the semantic signatures are already small, it is possible to make them more compact and to further enhance their matching efficiency using other technologies such as hashing.


References

[1] B. Siddiquie, S. Feris, and L. Davis, "Image ranking and retrieval based on multi-attribute queries," in Proc. CVPR, 2011.

[2] X. Wang, K. Liu, and X. Tang, "Query-specific visual semantic spaces for web image re-ranking," in Proc. CVPR, 2010.

[3] A. Kovashka, D. Parikh, and K. Grauman, "WhittleSearch: Image search with relative attribute feedback," in Proc. CVPR, 2012.

[4] Q. Yin, X. Tang, and J. Sun, "An associate-predict model for face recognition," in Proc. CVPR, 2011.

[5] B. Wang, Z. Li, M. Li, and W.-Y. Ma, "Large-scale duplicate detection for web image search," IEEE, 2006.

[6] X. Tang, K. Liu, J. Cui, F. Wen, and X. Wang, "IntentSearch: Capturing user intention for one-click internet image search," IEEE Trans. on PAMI, vol. 34, pp. 1342-1353, 2012.

[7] R. Datta, D. Joshi, and J. Z. Wang, "Image retrieval: Ideas, influences, and trends of the new age," ACM Computing Surveys, vol. 40, pp. 1-60, 2007.

[8] M. Fritz and B. Schiele, "Decomposition, discovery and detection of visual categories using topic models," in Proc. CVPR, 2008.

[9] R. Datta, D. Joshi, and J. Z. Wang, "Image retrieval: Ideas, influences, and trends of the new age," ACM Computing Surveys, vol. 40, pp. 1-60, 2007.

[10] G. Chechik, V. Sharma, U. Shalit, and S. Bengio, "Large scale online learning of image similarity through ranking," Journal of Machine Learning Research, vol. 11, pp. 1109-1135, 2010.

[11] V. Jain and M. Varma, "Learning to re-rank: Query-dependent image re-ranking using click data," in Proc. WWW, 2011.

[12] K. Tieu and P. Viola, "Boosting image retrieval," International Journal of Computer Vision, vol. 56, no. 1, pp. 17-36, 2004.
