Modern Information Retrieval
Chapter 10: User Interfaces and Visualization



    
2. Studies of User Interaction with Relevance Feedback Systems


Standard relevance feedback involves the user in the interaction by asking them to specify which retrieved documents are relevant. In some interfaces users are also able to select which terms to add to the query. However, most ranking and reweighting algorithms are difficult to understand or predict (even for the creators of the algorithms!), so users may have difficulty controlling a relevance feedback system explicitly.
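The interaction just described can be sketched as follows. This is an illustrative Rocchio-style simplification, not the algorithm of any particular system discussed here: the user marks relevant documents, the system suggests expansion terms, and the user chooses which suggested terms actually enter the query.

```python
# Illustrative sketch of term-selection relevance feedback.
# The documents, query, and term-scoring scheme (raw frequency in the
# relevant set) are invented for the example; real systems use more
# sophisticated weighting.
from collections import Counter

def suggest_terms(relevant_docs, query_terms, k=10):
    """Rank candidate expansion terms by frequency in the relevant docs."""
    counts = Counter()
    for doc in relevant_docs:
        counts.update(t for t in doc.split() if t not in query_terms)
    return [term for term, _ in counts.most_common(k)]

def expand_query(query_terms, suggested, user_selected):
    """Add only the suggested terms the user accepted."""
    return list(query_terms) + [t for t in suggested if t in user_selected]

docs = ["ranking algorithms for text retrieval",
        "relevance feedback improves ranking precision"]
query = ["retrieval"]
suggestions = suggest_terms(docs, set(query))
# The user accepts two of the suggestions, rejecting the rest.
new_query = expand_query(query, suggestions, {"ranking", "feedback"})
```

Letting the user filter the suggestion list in this way is exactly the kind of explicit control whose value the studies below set out to measure.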

A recent study was conducted to investigate directly to what degree user control of the feedback process is beneficial. Koenemann and Belkin [#!koenemann96!#] measured the benefits of letting users look `under the hood' during relevance feedback. They tested four cases using the Inquery system [#!tc90!#]: a control condition with no relevance feedback, an opaque condition in which feedback was applied without revealing its workings, a transparent condition in which the terms added by feedback were shown to the user, and a penetrable condition in which users could additionally select which of the suggested terms to use.

The 64 subjects were much more effective (measured by precision at cutoffs of the top 5, 10, 30, and 100 documents) with relevance feedback than without it. The penetrable group performed significantly better than the control, with the opaque and transparent groups falling between the two in effectiveness. Search times did not differ significantly among the conditions, but there were significant differences in the number of feedback iterations. Subjects in the penetrable group required significantly fewer iterations to achieve better queries: an average of 5.8 cycles, against 8.2 in the control group, 7.7 in the opaque group, and, surprisingly, 8.8 in the transparent group. The average number of documents marked relevant ranged between 11 and 14 across the three feedback conditions. All subjects preferred relevance feedback over the baseline system, and several remarked that they preferred the `lazy' approach of selecting suggested terms over having to think up their own.
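The effectiveness measure reported above, precision at a fixed cutoff, is simply the fraction of the top-k ranked documents that are relevant. A minimal sketch (the document IDs and relevance judgments are invented for illustration):

```python
# Precision at cutoff k: of the first k documents in the ranking,
# what fraction are relevant?
def precision_at_k(ranked_ids, relevant_ids, k):
    top = ranked_ids[:k]
    return sum(1 for d in top if d in relevant_ids) / k

ranking = ["d3", "d1", "d7", "d2", "d9"]   # system's ranked output
relevant = {"d1", "d2", "d4"}              # relevance judgments
p5 = precision_at_k(ranking, relevant, 5)  # 2 of top 5 are relevant -> 0.4
```

Reporting the measure at several cutoffs (5, 10, 30, 100), as the study does, shows how quickly each condition brings relevant documents to the top of the ranking.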

An observational study of a TTY-based version of an online catalog system [#!hancock-beaulieu92a!#] also found that users performed better with a relevance feedback mechanism that allowed manual selection of terms. However, a later observational study did not find overall success with this form of relevance feedback [#!hancock-beaulieu95!#]. The authors attribute these results to a poor design of a new graphical interface. The results may also be due to the fact that users often selected only one relevant document before invoking feedback, although they were using a system optimized for multiple-document selection.




Modern Information Retrieval © Addison-Wesley-Longman Publishing co.
1999 Ricardo Baeza-Yates, Berthier Ribeiro-Neto