User Perspectives on Relevance Criteria: A Comparison among Relevant, Partially Relevant, and Not-Relevant Judgments

Maglaughlin, K. L. and Sonnenwald, D. H. User Perspectives on Relevance Criteria: A Comparison among Relevant, Partially Relevant, and Not-Relevant Judgments. Journal of the American Society for Information Science and Technology, 2002, vol. 53, n. 5, pp. 327-342. [Journal article (Paginated)]


English abstract

This study investigates the use of criteria to assess relevant, partially relevant, and not-relevant documents. Each study participant identified passages within 20 document representations that they used in making relevance judgments, judged each document representation as a whole to be relevant, partially relevant, or not relevant to their information need, and explained their decisions in an interview. Analysis revealed 29 criteria, discussed positively and negatively, that participants used when selecting passages that contributed to or detracted from a document's relevance. These criteria can be grouped into six categories: author, abstract, content, full text, journal or publisher, and personal. Results indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments. Additionally, most criteria can make either a positive or a negative contribution to a document's relevance. The criteria most frequently mentioned by study participants concerned content, followed by criteria concerning the full-text document. These findings may have implications for relevance feedback in information retrieval systems, suggesting that users give relevance feedback using multiple criteria and indicate positive and negative criterion contributions. System designers may want to focus on supporting content criteria, followed by full-text criteria, as this may provide the greatest cost benefit.
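The abstract's design implication — relevance feedback expressed as multiple criteria, each contributing positively or negatively — can be sketched in code. The following is a minimal illustrative sketch, not the authors' method: the six category names come from the study, but the `CriterionFeedback` structure, the `judge` function, and its thresholds are hypothetical assumptions introduced here.

```python
from dataclasses import dataclass

# The six criterion categories reported in the study.
CATEGORIES = ("author", "abstract", "content", "full text",
              "journal/publisher", "personal")

@dataclass
class CriterionFeedback:
    """One piece of user feedback: a criterion and its signed contribution."""
    category: str    # expected to be one of CATEGORIES
    positive: bool   # True if the criterion added to relevance, False if it detracted

def judge(feedback: list[CriterionFeedback]) -> str:
    """Map signed multi-criteria feedback to a three-way judgment.

    The 0.75 / 0.25 thresholds are arbitrary illustrations,
    not values taken from the paper.
    """
    if not feedback:
        return "not relevant"
    positive = sum(1 for f in feedback if f.positive)
    ratio = positive / len(feedback)
    if ratio >= 0.75:
        return "relevant"
    if ratio >= 0.25:
        return "partially relevant"
    return "not relevant"

# Example: content and full-text criteria both count in favor.
verdict = judge([CriterionFeedback("content", True),
                 CriterionFeedback("full text", True)])
```

A system built this way would let a user flag several passages per document, each tagged with a criterion category and a direction, rather than forcing a single binary relevant/not-relevant click.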

Item type: Journal article (Paginated)
Keywords: searching, search term selection, professional librarian
Subjects: I. Information treatment for information services > IJ. Reference work.
A. Theoretical and general aspects of libraries and information. > AZ. None of these, but in this section.
H. Information sources, supports, channels. > HZ. None of these, but in this section.
Depositing user: Diane Sonnenwald
Date deposited: 20 Aug 2006
Last modified: 02 Oct 2014 12:04
URI: http://hdl.handle.net/10760/7997

