Ranking the Research Productivity of LIS Faculty and Schools: An Evaluation of Data Sources and Research Methods

Meho, Lokman I. and Spurgin, Kristina M. Ranking the Research Productivity of LIS Faculty and Schools: An Evaluation of Data Sources and Research Methods. Journal of the American Society for Information Science and Technology, 2005, vol. 56, n. 12, pp. 1314-1331. [Journal article (Paginated)]

Full text: meho-spurgin.pdf (165 kB)

English abstract

This study evaluates the data sources and research methods used in earlier studies to rank the research productivity of Library and Information Science (LIS) faculty and schools. In doing so, it identifies both the tools and methods that generate more accurate publication-count rankings and the databases that should be considered when conducting comprehensive literature searches for research and curricular needs. Using a list of 2,625 items published between 1982 and 2002 by 68 faculty members at 18 LIS schools accredited by the American Library Association (ALA), hundreds of databases were searched. Results show that only 10 databases provide significant coverage of the indexed LIS literature. They also show that restricting the data sources to one, two, or even three databases leads to inaccurate rankings and erroneous conclusions. Because no single database provides comprehensive coverage of the LIS literature, researchers must rely on a wide range of disciplinary and multidisciplinary databases for ranking and other research purposes. The study answers questions such as the following: Is the Association for Library and Information Science Education's (ALISE's) directory of members a reliable tool for identifying a complete list of faculty members at LIS schools? How many and which databases are needed in a multifile search to arrive at accurate publication-count rankings? What coverage is achieved with a given number of databases? Which research areas are well covered by which databases? What alternative methods and tools are available to fill the gaps among databases? Did the coverage performance of databases change over time? What counting method should be used to determine what, and how many, items each LIS faculty member and school has published? The authors recommend more advanced analyses of publication data to provide a more detailed assessment of the research productivity of authors and programs.
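To illustrate the kind of coverage and overlap analysis the abstract describes, a minimal Python sketch follows. The database names, the sample publication list, and the greedy database-selection strategy are illustrative assumptions for the sketch only, not the authors' actual data or procedure.

# Illustrative sketch only: hypothetical data showing how per-database
# coverage, pairwise overlap, and cumulative multifile coverage might be
# computed from a publication list and the databases indexing each item.
from itertools import combinations

# Map each publication to the set of databases that index it (hypothetical).
publications = {
    "pub-001": {"LISA", "Library Literature", "Web of Science"},
    "pub-002": {"LISA", "ERIC"},
    "pub-003": {"INSPEC"},
    "pub-004": set(),  # indexed nowhere; would have to be found by other means
}

databases = {db for dbs in publications.values() for db in dbs}
total = len(publications)

# Per-database coverage: share of the publication list each database indexes.
coverage = {db: sum(db in dbs for dbs in publications.values()) / total
            for db in databases}

# Pairwise overlap: number of publications indexed by both databases in a pair.
overlap = {(a, b): sum({a, b} <= dbs for dbs in publications.values())
           for a, b in combinations(sorted(databases), 2)}

# Cumulative coverage of a multifile search: greedily add the database that
# contributes the most not-yet-covered items, recording coverage after each step.
covered, order, remaining = set(), [], set(databases)
while remaining:
    best = max(remaining,
               key=lambda db: sum(db in dbs for p, dbs in publications.items()
                                  if p not in covered))
    gain = {p for p, dbs in publications.items() if best in dbs and p not in covered}
    if not gain:
        break
    covered |= gain
    order.append((best, len(covered) / total))
    remaining.remove(best)

print(coverage, overlap, order, sep="\n")

The greedy step mirrors the abstract's question of how many and which databases a multifile search needs before additional files contribute little new coverage.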

Item type: Journal article (Paginated)
Keywords: Ranking of Research Productivity, Library and Information Science Faculty, Library and Information Science Schools, Database Coverage Evaluation, Research Methods, Overlap and Uniqueness Between Databases
Subjects: H. Information sources, supports, channels. > HL. Databases and database Networking.
B. Information use and sociology of information > BB. Bibliometric methods
Depositing user: Lokman I. Meho
Date deposited: 12 Nov 2006
Last modified: 02 Oct 2014 12:05
URI: http://hdl.handle.net/10760/8365

References

Association for Library and Information Science Education. (2003). LIS research areas classification scheme. Retrieved May 15, 2004, from http://www.alise.org/research_class_guide.html

Baird, L.M., & Oppenheim, C. (1994). Do citations matter? Journal of Information Science, 20(1), 2–15.

Bates, M.J. (1998). The role of publication type in the evaluation of LIS programs. Library & Information Science Research, 20(2), 187–198.

Biggs, M., & Bookstein, A. (1988). What constitutes a high-quality M.L.S. program? Forty-five faculty members’ views. Journal of Education for Library and Information Science, 29(1), 28–46.

Blake, V.L.P., & Tjoumas, R. (1988). Research as a factor in faculty evaluation: The rules are a-changin’. Journal of Education for Library and Information Science, 31(1), 3–24.

Bookstein, A., & Biggs, M. (1987). Rating higher education programs: The case of the 1986 White survey. The Library Quarterly, 57(4), 351–399.

Boyce, B.R., & Hendren, C. (1996). Authorship as a measure of the productivity of schools of library and information science. Journal of Education for Library and Information Science, 37(3), 250–271.

Brace, W. (1992). Quality assessment of library and information science school faculties. Education for Information, 10(2), 115–123.

Budd, J.M. (2000). Scholarly productivity of U.S. LIS faculty: An update. The Library Quarterly, 70(2), 230–245.

Budd, J.M., & Seavey, C.A. (1996). Productivity of U.S. library and information science faculty: The Hayes study revisited. The Library Quarterly, 66(1), 1–20.

Buttlar, L. (1991). Analyzing the library periodical literature: Content and authorship. College & Research Libraries, 52(1), 38–53.

Chubin, D.E., & Hackett, E.J. (1990). Peerless science: Peer review and U.S. science policy. Albany: State University of New York Press.

Coblans, H. (1972). The literature of librarianship and documentation: The periodicals and their bibliographical control. Journal of Documentation, 28(1), 56–66.

Cole, S., & Cole, J.R. (1967). Scientific output and recognition. American Sociological Review, 32(3), 377–390.

Cole, S., & Cole, J.R. (1968). Visibility and the structural bases of awareness of scientific research. American Sociological Review, 33(3), 397–413.

Cronin, B. (2001). Bibliometrics and beyond: Some thoughts on Web-based citation analysis. Journal of Information Science, 27(1), 1–7.

Cronin, B., & Overfelt, K. (1994). Citation-based auditing of academic performance. Journal of the American Society for Information Science, 45(2), 61–72.

Cronin, B., & Overfelt, K. (1996). Postscript on program rankings. Journal of the American Society for Information Science, 47(2), 173–176.

Danton, J.P. (1983). Notes on the evaluation of library schools. Journal of Education for Librarianship, 24(2), 106–116.

Edwards, T. (1976). A comparative analysis of the major abstracting and indexing services for library and information science. UNESCO Bulletin for Libraries, 30(1), 18–25.

Elsbach, K.D., & Kramer, R.M. (1997). Members’ responses to organizational identity threats: Encountering and countering the Business Week rankings. Administrative Science Quarterly, 41(3), 442–476.

Ernest, D.J., Lange, H.R., & Herring, D. (1988). An online comparison of three library science databases. RQ, 28(2), 185–194.

Fogarty, T.J., & Saftner, D.V. (1993). Academic department prestige: A new measure based on the doctoral student labor market. Research in Higher Education, 34(4), 427–449.

Garfield, E. (1983a). How to use citation analysis for faculty evaluations, and when is it relevant, part 1. Current Contents, 44, 5–13.

Garfield, E. (1983b). How to use citation analysis for faculty evaluations, and when is it relevant, part 2. Current Contents, 45, 5–14.

Garland, K. (1991). The nature of publications authored by library and information science faculty. Library & Information Science Research, 13(1), 49–60.

Gilbert, G.N. (1978). Measuring the growth of science: A review of indicators of scientific growth. Scientometrics, 1(1), 9–34.

Gilchrist, A. (1966). A survey of leading abstracts services in documentation and an identification of key journals. Aslib Proceedings, 18(3), 62–80.

Glanzel, W. (1996). The need for standards in bibliometric research and technology. Scientometrics, 35(2), 167–176.

Gluck, M. (1990). A review of journal coverage overlap with an extension to the definition of overlap. Journal of the American Society for Information Science, 41(1), 43–60.

Goldstein, S. (1973). Statistical bibliography and library periodical literature: 4. 1972 Abstracting, indexing, and contents coverage of library and information science periodicals. CALL, 2(4), 3–13.

Gourman, J. (1985). The Gourman Report: A rating of graduate and professional programs in American and international universities (3rd ed.). Los Angeles: National Educational Standards.

Harter, S.P., & Serebnick, J. (1990). Dismayed at methodology. Library Journal, 115(11), 10.

Hawkins, D.T., & Miller, B. (1977). On-line data base coverage of the online information-retrieval literature. Online Review, 1(1), 59–64.

Hayes, R.M. (1983). Citation statistics as a measure of faculty research productivity. Journal of Education for Librarianship, 23(3), 151–172.

Hernon, P. (1990). Questioning Wallace’s research. Library Journal, 115(11), 8.

Holmes, A., & Oppenheim, C. (2001). Use of citation analysis to predict the outcome of the 2001 Research Assessment Exercise for Unit of Assessment (UoA) 61: Library and Information Management. Information Research, 6(2). Retrieved January 15, 2004, from http://informationr.net/ir/6-2/paper103.html

Hood, W.W., & Wilson, C.S. (2001). The scatter of documents over databases in different subject domains: How many databases are needed? Journal of the American Society for Information Science and Technology, 52(14), 1242–1254.

Howard, M.M. (2002). Student use of rankings in national magazines in the college decision-making process. Unpublished doctoral dissertation, University of Tennessee, Knoxville.

Jacso, P. (1998). Analyzing the journal coverage of abstracting/indexing databases at variable aggregate and analytic levels. Library & Information Science Research, 20(2), 133–151.

King, J. (1987). A review of bibliometric and other science indicators and their role in research evaluation. Journal of Information Science, 13(5), 261–276.

Koenig, M.E.D. (1982). Determinants of expert judgment of research performance. Scientometrics, 4(5), 361–378.

Koenig, M.E.D. (1983). Bibliometric indicators versus expert opinion in assessing research performance. Journal of the American Society for Information Science, 34(2), 136–145.

Kostoff, R.N. (1996). Performance measures for government-sponsored research: Overview and background. Scientometrics, 36(3), 281–292.

LaBorie, T., & Halperin, M. (1981). The ERIC and LISA databases: How the sources of library science literature compare. Database, 4(3), 32–37.

LaBorie, T., Halperin, M., & White, H.D. (1985). Library and information science abstracting and indexing services: Coverage, overlap, and context. Library & Information Science Research, 7(2), 183–195.

Lawani, S.M., & Bayer, A.E. (1983). Validity of citation criteria for assessing the influence of scientific publications: New evidence with peer assessment. Journal of the American Society for Information Science, 34(1), 59–66.

Machung, A. (1998). Playing the rankings game. Change, 30(4), 12–16.

MacRoberts, M.H., & MacRoberts, B.R. (1986). Quantitative measures of communication in science: A study of the formal level. Social Studies of Science, 16(1), 151–187.

MacRoberts, M.H., & MacRoberts, B.R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science, 40(5), 342–349.

MacRoberts, M.H., & MacRoberts, B.R. (1996). Problems of citation analysis. Scientometrics, 36(3), 435–444.

Martin, B.R. (1996). The use of multiple indicators in the assessment of basic research. Scientometrics, 36(3), 343–362.

McDonough, P.M., Antonio, A.L., Walpole, M., & Perez, L.X. (1998). College rankings: Democratized college knowledge for whom? Research in Higher Education, 39(5), 513–537.

McGrath, W.E. (1993). The reappearance of rankings: Reliability, validity, explanation, quality, and the mission of library and information science. The Library Quarterly, 63(2), 192–198.

Monks, J., & Ehrenberg, R.G. (1999). U.S. News & World Report’s college rankings: Why do they matter? Change, 31(6), 42–52.

Mulvaney, J.P. (1992). The characteristics associated with perceived quality in schools of library and information science. The Library Quarterly, 62(1), 1–27.

Mulvaney, J.P. (1993). The characteristics associated with perceived quality in schools of library and information science: An update and prediction. The Library Quarterly, 63(2), 189–191.

Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity. Cherry Hill, NJ: Computer Horizons.

Narin, F., & Hamilton, K.S. (1996). Bibliometric performance measures. Scientometrics, 36(3), 293–310.

Nisonger, T.E. (2004). Citation autobiography: An investigation of ISI database coverage in determining author citedness. College & Research Libraries, 65(2), 152–163.

Oppenheim, C. (1995). The correlation between citation counts and the 1992 Research Assessment Exercise ratings for British library and information science university departments. Journal of Documentation, 51(1), 18–27.

Pettigrew, K.E., & Nicholls, P.T. (1994). Publication patterns of LIS faculty from 1982–1992: Effects of doctoral programs. Library & Information Science Research, 16(2), 139–156.

Read, E.J., & Smith, R.C. (2000). Searching for library and information science literature: A comparison of coverage in three databases. Library Computing, 19(1–2), 118–126.

Reed, K.L. (1995). Citation analysis of faculty publication: Beyond Science Citation Index and Social Science Citation Index. Bulletin of the Medical Library Association, 83(4), 503–508.

Roush, W. (1995). Grad school ratings rankle. Science, 269(5231), 1660–1662.

Seglen, P.O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628–638.

Seglen, P.O. (1998). Citation rates and journal impact factors are not suitable for evaluation of research. Acta Orthopaedica Scandinavica, 69(3), 224–229.

Seng, L.B., & Willett, P. (1995). The citedness of publications by United Kingdom library schools. Journal of Information Science, 21(1), 68–71.

Smith, L.C. (1981). Citation analysis. Library Trends, 30(1), 83–106.

Stock, W.A., & Alston, R.M. (2000). Effect of graduate-program rank on success in the job market. Journal of Economic Education, 31(4), 389–401.

Stuart, D.L. (1995). Reputational rankings: Background and development. In R.D. Walleri & M.K. Moss (Eds.), Evaluating and responding to college guidebooks and rankings (pp. 13–20). San Francisco: Jossey-Bass.

U.S. News & World Report. (1999). Library science: Top schools. Retrieved 1999, from http://www.usnews.com/usnews/edu/beyond/bcinfos.htm

van Raan, A.F.J. (1996). Advanced bibliometric methods as quantitative core of peer-review based evaluation and foresight exercises. Scientometrics, 36(3), 397–420.

van Raan, A.F.J. (1997). Scientometrics: State of the art. Scientometrics, 38, 205–218.

Varlejs, J., & Dalrymple, P. (1986). Publication output of library and information science faculty. Journal of Education for Library and Information Science, 27(1), 71–89.

Wallace, D.P. (1990). The most productive faculty. Library Journal, 115(8), 61–63.

Watson, P. (1985). Production of scholarly articles by academic librarians and library school faculty. College & Research Libraries, 46(4), 334–342.

White, H.D. (1990). Author co-citation analysis: Overview and defense. In C.L. Borgman (Ed.), Scholarly communication and bibliometrics (pp. 84–106). Newbury Park, CA: Sage.

White, H.S. (1987). Perceptions by educators and administrators of the ranking of library school programs: An update and analysis. The Library Quarterly, 57(3), 252–268.

White, H.S. (1993). Rankings of library and information science faculty and programs: The third in a series of studies undertaken at six-year intervals. The Library Quarterly, 63(2), 166–188.

Wilson, P. (1979). Factors affecting research productivity. Journal of Education for Library and Information Science, 20(1), 3–24.

Yerkey, A. (1983). A cluster analysis of several patterns among bibliographic databases. Journal of the American Society for Information Science, 34(5), 350–355.

Yerkey, A.N., & Glogowski, M. (1989). Bibliographic scatter of library and information science literature. Journal of Education for Library and Information Science, 30(2), 90–111.

Yerkey, A.N., & Glogowski, M. (1990). Scatter of library and information science topics among bibliographic databases. Journal of the American Society for Information Science, 41(4), 245–253.

