The ILAS-ED: A Standards-Based Instrument for Assessing Pre-Service Teachers' Information Literacy Levels

The ILAS-ED: A Standards-Based Instrument for Assessing Pre-Service Teachers' Information Literacy Levels, 2007 [Conference proceedings]


English abstract

Few constituencies exist where it is more important to produce information-literate individuals than teacher candidates, yet rarely is it suggested that practitioners newly entering the field are adequately prepared to teach and model information literacy to their students. In response, information literacy has been established as a key outcome by a number of teacher education accrediting bodies and professional associations. A corollary to this initiative is the effort to develop valid instruments that assess the information literacy skills of teacher candidates. This paper describes the development and validation of the Information Literacy Assessment Scale in Education (ILAS-ED). Funded in part by the Institute for Library and Information Literacy Education and the Institute of Museum and Library Services, the study is part of a national initiative spearheaded by the Project for the Standardized Assessment of Information Literacy Skills (SAILS). Test content is based on nationally recognized standards from the International Society for Technology in Education and the Association of College and Research Libraries. Procedures designed to enhance the scale's validity were woven throughout its development. A total of 172 teacher education students at a large, metropolitan university completed a protocol consisting of 22 test items and 13 demographic and self-perception items. This instrument can be used to inform curricular and instructional decisions and to provide evidence of institutional effectiveness for program reviews.

Item type: Conference proceedings
Keywords: Assessment scale, library instruction
Subjects: A. Theoretical and general aspects of libraries and information. > AA. Library and information science as a field.
B. Information use and sociology of information. > BA. Use and impact of information.
C. Users, literacy and reading. > CB. User studies.
C. Users, literacy and reading. > CE. Literacy.
D. Libraries as physical collections. > DD. Academic libraries.
Depositing user: Penny Beile
Date deposited: 07 May 2012
Last modified: 02 Oct 2014 12:22
URI: http://hdl.handle.net/10760/16928

References


American Association of School Librarians & Association for Educational Communications and Technology (1998). Information literacy standards for student learning. Chicago, IL: American Library Association.

Association of College and Research Libraries (2000). Information literacy competency standards for higher education (Online). Retrieved June 8, 2004, from http://www.ala.org/acrl/ilcomstan.html

Barclay, D. (1993). Evaluating library instruction: Doing the best you can with what you have. RQ, 33, 195-202.

Beile, P. M., Boote, D. N., & Killingsworth, E. K. (2004). A microscope or a mirror?: A question of study validity regarding the use of dissertation citation analysis for evaluating research collections (in education). Journal of Academic Librarianship, 30(5), 347-353.

Berk, R. A. (1986). A consumer’s guide to setting performance standards on criterion-referenced tests. Review of Educational Research, 56(1), 137-172.

Bober, C., Poulin, S., & Vileno, L. (1995). Evaluating library instruction in academic libraries: A critical review of the literature, 1980-1993. In L. M. Martin (Ed.), Library instruction revisited: Bibliographic instruction comes of age (pp. 53-71). New York: The Haworth Press.

Cameron, L. (2004). Assessing information literacy. In I. F. Rockman (Ed.), Integrating information literacy into the higher education curriculum: Practical models for transformation (pp. 207-236). San Francisco: Jossey Bass.

Chadley, O. & Gavryck, J. (1989). Bibliographic instruction trends in research libraries. Research Strategies, 7, 106-113.

Clark, L. A. & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309-319.

Daugherty, T. K. & Carter, E. W. (1997). Assessment of outcome-focused library instruction in Psychology. Journal of Instructional Psychology, 24(1), 29-33.

Eadie, T. (1992). Beyond immodesty: Questioning the benefits of BI. Research Strategies, 10, 105-110.

Ebel, R. L. (1972). Why is a longer test usually a more reliable test? Educational and Psychological Measurement, 32, 249-253.

Edwards, S. (1994). Bibliographic instruction research: An analysis of the journal literature from 1977 to 1991. Research Strategies, 12, 68-78.

Fox, L. M. & Weston, L. (1993). Course-integrated instruction for nursing students: How effective?. Research Strategies, 11, 89-99.

Franklin, G. & Toifel, R. C. (1994). The effects of BI on library knowledge and skills among Education students. Research Strategies, 12, 224-237.

Gardner, P. L. (1970). Test length and the standard error of measurement. Journal of Educational Measurement, 7, 271-273.

Grassian, E. S. & Kaplowitz, J. R. (2001). Information literacy instruction: Theory and practice. New York: Neal-Schuman Publishers.

Gratch Lindauer, B. & Brown, A. (2004). Developing a tool to assess community college students. In I. F. Rockman (Ed.), Integrating information literacy into the higher education curriculum: Practical models for transformation (pp. 165-206). San Francisco: Jossey Bass.

Greer, A., Weston, L. & Alm, M. L. (1991). Assessment of learning outcomes: A measure of progress in library literacy. College & Research Libraries, 52, 549-557.

Hagner, P. A. & Hartman, J. L. (2004). Faculty engagement, support and scalability issues in online learning. Paper presented at the Academic Impressions Web Conference, January 14, 2004. [Retrieved from compact disk video of conference].

International Society for Technology in Education (2000). National educational technology standards for teachers (Online). Retrieved July 19, 2004, from http://cnets.iste.org/teachers/t_stands.html

Kehoe, J. (1995). Basic item analysis for multiple-choice tests. Practical Assessment, Research & Evaluation, 4(10). Retrieved September 20, 2004, from http://pareonline.net/getvn.asp?v=4&n=10

Kennedy, M. M. (1997). The connection between research and practice. Educational Researcher, 26(7), 4-12.

Leighton, G. B. & Markman, M. C. (1991). Attitudes of college freshmen toward bibliographic instruction. College and Research Libraries News, 52, 36-38.

Lord, F. M. (1974). Quick estimates of the relative efficiency of two tests as a function of ability level. Journal of Educational Measurement, 11, 247-254.

Maki, P. I. (2002). Developing an assessment plan to learn about student learning. Journal of Academic Librarianship, 28(1/2), 8-13.

Maughan, P. D. (2001). Assessing information literacy among undergraduates: A discussion of the literature and the University of California-Berkeley experience. College & Research Libraries, 62, 71-85.

Middle States Commission on Higher Education (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia, PA: Middle States Commission on Higher Education.

National Council for Accreditation of Teacher Education (2002). Professional standards for accreditation of schools, colleges, and departments of education. Washington, DC: NCATE.

Nitko, A. J. (1970). Criterion-referenced testing in the context of instruction. In Testing in turmoil: A conference on problems and issues in educational measurement. Greenwich, CT: Educational Records Bureau.

O'Connor, L. G., Radcliff, C. J., et al. (2002). Applying systems design and item response theory to the problem of measuring information literacy skills. College and Research Libraries, 63(6), 528-543.

Patterson, C. D., & Howell, D. W. (1990). Library user education: Assessing the attitudes of those who teach. RQ, 29, 513-523.

Popham, W. J. & Husek, T. R. (1969). Implications of criterion-referenced measurement. Journal of Educational Measurement, 6, 1-9.

Project SAILS (2001). Project SAILS: Project for the Standardized Assessment of Information Literacy Skills. Retrieved March 5, 2004, from http://sails.lms.kent.edu/index.php

Rader, H. (2000). A silver anniversary: 25 years of reviewing the literature related to user instruction. Reference Services Review, 28(3), 290-296.

Ren, W. H. (2000). Library instruction and college student self-efficacy in electronic information searching. Journal of Academic Librarianship, 26, 323-328.

Schuck, B. R. (1992). Assessing a library instruction program. Research Strategies, 10, 152-160.

Simon, G. B. (1969). Comments on "Implications of criterion-referenced measurement." Journal of Educational Measurement, 6, 259-260.

Skager, R. W. (1974). Creating criterion-referenced tests from objectives-based assessment systems: Unsolved problems in test development, assembly, and interpretation. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 47-58). Los Angeles: Center for the Study of Evaluation.

Swaminathan, H., Hambleton, R. K., & Algina, J. (1974). Reliability of criterion-referenced tests: A decision-theoretic formulation. Journal of Educational Measurement, 11, 263-267.

Tierno, M. J. & Lee, J. H. (1983). Developing and evaluating library research skills in education: A model for course-integrated bibliographic instruction. RQ, 22, 284-291.

Werking, R. H. (1980). Evaluating bibliographic education: A review and critique. Library Trends, 29, 153-172.

Zaporozhetz, L. E. (1987). The dissertation literature review: How faculty advisors prepare their doctoral candidates. Doctoral dissertation, University of Oregon.

