Development and Validation of the Beile Test of Information Literacy for Education

Beile O'Neil, Penny (2005). Development and Validation of the Beile Test of Information Literacy for Education. PhD thesis, University of Central Florida. [Thesis]

B-tiled_diss.pdf (PDF, 781kB)

English abstract

This thesis describes the development and validation of an information literacy assessment scale specific to education. Information literacy has been recognized by a number of professional organizations and accrediting bodies as an essential skill for success in the 21st century, and consequently has become a key learning outcome of many education programs. Teacher candidates are expected to teach and model information literacy skills to their students, yet there is little to suggest that they are adequately prepared to do so. The scale described here was developed to measure information literacy skills in the field of education, in an effort to inform curricular and instructional decisions and to provide evidence of institutional effectiveness for program reviews. Test content is based on standards from the International Society for Technology in Education (ISTE) and the Association of College and Research Libraries (ACRL). The work has been recognized for its technical merit in describing validation procedures and for revisiting the debate between traditional and modern approaches to validating cognitive instruments.

Item type: Thesis
Keywords: B-TILED, information literacy, assessment, validation studies
Subjects: C. Users, literacy and reading. > CD. User training, promotion, activities, education.
Depositing user: Penny Beile
Date deposited: 03 Jul 2011
Last modified: 02 Oct 2014 12:19
URI: http://hdl.handle.net/10760/15856

References

Ackerson, L. G., Howard, J. G., & Young, V. E. (1991). Assessing the relationship between library instruction methods and the quality of undergraduate research. Research Strategies, 9(2), 139-141.

American Association for Higher Education [AAHE]. (2005). 9 principles of good practice for assessing student learning. Retrieved January 10, 2005, from http://www.aahe.org/assessment/principl.htm

American Association of School Librarians [AASL] & Association for Educational Communications and Technology [AECT]. (1998). Information literacy standards for student learning. Chicago, IL: American Library Association.

American Psychological Association [APA]. (1999). Standards for educational and psychological testing. Washington, DC: Author.

Anderson, T. H. (1974). Cloze measures as indices of achievement comprehension when learning from extended prose. Journal of Educational Measurement, 11, 83-92.

Associated Colleges of the South [ACS]. (2003). Information fluency working definition. Retrieved February 24, 2005, from http://www.colleges.org/~if/if_definition.html

Association of American Colleges and Universities [AACU]. (2005). Liberal education outcomes: A preliminary report on student achievement in college. Washington, DC: Author.

Association of College and Research Libraries [ACRL], Task Force on Information Literacy Competency Standards in Higher Education. (2000). Information literacy competency standards for higher education. Retrieved June 8, 2004, from http://www.ala.org/acrl/ilcomstan.html

Bandura, A. J. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Barclay, D. (1993). Evaluating library instruction: Doing the best you can with what you have. RQ, 33, 195-202.

Beile, P. M. & Boote, D. N. (2002). Library instruction and graduate professional development: Exploring the effect of learning environments on self-efficacy and learning outcomes. Alberta Journal of Educational Research, 68(4), 364-367.

Beile, P. M., Boote, D. N., & Killingsworth, E. K. (2003). Characteristics of education doctoral dissertation references: An inter-institutional analysis of review of literature citations. Paper presented at the American Educational Research Association Annual Conference, Chicago, IL. (ERIC Document Reproduction Service No. ED478598)

Berk, R. A. (1986). A consumer's guide to setting performance standards on criterion-referenced tests. Review of Educational Research, 56(1), 137-172.

Bloom, B. S., Madaus, G. F., & Hastings, J. T. (1981). Evaluation to improve learning. New York: McGraw-Hill.

Board, C. & Whitney, D. R. (1972). The effect of selected poor item-writing practices on test difficulty, reliability and validity. Journal of Educational Measurement, 9, 225-233.

Bober, C., Poulin, S., & Vileno, L. (1995). Evaluating library instruction in academic libraries: A critical review of the literature, 1980-1993. In L. M. Martin (Ed.), Library instruction revisited: Bibliographic instruction comes of age (pp. 53-71). New York: The Haworth Press.

Boote, D. N. & Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educational Researcher, 34(6), 3-15.

Bradley, J. (1993). Methodological issues and practices in qualitative research. Library Quarterly, 63(4), 431-449.

Breivik, P. S. (1985). A vision in the making: Putting libraries back in the Information Society. American Libraries, 16(1), 723.

Breivik, P. S. (1991). Literacy in an Information Society. Information Reports and Bibliographies, 20(3), 10-14.

Bren, B., Hillemann, B., & Topp, V. (1998). Effectiveness of hands-on instruction of electronic resources. Research Strategies, 16(1), 41-51.

Brown, J. D. (2000). What is construct validity? JALT Testing & Evaluation SIG Newsletter, 4(2), 7-10. Retrieved March 23, 2005, from http://www.jalt.org/test/bro_8.htm

Budd, J. M. (1995). An epistemological foundation for library and information science. Library Quarterly, 65(3), 295-318.

Cameron, L. (2004). Assessing information literacy. In I. F. Rockman (Ed.), Integrating information literacy into the higher education curriculum: Practical models for transformation (pp. 207-236). San Francisco: Jossey Bass.

Chadley, O. & Gavryck, J. (1989). Bibliographic instruction trends in research libraries. Research Strategies, 7, 106-113.

Clark, L. A. & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309-319.

Cohen, J. & Cohen, P. (1983). Multiple regression/correlation in the behavioral sciences (2nd Ed.). Hillsdale, NJ: Erlbaum and Associates.

Creswell, J. W. (1994). Research design: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage Publications.

Daugherty, T. K. & Carter, E. W. (1997). Assessment of outcome-focused library instruction in Psychology. Journal of Instructional Psychology, 24(1), 29-33.

Davis, F. B., & Diamond, J. J. (1974). The preparation of criterion-referenced tests. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 116-138). Los Angeles: Center for the Study of Evaluation.

Dillman, D. A. (1999). Mail and telephone surveys: The tailored design method. New York: John Wiley and Sons.

Dillman, D. A., Tortora, R. D., Conradt, J., & Bowker, D. (1998). Influence of plain vs. fancy design on response rates for web surveys. Retrieved February 22, 2005, from Washington State University, Social and Economic Sciences Research Center Web site: http://survey.sesrc.wsu.edu/dillman/papers/asa98ppr.pdf

Eadie, T. (1992). Beyond immodesty: Questioning the benefits of BI. Research Strategies, 10, 105-110.

Ebel, R. L. (1968). The value of internal consistency in classroom examinations. Journal of Educational Measurement, 5, 71-73.

Ebel, R. L. (1972). Why is a longer test usually a more reliable test? Educational and Psychological Measurement, 32, 249-253.

Eco, U. (1983). The name of the rose (W. Weaver, Trans.). San Diego, CA: Harcourt, Brace, Jovanovich. (Original work published 1980)

Education and Behavioral Sciences Section [EBSS]. (1992). Information retrieval and evaluation skills for education students. C&RL News, 9, 583-588.

Educational Testing Service [ETS]. (2004). ICT literacy. Retrieved February 25, 2005, from http://www.ets.org/ictliteracy/

Edwards, S. (1994). Bibliographic instruction research: An analysis of the journal literature from 1977 to 1991. Research Strategies, 12, 68-78.

Electronic Publishing Initiative at Columbia [EPIC]. (2004). Online survey of college students: Executive summary. Retrieved June 22, 2004, from Columbia University, Electronic Publishing Initiative at Columbia Web site: http://www.epic.columbia.edu/eval/find09/find09.html

Elliot, S. N., Kratochwill, T. R., Littlefield, J., & Travers, J. F. (1996). Educational psychology: Effective teaching, effective learning (2nd Ed.). Madison, WI: Brown and Benchmark.

Erlendsson, J. (2003). Essays on Derek John de Solla Price. Retrieved February 23, 2005, from http://www.hi.is/~joner/eaps/sollapri1.htm

Fox, L. M. & Weston, L. (1993). Course-integrated instruction for nursing students: How effective? Research Strategies, 11, 89-99.

Franklin, G. & Toifel, R. C. (1994). The effects of BI on library knowledge and skills among Education students. Research Strategies, 12, 224-237.

Gardner, P. L. (1970). Test length and the standard error of measurement. Journal of Educational Measurement, 7, 271-273.

Grassian, E. S. & Kaplowitz, J. R. (2001). Information literacy instruction: Theory and practice. New York: Neal-Schuman Publishers.

Gratch Lindauer, B. & Brown, A. (2004). Developing a tool to assess community college students. In I. F. Rockman (Ed.), Integrating information literacy into the higher education curriculum: Practical models for transformation (pp. 165-206). San Francisco: Jossey Bass.

Greer, A., Weston, L., & Alm, M. L. (1991). Assessment of learning outcomes: A measure of progress in library literacy. College & Research Libraries, 52, 549-557.

Groves, R. M. (1989). Survey errors and survey costs. New York: John Wiley.

Hagner, P. A. & Hartman, J. L. (2004). Faculty engagement, support and scalability issues in online learning. Paper presented at the Academic Impressions Web Conference, January 14, 2004. [Retrieved from compact disk video of conference.]

Hakstian, A. R. & Kansup, W. (1975). A comparison of several methods of assessing partial knowledge in multiple-choice tests: II. Testing procedures. Journal of Educational Measurement, 12, 231-239.

Harris, C. W. (1974a). Problems of objectives-based measurement. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 83-94). Los Angeles: Center for the Study of Evaluation.

Harris, C. W. (1974b). Some technical characteristics of mastery tests. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 98-115). Los Angeles: Center for the Study of Evaluation.

Hatcher, L. (1994). A step-by-step approach to using SAS for factor analysis and structural equation modeling. Cary, NC: SAS Institute.

Hernon, P. & Dugan, R. E. (2004). Outcomes assessment in higher education: Views and perspectives. Westport, CT: Libraries Unlimited.

Herring, M. Y. (2001). 10 reasons why the Internet is no substitute for the library. American Libraries, 32(4), 76-78.

Horn, J. L. (1966). Some characteristics of classroom examinations. Journal of Educational Measurement, 3, 293-295.

Horn, J. L. (1968). Is it reasonable for assessments to have different psychometric properties than predictors? Journal of Educational Measurement, 5, 75-77.

Ingwersen, P. (1982). Search procedures in the library – analysed from the cognitive point of view. Journal of Documentation, 38, 165-191.

International Society for Technology in Education [ISTE], National Educational Technology Standards for Teachers [NETS*T]. (2000). Educational technology standards and performance indicators for all teachers. Retrieved July 19, 2004, from http://cnets.iste.org/teachers/t_stands.html

Interstate New Teacher Assessment and Support Consortium [INTASC]. (1992). Model standards for beginning teacher licensing, assessment and development: A resource for state dialogue. Retrieved May 22, 2004, from http://www.ccsso.org/content/pdfs/corestrd.pdf

James Madison University. (2004, June). Information literacy test. Retrieved July 15, 2005, from James Madison University, Center for Assessment and Research Studies Web site: http://www.jmu.edu/assessment/wm_library/ILT.pdf

Katz, I. (2005, June). The ICT Literacy Assessment test: An update. Report presented at the annual conference of the American Library Association, Chicago, IL.

Kehoe, J. (1995). Basic item analysis for multiple-choice tests. Practical Assessment, Research & Evaluation, 4(10). Retrieved September 20, 2004, from http://pareonline.net/getvn.asp?v=4&n=10

Kennedy, M. M. (1997). The connection between research and practice. Educational Researcher, 26(7), 4-12.

Kerlinger, F. N. & Lee, H. B. (2000). Foundations of behavioral research (4th Ed.). Fort Worth, TX: Harcourt College Publishers.

Kohl, D. F. & Wilson, L. A. (1986). Effectiveness of course-integrated bibliographic instruction in improving coursework. RQ, 26, 206-211.

Kranich, N. C. (2000). Building partnerships for 21st-century literacy. American Libraries, 31(8), 7.

Kuhlthau, C. C. (1993). Seeking meaning: A process approach to library and information services. Norwood, NJ: Ablex.

Kunkel, L. R., Weaver, S. M., & Cook, K. N. (1996). What do they know?: An assessment of undergraduate library skills. Journal of Academic Librarianship, 22, 430-434.

Kuyper, L. A. & Dziuban, C. D. (1984). Passing scores and the Medical Record Administration Registration Examination. Journal of AMRA, 55, 29-30.

Linacre, J. M. (2004). Test validity and Rasch measurement: Construct, content, etc. Retrieved March 23, 2005, from http://www.rasch.org/rmt/rmt181h.htm

Linn, R. L. & Gronlund, N. (1995). Measurement and assessment in teaching (7th Ed.). New York: Macmillan.

Lord, F. M. (1974). Quick estimates of the relative efficiency of two tests as a function of ability level. Journal of Educational Measurement, 11, 247-254.

Lubans, Jr., J. (1983). Educating the public library user. Chicago: American Library Association.

Mager, R. F. (1997). Preparing instructional objectives: A critical tool in the development of effective instruction (3rd Ed.). Atlanta: Center for Effective Performance.

Maki, P. I. (2002). Developing an assessment plan to learn about student learning. Journal of Academic Librarianship, 28(1/2), 8-13.

Martin, B. L. (1989). A checklist for designing instruction in the affective domain. Educational Technology, 29, 7-15.

Maughan, P. D. (2001). Assessing information literacy among undergraduates: A discussion of the literature and the University of California-Berkeley experience. College & Research Libraries, 62, 71-85.

Mensching, T. B. (1987). Reducing library anxiety and defining teaching. Research Strategies, 5, 146-148.

Middle States Commission on Higher Education [MSCHE]. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia, PA: Middle States Commission on Higher Education.

Morner, C. J. (1993). A test of library research skills for education doctoral students. (Doctoral dissertation, Boston College, 1993). Dissertation Abstracts International A, 54/6, 2070.

Nahl-Jakobovits, D. & Jakobovits, L. A. (1993). Bibliographic instructional design for information literacy: Integrating affective and cognitive strategies. Research Strategies, 11, 73-88.

National Council for Accreditation of Teacher Education [NCATE]. (2002). Professional standards for accreditation of schools, colleges, and departments of education. Retrieved June 3, 2005, from http://www.ncate.org/documents/unit_stnds_2002.pdf

National Research Council, Committee on Information Technology Literacy. (1999). Being fluent with information technology. Washington, DC: National Academy Press.

New England Association of Schools and Colleges [NEASC]. (2005). Standards for accreditation. Retrieved August 5, 2005, from http://www.neasc.org/cihe/standards_for_accreditation_2005.pdf

Nitko, A. J. (1970). Criterion-referenced testing in the context of instruction. In Testing in turmoil: A conference on problems and issues in educational measurement. Greenwich, CT: Educational Records Bureau.

Nitko, A. J. (1974). Problems in the development of criterion-referenced tests: The IPI Pittsburgh experience. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 59-82). Los Angeles: Center for the Study of Evaluation.

Northwest Commission on Colleges and Universities [NWCCU]. (2003). Accreditation standards. Retrieved July 15, 2005, from http://www.nwccu.org/Standards%20and%20Policies/Accreditation%20Standards/Accreditation%20Standards.htm

Novick, M. R. & Lewis, C. (1974). Prescribing test length for criterion-referenced measurement. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 139-158). Los Angeles: Center for the Study of Evaluation.

O'Connor, L. G., Radcliff, C. J., & Gedeon, J. (2002). Applying systems design and item response theory to the problem of measuring information literacy skills. College and Research Libraries, 63(6), 528-543.

Patterson, C. D., & Howell, D. W. (1990). Library user education: Assessing the attitudes of those who teach. RQ, 29, 513-523.

Pew Internet & American Life Project. (2005). Search engine users: Internet searchers are confident, satisfied, and trusting – but they are also unaware and naïve. Retrieved June 27, 2005, from http://www.pewinternet.org/pdfs/PIP_Searchengine_users.pdf

Popham, W. J. (1974). Selecting objectives and generating test items for objectives-based tests. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 13-25). Los Angeles: Center for the Study of Evaluation.

Popham, W. J. & Husek, T. R. (1969). Implications of criterion-referenced measurement. Journal of Educational Measurement, 6, 1-9.

Postman, N. (2004). The Information Age: A blessing or a curse? Harvard International Journal of Press/Politics, 9(2), 3-10. (Reprinted from the Joan Shorenstein Center on the Press, Politics and Public Policy, 1995)

Project SAILS. (2001). Project SAILS: Project for the standardized assessment of information literacy skills. Retrieved March 5, 2004, from Kent State University, Project SAILS Web site: http://sails.lms.kent.edu/index.php

Rabine, J. L. & Cardwell, C. (2000). Start making sense: Practical approaches to outcomes assessment for libraries. Research Strategies, 17(4), 319-335.

Radcliff, C. J. (2005, June). Project SAILS update. Report presented at the annual conference of the American Library Association, Chicago, IL.

Rader, H. (1991). Information literacy: A revolution in the library. RQ, 31, 25-29.

Rader, H. (2000). A silver anniversary: 25 years of reviewing the literature related to user instruction. Reference Services Review, 28(3), 290-296.

Reitz, J. M. (2004). Dictionary for library and information science. Westport, CT: Libraries Unlimited.

Ren, W. H. (2000). Library instruction and college student self-efficacy in electronic information searching. Journal of Academic Librarianship, 26, 323-328.

Rettig, J. & Hagen, S. K. (2003). Stakeholders and strategies in information fluency. Transformations: Liberal Arts in the Digital Age, 1(1). Retrieved February 24, 2005, from http://www.colleges.org/transformations/index.php?q=node/view/7

Rockman, I. F. (2004). Introduction: The importance of information literacy. In I. F. Rockman (Ed.), Integrating information literacy into the higher education curriculum: Practical models for transformation (pp. 1-28). San Francisco: Jossey Bass.

Russell, M. & Haney, W. (1997). Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil. Education Policy Analysis Archives, 5(3). Retrieved March 15, 2005, from http://epaa.asu.edu/epaa/v5n3/

Salony, M. F. (1995). The history of bibliographic instruction: Changing trends from books to the electronic world. The Reference Librarian, 51/52, 31-51.

Schuck, B. R. (1992). Assessing a library instruction program. Research Strategies, 10, 152-160.

Shapiro, J. J. & Hughes, S. K. (1996). Information as a liberal art. Educom Review, 31(2), 31-36.

Shulman, L. S. (1999). Professing educational scholarship. In E. C. Lagemann & L. S. Shulman (Eds.), Issues in education research: Problems and possibilities (pp. 159-164). San Francisco: Jossey Bass.

Simon, G. B. (1969). Comments on "Implications of criterion-referenced measurement." Journal of Educational Measurement, 6, 259-260.

Skager, R. W. (1974). Creating criterion-referenced tests from objectives-based assessment systems: Unsolved problems in test development, assembly, and interpretation. In C. W. Harris, M. C. Alkin, & W. J. Popham (Eds.), Problems in criterion-referenced measurement [Center for the Study of Evaluation monograph series, No. 3] (pp. 47-58). Los Angeles: Center for the Study of Evaluation.

Sutton, B. (1993). The rationale for qualitative research: A review of principles and theoretical foundations. Library Quarterly, 63(4), 411-430.

Swaminathan, H., Hambleton, R. K., & Algina, J. (1974). Reliability of criterion-referenced tests: A decision-theoretic formulation. Journal of Educational Measurement, 11, 263-267.

Thompson, B. & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement, 56(2), 197-208.

Tierno, M. J. & Lee, J. H. (1983). Developing and evaluating library research skills in education: A model for course-integrated bibliographic instruction. RQ, 22, 284-291.

Tunon, J. (1999). Integrating bibliographic instruction for distance education doctoral students into the Child and Youth Studies program at Nova Southeastern University. Unpublished doctoral practicum, Nova Southeastern University, Fort Lauderdale, FL. (ERIC Document Reproduction Service No. ED440639)

University of Central Florida [UCF], Office of Institutional Research. (2004). University of Central Florida headcount by major and level: Final fall 2004 (all students). Retrieved May 22, 2005, from http://www.iroffice.ucf.edu/enrollment/2004-05/1ffal04_allstudents_w.pdf

Varian, H. (2003). How much information? 2003: Executive summary. Retrieved February 23, 2005, from University of California, Berkeley, School of Information Management and Systems Web site: http://www.sims.berkeley.edu/research/projects/how-much-info-2003

Varma, S. (n.d.). Computing and using point biserials for item analysis. Morgan Hill, CA: Educational Data Systems.

Wilbur, P. H. (1970). Positional response set among high school students on multiple-choice tests. Journal of Educational Measurement, 7, 161-163.

Zaporozhetz, L. E. (1987). The dissertation literature review: How faculty advisors prepare their doctoral candidates. (Doctoral dissertation, University of Oregon, 1987). Dissertation Abstracts International A, 48/11, 2820.

Zurkowski, P. G. (1974). The information service environment: Relationships and priorities. Washington, DC: National Commission on Libraries and Information Science.

