Showing all 13 results
Peer reviewed
PDF on ERIC
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Crabtree, Ashleigh R. – ProQuest LLC, 2016
The purpose of this research is to provide information about the psychometric properties of technology-enhanced (TE) items and the effects these items have on the content validity of an assessment. Specifically, this research investigated the impact that the inclusion of TE items has on the construct of a mathematics test, the technical properties…
Descriptors: Psychometrics, Computer Assisted Testing, Test Items, Test Format
Freedle, Roy; Kostin, Irene – 1993
Prediction of the difficulty (equated delta) of a large sample (n=213) of reading comprehension items from the Test of English as a Foreign Language (TOEFL) was studied using main idea, inference, and supporting statement items. A related purpose was to examine whether text and text-related variables play a significant role in predicting item…
Descriptors: Construct Validity, Difficulty Level, Multiple Choice Tests, Prediction
Martinez, Michael E.; Katz, Irvin R. – 1992
Contrasts between constructed response items and stem-equivalent multiple-choice counterparts typically have involved averaging item characteristics, and this aggregation has masked differences in statistical properties at the item level. Moreover, even aggregated format differences have not been explained in terms of differential cognitive…
Descriptors: Architecture, Cognitive Processes, Construct Validity, Constructed Response
Peer reviewed
Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1991
The relationship of multiple-choice and free-response items on the College Board's Advanced Placement Computer Science Examination was studied using confirmatory factor analysis. Results with 2 samples of 1,000 high school students suggested that the most parsimonious fit was achieved using a single factor. Implications for construct validity are…
Descriptors: Chi Square, College Entrance Examinations, Comparative Testing, Computer Science
Melancon, Janet G.; Thompson, Bruce – 1989
Classical measurement theory was used to investigate the measurement (psychometric) characteristics of both parts of the Finding Embedded Figures Test (FEFT) administered in either a "no guessing" supply format or a multiple-choice selection format to undergraduate college students or to middle school students. Three issues were…
Descriptors: Comparative Testing, Construct Validity, Higher Education, Junior High School Students
Peer reviewed
Curren, Randall R. – Theory and Research in Education, 2004
This article addresses the capacity of high stakes tests to measure the most significant kinds of learning. It begins by examining a set of philosophical arguments pertaining to construct validity and alleged conceptual obstacles to attributing specific knowledge and skills to learners. The arguments invoke philosophical doctrines of holism and…
Descriptors: Test Items, Educational Testing, Construct Validity, High Stakes Tests
Peer reviewed
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Bolton, David L.; And Others – 1989
A study was conducted to assess the validity of translations of two different forms of a licensing examination for cosmetologists in Florida, to ensure that Spanish-speaking and English-speaking candidates have equal chances of being licensed. The LISREL computer program was used to test the equivalence of factor structure, units of measurement, and standard…
Descriptors: Construct Validity, Cosmetology, English, Factor Analysis
Ward, William C.; And Others – 1986
The keylist format (rather than the conventional multiple-choice format) for item presentation provides a machine-scorable surrogate for a truly free-response test. In this format, the examinee is required to think of an answer, look it up in a long ordered list, and enter its number on an answer sheet. The introduction of keylist items into…
Descriptors: Analogy, Aptitude Tests, Construct Validity, Correlation
Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings can affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability
Melancon, Janet G.; Thompson, Bruce – 1990
Classical measurement theory was used to investigate measurement characteristics of both parts of the Finding Embedded Figures Test (FEFT) when the test was: administered in either a "no guessing" supply format or a multiple-choice selection format; administered to either undergraduate college students or middle school students; and…
Descriptors: Comparative Testing, Construct Validity, Guessing (Tests), Higher Education
Murchan, Damian P. – 1989
The reliability, content validity, and construct validity were compared for two test formats in a public examination used to assess a secondary school geography course. The 11-item geography portion of the Intermediate Certificate Examination (essay examination) was administered in June 1987 to 400 secondary school students in Ireland who also…
Descriptors: Achievement Tests, Comparative Testing, Construct Validity, Content Validity