Showing all 10 results
Peer reviewed
DeCarlo, Lawrence T. – Journal of Educational Measurement, 2023
A conceptualization of multiple-choice exams in terms of signal detection theory (SDT) leads to simple measures of item difficulty and item discrimination that are closely related to, but also distinct from, those used in classical item analysis (CIA). The theory defines a "true split," depending on whether or not examinees know an item,…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Test Wiseness
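The abstract contrasts the SDT-based measures with those of classical item analysis (CIA). As a point of reference, a minimal sketch of the two standard CIA statistics (not the paper's SDT measures): item difficulty as the proportion of examinees answering correctly, and item discrimination as the point-biserial correlation between the item score and the total test score. All data below are hypothetical.

```python
import math

def item_difficulty(item_scores):
    """Classical difficulty: proportion correct (higher = easier item)."""
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    """Classical discrimination: correlation between a 0/1 item score
    and each examinee's total test score."""
    n = len(item_scores)
    mean_x = sum(item_scores) / n
    mean_y = sum(total_scores) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(item_scores, total_scores)) / n
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in item_scores) / n)
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in total_scores) / n)
    return cov / (sd_x * sd_y)

# Hypothetical responses to one item (1 = correct, 0 = incorrect),
# alongside each examinee's total score on the whole test.
item = [1, 1, 0, 1, 0, 0, 1, 1]
totals = [30, 28, 12, 25, 10, 15, 27, 22]
print(item_difficulty(item))                  # 0.625
print(round(point_biserial(item, totals), 3))
```

A discriminating item, in CIA terms, is one whose point-biserial is clearly positive: high scorers tend to get it right and low scorers tend to miss it.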
Peer reviewed
Wind, Stefanie A. – Language Testing, 2023
Researchers frequently evaluate rater judgments in performance assessments for evidence of differential rater functioning (DRF), which occurs when rater severity is systematically related to construct-irrelevant student characteristics after controlling for student achievement levels. However, researchers have observed that methods for detecting…
Descriptors: Evaluators, Decision Making, Student Characteristics, Performance Based Assessment
Peer reviewed
Paul J. Walter; Edward Nuhfer; Crisel Suarez – Numeracy, 2021
We introduce an approach for making a quantitative comparison of the item response curves (IRCs) of any two populations on a multiple-choice test instrument. In this study, we employ simulated and actual data. We apply our approach to a dataset of 12,187 participants on the 25-item Science Literacy Concept Inventory (SLCI), which includes ample…
Descriptors: Item Analysis, Multiple Choice Tests, Simulation, Data Analysis
Peer reviewed
Pelánek, Radek; Effenberger, Tomáš; Kukucka, Adam – Journal of Educational Data Mining, 2022
We study the automatic identification of educational items worthy of content authors' attention. Based on the results of such analysis, content authors can revise and improve the content of learning environments. We provide an overview of item properties relevant to this task, including difficulty and complexity measures, item discrimination, and…
Descriptors: Item Analysis, Identification, Difficulty Level, Case Studies
Peer reviewed
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability
Peer reviewed
Barrett, Richard S. – Public Personnel Management, 1992
The Content Validation Form is presented as a means of proving that occupational tests provide a representative sample of the work, or of the knowledge, skills, or abilities, necessary for a job. It is best used during test construction by a panel of subject matter experts. (SK)
Descriptors: Content Validity, Item Analysis, Multiple Choice Tests, Occupational Tests
Peer reviewed
Kim, Jee-Seon – Journal of Educational Measurement, 2006
Simulation and real data studies are used to investigate the value of modeling multiple-choice distractors on item response theory linking. Using the characteristic curve linking procedure for Bock's (1972) nominal response model presented by Kim and Hanson (2002), all-category linking (i.e., a linking based on all category characteristic curves…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Simulation
Yen, Wendy M. – 1979
Three test-analysis models were used to analyze three types of simulated test score data plus the results of eight achievement tests. Chi-square goodness-of-fit statistics were used to evaluate the appropriateness of the models to the four kinds of data. Data were generated to simulate the responses of 1,000 students to 36 pseudo-items by…
Descriptors: Achievement Tests, Correlation, Goodness of Fit, Item Analysis
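The abstract describes evaluating model-data fit with chi-square goodness-of-fit statistics. A minimal sketch of that general idea (not Yen's specific analysis): compare observed response counts in score groups against the counts a fitted model predicts, summing the squared, scaled discrepancies. All numbers below are hypothetical.

```python
def chi_square_gof(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over cells.
    Larger values indicate worse agreement between data and model."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed numbers of correct responses in five score groups, versus
# the counts predicted by a fitted test-analysis model (hypothetical).
observed = [12, 25, 40, 55, 68]
expected = [10.0, 27.0, 42.0, 53.0, 70.0]
print(round(chi_square_gof(observed, expected), 3))  # 0.776
```

In practice the statistic is referred to a chi-square distribution whose degrees of freedom depend on the number of cells and fitted parameters; a small value, as here, suggests the model reproduces the observed counts well.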
Civil Service Commission, Washington, DC. Personnel Research and Development Center. – 1976
This pamphlet reprints three papers and an invited discussion of them, read at a Division 5 Symposium at the 1975 American Psychological Association Convention. The first paper describes a Bayesian tailored testing process and shows how it demonstrates the importance of using test items with high discrimination, low guessing probability, and a…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Oriented Programs, Computer Programs
Peer reviewed
Cohen, Andrew D.; Upton, Thomas A. – ETS Research Report Series, 2006
This study describes the reading and test-taking strategies that test takers used in the Reading section of the LanguEdge courseware (ETS, 2002a). These materials were developed to familiarize prospective respondents with the new TOEFL®. The investigation focused on strategies used to respond to more traditional single-selection multiple-choice…
Descriptors: Reading Tests, Test Items, Courseware, Item Analysis