Showing all 7 results
Schnipke, Deborah L. – 1996
When running out of time on a multiple-choice test, some examinees are likely to respond rapidly to the remaining unanswered items in an attempt to get some items right by chance. Because these responses will tend to be incorrect, the presence of "rapid-guessing behavior" could cause these items to appear to be more difficult than they…
Descriptors: Difficulty Level, Estimation (Mathematics), Guessing (Tests), Item Response Theory
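The bias mechanism this abstract describes is easy to demonstrate in simulation. The sketch below is mine, not the paper's: it generates Rasch-model responses, overwrites the last ten items for a subset of simulees with chance-level guesses, and compares the proportion correct on an end-of-test item under the two conditions. Sample sizes, the speeded fraction, and the 1-in-5 guessing rate are all illustrative assumptions.

```python
# Sketch: rapid guessing at the end of a timed test depresses the
# proportion correct, making items look harder. Assumes a Rasch model
# and five-option items; values are illustrative, not from Schnipke.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 40
theta = rng.normal(0, 1, n_persons)           # person abilities
b = np.linspace(-1.5, 0.5, n_items)           # easy-to-moderate items

# Solution behavior: Rasch probability of a correct response.
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
resp = (rng.random((n_persons, n_items)) < p).astype(int)

# 30% of examinees run out of time and rapid-guess the last 10 items
# (chance success = 1/5 on five-option multiple choice).
speeded = rng.random(n_persons) < 0.30
guesses = (rng.random((n_persons, 10)) < 0.20).astype(int)
resp_speeded = resp.copy()
resp_speeded[speeded, -10:] = guesses[speeded]

# Classical difficulty: proportion correct on the final item.
print("p-value without rapid guessing:", resp[:, -1].mean())
print("p-value with rapid guessing:   ", resp_speeded[:, -1].mean())
```

Because the 0.20 chance rate falls below the model-implied success rate for most examinees on these items, the contaminated p-value drops and the item appears more difficult, which is exactly the distortion the abstract warns about.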
Schnipke, Deborah L. – 1999
When running out of time on a multiple-choice test such as the Law School Admission Test (LSAT), some test takers are likely to respond rapidly to the remaining unanswered items in an attempt to get some items right by chance. Because these responses will tend to be incorrect, the presence of rapid-guessing behavior could cause these items to…
Descriptors: College Entrance Examinations, Difficulty Level, Estimation (Mathematics), Guessing (Tests)
Gershon, Richard C.; And Others – 1994
A 1992 study by R. Gershon found discrepancies when comparing the theoretical Rasch item characteristic curve with the average empirical curve for 1,304 vocabulary items administered to 7,711 students. When person-item mismatches were deleted (for any person-item interaction where the ability of the person was much higher or much lower than the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
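A minimal sketch of the kind of theoretical-versus-empirical comparison described here, assuming a Rasch model and invented data rather than anything from the study: bin simulees by ability and set the observed proportion correct in each bin against the theoretical item characteristic curve.

```python
# Sketch: compare a theoretical Rasch item characteristic curve with
# an empirical curve built from ability bins. Purely illustrative;
# not Gershon's data or method.
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(0, 1, 5000)                # person abilities
b = 0.5                                       # one item's difficulty

p_true = 1 / (1 + np.exp(-(theta - b)))       # Rasch ICC
resp = (rng.random(5000) < p_true).astype(int)

# Empirical curve: proportion correct within ability bins.
edges = np.linspace(-3, 3, 13)
mids = (edges[:-1] + edges[1:]) / 2
idx = np.digitize(theta, edges) - 1
for k, m in enumerate(mids):
    in_bin = idx == k
    if in_bin.any():
        emp = resp[in_bin].mean()
        theo = 1 / (1 + np.exp(-(m - b)))
        print(f"theta ~ {m:+.2f}  empirical {emp:.2f}  Rasch {theo:.2f}")
```

A mismatch-deletion rule like the one the abstract mentions would drop person-item pairs whose |theta - b| exceeds some cutoff before rebuilding the empirical curve.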
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple-choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)
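To make the recalibration question concrete, here is a loose illustration, not the study's live-data simulation: generate responses from a two-dimensional compensatory model, then recalibrate one item with a unidimensional Rasch model that only sees the first dimension. Loadings, the intercept, the ability correlation, and sample size are assumptions.

```python
# Sketch: bidimensional data recalibrated unidimensionally.
# Illustrative only; not Sykes and Ito's design.
import numpy as np

rng = np.random.default_rng(2)
n = 3000
theta = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], n)

# One item loading mostly on the second dimension.
a = np.array([0.3, 1.2])                      # discriminations
p = 1 / (1 + np.exp(-(theta @ a - 0.8)))      # -0.8 = intercept
resp = (rng.random(n) < p).astype(int)

# Unidimensional Rasch recalibration using theta[:, 0] only:
# solve sum_i P(theta_i1, b) = sum_i x_i for b with Newton's method.
b = 0.0
for _ in range(25):
    q = 1 / (1 + np.exp(-(theta[:, 0] - b)))
    f = q.sum() - resp.sum()                  # score equation
    fp = -(q * (1 - q)).sum()                 # derivative w.r.t. b
    b -= f / fp
print("recalibrated unidimensional difficulty:", round(b, 3))
```

The recovered difficulty blends the item's intercept with whatever the ignored second dimension contributed, which is the kind of distortion an adaptive recalibration study has to worry about.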
DeMars, Christine – 1998
Using data from a pilot test of science and math given to students in 30 high schools, item difficulties were estimated with a one-parameter model (a partial-credit model for the multi-point items). Some items were multiple-choice and others were constructed-response (open-ended). Four sets of estimates were obtained: estimates for males…
Descriptors: Constructed Response, Difficulty Level, Estimation (Mathematics), Goodness of Fit
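A hedged sketch of the group-wise estimation idea: it uses the crude logit of proportion correct as a difficulty proxy instead of the paper's one-parameter/partial-credit estimation, and the groups, sample sizes, and 0.3 mean-ability gap are invented.

```python
# Sketch: estimate item difficulties separately for two groups and
# compare after centering (Rasch difficulties are identified only up
# to a constant). Logit-of-p is a stand-in for real 1PL estimation.
import numpy as np

rng = np.random.default_rng(3)
n_per_group, n_items = 1500, 20
b_true = np.linspace(-1.5, 1.5, n_items)

def simulate(mean_theta):
    theta = rng.normal(mean_theta, 1, n_per_group)
    p = 1 / (1 + np.exp(-(theta[:, None] - b_true)))
    return (rng.random((n_per_group, n_items)) < p).astype(int)

resp_a, resp_b = simulate(0.0), simulate(0.3)  # 0.3 = group impact

def logit_difficulty(resp):
    p = resp.mean(axis=0).clip(1e-6, 1 - 1e-6)
    d = np.log((1 - p) / p)                    # higher = harder
    return d - d.mean()                        # center the scale

gap = logit_difficulty(resp_a) - logit_difficulty(resp_b)
print("largest between-group difficulty gap:", np.abs(gap).max().round(3))
```

With no differential functioning built into the simulation, the centered gaps reflect only sampling noise; real data in which some items favor one group would show larger item-level gaps.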
Peer reviewed
Westers, Paul; Kelderman, Henk – Psychometrika, 1992
A method for analyzing test-item responses is proposed to examine differential item functioning (DIF) in multiple-choice items within the latent class framework. Different models for detection of DIF are formulated, defining the subgroup as a latent variable. An efficient estimation method is described and illustrated. (SLD)
Descriptors: Chi Square, Difficulty Level, Educational Testing, Equations (Mathematics)
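One way to write down the kind of model this abstract sketches, as an assumption on my part rather than the authors' exact formulation: with latent classes g = 1, ..., G in proportions \pi_g, a latent-class Rasch model for person j and item i is

    P(X_{ij} = 1) = \sum_g \pi_g \exp(\theta_g - b_{ig}) / (1 + \exp(\theta_g - b_{ig})),

and item i exhibits DIF when its difficulty b_{ig} varies across the latent classes. A no-DIF model constrains b_{ig} = b_i for every class, so the two nested models can be compared with a chi-square test.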
Peer reviewed
Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT)--students choose the difficulty level of their test items--was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
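The procedural contrast between the two designs can be sketched in a few lines; the item bank, the five difficulty buckets, and the selection rules below are illustrative assumptions, not the study's instruments. A self-adapted test lets the examinee pick a difficulty bucket, while a CAT selects the unused item whose difficulty is closest to the current ability estimate, which is where Rasch item information peaks.

```python
# Sketch: self-adapted vs. computer-adaptive item selection.
# Illustrative only; not Wise et al.'s materials.
import numpy as np

item_bank = np.linspace(-2.5, 2.5, 50)        # item difficulties

def cat_pick(theta_hat, used):
    # CAT rule: most informative unused item, i.e. the one whose
    # difficulty is nearest the current ability estimate.
    free = [i for i in range(len(item_bank)) if i not in used]
    return min(free, key=lambda i: abs(item_bank[i] - theta_hat))

def self_adapted_pick(level, used):
    # SFAT rule: the examinee chooses one of five unit-width
    # difficulty buckets (0 = easiest); the test draws an unused
    # item from that bucket.
    lo, hi = -2.5 + level, -1.5 + level
    free = [i for i in range(len(item_bank))
            if i not in used and lo <= item_bank[i] < hi]
    return free[0] if free else None          # None: bucket exhausted

print("CAT at theta=0.4 picks b =", round(item_bank[cat_pick(0.4, set())], 2))
print("SFAT bucket 1 picks b =", round(item_bank[self_adapted_pick(1, set())], 2))
```

The difference the study measured (higher ability scores and lower posttest state anxiety under self-adapted testing) turns on who controls that difficulty choice, not on the scoring model.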