Publication Date

| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 4 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Computation | 5 |
| Maximum Likelihood Statistics | 5 |
| Models | 5 |
| Multiple Choice Tests | 5 |
| Item Response Theory | 4 |
| Test Items | 4 |
| Simulation | 3 |
| Comparative Analysis | 2 |
| Responses | 2 |
| Scoring | 2 |
| Ability | 1 |
Author

| Author | Count |
| --- | --- |
| Bolt, Daniel M. | 1 |
| Burket, George | 1 |
| Chen, Li-Sue | 1 |
| Chia, Mike | 1 |
| Gao, Furong | 1 |
| Ramsay, James O. | 1 |
| Shu, Lianghua | 1 |
| Suh, Youngsuk | 1 |
| Wang, Zhen | 1 |
| Wiberg, Marie | 1 |
| Wothke, Werner | 1 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Evaluative | 2 |
| Reports - Research | 2 |
| Reports - Descriptive | 1 |
Education Level

| Education Level | Count |
| --- | --- |
| Higher Education | 1 |
| Postsecondary Education | 1 |
Location

| Location | Count |
| --- | --- |
| Sweden | 1 |
| United States | 1 |
Assessments and Surveys

| Assessment | Count |
| --- | --- |
| National Assessment of… | 1 |
Ramsay, James O.; Wiberg, Marie – Journal of Educational and Behavioral Statistics, 2017
This article promotes the use of modern test theory in testing situations where sum scores for binary responses are currently used. It directly compares the efficiency and bias of classical and modern test analyses, finding an improvement of about 5% in the root mean squared error of ability estimates for two designed multiple-choice tests and…
Descriptors: Scoring, Test Theory, Computation, Maximum Likelihood Statistics
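The contrast in this entry, between a classical sum score and a model-based ability estimate, can be sketched in a few lines. This is a minimal illustration assuming a 2PL item response model and a simple grid-search maximum-likelihood estimator; the item parameters are hypothetical and this is not the authors' actual procedure.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response (illustrative model choice)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_ability(responses, a_params, b_params, grid=None):
    """Grid-search maximum-likelihood ability estimate for a binary pattern."""
    grid = grid or [i / 100.0 for i in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for u, a, b in zip(responses, a_params, b_params):
            p = p_2pl(theta, a, b)
            ll += math.log(p) if u == 1 else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# Hypothetical 4-item test: the classical sum score keeps only the number of
# correct answers, while the ML estimate also uses WHICH items were correct.
a = [1.0, 1.2, 0.8, 1.5]
b = [-1.0, 0.0, 0.5, 1.0]
sum_score = sum([1, 1, 0, 0])            # classical score: 2
theta_hat = ml_ability([1, 1, 0, 0], a, b)  # model-based ability estimate
```

Two response patterns with the same sum score can yield different `theta_hat` values, which is one source of the efficiency gain the article reports.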
Wang, Zhen; Yao, Lihua – ETS Research Report Series, 2013
The current study used simulated data to investigate the properties of a newly proposed method (Yao's rater model) for modeling rater severity and its distribution under different conditions. Our study examined the effects of rater severity, distributions of rater severity, the difference between item response theory (IRT) models with rater effect…
Descriptors: Test Format, Test Items, Responses, Computation
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike – Journal of Educational and Behavioral Statistics, 2011
It has been known for some time that item response theory (IRT) models may yield a likelihood function for a respondent's ability that has multiple modes, flat modes, or both. These conditions, often associated with guessing on multiple-choice (MC) questions, can introduce uncertainty and bias into ability estimation by maximum likelihood…
Descriptors: Educational Assessment, Item Response Theory, Computation, Maximum Likelihood Statistics
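The mode problem described above can be inspected numerically: evaluate the likelihood on a grid of ability values and count interior local maxima. A minimal sketch assuming a 3PL model with a guessing floor; the items and response pattern are hypothetical, chosen only to illustrate the diagnostic, not taken from the paper.

```python
import math

def p_3pl(theta, a, b, c):
    """3PL: lower asymptote c models guessing on multiple-choice items."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def loglik_3pl(theta, responses, items):
    """Log-likelihood of a binary response pattern at ability theta."""
    ll = 0.0
    for u, (a, b, c) in zip(responses, items):
        p = p_3pl(theta, a, b, c)
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

def local_maxima(responses, items, grid=None):
    """Count interior local maxima of the log-likelihood on a theta grid."""
    grid = grid or [i / 50.0 for i in range(-200, 201)]
    ll = [loglik_3pl(t, responses, items) for t in grid]
    return sum(1 for i in range(1, len(ll) - 1)
               if ll[i] > ll[i - 1] and ll[i] > ll[i + 1])

# Hypothetical items with a 0.25 guessing floor (4-option MC items).
items = [(2.0, -1.0, 0.25), (2.0, 0.0, 0.25), (2.0, 1.5, 0.25)]
# An aberrant pattern (missed the middle item, solved the others); the
# count can exceed 1, or the maximum can sit at the grid boundary, when
# guessing flattens the likelihood.
modes = local_maxima([1, 0, 1], items)
```

A flat or multimodal profile on such a grid is exactly the condition under which a hill-climbing ML routine can stall or converge to the wrong mode.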
Suh, Youngsuk; Bolt, Daniel M. – Psychometrika, 2010
Nested logit item response models for multiple-choice data are presented. Relative to previous models, the new models are suggested to better approximate multiple-choice items in which applying a solution strategy precedes consideration of the response options. In practice, the models also accommodate collapsibility across all…
Descriptors: Computation, Simulation, Psychometrics, Models
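The two-stage structure this entry describes (solve first, then pick among distractors if unsolved) can be sketched as follows. This is a simplified illustration of the nested-logit idea, assuming a 2PL "solve" step and a multinomial-logit distractor choice with made-up parameters; it is not the authors' exact parameterization.

```python
import math

def nested_mc_probs(theta, a, b, distractor_logits):
    """Category probabilities for one MC item under a nested structure:
    a 2PL 'solve' stage, then a logit choice among distractors if unsolved."""
    p_correct = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    z = [math.exp(g) for g in distractor_logits]   # distractor attractiveness
    total = sum(z)
    # First entry: the keyed option; remaining entries: the distractors.
    return [p_correct] + [(1.0 - p_correct) * zi / total for zi in z]

# Hypothetical item: discrimination 1.2, difficulty 0.3, three distractors.
probs = nested_mc_probs(0.0, 1.2, 0.3, [0.5, 0.0, -0.4])
```

Because the distractor probabilities are scaled by `1 - p_correct`, the four categories always sum to one, and collapsing the distractors recovers the binary 2PL item.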
van Barneveld, Christina – Alberta Journal of Educational Research, 2003
The purpose of this study was to examine the potential effect of false assumptions regarding the motivation of examinees on item calibration and test construction. A simulation study was conducted using data generated by means of several models of examinee item response behaviors (the three-parameter logistic model alone and in combination with…
Descriptors: Simulation, Motivation, Computation, Test Construction
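The simulation design in this entry rests on generating item responses from an assumed IRT model. A minimal generator, assuming a three-parameter logistic (3PL) model as the abstract mentions; the item parameters and ability values are hypothetical, and real studies would vary them systematically.

```python
import math
import random

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def simulate_responses(thetas, items, seed=0):
    """Draw one binary response matrix (examinees x items) under the 3PL."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p_3pl(t, a, b, c) else 0
             for (a, b, c) in items]
            for t in thetas]

# Hypothetical 3-item test with a 0.2 guessing floor, three examinees.
items = [(1.0, 0.0, 0.2), (1.3, -0.5, 0.2), (0.8, 1.0, 0.2)]
data = simulate_responses([-1.0, 0.0, 1.0], items)
```

Calibrating such data with a model that omits a behavior present in the generator (e.g. low motivation) is how a study like this exposes the effect of false assumptions.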