Publication Date

| Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 5 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Adaptive Testing | 5 |
| Computation | 5 |
| Item Response Theory | 5 |
| Statistical Analysis | 5 |
| Computer Assisted Testing | 4 |
| Bayesian Statistics | 2 |
| Classification | 2 |
| Comparative Analysis | 2 |
| Simulation | 2 |
| Test Items | 2 |
| Test Length | 2 |
Source

| Source | Count |
| --- | --- |
| Applied Psychological… | 2 |
| Educational and Psychological… | 1 |
| Journal of Educational… | 1 |
| Journal of Educational and… | 1 |
Author

| Author | Count |
| --- | --- |
| Finkelman, Matthew David | 1 |
| Kim, Sooyeon | 1 |
| Liu, Chen-Wei | 1 |
| Matteucci, Mariagiulia | 1 |
| Moses, Tim | 1 |
| Sinharay, Sandip | 1 |
| Veldkamp, Bernard P. | 1 |
| Wang, Wen-Chung | 1 |
| Yoo, Hanwook | 1 |
| de Jong, Martijn G. | 1 |
Publication Type

| Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Evaluative | 2 |
| Reports - Research | 2 |
| Reports - Descriptive | 1 |
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This inquiry investigates the accuracy of item response theory (IRT) proficiency estimators under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
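The abstract concerns IRT proficiency estimation. As background, a minimal Python sketch of one standard estimator, the expected a posteriori (EAP) estimate of theta under the 2PL model with a standard-normal prior, is shown below; this is a generic illustration, not the specific estimators compared in the study, and all function names are assumptions:

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_theta(responses, a, b):
    """EAP proficiency estimate: posterior mean of theta on a grid,
    using a standard-normal prior and a 2PL likelihood."""
    grid = [i / 10.0 for i in range(-40, 41)]  # theta in [-4, 4]
    weights = []
    for t in grid:
        prior = math.exp(-t * t / 2.0)
        like = 1.0
        for x, ai, bi in zip(responses, a, b):
            p = p_2pl(t, ai, bi)
            like *= p if x == 1 else (1.0 - p)
        weights.append(prior * like)
    total = sum(weights)
    return sum(t * w for t, w in zip(grid, weights)) / total
```

A mostly-correct response pattern pulls the estimate upward, a mostly-incorrect one downward; the normal prior shrinks short-test estimates toward zero.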
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2016
Meijer and van Krimpen-Stoop noted that the number of person-fit statistics (PFSs) that have been designed for computerized adaptive tests (CATs) is relatively modest. This article partially addresses that concern by suggesting three new PFSs for CATs. The statistics are based on tests for a change point and can be used to detect an abrupt change…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Goodness of Fit
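The change-point idea in this abstract can be illustrated with a minimal sketch: compare standardized mean residuals before and after each candidate change point and take the maximum, in CUSUM style. This is an assumption-laden stand-in for the article's actual statistics (names and the 2PL residual form are invented for illustration):

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def changepoint_statistic(responses, a, b, theta):
    """Max absolute standardized difference between mean residuals
    before and after each candidate change point k (a sketch only)."""
    n = len(responses)
    resid = [x - p_2pl(theta, ai, bi) for x, ai, bi in zip(responses, a, b)]
    var = [p_2pl(theta, ai, bi) * (1.0 - p_2pl(theta, ai, bi))
           for ai, bi in zip(a, b)]
    best = 0.0
    for k in range(1, n):
        diff = sum(resid[:k]) / k - sum(resid[k:]) / (n - k)
        se = math.sqrt(sum(var[:k]) / k ** 2 + sum(var[k:]) / (n - k) ** 2)
        best = max(best, abs(diff) / se)
    return best
```

An examinee who answers correctly early and then abruptly fails (e.g., fatigue or item preknowledge wearing off) produces a larger statistic than one with a stable response pattern.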
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G. – Applied Psychological Measurement, 2013
Item response theory parameters have to be estimated, and the estimation process leaves uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Descriptors: Test Construction, Test Items, Item Banks, Automation
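The robustness idea described here can be sketched minimally: instead of valuing each item by the Fisher information of its point estimates, value it by a lower quantile of information across draws of (a, b), so uncertain items are penalized. This is a hedged illustration, not the authors' algorithm; the greedy selection and quantile rule are assumptions:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def robust_item_values(theta, item_draws, alpha=0.05):
    """Value each item by a lower quantile of its information across
    parameter draws -- pessimistic, so uncertain items score lower."""
    values = []
    for draws in item_draws:
        infos = sorted(info_2pl(theta, a, b) for a, b in draws)
        values.append(infos[int(alpha * len(infos))])
    return values

def assemble(theta, item_draws, test_length):
    """Greedy robust assembly: take the items with the highest
    lower-quantile information at the target theta."""
    values = robust_item_values(theta, item_draws)
    ranked = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    return sorted(ranked[:test_length])
```

With point estimates, an item whose estimated information is high only because of a lucky draw can crowd out a genuinely informative item; the quantile rule guards against that.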
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
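The cut-score selection idea can be illustrated with a minimal sketch. The GGUM itself is involved, so a dichotomous 2PL model stands in for it here; the rule shown (target the cut score while the confidence interval around the current estimate still covers it, otherwise target the estimate) is an assumed paraphrase of the two methods named in the abstract, not their exact form:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at theta (stand-in for GGUM)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(pool, theta_hat, cut, se, z=1.96):
    """Classification-testing selection sketch: while the CI around
    theta_hat still covers the cut score, maximize information at the
    cut; once the CI clears the cut, maximize it at the estimate."""
    target = cut if abs(theta_hat - cut) < z * se else theta_hat
    return max(pool, key=lambda item: info_2pl(target, item[0], item[1]))
```

Early in the test, when classification is still in doubt, items near the cut score are favored; once the examinee is clearly above or below it, measurement shifts to their own level.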
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
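The stochastic-curtailment rule described in this abstract can be sketched with a simple binomial model: stop early when the probability of reaching (or of failing to reach) the mastery threshold is already near-certain. This is a toy number-correct version under an assumed per-item success probability, not Finkelman's procedure:

```python
from math import comb

def prob_reach(current_correct, remaining, needed, p):
    """Probability of finishing with at least `needed` total correct,
    given `remaining` items each answered correctly with probability p."""
    still_needed = max(0, needed - current_correct)
    if still_needed > remaining:
        return 0.0
    return sum(comb(remaining, k) * p ** k * (1 - p) ** (remaining - k)
               for k in range(still_needed, remaining + 1))

def curtail(current_correct, remaining, needed, p, gamma=0.95):
    """Stochastically curtail: halt once the eventual pass/fail
    classification is near-certain (probability >= gamma)."""
    pr = prob_reach(current_correct, remaining, needed, p)
    if pr >= gamma:
        return "stop: pass"
    if 1.0 - pr >= gamma:
        return "stop: fail"
    return "continue"
```

An examinee who has already secured the threshold, or who can no longer plausibly reach it, is released early; borderline cases keep testing.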

