Harold Doran; Tetsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
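The generalized objective function itself is not given in this abstract. As context, a minimal sketch of the classic baseline such algorithms generalize — maximum Fisher information item selection under the two-parameter logistic (2PL) model — looks like this (illustrative only, not the authors' method):

```python
import math

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b)
    at the current ability estimate theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, items, administered):
    """Pick the not-yet-administered item with maximum information at theta.

    items: list of (a, b) tuples; administered: set of item indices already used.
    """
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(items):
        if idx in administered:
            continue
        info = fisher_info(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best
```

A real CAT engine layers exposure control and content constraints on top of this greedy rule, which is where a generalized objective function becomes useful.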
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
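The abstract's core point — that capable examinees gain an advantage by choosing wisely — can be seen in a toy Rasch-model calculation (an illustration of the problem, not the authors' new model class):

```python
import math

def p_correct(theta, b):
    """Rasch probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected number-correct over a set of answered items."""
    return sum(p_correct(theta, b) for b in difficulties)

def best_choice(theta, difficulties, n_choose):
    """A 'wise' examinee answers the n items with the highest success probability."""
    return sorted(difficulties, key=lambda b: -p_correct(theta, b))[:n_choose]
```

Because the wise subset yields a higher expected score than an arbitrary subset at the same ability, standard models that ignore the choice process conflate ability with choosing skill.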
Wang, Wen-Chung; Wu, Shiu-Lien – Journal of Educational Measurement, 2011
Rating scale items have been widely used in educational and psychological tests. These items require people to make subjective judgments, and these subjective judgments usually involve randomness. To account for this randomness, Wang, Wilson, and Shih proposed the random-effect rating scale model in which the threshold parameters are treated as…
Descriptors: Rating Scales, Models, Statistical Analysis, Computation
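For context, the fixed-threshold base that the random-effect model extends is Andrich's rating scale model; a minimal sketch of its category probabilities, plus an illustrative per-rating threshold perturbation (the random-effect idea, not the authors' exact parameterization):

```python
import math
import random

def rsm_probs(theta, b, taus):
    """Category probabilities under the Andrich rating scale model.

    taus: thresholds tau_1..tau_m (tau_0 is fixed at 0). Returns m+1 probabilities.
    """
    logits, cum = [0.0], 0.0
    for tau in taus:
        cum += theta - b - tau
        logits.append(cum)
    mx = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - mx) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def random_effect_rsm_probs(theta, b, taus, tau_sd, rng):
    """Illustrative random-effect variant: jitter each threshold per rating,
    modeling the randomness in subjective judgments."""
    jittered = [t + rng.gauss(0.0, tau_sd) for t in taus]
    return rsm_probs(theta, b, jittered)
```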
Jiao, Hong; Wang, Shudong; He, Wei – Journal of Educational Measurement, 2013
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Descriptors: Computation, Item Response Theory, Models, Monte Carlo Methods
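The WinBUGS estimation in the study is far more involved; the basic MCMC mechanic it relies on can be sketched as a one-parameter Metropolis sampler for a single examinee's ability under a plain Rasch model with a standard normal prior (a didactic sketch, not the testlet-model setup):

```python
import math
import random

def log_lik(theta, responses, difficulties):
    """Rasch log-likelihood of a 0/1 response vector."""
    ll = 0.0
    for x, b in zip(responses, difficulties):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        ll += math.log(p) if x else math.log(1.0 - p)
    return ll

def metropolis_theta(responses, difficulties, n_iter=2000, step=0.5, seed=0):
    """Random-walk Metropolis draws from p(theta | responses), prior N(0, 1)."""
    rng, theta, draws = random.Random(seed), 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        log_ratio = (log_lik(prop, responses, difficulties) - 0.5 * prop * prop) \
                  - (log_lik(theta, responses, difficulties) - 0.5 * theta * theta)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            theta = prop
        draws.append(theta)
    return draws
```

A testlet model adds a random effect per person-testlet pair to the logit, and MCMC samples those effects jointly with abilities and difficulties.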
Briggs, Derek C.; Wilson, Mark – Journal of Educational Measurement, 2007
An approach called generalizability in item response modeling (GIRM) is introduced in this article. The GIRM approach essentially incorporates the sampling model of generalizability theory (GT) into the scaling model of item response theory (IRT) by making distributional assumptions about the relevant measurement facets. By specifying a random…
Descriptors: Markov Processes, Generalizability Theory, Item Response Theory, Computation
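The GT-style assumption GIRM builds on — treating both persons and items as random facets with variance components — can be illustrated with a toy crossed person-by-item decomposition (illustrative only; GIRM itself estimates these quantities within an IRT scaling model):

```python
import random

def variance_components(logits):
    """ANOVA-style person and item variance components from a fully crossed
    person-by-item matrix of true logits (theta_p - b_i)."""
    P, I = len(logits), len(logits[0])
    grand = sum(sum(row) for row in logits) / (P * I)
    row_means = [sum(row) / I for row in logits]
    col_means = [sum(logits[p][i] for p in range(P)) / P for i in range(I)]
    var_person = sum((m - grand) ** 2 for m in row_means) / (P - 1)
    var_item = sum((m - grand) ** 2 for m in col_means) / (I - 1)
    return var_person, var_item

def simulate(n_persons=200, n_items=30, sd_theta=1.0, sd_b=0.8, seed=0):
    """Both facets drawn at random, as in the GT sampling model."""
    rng = random.Random(seed)
    thetas = [rng.gauss(0.0, sd_theta) for _ in range(n_persons)]
    bs = [rng.gauss(0.0, sd_b) for _ in range(n_items)]
    return [[t - b for b in bs] for t in thetas]
```

Recovering variances close to the generating values (1.0 for persons, 0.64 for items) is the GT half of the approach; GIRM's contribution is doing this and IRT scaling in one model.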