Publication Date
| Date range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 3 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Bayesian Statistics | 5 |
| Difficulty Level | 5 |
| Comparative Analysis | 3 |
| Item Response Theory | 3 |
| Test Items | 3 |
| Computation | 2 |
| Accuracy | 1 |
| Achievement Tests | 1 |
| Adaptive Testing | 1 |
| Check Lists | 1 |
| Cluster Grouping | 1 |
Source
| Source | Results |
| --- | --- |
| Journal of Educational… | 5 |
Author
| Author | Results |
| --- | --- |
| Ames, Allison | 1 |
| Bejar, Isaac I. | 1 |
| De Boeck, Paul | 1 |
| Frederickx, Sofie | 1 |
| Kim, Sooyeon | 1 |
| Li, Yuan H. | 1 |
| Lissitz, Robert W. | 1 |
| Magis, David | 1 |
| Moses, Tim | 1 |
| Smith, Elizabeth | 1 |
| Tuerlinckx, Francis | 1 |
Publication Type
| Publication type | Results |
| --- | --- |
| Journal Articles | 5 |
| Reports - Research | 3 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
Ames, Allison; Smith, Elizabeth – Journal of Educational Measurement, 2018
Bayesian methods incorporate model parameter information prior to data collection. Eliciting information from content experts is an option, but has seen little implementation in Bayesian item response theory (IRT) modeling. This study aims to use ethical reasoning content experts to elicit prior information and incorporate this information into…
Descriptors: Item Response Theory, Bayesian Statistics, Ethics, Specialists
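The elicitation step this abstract describes can be sketched numerically: an expert-informed prior on an item parameter is combined with response data to form a posterior. A minimal illustration, assuming a Normal expert prior on a Rasch item difficulty with a simple grid approximation (the prior mean/sd, abilities, and responses below are hypothetical, not taken from the study):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def posterior_b(responses, thetas, prior_mean=0.5, prior_sd=0.4, grid_n=201):
    """Posterior mean of item difficulty b via grid approximation,
    with a Normal prior standing in for expert-elicited information."""
    grid = [-3 + 6 * i / (grid_n - 1) for i in range(grid_n)]
    log_post = []
    for b in grid:
        lp = -0.5 * ((b - prior_mean) / prior_sd) ** 2  # log Normal prior (unnormalized)
        for y, th in zip(responses, thetas):
            p = rasch_p(th, b)
            lp += math.log(p if y == 1 else 1.0 - p)    # Bernoulli likelihood
        log_post.append(lp)
    m = max(log_post)
    w = [math.exp(lp - m) for lp in log_post]
    z = sum(w)
    return sum(b * wi for b, wi in zip(grid, w)) / z

# Five simulated responses from examinees with known abilities (toy data):
b_hat = posterior_b([1, 0, 1, 1, 0], [0.2, -0.5, 1.0, 0.8, -1.2])
```

With only a few responses, the posterior mean stays close to the elicited prior mean, which is exactly the behavior that makes careful elicitation matter.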
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This study investigates the accuracy of item response theory (IRT) proficiency estimators under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
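One of the proficiency estimators typically compared in such studies is the expected a posteriori (EAP) estimate. A minimal sketch under the 2PL model, with a crude number-correct routing rule standing in for MST module assignment (all item parameters and responses are made up for illustration, not the study's design):

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_theta(responses, items, grid_n=161):
    """EAP ability estimate with a standard Normal prior, by quadrature."""
    grid = [-4 + 8 * i / (grid_n - 1) for i in range(grid_n)]
    weights = []
    for th in grid:
        log_w = -0.5 * th * th  # N(0, 1) prior (log, unnormalized)
        for y, (a, b) in zip(responses, items):
            p = p2pl(th, a, b)
            log_w += math.log(p if y else 1.0 - p)
        weights.append(math.exp(log_w))
    z = sum(weights)
    return sum(th * w for th, w in zip(grid, weights)) / z

# Stage-1 router module, then a Stage-2 module picked by number-correct
# score -- a simplified stand-in for an MST routing rule.
stage1 = [(1.0, -0.5), (1.2, 0.0), (0.9, 0.5)]   # (a, b) pairs
r1 = [1, 1, 0]
stage2 = [(1.1, 0.8), (1.0, 1.2)] if sum(r1) >= 2 else [(1.0, -1.2), (1.1, -0.8)]
theta_hat = eap_theta(r1 + [1, 0], stage1 + stage2)
```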
Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David – Journal of Educational Measurement, 2010
In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
Descriptors: Test Bias, Models, Test Items, Difficulty Level
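The data-generating idea behind a random item mixture can be sketched by simulation: item difficulties are random draws, and a latent mixture indicator flags DIF items that receive a group-specific difficulty shift. A toy sketch (the mixing proportion, shift size, and sample sizes are illustrative assumptions, not the paper's values):

```python
import math
import random

random.seed(7)

def simulate_rim(n_items=20, n_persons=200, pi_dif=0.2, shift=0.8):
    """Simulate Rasch responses with random item difficulties and a
    mixture component that shifts difficulty for DIF items in one group."""
    dif_flags = [random.random() < pi_dif for _ in range(n_items)]
    base_b = [random.gauss(0.0, 1.0) for _ in range(n_items)]  # random item difficulties
    data = []
    for p in range(n_persons):
        theta = random.gauss(0.0, 1.0)           # random person ability
        group = 1 if p < n_persons // 2 else 0   # focal vs. reference group
        row = []
        for j in range(n_items):
            b = base_b[j] + (shift if (dif_flags[j] and group) else 0.0)
            prob = 1.0 / (1.0 + math.exp(-(theta - b)))
            row.append(1 if random.random() < prob else 0)
        data.append(row)
    return data, dif_flags

data, flags = simulate_rim()
```

Estimating the mixture from such data (rather than simulating it) is the hard part the paper addresses; the sketch only shows the assumed structure.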
Peer reviewed – Bejar, Isaac I. – Journal of Educational Measurement, 1980
Two procedures are presented for detecting violations of the unidimensionality assumption made by latent trait models without requiring factor analysis of inter-item correlation matrices. Both procedures require that departures from unidimensionality be hypothesized beforehand. This is usually possible in achievement tests where several content…
Descriptors: Achievement Tests, Bayesian Statistics, Cluster Grouping, Content Analysis
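The requirement that departures from unidimensionality be hypothesized beforehand can be illustrated with a much cruder check than the paper's procedures: if content clusters are specified in advance, subscores computed within each cluster should correlate highly when the test is unidimensional. A toy sketch of that idea only (not Bejar's actual method; the data are invented):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Each row is one examinee's 0/1 responses; columns 0-2 and 3-5 are the
# two content clusters hypothesized in advance (toy data).
responses = [
    [1, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 0],
]
sub_a = [sum(r[:3]) for r in responses]
sub_b = [sum(r[3:]) for r in responses]
r_ab = pearson(sub_a, sub_b)  # low values would flag the hypothesized violation
```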
Li, Yuan H.; Lissitz, Robert W. – Journal of Educational Measurement, 2004
The analytically derived asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be approximated by a mathematical function without examinees' responses to test items, and the empirically determined SEs of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Test Items, Computation, Item Response Theory, Error of Measurement
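The "SE without examinees' responses" idea can be sketched for the simplest case: the asymptotic SE of an ML Rasch difficulty estimate follows from the expected Fisher information, which depends only on the item parameter, the sample size, and an assumed ability distribution. A minimal sketch assuming N(0, 1) abilities and a simple quadrature grid (both are illustrative choices, not the authors' derivation):

```python
import math

def rasch_info_b(b, n_examinees, grid_n=101):
    """Expected Fisher information for Rasch difficulty b, integrating
    p(theta) * (1 - p(theta)) over an assumed N(0, 1) ability distribution."""
    info = 0.0
    total_w = 0.0
    for i in range(grid_n):
        theta = -4 + 8 * i / (grid_n - 1)
        w = math.exp(-0.5 * theta * theta)           # unnormalized N(0, 1) weight
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        info += w * p * (1.0 - p)
        total_w += w
    return n_examinees * info / total_w

# Asymptotic SE for a mid-difficulty item calibrated on 1,000 examinees:
se_b = 1.0 / math.sqrt(rasch_info_b(b=0.0, n_examinees=1000))
```

Nothing in the computation touches observed responses, which is the point of the analytic approximation the abstract describes.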

