Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 3 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Decision Making | 3 |
| Probability | 3 |
| Test Items | 3 |
| Item Response Theory | 2 |
| Models | 2 |
| Ability | 1 |
| Accuracy | 1 |
| Classification | 1 |
| Computer Assisted Testing | 1 |
| Difficulty Level | 1 |
| Elementary Secondary Education | 1 |
Author
| Author | Records |
| --- | --- |
| Emons, Wilco H. M. | 1 |
| Hauser, Carl | 1 |
| He, Wei | 1 |
| Jones, Phil | 1 |
| Kruyen, Peter M. | 1 |
| Ma, Lingling | 1 |
| Shulruf, Boaz | 1 |
| Sijtsma, Klaas | 1 |
| Thum, Yeow Meng | 1 |
| Turner, Rolf | 1 |
Publication Type
| Publication type | Records |
| --- | --- |
| Journal Articles | 3 |
| Reports - Research | 3 |
Education Level
| Education level | Records |
| --- | --- |
| Elementary Secondary Education | 1 |
| Higher Education | 1 |
| Postsecondary Education | 1 |
Shulruf, Boaz; Jones, Phil; Turner, Rolf – Higher Education Studies, 2015
The determination of pass/fail decisions over borderline grades (i.e., grades that do not clearly distinguish between competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from that…
Descriptors: Undergraduate Students, Pass Fail Grading, Decision Making, Probability
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
Kruyen, Peter M.; Emons, Wilco H. M.; Sijtsma, Klaas – International Journal of Testing, 2012
Personnel selection shows an enduring need for short stand-alone tests consisting of, say, 5 to 15 items. Despite their efficiency, short tests are more vulnerable to measurement error than longer test versions. Consequently, the question arises as to what extent reducing test length deteriorates decision quality due to the increased impact of…
Descriptors: Measurement, Personnel Selection, Decision Making, Error of Measurement
