Tannenbaum, Richard J.; Kane, Michael T. – ETS Research Report Series, 2019
Testing programs are often classified as high or low stakes to indicate how stringently they need to be evaluated. In practice, however, this classification falls short: a high-stakes label is taken to imply that all indicators of measurement quality must meet high standards, whereas a low-stakes label is taken to imply the opposite. This approach…
Descriptors: High Stakes Tests, Testing Programs, Measurement, Evaluation Criteria
Moses, Tim; Liu, Jinghua; Tan, Adele; Deng, Weiling; Dorans, Neil J. – ETS Research Report Series, 2013
In this study, differential item functioning (DIF) methods using 14 different matching variables were applied to assess DIF in the constructed-response (CR) items from 6 forms of 3 mixed-format tests. Results suggested that the methods might produce distinct patterns of DIF results for different tests and testing programs, in that the DIF…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Item Analysis