Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
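The traditional parallel analysis the abstract refers to can be sketched in a few lines. This is generic Horn-style parallel analysis on a correlation matrix, not the revised or IRT-specific variants the paper evaluates; the simulated one-factor data and all thresholds here are illustrative assumptions.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain dimensions whose observed
    eigenvalues exceed the mean eigenvalues of comparable random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.zeros(p)
    for _ in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eigs += np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    rand_eigs /= n_iter
    return int(np.sum(obs_eigs > rand_eigs))

# Illustrative data: one strong common factor plus noise.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 1))
X = f @ np.ones((1, 6)) + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(X))  # 1 dimension recovered
```

For binary item responses, Pearson correlations understate the latent structure, which is one reason performance inside the IRT framework is a separate question.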
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for the adaptive tests with medium difficulty (probability of solution is p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
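The paper's closed formula is not reproduced in the abstract, but the ingredient it builds on is standard: when every response is answered with probability p = 1/2, the Fisher information per response for a Rasch difficulty parameter is p(1 - p) = 1/4, so the standard error shrinks as 2/sqrt(n). The sketch below shows only this textbook approximation, not the authors' formula.

```python
import math

def rasch_item_se(n_respondents, p=0.5):
    """Approximate SE of a Rasch item-difficulty estimate when each of n
    respondents answers with success probability p: information per
    response is p*(1-p), so SE = 1 / sqrt(n * p * (1 - p))."""
    return 1.0 / math.sqrt(n_respondents * p * (1 - p))

print(round(rasch_item_se(400), 3))  # 0.1, i.e. 2/sqrt(400)
```

At p = 1/2 the information per response is maximal, which is why medium-difficulty adaptive tests are the natural setting for such accuracy bounds.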
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
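For context, here is the unweighted characteristic-curve linking the IWCC/TWCC methods extend: choose constants A and B so the transformed test characteristic curve of the new form matches the old form's. This is a plain Stocking-Lord-style sketch with made-up item parameters and a grid search; it does not include the information weighting the paper proposes.

```python
import numpy as np

def icc(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def tcc_loss(A, B, a_new, b_new, a_old, b_old, theta):
    """Squared gap between test characteristic curves after putting the
    new form on the old scale via a -> a/A, b -> A*b + B."""
    tcc_old = sum(icc(theta, a, b) for a, b in zip(a_old, b_old))
    tcc_new = sum(icc(theta, a / A, A * b + B) for a, b in zip(a_new, b_new))
    return float(np.mean((tcc_old - tcc_new) ** 2))

# Recover a known transformation (A=1.2, B=0.5) from matched item pairs.
theta = np.linspace(-3, 3, 61)
a_old = np.array([1.0, 1.5, 0.8]); b_old = np.array([-0.5, 0.0, 1.0])
a_new = a_old * 1.2; b_new = (b_old - 0.5) / 1.2
grid = [(A, B) for A in np.arange(0.8, 1.6, 0.01)
               for B in np.arange(0.0, 1.0, 0.01)]
A_hat, B_hat = min(grid, key=lambda ab: tcc_loss(*ab, a_new, b_new,
                                                 a_old, b_old, theta))
print(round(A_hat, 2), round(B_hat, 2))  # recovers 1.2 and 0.5
```

With noise-free parameters the loss is zero at the true constants; the paper's point is that estimated parameters carry error, so equally weighting all items is suboptimal.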
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
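The regression-discontinuity framework itself is not shown in the abstract, but the traditional DIF machinery it improves on is easy to illustrate. Below is a Mantel-Haenszel common odds ratio across score strata, a standard observed-score DIF statistic; the counts are invented for illustration, and this is the approach the paper argues is prone to selection bias, not the proposed method.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across score strata.
    Each table is (ref_correct, ref_wrong, focal_correct, focal_wrong);
    OR far from 1 flags potential DIF on the studied item."""
    num = den = 0.0
    for rc, rw, fc, fw in tables:
        n = rc + rw + fc + fw
        num += rc * fw / n
        den += rw * fc / n
    return num / den

# Two strata, each with 2:1 odds favoring the reference group.
tables = [(40, 10, 20, 10), (30, 30, 10, 20)]
print(round(mantel_haenszel_or(tables), 2))  # 2.0
```

Stratifying on total score is exactly where selection bias can enter when accommodated and non-accommodated groups differ systematically, which motivates the regression-discontinuity alternative.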
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) oral proficiency interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: Assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning

