Publication Date
| In 2026 | 0 |
| Since 2025 | 8 |
Descriptor
| Computer Assisted Testing | 8 |
| Item Response Theory | 8 |
| Adaptive Testing | 5 |
| Test Items | 5 |
| Test Format | 3 |
| Accuracy | 2 |
| Achievement Tests | 2 |
| Artificial Intelligence | 2 |
| Evaluation Methods | 2 |
| Foreign Countries | 2 |
| Item Analysis | 2 |
Source
| Journal of Educational Measurement | 3 |
| Educational Measurement: Issues and Practice | 1 |
| Grantee Submission | 1 |
| Journal of Education and e-Learning Research | 1 |
| Journal of Educational and Behavioral Statistics | 1 |
| SAGE Open | 1 |
Author
| Ahmed Al-Badri | 1 |
| Andreas Frey | 1 |
| Aron Fink | 1 |
| Beyza Aksu Dünya | 1 |
| Chia-Wen Chen | 1 |
| Christoph König | 1 |
| Deborah J. Harris | 1 |
| Amanda Goodwin | 1 |
| Jorge Salas | 1 |
| Kylie Gorney | 1 |
| Mark D. Reckase | 1 |
Publication Type
| Reports - Research | 8 |
| Journal Articles | 7 |
Education Level
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Secondary Education | 1 |
Location
| Oman | 1 |
Assessments and Surveys
| Program for International Student Assessment (PISA) | 1 |
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when administered in different positions on a test. Most previous research has investigated IPE under linear testing, and IPE under adaptive testing remains understudied. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
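Where a concrete form helps, an item position effect is often represented as a drift in item difficulty with serial position. The sketch below assumes a 2PL model with a linear drift term; it is not the model from the Ma and Harris study, and all parameter values are invented for illustration.

```python
# A minimal sketch (not the model from the cited study): the item position
# effect represented as a linear drift in 2PL item difficulty with serial
# position. All parameter values are illustrative assumptions.
import numpy as np

def p_correct(theta, a, b, position, drift=0.02):
    """2PL response probability with difficulty drifting by item position."""
    b_eff = b + drift * position          # assumed: later positions make the item harder
    return 1.0 / (1.0 + np.exp(-a * (theta - b_eff)))

theta = 0.5                               # examinee ability
a, b = 1.2, 0.0                           # discrimination, baseline difficulty
for pos in (1, 20, 40):
    print(f"position {pos:2d}: P(correct) = {p_correct(theta, a, b, pos):.3f}")
```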
Yi-Ling Wu; Yao-Hsuan Huang; Chia-Wen Chen; Po-Hsi Chen – Journal of Educational Measurement, 2025
Multistage testing (MST), a variant of computerized adaptive testing (CAT), differs from conventional CAT in that it is adapted at the module level rather than at the individual item level. Typically, all examinees begin the MST with a linear test form in the first stage, commonly known as the routing stage. In 2020, Han introduced an innovative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Measurement
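As a rough illustration of module-level adaptation, the sketch below routes an examinee to a second-stage module from the raw score on the routing stage. The number-correct rule, cut scores, and module names are assumptions for illustration, not details of the Wu et al. study or of Han's 2020 design.

```python
# A rough sketch of module-level routing in multistage testing (MST): after the
# linear routing stage, the raw score selects the next module. The cut scores
# and module names below are illustrative assumptions.
def route(stage1_score: int, cut_low: int = 4, cut_high: int = 8) -> str:
    """Pick the stage-2 module from the routing-stage number-correct score."""
    if stage1_score <= cut_low:
        return "easy_module"
    if stage1_score >= cut_high:
        return "hard_module"
    return "medium_module"

responses = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]   # routing-stage item scores (0/1)
print(route(sum(responses)))                  # score 6 -> "medium_module"
```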
Beyza Aksu Dünya; Stefanie A. Wind; Mehmet Can Demir – SAGE Open, 2025
The purpose of this study was to generate an item bank for assessing faculty members' assessment literacy and to examine the applicability and feasibility of a Computerized Adaptive Test (CAT) approach to monitor assessment literacy among faculty members. In developing this assessment using a sequential mixed-methods research design, our goal was…
Descriptors: Assessment Literacy, Item Banks, College Faculty, Adaptive Testing
Kylie Gorney; Mark D. Reckase – Journal of Educational Measurement, 2025
In computerized adaptive testing, item exposure control methods are often used to provide a more balanced usage of the item pool. Many of the most popular methods, including the restricted method (Revuelta and Ponsoda), use a single maximum exposure rate to limit the proportion of times that each item is administered. However, Barrada et al.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
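The idea of capping usage with a single maximum exposure rate can be sketched as follows: items whose observed exposure rate already exceeds the cap are removed from the candidate pool before maximum-information selection. This is a simplified illustration in the spirit of restricted-type methods, not the exact algorithm of Revuelta and Ponsoda or the modification by Barrada et al.; the information values, counts, and cap are assumed.

```python
# A simplified sketch of exposure-rate capping in CAT item selection:
# over-exposed items are excluded, then the most informative remaining
# item is chosen. All numbers below are illustrative assumptions.
import numpy as np

def select_item(info, exposure_counts, n_examinees, r_max=0.25):
    """Return the index of the most informative item with exposure rate <= r_max."""
    rates = exposure_counts / max(n_examinees, 1)
    eligible = np.where(rates <= r_max)[0]
    if eligible.size == 0:                     # fall back if every item is capped
        eligible = np.arange(len(info))
    return eligible[np.argmax(info[eligible])]

info = np.array([0.9, 0.8, 0.6, 0.4])          # Fisher information at current theta
counts = np.array([30, 10, 5, 0])              # times each item was administered
print(select_item(info, counts, n_examinees=100))   # item 0 is over-exposed -> picks 1
```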
Sun-Joo Cho; Amanda Goodwin; Jorge Salas; Sophia Mueller – Grantee Submission, 2025
This study incorporates a random forest (RF) approach into an item response model to probe complex interactions and nonlinearity among predictors, with the goal that the hybrid outperforms either an RF or an explanatory item response model (EIRM) alone in explaining item responses. In the specified model, called EIRM-RF, predicted values…
Descriptors: Item Response Theory, Artificial Intelligence, Statistical Analysis, Predictor Variables
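The general flavor of such a hybrid, in which a tree ensemble captures interactions and its predictions feed a parametric response model, can be sketched as follows. This is not the EIRM-RF specification from the study: it uses simulated data, an ordinary logistic regression standing in for an explanatory item response model, and out-of-fold random-forest probabilities as the extra predictor, all of which are assumptions for illustration.

```python
# A rough sketch of a hybrid: a random forest learns interactions among
# item/person covariates, and its out-of-fold predictions enter a logistic
# model of item responses as an additional predictor. Simulated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))                       # item/person covariates (simulated)
logit = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]         # nonlinear interaction built in
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # simulated item responses

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf_prob = cross_val_predict(rf, X, y, cv=5, method="predict_proba")[:, 1]

# The RF prediction joins the linear covariates in the response model.
Z = np.column_stack([X, rf_prob])
print(LogisticRegression(max_iter=1000).fit(Z, y).coef_.round(2))
```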
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Andreas Frey; Christoph König; Aron Fink – Journal of Educational Measurement, 2025
The highly adaptive testing (HAT) design is introduced as an alternative test design for the Programme for International Student Assessment (PISA). The principle of HAT is to be as adaptive as possible when selecting items while accounting for PISA's nonstatistical constraints and addressing issues concerning PISA such as item position effects.…
Descriptors: Adaptive Testing, Test Construction, Alternative Assessment, Achievement Tests
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to examine differences in individuals' ability estimates, their standard errors, and the psychometric properties of the test across two administration modes (electronic and paper). A descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory

