Publication Date
| Period | Count |
| In 2026 | 0 |
| Since 2025 | 18 |
| Since 2022 (last 5 years) | 90 |
| Since 2017 (last 10 years) | 189 |
| Since 2007 (last 20 years) | 437 |
Descriptor
| Descriptor | Count |
| Adaptive Testing | 1050 |
| Computer Assisted Testing | 1050 |
| Test Items | 447 |
| Item Response Theory | 284 |
| Test Construction | 274 |
| Item Banks | 230 |
| Simulation | 195 |
| Comparative Analysis | 139 |
| Foreign Countries | 117 |
| Higher Education | 104 |
| Test Format | 99 |
Location
| Location | Count |
| Taiwan | 11 |
| Australia | 8 |
| Netherlands | 8 |
| New York | 8 |
| Turkey | 8 |
| United Kingdom | 8 |
| California | 7 |
| Spain | 6 |
| Canada | 5 |
| China | 5 |
| Denmark | 5 |
Laws, Policies, & Programs
| Law, Policy, or Program | Count |
| No Child Left Behind Act 2001 | 7 |
| Education Consolidation… | 1 |
| Every Student Succeeds Act… | 1 |
| Race to the Top | 1 |
Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when giving any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework for measuring the amount of adaptation of Rasch-based CATs based on the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
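The abstract only names the idea; as one hedged reading, adaptation can be quantified by how closely the difficulties of administered items track examinees' abilities. A minimal Python sketch (the function and the two indices are illustrative assumptions, not the authors' statistic):
```python
import numpy as np

def adaptation_summary(abilities, administered_difficulties):
    """Illustrative adaptation indices for a Rasch CAT (hypothetical).

    abilities: (n_examinees,) final theta estimates.
    administered_difficulties: list of arrays, one per examinee, holding
        the Rasch difficulty (b) of each item that examinee received.
    """
    mean_b = np.array([np.mean(b) for b in administered_difficulties])
    # If the test adapts, harder item sets go to stronger examinees:
    r_theta_b = np.corrcoef(abilities, mean_b)[0, 1]
    # Within-person spread of difficulties shrinks as adaptation tightens:
    within_sd = np.mean([np.std(b) for b in administered_difficulties])
    return {"corr_theta_meanb": r_theta_b, "mean_within_sd": within_sd}

# Example with fake data: 3 examinees, 4 items each.
thetas = np.array([-1.0, 0.0, 1.2])
bs = [np.array([-1.2, -0.8, -1.0, -0.9]),
      np.array([-0.1, 0.2, 0.0, -0.2]),
      np.array([1.0, 1.3, 1.1, 1.4])]
print(adaptation_summary(thetas, bs))
```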
The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing
Kezer, Fatih – International Journal of Assessment Tools in Education, 2021
Item response theory provides several important advantages for exams that are, or will be, administered digitally. For computerized adaptive tests to make valid and reliable predictions under IRT, good-quality item pools should be used. This study examines how adaptive test applications vary across item pools that consist of items…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
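As a hedged illustration of the kind of simulation such studies run: a generic maximum-information Rasch CAT drawing from a broad ("strong") versus a narrow ("weak") difficulty pool. All names, numbers, and the damped-update details are assumptions, not Kezer's design:
```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_p(theta, b):
    # Rasch probability of a correct response.
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def simulate_cat(theta_true, pool_b, n_items=20):
    """One Rasch CAT run with maximum-information item selection."""
    theta = 0.0
    used, admin_b, responses = set(), [], []
    for _ in range(n_items):
        # Under the Rasch model, information peaks where b is closest to theta.
        best = min((i for i in range(len(pool_b)) if i not in used),
                   key=lambda i: abs(pool_b[i] - theta))
        used.add(best)
        admin_b.append(pool_b[best])
        responses.append(rng.random() < rasch_p(theta_true, pool_b[best]))
        # One damped Newton step toward the provisional ML estimate.
        b = np.array(admin_b)
        x = np.array(responses, dtype=float)
        p = rasch_p(theta, b)
        info = max(float(np.sum(p * (1 - p))), 0.5)
        theta = float(np.clip(theta + np.sum(x - p) / info, -4.0, 4.0))
    return theta

# A "strong" pool spans the ability range; a "weak" pool clusters near zero.
strong = rng.uniform(-3, 3, 300)
weak = rng.normal(0.0, 0.3, 300)
print(simulate_cat(1.5, strong), simulate_cat(1.5, weak))
```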
Wang, Wenhao; Kingston, Neal M.; Davis, Marcia H.; Tiemann, Gail C.; Tonks, Stephen; Hock, Michael – Educational Measurement: Issues and Practice, 2021
Through the use of item response theory, adaptive tests are more efficient than fixed-length tests and present students with questions tailored to their proficiency level. Although the adaptive algorithm is straightforward, developing a multidimensional computer adaptive test (MCAT) measure is complex. Evidence-centered design…
Descriptors: Evidence Based Practice, Reading Motivation, Adaptive Testing, Computer Assisted Testing
Chen, Chia-Wen; Wang, Wen-Chung; Chiu, Ming Ming; Ro, Sage – Journal of Educational Measurement, 2020
The use of computerized adaptive testing algorithms for ranking items (e.g., college preferences, career choices) involves two major challenges: unacceptably high computation times (selecting from a large item pool with many dimensions) and biased results (enhanced preferences or intensified examinee responses because of repeated statements across…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Wyse, Adam E. – Educational and Psychological Measurement, 2021
An essential question when computing test-retest and alternate forms reliability coefficients is how many days there should be between tests. This article uses data from reading and math computerized adaptive tests to explore how the number of days between tests impacts alternate forms reliability coefficients. Results suggest that the highest…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Reliability, Reading Tests
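In its simplest form, the alternate forms reliability coefficient referenced here is the Pearson correlation between the same examinees' scores on two forms. A minimal sketch with hypothetical scores:
```python
import numpy as np

def alternate_forms_reliability(form_a, form_b):
    """Alternate-forms reliability as the Pearson correlation between
    scores on two forms taken by the same examinees."""
    return np.corrcoef(form_a, form_b)[0, 1]

# Hypothetical scores for the same 6 students on two forms.
a = np.array([210, 225, 198, 240, 215, 230])
b = np.array([212, 220, 202, 238, 219, 228])
print(round(alternate_forms_reliability(a, b), 3))
```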
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
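Two raw-score rules of the kind such comparisons consider can be sketched as follows; the specific rules and the example item are illustrative assumptions, not necessarily those the article evaluates:
```python
def score_all_or_nothing(selected, keyed):
    """1 point only if the selected options exactly match the key."""
    return int(set(selected) == set(keyed))

def score_partial_credit(selected, keyed, n_options):
    """One common partial-credit rule: +1 for each option classified
    correctly (selected and keyed, or unselected and unkeyed)."""
    selected, keyed = set(selected), set(keyed)
    return sum((opt in selected) == (opt in keyed) for opt in range(n_options))

# Item with options 0..4, key {0, 2}; examinee picks {0, 3}.
print(score_all_or_nothing([0, 3], [0, 2]))     # 0
print(score_partial_credit([0, 3], [0, 2], 5))  # 3
```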
Yang, Lihong; Reckase, Mark D. – Educational and Psychological Measurement, 2020
The present study extended the "p"-optimality method to the multistage computerized adaptive test (MST) context in developing optimal item pools to support different MST panel designs under different test configurations. Using the Rasch model, simulated optimal item pools were generated with and without practical constraints of exposure…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
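A loose sketch of the intuition behind an "optimal" Rasch pool (the p-optimality method itself involves more machinery than this): an item is most informative when its difficulty b equals an examinee's theta, so difficulties that mirror the ability distribution waste the fewest items. Names and numbers below are assumptions:
```python
import numpy as np

def p_optimal_pool_sketch(theta_sample, pool_size):
    """Place item difficulties at evenly spaced quantiles of the
    ability distribution, so each item is near-optimal for some
    slice of examinees. (Illustrative only.)"""
    qs = (np.arange(pool_size) + 0.5) / pool_size
    return np.quantile(theta_sample, qs)

thetas = np.random.default_rng(1).normal(0, 1, 10_000)
print(p_optimal_pool_sketch(thetas, 5).round(2))
```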
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
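Rapid guessing is commonly operationalized with response-time thresholds. A minimal sketch of a fixed common-threshold rule (the 3-second threshold is an assumption; the study's exact method may differ):
```python
def flag_rapid_guesses(response_times, threshold_s=3.0):
    """Flag responses faster than a time threshold as rapid guesses."""
    return [t < threshold_s for t in response_times]

times = [1.2, 14.5, 2.8, 30.1, 0.9]
flags = flag_rapid_guesses(times)
print(sum(flags) / len(flags))  # rapid-guessing rate: 0.6
```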
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) that determines the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
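For context on why p = 1/2 is the natural anchor, standard Rasch asymptotics (not the paper's closed formula) give:
```latex
% Per-response Fisher information for an item difficulty under the Rasch
% model is p(1 - p), which peaks at p = 1/2.
\[
  I(\hat{b}) = \sum_{i=1}^{N} p_i (1 - p_i) \le \frac{N}{4},
  \qquad
  \mathrm{SE}(\hat{b}) \approx \frac{1}{\sqrt{I(\hat{b})}} \ge \frac{2}{\sqrt{N}},
\]
% with equality when every respondent answers at p_i = 1/2, i.e., item
% difficulty matched to ability.
```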
Wyse, Adam E.; McBride, James R. – Measurement: Interdisciplinary Research and Perspectives, 2022
A common practical challenge is how to assign ability estimates to all-incorrect and all-correct response patterns when using item response theory (IRT) models and maximum likelihood estimation (MLE), since ability estimates for these types of responses equal -∞ or +∞. This article uses a simulation study and data from an operational K-12…
Descriptors: Scores, Adaptive Testing, Computer Assisted Testing, Test Length
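The -∞/+∞ issue follows directly from the likelihood: for an all-correct pattern the Rasch log-likelihood rises monotonically in theta, so no finite maximizer exists. A minimal sketch (the truncation bound named in the comment is one illustrative remedy among those such studies compare):
```python
import numpy as np

def rasch_loglik(theta, responses, b):
    # Rasch log-likelihood of a response pattern at ability theta.
    p = 1.0 / (1.0 + np.exp(-(theta - np.asarray(b, dtype=float))))
    x = np.asarray(responses, dtype=float)
    return float(np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)))

b = [-1.0, 0.0, 1.0]   # item difficulties
all_correct = [1, 1, 1]
# The log-likelihood keeps rising as theta grows, so the MLE is +infinity:
for theta in [0.0, 2.0, 4.0, 8.0]:
    print(theta, round(rasch_loglik(theta, all_correct, b), 4))

# One common remedy (among the options such studies compare):
# truncate estimates at fixed bounds, e.g. assign theta = +4 here.
```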
Ghio, Fernanda Belén; Bruzzone, Manuel; Rojas-Torres, Luis; Cupani, Marcos – European Journal of Science and Mathematics Education, 2022
In the last decades, the development of computerized adaptive testing (CAT) has allowed more precise measurements with a smaller number of items. In this study, we develop an item bank (IB) to generate the adaptive algorithm and simulate the functioning of CAT to assess the domains of mathematical knowledge in Argentinian university students…
Descriptors: Test Items, Item Banks, Adaptive Testing, Mathematics Tests
Cole, Shelbi K.; Swanson, Carey – Smarter Balanced Assessment Consortium, 2022
Over the past few years, several states have begun to explore or pilot different through-year assessments to serve as replacements to the traditional end-of-year summative assessments that are currently the predominant source of information used by states to meet federal accountability requirements. While there are several different assessment…
Descriptors: Instructional Materials, Computer Assisted Testing, Adaptive Testing, Student Evaluation
Morris, Scott B.; Bass, Michael; Howard, Elizabeth; Neapolitan, Richard E. – International Journal of Testing, 2020
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the "SE" is less than a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Item Response Theory
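The SE stopping rule described here is simple to state in code. A minimal Rasch-based sketch (threshold, bounds, and evaluating information at a point estimate are illustrative simplifications):
```python
import numpy as np

def se_stopping_rule(theta, administered_b, se_threshold=0.3, max_items=60):
    """Stop a CAT once the ability SE falls below a threshold (or a
    maximum test length is reached). Returns True when testing stops."""
    b = np.asarray(administered_b, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    info = float(np.sum(p * (1 - p)))  # Rasch test information at theta
    se = 1.0 / np.sqrt(info) if info > 0 else np.inf
    return se <= se_threshold or len(administered_b) >= max_items

print(se_stopping_rule(0.0, [0.1, -0.2, 0.3]))  # False: SE is still about 1.2
print(se_stopping_rule(0.0, [0.0] * 50))        # True: SE is about 0.28
```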
Gardner, John; O'Leary, Michael; Yuan, Li – Journal of Computer Assisted Learning, 2021
Artificial Intelligence is at the heart of modern society with computers now capable of making process decisions in many spheres of human activity. In education, there has been intensive growth in systems that make formal and informal learning an anytime, anywhere activity for billions of people through online open educational resources and…
Descriptors: Artificial Intelligence, Educational Assessment, Formative Evaluation, Summative Evaluation
Turner, Megan I.; Van Norman, Ethan R.; Hojnoski, Robin L. – Journal of Psychoeducational Assessment, 2022
Star Math (SM) is a popular computer adaptive test (CAT) schools use to screen students for academic risk. Despite its popularity, few independent investigations of its diagnostic accuracy have been conducted. We evaluated the diagnostic accuracy of SM based upon vendor provided cut-scores (25th and 40th percentiles nationally) in predicting…
Descriptors: Accuracy, Adaptive Testing, Computer Assisted Testing, High Stakes Tests
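Diagnostic accuracy of a screener cut-score is conventionally summarized by sensitivity and specificity. A minimal sketch with hypothetical scores and outcomes (not SM's actual scale, cuts, or data):
```python
def diagnostic_accuracy(screener_scores, at_risk_truth, cut_score):
    """Sensitivity and specificity of a screener cut-score.
    'Positive' = screener score at or below the cut (flagged at risk)."""
    tp = fp = tn = fn = 0
    for s, truly_at_risk in zip(screener_scores, at_risk_truth):
        flagged = s <= cut_score
        if flagged and truly_at_risk:
            tp += 1
        elif flagged:
            fp += 1
        elif truly_at_risk:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

scores = [12, 30, 45, 22, 55, 18, 40, 25]
truth = [True, True, False, True, False, True, False, False]
sens, spec = diagnostic_accuracy(scores, truth, cut_score=25)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```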

