Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 1 |
| Since 2025 | 236 |
| Since 2022 (last 5 years) | 1369 |
| Since 2017 (last 10 years) | 2827 |
| Since 2007 (last 20 years) | 4817 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 7214 |
| Foreign Countries | 2053 |
| Test Construction | 1112 |
| Student Evaluation | 1065 |
| Evaluation Methods | 1061 |
| Test Items | 1058 |
| Adaptive Testing | 1053 |
| Educational Technology | 904 |
| Comparative Analysis | 835 |
| Scores | 830 |
| Higher Education | 825 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 182 |
| Researchers | 146 |
| Teachers | 122 |
| Policymakers | 40 |
| Administrators | 36 |
| Students | 15 |
| Counselors | 9 |
| Parents | 4 |
| Media Staff | 3 |
| Support Staff | 3 |
Location
| Location | Count |
| --- | --- |
| Australia | 170 |
| United Kingdom | 153 |
| Turkey | 126 |
| China | 117 |
| Germany | 108 |
| Canada | 106 |
| Spain | 94 |
| Taiwan | 89 |
| Netherlands | 73 |
| Iran | 72 |
| United States | 68 |
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does Not Meet WWC Standards | 5 |
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
New York State Education Department, 2022
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts and Mathematics Paper-Based Field Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed as written…
Descriptors: Testing Programs, Mathematics Tests, Test Format, Computer Assisted Testing
Clements, Douglas H.; Sarama, Julie; Tatsuoka, Curtis; Banse, Holland; Tatsuoka, Kikumi – Journal of Research in Childhood Education, 2022
We report on an innovative computer-adaptive assessment, the Comprehensive Research-based Early Math Ability Test (CREMAT), using the case of 1st- and 2nd-graders' understanding of geometric measurement. CREMAT was developed with multiple aims in mind, including: (1) be administered with a reasonable number of items, (2) identify the level(s) of…
Descriptors: Cognitive Tests, Diagnostic Tests, Adaptive Testing, Computer Assisted Testing
Computerized Adaptive Assessment of Understanding of Programming Concepts in Primary School Children
Hogenboom, Sally A. M.; Hermans, Felienne F. J.; Van der Maas, Han L. J. – Computer Science Education, 2022
Background and Context: Valid assessment of understanding of programming concepts in primary school children is essential to implement and improve programming education. Objective: We developed and validated the Computerized Adaptive Programming Concepts Test (CAPCT) with a novel application of Item Response Theory. The CAPCT is a web-based and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Programming, Knowledge Level
Ramsey Lee Cardwell – ProQuest LLC, 2022
The emergence of digital-first assessments is prompting reconsideration of, and innovation in, aspects of psychometrics, test validation, and test use. Using the Duolingo English Test (DET) as an example, this three-paper series seeks to address issues concerning the estimation of classification consistency and the reporting of results for such…
Descriptors: Classification, Reliability, Language Proficiency, Computer Assisted Testing
Chen, Fu; Lu, Chang; Cui, Ying; Gao, Yizhu – IEEE Transactions on Learning Technologies, 2023
Learning outcome modeling is a technical underpinning for the successful evaluation of learners' learning outcomes through computer-based assessments. In recent years, collaborative filtering approaches have gained popularity as a technique to model learners' item responses. However, how to model the temporal dependencies between item responses…
Descriptors: Outcomes of Education, Models, Computer Assisted Testing, Cooperation
Henderson, Michael; Chung, Jennifer; Awdry, Rebecca; Ashford, Cliff; Bryant, Mike; Mundy, Matthew; Ryan, Kris – International Journal for Educational Integrity, 2023
Discussions around assessment integrity often focus on the exam conditions and the motivations and values of those who cheated in comparison with those who did not. We argue that discourse needs to move away from a binary representation of cheating. Instead, we propose that the conversation may be more productive and more impactful by focusing on…
Descriptors: College Students, Computer Assisted Testing, Cheating, Ambiguity (Semantics)
Panachanok Chanwaiwit; Lalida Wiboonwachara – rEFLections, 2025
Chiang Mai Rajabhat University Test of English Proficiency (CMRU-TEP) is a required English proficiency test for all CMRU students before graduation. Despite its meticulous design, there is an opportunity for students to improve their scores through focused efforts and targeted support. This study employs an explanatory sequential mixed-methods…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Language Proficiency
Chen, Chia-Wen; Wang, Wen-Chung; Chiu, Ming Ming; Ro, Sage – Journal of Educational Measurement, 2020
The use of computerized adaptive testing algorithms for ranking items (e.g., college preferences, career choices) involves two major challenges: unacceptably high computation times (selecting from a large item pool with many dimensions) and biased results (enhanced preferences or intensified examinee responses because of repeated statements across…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Tzu-Hua Wang; Yu Sun; Nai-Wen Huang – Computer Assisted Language Learning, 2023
This research applied a web-based dynamic assessment system, GPAM-WATA system, to help low English achievers to perform self-directed learning of junior high school English grammar. A quasi-experimental design was adopted. A total of 124 seventh graders from four classes participated in this research. Participants were randomly divided into the…
Descriptors: Foreign Countries, Junior High School Students, English (Second Language), Second Language Learning
Jenalee A. Hinds – ProQuest LLC, 2020
This study's primary purpose was to determine how students with a learning disability (LD) in mathematics react in different testing environments, and whether changing the testing parameters can improve the outcomes. Quantitative data was collected through questionnaires, heart rates, and math fluency probes. These data were used to test the…
Descriptors: Stress Variables, Test Anxiety, Learning Disabilities, Mathematics Tests
Student Approaches to Generating Mathematical Examples: Comparing E-Assessment and Paper-Based Tasks
George Kinnear; Paola Iannone; Ben Davies – Educational Studies in Mathematics, 2025
Example-generation tasks have been suggested as an effective way to both promote students' learning of mathematics and assess students' understanding of concepts. E-assessment offers the potential to use example-generation tasks with large groups of students, but there has been little research on this approach so far. Across two studies, we…
Descriptors: Mathematics Skills, Learning Strategies, Skill Development, Student Evaluation
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Melodie Philhours; Kelly E. Fish – Research & Practice in Assessment, 2025
This study leverages data from direct assessments of learning (AoL) to build a dynamic model of student performance in competency exams related to computer technology. The analysis reveals three key predictors that strongly influence student success: performance on a practice exam, whether or not a student engaged in practice testing beforehand,…
Descriptors: Technological Literacy, Success, Tests, Drills (Practice)