| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 217 |
| Since 2022 (last 5 years) | 1347 |
| Since 2017 (last 10 years) | 2805 |
| Since 2007 (last 20 years) | 4795 |

| Audience | Records |
| --- | --- |
| Practitioners | 182 |
| Researchers | 146 |
| Teachers | 122 |
| Policymakers | 39 |
| Administrators | 36 |
| Students | 15 |
| Counselors | 9 |
| Parents | 4 |
| Media Staff | 3 |
| Support Staff | 3 |

| Location | Records |
| --- | --- |
| Australia | 169 |
| United Kingdom | 153 |
| Turkey | 126 |
| China | 117 |
| Germany | 108 |
| Canada | 106 |
| Spain | 93 |
| Taiwan | 89 |
| Netherlands | 73 |
| Iran | 71 |
| United States | 68 |

| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does Not Meet Standards | 5 |

Harold Doran; Tetsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
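The generalized objective function described in this article is not detailed in the abstract. As background only, the following minimal Python sketch shows the classical maximum-information selection rule under a two-parameter logistic (2PL) model, the kind of baseline criterion that generalized objective functions typically extend; all item parameters and the ability estimate are hypothetical.

```python
import numpy as np

def fisher_information_2pl(theta, a, b):
    """Fisher information of each 2PL item at ability estimate theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # probability of a correct response
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at the current ability estimate."""
    info = fisher_information_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf           # never re-administer an item
    return int(np.argmax(info))

# Hypothetical pool of five items; items 0 and 2 have already been administered
a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])          # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.2, -0.3])        # difficulties
print(select_next_item(0.3, a, b, administered={0, 2}))
```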
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large-language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
Ebru Balta; Arzu Uçar – International Journal of Assessment Tools in Education, 2025
Unproctored Computerized Adaptive Testing (CAT) is gaining traction due to its convenience, flexibility, and scalability, particularly in high-stakes assessments. However, the lack of a proctor can give rise to aberrant testing behaviors, which can impair the validity of test scores. This paper explores the use of a verification test to…
Descriptors: Adaptive Testing, Computer Assisted Testing, Paper and Pencil Tests, Test Validity
Mounia Machkour; Latifa Lamalif; Sophia Faris; Khalifa Mansouri – Educational Process: International Journal, 2025
Background/purpose: This study addresses the problem of demotivation generated by traditional assessment methods, which are often standardized, unengaging, and ill-suited to individual differences. In an increasingly digitized educational context, the primary objective is to assess the ability of an adaptive assessment system, developed on the…
Descriptors: Foreign Countries, High School Seniors, Student Evaluation, Student Motivation
Chioma Udeozor; Fernando Russo Abegão; Jarka Glassey – British Journal of Educational Technology, 2024
Digital games (DGs) have the potential to immerse learners in simulated real-world environments that foster contextualised and active learning experiences. These also offer opportunities for performance assessments by providing an environment for students to carry out tasks requiring the application of knowledge and skills learned in the…
Descriptors: Educational Technology, Computer Assisted Testing, Game Based Learning, Test Construction
Karyssa A. Courey; Frederick L. Oswald; Steven A. Culpepper – Practical Assessment, Research & Evaluation, 2024
Historically, organizational researchers have fully embraced frequentist statistics and null hypothesis significance testing (NHST). Bayesian statistics is an underused alternative paradigm offering numerous benefits for organizational researchers and practitioners: e.g., accumulating direct evidence for the null hypothesis (vs. 'fail to reject…
Descriptors: Bayesian Statistics, Statistical Distributions, Researchers, Institutional Research
Pauline Frizelle; Ana Buckley; Tricia Biancone; Anna Ceroni; Darren Dahly; Paul Fletcher; Dorothy V. M. Bishop; Cristina McKean – Journal of Child Language, 2024
This study reports on the feasibility of using the Test of Complex Syntax-Electronic (TECS-E), as a self-directed app, to measure sentence comprehension in children aged 4 to 5½ years; how testing apps might be adapted for effective independent use; and agreement levels between face-to-face supported computerized and independent computerized…
Descriptors: Language Processing, Computer Software, Language Tests, Syntax
Chen, Fu; Lu, Chang; Cui, Ying; Gao, Yizhu – IEEE Transactions on Learning Technologies, 2023
Learning outcome modeling is a technical underpinning for the successful evaluation of learners' learning outcomes through computer-based assessments. In recent years, collaborative filtering approaches have gained popularity as a technique to model learners' item responses. However, how to model the temporal dependencies between item responses…
Descriptors: Outcomes of Education, Models, Computer Assisted Testing, Cooperation
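The abstract names collaborative filtering as the modeling technique but does not describe the authors' temporal extension. As an illustration only, and not this article's model, the sketch below factorizes a small learner-by-item response matrix and predicts one unobserved response; the data and hyperparameters are hypothetical.

```python
import numpy as np

def factorize_responses(R, mask, k=2, lr=0.05, reg=0.1, epochs=200, seed=0):
    """Tiny matrix-factorization sketch for a learner-by-item response matrix.

    R: observed responses (e.g. 0/1 scores); mask: 1 where a response was observed.
    Learns learner factors U and item factors V so that U @ V.T approximates R.
    """
    rng = np.random.default_rng(seed)
    n_learners, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_learners, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        E = mask * (R - U @ V.T)        # error on observed entries only
        U += lr * (E @ V - reg * U)     # gradient steps with L2 regularization
        V += lr * (E.T @ U - reg * V)
    return U, V

# Hypothetical 3 learners x 4 items, with one unobserved response (learner 1, item 3)
R = np.array([[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 0]], dtype=float)
mask = np.ones_like(R)
mask[1, 3] = 0
U, V = factorize_responses(R, mask)
print((U @ V.T)[1, 3])                  # predicted score for the unobserved response
```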
Henderson, Michael; Chung, Jennifer; Awdry, Rebecca; Ashford, Cliff; Bryant, Mike; Mundy, Matthew; Ryan, Kris – International Journal for Educational Integrity, 2023
Discussions around assessment integrity often focus on the exam conditions and the motivations and values of those who cheated in comparison with those who did not. We argue that discourse needs to move away from a binary representation of cheating. Instead, we propose that the conversation may be more productive and more impactful by focusing on…
Descriptors: College Students, Computer Assisted Testing, Cheating, Ambiguity (Semantics)
Student Approaches to Generating Mathematical Examples: Comparing E-Assessment and Paper-Based Tasks
George Kinnear; Paola Iannone; Ben Davies – Educational Studies in Mathematics, 2025
Example-generation tasks have been suggested as an effective way to both promote students' learning of mathematics and assess students' understanding of concepts. E-assessment offers the potential to use example-generation tasks with large groups of students, but there has been little research on this approach so far. Across two studies, we…
Descriptors: Mathematics Skills, Learning Strategies, Skill Development, Student Evaluation
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Angela Chamberlain; Emily D'Arcy; Andrew J. O. Whitehouse; Kerry Wallace; Maya Hayden-Evans; Sonya Girdler; Benjamin Milbourn; Sven Bölte; Kiah Evans – Journal of Autism and Developmental Disorders, 2025
Purpose: The PEDI-CAT (ASD) is used to assess functioning of children and youth on the autism spectrum; however, current psychometric evidence is limited. This study aimed to explore the reliability, validity and acceptability of the PEDI-CAT (ASD) using a large Australian sample. Methods: Caregivers of 134 children and youth on the spectrum…
Descriptors: Autism Spectrum Disorders, Children, Youth, Test Reliability
Kylie Gorney; Mark D. Reckase – Journal of Educational Measurement, 2025
In computerized adaptive testing, item exposure control methods are often used to provide a more balanced usage of the item pool. Many of the most popular methods, including the restricted method (Revuelta and Ponsoda), use a single maximum exposure rate to limit the proportion of times that each item is administered. However, Barrada et al.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
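For context on the single maximum exposure rate mentioned in the abstract, the sketch below illustrates a simplified exposure cap applied during item selection. It is a generic illustration under assumed inputs, not the restricted method itself or the modification proposed in this article.

```python
import numpy as np

def select_with_exposure_cap(info, exposure_counts, tests_administered, r_max):
    """Choose the most informative item whose observed exposure rate stays below r_max.

    info: item information values at the current ability estimate.
    exposure_counts: number of times each item has been administered so far.
    tests_administered: total number of tests delivered so far.
    r_max: single maximum exposure rate (e.g. 0.25).
    """
    rates = exposure_counts / max(tests_administered, 1)
    eligible = rates < r_max
    if not eligible.any():              # fall back to unrestricted selection if nothing qualifies
        eligible[:] = True
    masked = np.where(eligible, info, -np.inf)
    return int(np.argmax(masked))

# Hypothetical pool of four items after 100 tests, with a 25% exposure cap
info = np.array([0.9, 0.7, 0.6, 0.4])
counts = np.array([30, 10, 5, 2])
print(select_with_exposure_cap(info, counts, tests_administered=100, r_max=0.25))
```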
Melodie Philhours; Kelly E. Fish – Research & Practice in Assessment, 2025
This study leverages data from direct assessments of learning (AoL) to build a dynamic model of student performance in competency exams related to computer technology. The analysis reveals three key predictors that strongly influence student success: performance on a practice exam, whether or not a student engaged in practice testing beforehand,…
Descriptors: Technological Literacy, Success, Tests, Drills (Practice)
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
