Publication Date

| Date Range | Results |
| --- | --- |
| In 2026 | 1 |
| Since 2025 | 202 |
| Since 2022 (last 5 years) | 1094 |
| Since 2017 (last 10 years) | 2168 |
| Since 2007 (last 20 years) | 3304 |
Location

| Country | Results |
| --- | --- |
| Australia | 111 |
| Turkey | 108 |
| China | 93 |
| United Kingdom | 93 |
| Germany | 87 |
| Iran | 71 |
| Spain | 66 |
| Taiwan | 66 |
| Canada | 65 |
| Indonesia | 57 |
| Netherlands | 54 |
What Works Clearinghouse Rating

| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 2 |
| Meets WWC Standards with or without Reservations | 2 |
| Does not meet standards | 4 |
Joanna Williamson – Research Matters, 2025
Teachers, examiners and assessment experts know from experience that some candidates annotate exam questions. "Annotation" includes anything the candidate writes or draws outside of the designated response space, such as underlining, jotting, circling, sketching and calculating. Annotations are of interest because they may evidence…
Descriptors: Mathematics, Tests, Documentation, Secondary Education
Ethan Roy; Mathieu Guillaume; Amandine Van Rinsveld; Project iLead Consortium; Bruce D. McCandliss – npj Science of Learning, 2025
Arithmetic fluency is regarded as a foundational math skill, typically measured as a single construct with pencil-and-paper-based timed assessments. We introduce a tablet-based assessment of single-digit fluency that captures individual trial response times across several embedded experimental contrasts of interest. A large (n = 824) cohort of…
Descriptors: Arithmetic, Mathematics Skills, Tablet Computers, Grade 3
Michael Bass; Scott Morris; Sheng Zhang – Measurement: Interdisciplinary Research and Perspectives, 2025
Administration of patient-reported outcome measures (PROs) using multidimensional computer adaptive tests (MCATs) has the potential to reduce patient burden, but the efficiency of MCAT depends on the degree to which an individual's responses fit the psychometric properties of the assessment. Assessing patients' symptom burden through the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Patients, Outcome Measures
Yi-Jui I. Chen; Yi-Jhen Wu; Yi-Hsin Chen; Robin Irey – Journal of Psychoeducational Assessment, 2025
A short form of the 60-item computer-based orthographic processing assessment (long-form COPA or COPA-LF) was developed. The COPA-LF consists of five skills, including rapid perception, access, differentiation, correction, and arrangement. Thirty items from the COPA-LF were selected for the short-form COPA (COPA-SF) based on cognitive diagnostic…
Descriptors: Computer Assisted Testing, Test Length, Test Validity, Orthographic Symbols
Xuefan Li; Marco Zappatore; Tingsong Li; Weiwei Zhang; Sining Tao; Xiaoqing Wei; Xiaoxu Zhou; Naiqing Guan; Anny Chan – IEEE Transactions on Learning Technologies, 2025
The integration of generative artificial intelligence (GAI) into educational settings offers unprecedented opportunities to enhance the efficiency of teaching and the effectiveness of learning, particularly within online platforms. This study evaluates the development and application of a customized GAI-powered teaching assistant, trained…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Academic Achievement
Olaf Lund; Rune Raudeberg; Hans Johansen; Mette-Line Myhre; Espen Walderhaug; Amir Poreh; Jens Egeland – Journal of Attention Disorders, 2025
Objective: The Conners Continuous Performance Test-3 (CCPT-3) is a computerized test of attention frequently used in clinical neuropsychology. In the present factor analysis, we seek to assess the factor structure of the CCPT-3 and evaluate the suggested dimensions in the CCPT-3 Manual. Method: Data from a mixed clinical sample of 931 adults…
Descriptors: Factor Structure, Factor Analysis, Attention Span, Measures (Individuals)
Ilhama Mammadova; Fatime Ismayilli; Elnaz Aliyeva; Narmin Mammadova – Educational Process: International Journal, 2025
Background/purpose: Artificial Intelligence (AI) is increasingly shaping assessment practices in higher education, promising faster feedback and reduced instructor workload while also raising concerns about fairness and transparency. This study examines how AI technologies are transforming assessment processes and the experiences of stakeholders.…
Descriptors: Artificial Intelligence, Student Evaluation, Technology Uses in Education, Undergraduate Students
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
Nathaniel Owen; Ananda Senel – Review of Education, 2025
Transparency in high-stakes English language assessment has become crucial for ensuring fairness and maintaining assessment validity in language testing. However, our understanding of how transparency is conceptualised and implemented remains fragmented, particularly in relation to stakeholder experiences and technological innovations. This study…
Descriptors: Accountability, High Stakes Tests, Language Tests, Computer Assisted Testing
Sun-Joo Cho; Amanda Goodwin; Jorge Salas; Sophia Mueller – Grantee Submission, 2025
This study incorporates a random forest (RF) approach to probe complex interactions and nonlinearity among predictors into an item response model with the goal of using a hybrid approach to outperform either an RF or explanatory item response model (EIRM) only in explaining item responses. In the specified model, called EIRM-RF, predicted values…
Descriptors: Item Response Theory, Artificial Intelligence, Statistical Analysis, Predictor Variables
Mncedisi Christian Maphalala; Ntombikayise Nkosi – Open Praxis, 2025
This conceptual study explores and proposes strategies for enhancing security and academic integrity within the Open and Distance e-learning (ODeL) context, adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocols. As higher education continues to evolve, the reliance on online assessments has become more…
Descriptors: Literature Reviews, Meta Analysis, Supervision, Computer Assisted Testing
Amie J. Dirks-Naylor – Advances in Physiology Education, 2025
Artificial intelligence (AI) tools like ChatGPT offer new opportunities to enhance student learning through active recall and self-directed inquiry. This study aimed to determine student perceptions of a classroom assignment designed to develop proficiency in using ChatGPT for these strategies. First-semester Doctor of Pharmacy students in a…
Descriptors: Artificial Intelligence, Technology Uses in Education, Recall (Psychology), Inquiry
Hatim Lahza; Tammy G. Smith; Hassan Khosravi – British Journal of Educational Technology, 2023
Traditional item analyses such as classical test theory (CTT) use exam-taker responses to assessment items to approximate their difficulty and discrimination. The increased adoption by educational institutions of electronic assessment platforms (EAPs) provides new avenues for assessment analytics by capturing detailed logs of an exam-taker's…
Descriptors: Medical Students, Evaluation, Computer Assisted Testing, Time Factors (Learning)
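The Lahza et al. abstract refers to the standard classical test theory (CTT) item statistics: difficulty as the proportion of examinees answering correctly, and discrimination as the corrected point-biserial correlation between an item and the total score excluding that item. A minimal sketch of these conventional computations (not the paper's own analysis; the function name `item_stats` is illustrative):

```python
from statistics import mean, pstdev

def item_stats(responses):
    """CTT item statistics from a 0/1 response matrix.

    responses: list of examinee rows; 1 = correct, 0 = incorrect.
    Returns (difficulty, discrimination): difficulty is the proportion
    correct; discrimination is the corrected point-biserial (correlation
    of the item with the total score excluding that item).
    """
    n = len(responses)
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    difficulty, discrimination = [], []
    for j in range(n_items):
        item = [row[j] for row in responses]
        difficulty.append(sum(item) / n)
        # Exclude the item from its own total to avoid inflating the correlation.
        rest = [t - x for t, x in zip(totals, item)]
        mi, mr = mean(item), mean(rest)
        cov = sum((x - mi) * (r - mr) for x, r in zip(item, rest)) / n
        sd = pstdev(item) * pstdev(rest)
        discrimination.append(cov / sd if sd else 0.0)
    return difficulty, discrimination
```

Items with difficulty near 0 or 1, or discrimination near zero, are the ones such analyses typically flag for review.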
Philip Buczak; He Huang; Boris Forthmann; Philipp Doebler – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater-variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
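The inter-rater variance that Buczak et al. cite as a risk of human scoring is commonly quantified with chance-corrected agreement statistics such as Cohen's kappa. A small sketch of kappa for two raters assigning categorical scores (a generic illustration, not the paper's method):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    scoring the same set of responses with categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; for ordinal creativity ratings, weighted kappa or an intraclass correlation is often preferred.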
Rabia Karatoprak Ersen; Won-Chan Lee – Journal of Educational Measurement, 2023
The purpose of this study was to compare calibration and linking methods for placing pretest item parameter estimates on the item pool scale in a 1-3 computerized multistage adaptive testing design in terms of item parameter recovery. Two models were used: embedded-section, in which pretest items were administered within a separate module, and…
Descriptors: Pretesting, Test Items, Computer Assisted Testing, Adaptive Testing
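Placing pretest item parameter estimates on an item pool scale, as in the Ersen and Lee study, requires a linking transformation between the two calibrations. One standard option (shown here only as a general illustration, not the specific method compared in the paper) is mean/sigma linking of item difficulties:

```python
from statistics import mean, pstdev

def mean_sigma_link(b_new, b_base):
    """Mean/sigma linking coefficients for placing new-form item
    difficulties onto the base (item pool) scale: b* = A*b + B.
    Under a 2PL model, discriminations transform as a* = a / A."""
    A = pstdev(b_base) / pstdev(b_new)
    B = mean(b_base) - A * mean(b_new)
    return A, B
```

In practice, A and B are estimated from items common to both calibrations, and alternatives such as Stocking-Lord characteristic-curve linking or fixed-parameter calibration are compared in exactly the way this study's design suggests.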