Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 4 |
| Since 2022 (last 5 years) | 14 |
| Since 2017 (last 10 years) | 24 |
| Since 2007 (last 20 years) | 24 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Learning Analytics | 24 |
| Test Items | 24 |
| Computer Assisted Testing | 12 |
| Item Analysis | 10 |
| Computer Software | 8 |
| Difficulty Level | 8 |
| Mathematics Tests | 7 |
| Multiple Choice Tests | 7 |
| Scores | 7 |
| Foreign Countries | 5 |
| Test Construction | 5 |
Author
| Author | Count |
| --- | --- |
| Aditya Shah | 1 |
| Agard, Christopher | 1 |
| Ajay Devmane | 1 |
| Almeida, Daniela | 1 |
| Anna Lucia Paoletti | 1 |
| Azevedo, Jose Manuel | 1 |
| Bakla, Arif | 1 |
| Baptiste Moreau-Pernet | 1 |
| Baron, Patricia | 1 |
| Beites, Patrícia Damas | 1 |
| Bernard Veldkamp | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 18 |
| Reports - Research | 17 |
| Reports - Descriptive | 4 |
| Speeches/Meeting Papers | 2 |
| Dissertations/Theses -… | 1 |
| Information Analyses | 1 |
| Numerical/Quantitative Data | 1 |
| Reports - Evaluative | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Secondary Education | 6 |
| Elementary Education | 4 |
| Higher Education | 4 |
| Junior High Schools | 4 |
| Middle Schools | 4 |
| Postsecondary Education | 4 |
| Early Childhood Education | 2 |
| Grade 8 | 2 |
| High Schools | 2 |
| Grade 4 | 1 |
| Intermediate Grades | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| National Assessment of… | 3 |
Aditya Shah; Ajay Devmane; Mehul Ranka; Prathamesh Churi – Education and Information Technologies, 2024
Online learning has grown with advances in technology and its flexibility. Online examinations measure students' knowledge and skills, but traditional question papers suffer from inconsistent difficulty levels, arbitrary question allocation, and poor grading. The suggested model calibrates question-paper difficulty based on student performance to…
Descriptors: Computer Assisted Testing, Difficulty Level, Grading, Test Construction
Eva de Schipper; Remco Feskens; Franck Salles; Saskia Keskpaik; Reinaldo dos Santos; Bernard Veldkamp; Paul Drijvers – Large-scale Assessments in Education, 2025
Background: Students take many tests and exams during their school career, but they usually receive feedback about their test performance based only on an analysis of the item responses. With the increase in digital assessment, other data have become available for analysis as well, such as log data of student actions in online assessment…
Descriptors: Problem Solving, Mathematics Instruction, Learning Analytics, Identification
Lucia M. Reyes; Michael A. Cook; Steven M. Ross – Center for Research and Reform in Education, 2025
In March of 2025, brightwheel, a San Francisco-based educational technology company, partnered with the Center for Research and Reform in Education (CRRE) at Johns Hopkins University to test brightwheel's product, the Experience Assessment. The assessment was designed to provide early childhood educators with an objective and systematic way to…
Descriptors: Psychometrics, Educational Technology, Early Childhood Education, Young Children
Owen Henkel; Hannah Horne-Robinson; Maria Dyshel; Greg Thompson; Ralph Abboud; Nabil Al Nahin Ch; Baptiste Moreau-Pernet; Kirk Vanacore – Journal of Learning Analytics, 2025
This paper introduces AMMORE, a new dataset of 53,000 math open-response question-answer pairs from Rori, a mathematics learning platform used by middle and high school students in several African countries. Using this dataset, we conducted two experiments to evaluate the use of large language models (LLM) for grading particularly challenging…
Descriptors: Learning Analytics, Learning Management Systems, Mathematics Instruction, Middle School Students
Valentina Albano; Donatella Firmani; Luigi Laura; Jerin George Mathew; Anna Lucia Paoletti; Irene Torrente – Journal of Learning Analytics, 2023
Multiple-choice questions (MCQs) are widely used in educational assessments and professional certification exams. Managing large repositories of MCQs, however, poses several challenges due to the high volume of questions and the need to maintain their quality and relevance over time. One of these challenges is the presence of questions that…
Descriptors: Natural Language Processing, Multiple Choice Tests, Test Items, Item Analysis
Pelanek, Radek – Journal of Learning Analytics, 2021
In this work, we consider learning analytics for primary and secondary schools from the perspective of the designer of a learning system. We provide an overview of practically useful analytics techniques with descriptions of their applications and specific illustrations. We highlight data biases and caveats that complicate the analysis and its…
Descriptors: Learning Analytics, Elementary Schools, Secondary Schools, Educational Technology
Marli Crabtree; Kenneth L. Thompson; Ellen M. Robertson – HAPS Educator, 2024
Research has suggested that changing one's answer on multiple-choice examinations is more likely to lead to positive academic outcomes. This study aimed to further understand the relationship between changing answer selections and item attributes, student performance, and time within a population of 158 first-year medical students enrolled in a…
Descriptors: Anatomy, Science Tests, Medical Students, Medical Education
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
Maddox, Bryan – OECD Publishing, 2023
The digital transition in educational testing has introduced many new opportunities for technology to enhance large-scale assessments. These include the potential to collect and use log data on test-taker response processes routinely, and on a large scale. Process data has long been recognised as a valuable source of validation evidence in…
Descriptors: Measurement, Inferences, Test Reliability, Computer Assisted Testing
Jiang, Yang; Gong, Tao; Saldivia, Luis E.; Cayton-Hodges, Gabrielle; Agard, Christopher – Large-scale Assessments in Education, 2021
In 2017, the mathematics assessments that are part of the National Assessment of Educational Progress (NAEP) program underwent a transformation shifting the administration from paper-and-pencil formats to digitally-based assessments (DBA). This shift introduced new interactive item types that bring rich process data and tremendous opportunities to…
Descriptors: Data Use, Learning Analytics, Test Items, Measurement
Patel, Nirmal; Sharma, Aditya; Shah, Tirth; Lomas, Derek – Journal of Educational Data Mining, 2021
Process Analysis is an emerging approach to discover meaningful knowledge from temporal educational data. The study presented in this paper shows how we used Process Analysis methods on the National Assessment of Educational Progress (NAEP) test data for modeling and predicting student test-taking behavior. Our process-oriented data exploration…
Descriptors: Learning Analytics, National Competency Tests, Evaluation Methods, Prediction
Mingying Zheng – ProQuest LLC, 2024
The digital transformation in educational assessment has led to the proliferation of large-scale data, offering unprecedented opportunities to enhance language learning, and testing through machine learning (ML) techniques. Drawing on the extensive data generated by online English language assessments, this dissertation investigates the efficacy…
Descriptors: Artificial Intelligence, Computational Linguistics, Language Tests, English (Second Language)
Tran, Tich Phuoc; Meacheam, David – IEEE Transactions on Learning Technologies, 2020
The use of learning management systems (LMSs) for learning and knowledge sharing has accelerated quickly both in education and corporate worlds. Despite the benefits brought by LMSs, the current systems still face significant challenges, including the lack of automation in generating quiz questions and managing courses. Over the past decade, more…
Descriptors: Integrated Learning Systems, Test Construction, Test Items, Automation
Azevedo, Jose Manuel; Oliveira, Ema P.; Beites, Patrícia Damas – International Journal of Information and Learning Technology, 2019
Purpose: The purpose of this paper is to find appropriate forms of analysis of multiple-choice questions (MCQ) to obtain an assessment method, as fair as possible, for the students. The authors intend to ascertain if it is possible to control the quality of the MCQ contained in a bank of questions, implemented in Moodle, presenting some evidence…
Descriptors: Learning Analytics, Multiple Choice Tests, Test Theory, Item Response Theory