| Publication Date | Count |
|---|---|
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 12 |
| Since 2017 (last 10 years) | 16 |
| Since 2007 (last 20 years) | 16 |
| Descriptor | Count |
|---|---|
| Item Analysis | 16 |
| Learning Analytics | 16 |
| Test Items | 10 |
| Computer Software | 6 |
| Item Response Theory | 6 |
| Multiple Choice Tests | 6 |
| Accuracy | 5 |
| Computer Assisted Testing | 5 |
| Mathematics Tests | 5 |
| Evaluation Methods | 4 |
| Models | 4 |
| Author | Count |
|---|---|
| Anna Lucia Paoletti | 1 |
| Bakla, Arif | 1 |
| Baron, Patricia | 1 |
| Bernard Veldkamp | 1 |
| Bhashithe Abeysinghe | 1 |
| Boxuan Ma | 1 |
| Chu, Wei | 1 |
| Congning Ni | 1 |
| Donatella Firmani | 1 |
| Dywel, Malwina | 1 |
| Effenberger, Tomáš | 1 |
| Publication Type | Count |
|---|---|
| Reports - Research | 13 |
| Journal Articles | 10 |
| Speeches/Meeting Papers | 3 |
| Dissertations/Theses -… | 2 |
| Numerical/Quantitative Data | 1 |
| Reports - Descriptive | 1 |
| Education Level | Count |
|---|---|
| Secondary Education | 4 |
| Elementary Education | 3 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Grade 8 | 1 |
| High Schools | 1 |
| Higher Education | 1 |
| Postsecondary Education | 1 |
| Location | Count |
|---|---|
| California | 1 |
| France | 1 |
| Italy | 1 |
| South Korea | 1 |
| Assessments and Surveys | Count |
|---|---|
| National Assessment of… | 2 |
| Program for International… | 1 |
Eva de Schipper; Remco Feskens; Franck Salles; Saskia Keskpaik; Reinaldo dos Santos; Bernard Veldkamp; Paul Drijvers – Large-scale Assessments in Education, 2025
Background: Students take many tests and exams during their school career, but they usually receive feedback about their test performance based only on an analysis of the item responses. With the increase in digital assessment, other data have become available for analysis as well, such as log data of student actions in online assessment…
Descriptors: Problem Solving, Mathematics Instruction, Learning Analytics, Identification
Valentina Albano; Donatella Firmani; Luigi Laura; Jerin George Mathew; Anna Lucia Paoletti; Irene Torrente – Journal of Learning Analytics, 2023
Multiple-choice questions (MCQs) are widely used in educational assessments and professional certification exams. Managing large repositories of MCQs, however, poses several challenges due to the high volume of questions and the need to maintain their quality and relevance over time. One of these challenges is the presence of questions that…
Descriptors: Natural Language Processing, Multiple Choice Tests, Test Items, Item Analysis
Marli Crabtree; Kenneth L. Thompson; Ellen M. Robertson – HAPS Educator, 2024
Research has suggested that changing one's answer on multiple-choice examinations is more likely to lead to positive academic outcomes. This study aimed to further understand the relationship between changing answer selections and item attributes, student performance, and time within a population of 158 first-year medical students enrolled in a…
Descriptors: Anatomy, Science Tests, Medical Students, Medical Education
Tsutsumi, Emiko; Kinoshita, Ryo; Ueno, Maomi – International Educational Data Mining Society, 2021
Knowledge tracing (KT), the task of tracking the knowledge state of each student over time, has been studied actively by artificial intelligence researchers. Recent reports have described that Deep-IRT, which combines Item Response Theory (IRT) with a deep learning model, provides superior performance. It can express the abilities of each student…
Descriptors: Item Response Theory, Prediction, Accuracy, Artificial Intelligence
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
Patel, Nirmal; Sharma, Aditya; Shah, Tirth; Lomas, Derek – Journal of Educational Data Mining, 2021
Process Analysis is an emerging approach to discover meaningful knowledge from temporal educational data. The study presented in this paper shows how we used Process Analysis methods on the National Assessment of Educational Progress (NAEP) test data for modeling and predicting student test-taking behavior. Our process-oriented data exploration…
Descriptors: Learning Analytics, National Competency Tests, Evaluation Methods, Prediction
Boxuan Ma; Sora Fukui; Yuji Ando; Shinichi Konomi – Journal of Educational Data Mining, 2024
Language proficiency diagnosis is essential to extract fine-grained information about the linguistic knowledge states and skill mastery levels of test takers based on their performance on language tests. Unlike comprehensive standardized tests, many language learning apps revolve around word-level questions. Therefore, knowledge…
Descriptors: Language Proficiency, Brain Hemisphere Functions, Language Processing, Task Analysis
Paaßen, Benjamin; Dywel, Malwina; Fleckenstein, Melanie; Pinkwart, Niels – International Educational Data Mining Society, 2022
Item response theory (IRT) is a popular method to infer student abilities and item difficulties from observed test responses. However, IRT struggles with two challenges: How to map items to skills if multiple skills are present? And how to infer the ability of new students that have not been part of the training data? Inspired by recent advances…
Descriptors: Item Response Theory, Test Items, Item Analysis, Inferences
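The abstract above describes standard IRT as inferring student abilities and item difficulties from observed responses. As a point of reference only, the sketch below illustrates the simplest such model, the one-parameter (Rasch) model, fitted by gradient ascent on a 0/1 response matrix; it is not the model proposed by Paaßen et al., and all function and variable names are hypothetical.

```python
import numpy as np

def p_correct(theta, b):
    """Rasch-model probability that a student with ability `theta`
    answers an item of difficulty `b` correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def fit_rasch(responses, n_iter=500, lr=0.05):
    """Toy joint fit of abilities and difficulties from a
    (students x items) 0/1 response matrix via gradient ascent."""
    n_students, n_items = responses.shape
    theta = np.zeros(n_students)          # student abilities
    b = np.zeros(n_items)                 # item difficulties
    for _ in range(n_iter):
        p = p_correct(theta[:, None], b[None, :])
        resid = responses - p             # Bernoulli log-likelihood gradient
        theta += lr * resid.sum(axis=1)   # dL/d(theta_i) = sum_j (r_ij - p_ij)
        b -= lr * resid.sum(axis=0)       # dL/d(b_j) = -sum_i (r_ij - p_ij)
    return theta, b
```

Production IRT software adds regularization or marginal estimation; this sketch only shows the core ability-difficulty coupling underlying the skill-mapping and new-student challenges the abstract raises.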
Rina Levy Cohen – ProQuest LLC, 2022
The aim of this study was to examine the relationship between common classroom help-seeking determinants (achievement goals, self-efficacy, prior knowledge, gender, and help-seeking perceptions) and help-seeking behaviors online (hint use percentage, latency of help seeking, answer attempt percentage, feedback level percentage, and seeking help…
Descriptors: Correlation, Help Seeking, Self Efficacy, Prior Learning
Hong, Jeehye; Kim, Hyunjung; Hong, Hun-Gi – Asia-Pacific Science Education, 2022
This study explored science-related variables that affect the prediction of science achievement groups by applying the educational data mining (EDM) method of random forest analysis to extract factors associated with students categorized into three achievement groups (high, moderate, and low) in the Korean data from the 2015…
Descriptors: Science Achievement, Prediction, Teaching Methods, Science Teachers
Jechun An – ProQuest LLC, 2024
Students' responses to Word Dictation curriculum-based measurement (CBM) in writing tend to include a lot of missing values, especially items not reached due to the three-minute test time limit. A large amount of non-ignorable not-reached responses in Word Dictation can be considered using alternative item response theory (IRT) approaches. In…
Descriptors: Item Response Theory, Elementary School Students, Writing Difficulties, Writing Evaluation
Pelánek, Radek; Effenberger, Tomáš; Kukucka, Adam – Journal of Educational Data Mining, 2022
We study the automatic identification of educational items worthy of content authors' attention. Based on the results of such analysis, content authors can revise and improve the content of learning environments. We provide an overview of item properties relevant to this task, including difficulty and complexity measures, item discrimination, and…
Descriptors: Item Analysis, Identification, Difficulty Level, Case Studies
Chu, Wei; Pavlik, Philip I., Jr. – International Educational Data Mining Society, 2023
In adaptive learning systems, various models are employed to obtain the optimal learning schedule and review for a specific learner. Models of learning are used to estimate the learner's current recall probability by incorporating features or predictors proposed by psychological theory or empirically relevant to learners' performance. Logistic…
Descriptors: Reaction Time, Accuracy, Models, Predictor Variables
Çekiç, Ahmet; Bakla, Arif – International Online Journal of Education and Teaching, 2021
The Internet and mobile software stores offer a huge number of digital tools for almost any task, and tools intended for digital formative assessment (DFA) have burgeoned in the last decade. These tools vary in functionality, pedagogical quality, cost, operating system, and so forth. Teachers and learners…
Descriptors: Formative Evaluation, Futures (of Society), Computer Assisted Testing, Guidance
