Publication Date

| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 5 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Bayesian Statistics | 5 |
| Behavior Patterns | 5 |
| Item Response Theory | 5 |
| Test Items | 4 |
| Cheating | 2 |
| Item Analysis | 2 |
| Models | 2 |
| Monte Carlo Methods | 2 |
| Reaction Time | 2 |
| Responses | 2 |
| Ability Grouping | 1 |
Author

| Author | Count |
| --- | --- |
| Brandon Zhang | 1 |
| Carson Keeter | 1 |
| Chun Wang | 1 |
| Douglas Clements | 1 |
| Harring, Jeffrey R. | 1 |
| Jing Lu | 1 |
| Julie Sarama | 1 |
| Lee, HyeSun | 1 |
| Man, Kaiwen | 1 |
| Ningzhong Shi | 1 |
| Pavel Chernyavskiy | 1 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Reports - Research | 4 |
| Journal Articles | 3 |
| Reports - Evaluative | 1 |
Education Level

| Education Level | Count |
| --- | --- |
| Elementary Education | 2 |
| Early Childhood Education | 1 |
| Grade 4 | 1 |
| Intermediate Grades | 1 |
| Kindergarten | 1 |
| Primary Education | 1 |
Assessments and Surveys

| Assessment | Count |
| --- | --- |
| National Assessment of… | 1 |
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2021
Many approaches have been proposed to jointly analyze item responses and response times to understand behavioral differences between normally and aberrantly behaved test-takers. Biometric information, such as data from eye trackers, can be used to better identify these deviant testing behaviors in addition to more conventional data types. Given…
Descriptors: Cheating, Item Response Theory, Reaction Time, Eye Movements
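The joint response/response-time framework this abstract builds on can be sketched with simulated data. The sketch below assumes a standard hierarchical setup (a 2PL model for responses, a lognormal model for response times, correlated person ability and speed); it is a generic illustration of that framework, not the authors' actual model, and every parameter value is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 20

# Person parameters: ability (theta) and speed (tau), positively correlated.
# The covariance values are illustrative only.
cov = np.array([[1.0, 0.4],
                [0.4, 0.25]])
theta, tau = rng.multivariate_normal([0.0, 0.0], cov, size=n_persons).T

# Item parameters: discrimination a, difficulty b, time intensity beta.
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)
beta = rng.normal(4.0, 0.5, n_items)

# 2PL response model: P(correct) = logistic(a_j * (theta_i - b_j))
p_correct = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = rng.binomial(1, p_correct)

# Lognormal response-time model: log T_ij = beta_j - tau_i + noise
log_rt = beta[None, :] - tau[:, None] + rng.normal(0.0, 0.3, (n_persons, n_items))
rts = np.exp(log_rt)
```

Because ability and speed are drawn jointly, the simulated data exhibit the response/response-time dependence that these joint models are designed to capture; fitting such a model would recover the person and item parameters from `responses` and `rts`.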
Shi Pu; Yu Yan; Brandon Zhang – Journal of Educational Data Mining, 2024
We propose a novel model, Wide & Deep Item Response Theory (Wide & Deep IRT), to predict the correctness of students' responses to questions using historical clickstream data. This model combines the strengths of conventional Item Response Theory (IRT) models and Wide & Deep Learning for Recommender Systems. By leveraging clickstream…
Descriptors: Prediction, Success, Data Analysis, Learning Analytics
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2025
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
Jing Lu; Chun Wang; Ningzhong Shi – Grantee Submission, 2023
In high-stakes, large-scale, standardized tests with time limits, examinees typically engage in one of three types of behavior (e.g., van der Linden & Guo, 2008; Wang & Xu, 2015): solution behavior, rapid guessing behavior, and cheating behavior. Examinees often do not solve all items due to various…
Descriptors: High Stakes Tests, Standardized Tests, Guessing (Tests), Cheating
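The behavior types above are often separated using response times: rapid guessers form a distinctly fast cluster on the log-RT scale. The sketch below illustrates that idea in simplified two-component form with a small EM fit of a Gaussian mixture; it is a generic illustration of the response-time mixture approach, not the authors' model, and all data are simulated with invented values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated log response times: most examinees show solution behavior
# (slow component); a minority rapid-guess (fast component).
log_rt = np.concatenate([
    rng.normal(4.0, 0.5, 900),   # solution behavior
    rng.normal(1.5, 0.4, 100),   # rapid guessing
])

# Two-component 1-D Gaussian mixture fitted by a short EM loop.
mu = np.array([1.0, 4.5])
sd = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each time
    dens = w * np.exp(-0.5 * ((log_rt[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations
    nk = resp.sum(axis=0)
    mu = (resp * log_rt[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (log_rt[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(log_rt)

# Component 0 (initialized fast) converges to the rapid-guessing cluster.
rapid_flag = resp[:, 0] > 0.5
```

A full model of the kind the abstract describes would additionally model the responses themselves (and possibly cheating behavior) jointly with the times; this sketch shows only the clustering step that makes rapid guessing detectable.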

