Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 4 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 9 |
Descriptor
| Bayesian Statistics | 13 |
| Responses | 13 |
| Test Items | 13 |
| Item Response Theory | 7 |
| Models | 5 |
| Behavior Patterns | 4 |
| Foreign Countries | 4 |
| Item Analysis | 4 |
| Simulation | 4 |
| Achievement Tests | 3 |
| Computation | 3 |
Source
| Educational and Psychological… | 3 |
| Journal of Educational and… | 3 |
| Grantee Submission | 2 |
| Alberta Journal of… | 1 |
| Applied Measurement in… | 1 |
| Applied Psychological… | 1 |
| Journal of Educational… | 1 |
| Psychometrika | 1 |
Author
| van der Linden, Wim J. | 2 |
| Abu-Ghazalah, Rashid M. | 1 |
| Carson Keeter | 1 |
| Chun Wang | 1 |
| Douglas Clements | 1 |
| Dubins, David N. | 1 |
| Gräfe, Linda | 1 |
| Harring, Jeffrey R. | 1 |
| Jing Lu | 1 |
| Jiwei Zhang | 1 |
| Julie Sarama | 1 |
Publication Type
| Journal Articles | 11 |
| Reports - Research | 9 |
| Reports - Evaluative | 4 |
Education Level
| Higher Education | 3 |
| Elementary Education | 2 |
| Postsecondary Education | 2 |
| Secondary Education | 2 |
| Early Childhood Education | 1 |
| Grade 5 | 1 |
| Intermediate Grades | 1 |
| Kindergarten | 1 |
| Middle Schools | 1 |
| Primary Education | 1 |
Assessments and Surveys
| Program for International… | 2 |
| Armed Services Vocational… | 1 |
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2023
Preknowledge cheating jeopardizes the validity of inferences based on test results. Many methods have been developed to detect preknowledge cheating by jointly analyzing item responses and response times. Gaze fixations, an essential eye-tracker measure, can be utilized to help detect aberrant testing behavior with improved accuracy beyond using…
Descriptors: Cheating, Reaction Time, Test Items, Responses
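A minimal sketch of the general idea of using response times to flag aberrant test-taking, assuming a lognormal response-time model with known item parameters; this is illustrative only and is not the joint response/response-time/gaze model studied by the authors.

```python
# Flag items answered correctly but unusually fast under an assumed lognormal
# response-time model (illustrative sketch, not the authors' method).
import numpy as np

rng = np.random.default_rng(0)
n_items = 20
beta = rng.normal(4.0, 0.3, n_items)      # assumed item time intensities (log-seconds)
log_rt = rng.normal(beta, 0.4)            # one examinee's observed log response times
correct = rng.random(n_items) < 0.7       # observed correctness

z = (log_rt - beta) / 0.4                 # standardized log-time residuals
flags = correct & (z < -2.0)              # correct responses that are suspiciously fast
print("flagged items:", np.flatnonzero(flags))
```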
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
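As a toy illustration of separating knowledge from guessing (a much simpler model than the ones evaluated in the article), one can assume an examinee either knows the answer or guesses uniformly among the m options, giving P(correct) = k + (1 - k)/m and a method-of-moments estimate of k.

```python
# Knowledge-or-random-guess model for an m-option multiple-choice item
# (assumed values; a sketch, not the authors' probabilistic models).
m = 4                      # options per item (assumed)
p_correct = 0.70           # observed proportion correct (illustrative)
chance = 1.0 / m
k_hat = (p_correct - chance) / (1.0 - chance)   # invert P = k + (1 - k)/m
print(f"estimated knowledge rate: {k_hat:.3f}")  # 0.600 here
```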
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2024
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory
A Sequential Bayesian Changepoint Detection Procedure for Aberrant Behaviors in Computerized Testing
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
In statistical inference, changepoints are abrupt variations in a sequence of data. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
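A minimal sketch of Bayesian changepoint detection on a sequence of log response times, assuming known pre- and post-change means and a uniform prior on the changepoint location; the paper's sequential algorithm is more general than this toy posterior computation.

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, mu1, sigma = 3.5, 2.5, 0.5          # assumed solution vs. rapid-guessing mean log-RT
y = np.concatenate([rng.normal(mu0, sigma, 15), rng.normal(mu1, sigma, 10)])

def norm_logpdf(x, mu, sd):
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2)

n = len(y)
log_post = np.empty(n + 1)               # changepoint after position k, k = 0..n
for k in range(n + 1):
    log_post[k] = norm_logpdf(y[:k], mu0, sigma).sum() + norm_logpdf(y[k:], mu1, sigma).sum()
log_post -= log_post.max()
post = np.exp(log_post) / np.exp(log_post).sum()
print("most probable changepoint:", post.argmax())   # true changepoint is at 15
```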
Lu, Jing; Wang, Chun – Journal of Educational Measurement, 2020
Item nonresponses are prevalent in standardized testing. They happen either when students fail to reach the end of a test due to a time limit or quitting, or when students choose to omit some items strategically. Oftentimes, item nonresponses are nonrandom, and hence, the missing data mechanism needs to be properly modeled. In this paper, we…
Descriptors: Item Response Theory, Test Items, Standardized Tests, Responses
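A small simulation of why a nonrandom (nonignorable) omission mechanism matters, under the assumption that lower-ability examinees omit more often; this illustrates the bias from analyzing only observed responses and is not the model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, b = 5000, 0.0
theta = rng.normal(0, 1, n)
p_correct = 1 / (1 + np.exp(-(theta - b)))          # Rasch item
x = rng.random(n) < p_correct
p_omit = 1 / (1 + np.exp(2.0 * theta))              # omission probability depends on ability
observed = rng.random(n) >= p_omit

print("true proportion correct:     ", x.mean().round(3))
print("proportion correct, observed:", x[observed].mean().round(3))   # overstated
```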
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
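A minimal sketch of the random-block idea with an assumed parameterization (not the BRB IRT model itself): each block of items shares a person-by-block random effect in the logit, which induces local dependence among items within the same block.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_blocks, items_per_block = 1000, 10, 3
theta = rng.normal(0, 1, n_persons)
b = rng.normal(0, 1, (n_blocks, items_per_block))       # item difficulties
u = rng.normal(0, 0.7, (n_persons, n_blocks))           # person-by-block random effects

logit = theta[:, None, None] + u[:, :, None] - b[None, :, :]
p = 1 / (1 + np.exp(-logit))
y = (rng.random(p.shape) < p).astype(int)               # persons x blocks x items
print("response array shape:", y.shape)
```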
Pohl, Steffi; Gräfe, Linda; Rose, Norman – Educational and Psychological Measurement, 2014
Data from competence tests usually show a number of missing responses on test items due to both omitted and not-reached items. Different approaches for dealing with missing responses exist, and there are no clear guidelines on which of those to use. While classical approaches rely on an ignorable missing data mechanism, the most recently developed…
Descriptors: Test Items, Achievement Tests, Item Response Theory, Models
van der Linden, Wim J. – Applied Psychological Measurement, 2009
An adaptive testing method is presented that controls the speededness of a test using predictions of the test takers' response times on the candidate items in the pool. Two different types of predictions are investigated: posterior predictions given the actual response times on the items already administered and posterior predictions that use the…
Descriptors: Simulation, Adaptive Testing, Vocational Aptitude, Bayesian Statistics
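A minimal sketch of controlling speededness with predicted response times, assuming a lognormal response-time model and an already-estimated examinee speed; the paper's posterior-predictive machinery is richer than this point-prediction filter.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pool = 50
beta = rng.normal(4.0, 0.3, n_pool)        # assumed item time intensities (log-seconds)
alpha = rng.uniform(1.5, 2.5, n_pool)      # assumed item-specific precision of log times
tau_hat = 0.2                              # examinee speed estimate from earlier items

# E[T] for a lognormal: exp(mu + sigma^2 / 2) with mu = beta - tau, sigma = 1 / alpha
expected_time = np.exp(beta - tau_hat + 0.5 / alpha**2)
remaining = 120.0                          # seconds available for the next item (illustrative)
eligible = np.flatnonzero(expected_time <= remaining)
print(f"{eligible.size} of {n_pool} candidate items fit the remaining time")
```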
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2008
Response times on items can be used to improve item selection in adaptive testing provided that a probabilistic model for their distribution is available. In this research, the author used a hierarchical modeling framework with separate first-level models for the responses and response times and a second-level model for the distribution of the…
Descriptors: Reaction Time, Law Schools, Adaptive Testing, Item Analysis
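A minimal sketch of the two-level structure described here, with assumed parameter values: a second-level bivariate normal for person ability and speed, and first-level models for responses (2PL) and response times (lognormal).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
cov = np.array([[1.0, 0.4], [0.4, 0.25]])              # assumed cov(theta, tau)
theta, tau = rng.multivariate_normal([0, 0], cov, n).T

a, b = 1.2, 0.0                                         # one item's discrimination/difficulty
beta, alpha = 4.0, 2.0                                  # its time intensity and precision
p = 1 / (1 + np.exp(-a * (theta - b)))
x = rng.random(n) < p                                   # first-level responses
log_t = rng.normal(beta - tau, 1 / alpha, n)            # first-level log response times

print("corr(theta, log T):", np.corrcoef(theta, log_t)[0, 1].round(2))
```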
Patz, Richard J.; Junker, Brian W. – Journal of Educational and Behavioral Statistics, 1999
Extends the basic Markov chain Monte Carlo (MCMC) strategy of R. Patz and B. Junker (1999) for Bayesian inference in complex Item Response Theory settings to address issues such as nonresponse, designed missingness, multiple raters, guessing behaviors, and partial credit (polytomous) test items. Applies the MCMC method to data from the National…
Descriptors: Bayesian Statistics, Item Response Theory, Markov Processes, Monte Carlo Methods
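A minimal MCMC sketch in the same spirit (random-walk Metropolis for one examinee's ability under a Rasch model with known item difficulties and a standard normal prior); the Patz-Junker strategy handles far more, including missingness, raters, and polytomous items.

```python
import numpy as np

rng = np.random.default_rng(6)
b = rng.normal(0, 1, 30)                       # known item difficulties (assumed)
theta_true = 0.8
x = rng.random(30) < 1 / (1 + np.exp(-(theta_true - b)))

def log_post(theta):
    p = 1 / (1 + np.exp(-(theta - b)))
    return np.sum(x * np.log(p) + (~x) * np.log(1 - p)) - 0.5 * theta**2

theta, draws = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.5)          # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)
print("posterior mean ability:", np.mean(draws[1000:]).round(2))
```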
Mislevy, Robert J. – Psychometrika, 1984
Assuming vectors of item responses depend on ability through a fully specified item response model, this paper presents maximum likelihood equations for estimating the population parameters without estimating an ability parameter for each subject. Asymptotic standard errors, tests of fit, computing approximations, and details of four special cases…
Descriptors: Bayesian Statistics, Estimation (Mathematics), Goodness of Fit, Latent Trait Theory
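A minimal sketch of the marginal-likelihood idea behind this approach (not the paper's estimation equations): the probability of a response pattern under a Rasch model with a standard normal ability population, integrating ability out by Gauss-Hermite quadrature.

```python
import numpy as np

b = np.array([-1.0, 0.0, 1.0])                 # item difficulties (assumed)
x = np.array([1, 1, 0])                        # one response pattern

nodes, weights = np.polynomial.hermite.hermgauss(31)
theta = np.sqrt(2.0) * nodes                   # change of variables for N(0, 1)
w = weights / np.sqrt(np.pi)

p = 1 / (1 + np.exp(-(theta[:, None] - b)))    # quadrature points x items
lik = np.prod(p**x * (1 - p)**(1 - x), axis=1)
print("marginal pattern probability:", float(w @ lik))
```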
van Barneveld, Christina – Alberta Journal of Educational Research, 2003
The purpose of this study was to examine the potential effect of false assumptions regarding the motivation of examinees on item calibration and test construction. A simulation study was conducted using data generated by means of several models of examinee item response behaviors (the three-parameter logistic model alone and in combination with…
Descriptors: Simulation, Motivation, Computation, Test Construction
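A minimal data-generation sketch of the kind of false motivation assumption at issue, using assumed settings rather than the study's design: responses follow a 3PL model, but a fraction of unmotivated examinees respond at chance, which depresses the observed p-values that feed item calibration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, a, b, c = 10000, 1.0, 0.0, 0.2
theta = rng.normal(0, 1, n)
p_3pl = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
x = rng.random(n) < p_3pl

unmotivated = rng.random(n) < 0.15              # assumed 15% respond at chance
x[unmotivated] = rng.random(unmotivated.sum()) < 0.25

print("p-value without low motivation:", p_3pl.mean().round(3))
print("p-value with low motivation:   ", x.mean().round(3))
```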
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2006
Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of work on assessing the fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess the fit of simple Bayesian networks. A…
Descriptors: Models, Educational Assessment, Diagnostic Tests, Evaluation Methods
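A minimal sketch of posterior predictive model checking in a toy Beta-Bernoulli setting (not the Bayesian-network checks in the article): draw parameters from the posterior, simulate replicated data, and compare a discrepancy statistic to its observed value.

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.random(200) < 0.3                        # observed binary data (illustrative)
obs_stat = np.sum(x[:-1] == x[1:])               # discrepancy: adjacent agreements

post_p = rng.beta(1 + x.sum(), 1 + (~x).sum(), 2000)   # Beta(1, 1) posterior draws
rep_stats = np.empty(2000)
for i, p in enumerate(post_p):
    rep = rng.random(200) < p                    # replicated data set
    rep_stats[i] = np.sum(rep[:-1] == rep[1:])

ppp = (rep_stats >= obs_stat).mean()             # posterior predictive p-value
print("posterior predictive p-value:", round(ppp, 2))
```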