Showing 1 to 15 of 25 results
Peer reviewed
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2025
To mitigate the potential damaging consequences of rapid guessing (RG), a form of noneffortful responding, researchers have proposed a number of scoring approaches. The present simulation study examines the robustness of the most popular of these approaches, the unidimensional effort-moderated (EM) scoring procedure, to multidimensional RG (i.e.,…
Descriptors: Scoring, Guessing (Tests), Reaction Time, Item Response Theory
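As a rough illustration of the effort-moderated (EM) scoring idea the entry above refers to (a sketch under my own assumptions, not code from the indexed article): responses whose response time falls below a per-item rapid-guess threshold are treated as noneffortful and excluded before scoring.

```python
# Minimal sketch of effort-moderated (EM) scoring: responses with a
# response time below a per-item threshold are flagged as rapid guesses
# and dropped; the score is computed over effortful responses only.
# Threshold values here are hypothetical.

def em_score(responses, times, thresholds):
    """responses: 1/0 correctness; times: seconds per item;
    thresholds: per-item rapid-guess cutoffs."""
    effortful = [r for r, t, th in zip(responses, times, thresholds) if t >= th]
    # proportion correct over effortful responses only; None if all flagged
    return sum(effortful) / len(effortful) if effortful else None

# item 2 (1.1 s) falls below its 3.0 s threshold and is excluded
score = em_score([1, 0, 1, 1], [12.4, 1.1, 8.9, 15.0], [3.0, 3.0, 3.0, 3.0])
```

Under unidimensional EM scoring this filtering happens before ability estimation; the article studies how robust that procedure is when rapid guessing is multidimensional.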
Peer reviewed
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
Peer reviewed
Lúcio, Patrícia Silva; Vandekerckhove, Joachim; Polanczyk, Guilherme V.; Cogo-Moreira, Hugo – Journal of Psychoeducational Assessment, 2021
The present study compares the fit of two- and three-parameter logistic (2PL and 3PL) models of item response theory in the performance of preschool children on Raven's Colored Progressive Matrices. Raven's test is widely used for evaluating nonverbal intelligence (factor g). Studies comparing models with real data are scarce on the…
Descriptors: Guessing (Tests), Item Response Theory, Test Validity, Preschool Children
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
Changepoints are abrupt variations in a sequence of data in statistical inference. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
Wang, Chun; Xu, Gongjun; Shang, Zhuoran; Kuncel, Nathan – Journal of Educational and Behavioral Statistics, 2018
Modern web-based technology has greatly popularized computer-administered testing, also known as online testing. When these online tests are administered continuously within a certain "testing window," many items are likely to be exposed and compromised, posing a type of test security concern. In addition, if the testing time is limited,…
Descriptors: Computer Assisted Testing, Cheating, Guessing (Tests), Item Response Theory
Jing Lu; Chun Wang; Ningzhong Shi – Grantee Submission, 2023
In high-stakes, large-scale, standardized tests with time limits, examinees are likely to engage in one of three types of behavior (e.g., van der Linden & Guo, 2008; Wang & Xu, 2015): solution behavior, rapid guessing behavior, and cheating behavior. Examinees often do not solve all items due to various…
Descriptors: High Stakes Tests, Standardized Tests, Guessing (Tests), Cheating
Peer reviewed
Brassil, Chad E.; Couch, Brian A. – International Journal of STEM Education, 2019
Background: Within undergraduate science courses, instructors often assess student thinking using closed-ended question formats, such as multiple-choice (MC) and multiple-true-false (MTF), where students provide answers with respect to predetermined response options. While MC and MTF questions both consist of a question stem followed by a series…
Descriptors: Multiple Choice Tests, Objective Tests, Student Evaluation, Thinking Skills
Peer reviewed
Ames, Allison; Smith, Elizabeth – Journal of Educational Measurement, 2018
Bayesian methods incorporate model parameter information prior to data collection. Eliciting information from content experts is an option, but has seen little implementation in Bayesian item response theory (IRT) modeling. This study aims to use ethical reasoning content experts to elicit prior information and incorporate this information into…
Descriptors: Item Response Theory, Bayesian Statistics, Ethics, Specialists
Peer reviewed
Huff, Mark J.; Balota, David A.; Hutchison, Keith A. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
We examined whether 2 types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler…
Descriptors: Testing, Guessing (Tests), Memory, Retention (Psychology)
Peer reviewed
Culpepper, Steven Andrew – Journal of Educational and Behavioral Statistics, 2015
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Descriptors: Bayesian Statistics, Models, Sampling, Computation
Peer reviewed
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2013
The usefulness of the l_z person-fit index was investigated with achievement test data from 20 exams given to more than 3,200 college students. Results for three methods of estimating θ showed that the distributions of l_z were not consistent with its theoretical distribution, resulting in general overfit to the item response…
Descriptors: Achievement Tests, College Students, Goodness of Fit, Item Response Theory
Peer reviewed
Cao, Jing; Stokes, S. Lynne – Psychometrika, 2008
According to the recent Nation's Report Card, 12th-graders failed to produce gains on the 2005 National Assessment of Educational Progress (NAEP) despite earning better grades on average. One possible explanation is that 12th-graders were not motivated when taking the NAEP, which is a low-stakes test. We develop three Bayesian IRT mixture models to…
Descriptors: Test Items, Simulation, National Competency Tests, Item Response Theory
Peer reviewed
Morrison, Donald G.; Brockway, George – Psychometrika, 1979
A modified beta-binomial model is presented for use in analyzing random-guessing multiple-choice tests and taste tests. Detection probabilities for each item are distributed beta across the population of subjects. Properties of the observable distribution of correct responses are derived. Two concepts of true-score estimates are presented.…
Descriptors: Bayesian Statistics, Guessing (Tests), Mathematical Models, Multiple Choice Tests
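To make the beta-binomial setup in the entry above concrete (a sketch under my own assumptions, not the authors' model or code): if each subject's detection probability p is drawn from a Beta(a, b) distribution and, given p, the number of correct responses is Binomial(n, p), the observable distribution of correct counts is beta-binomial.

```python
# Beta-binomial distribution of correct responses: marginalize a
# Binomial(n, p) count over p ~ Beta(a, b). Parameter values are
# hypothetical, chosen only to illustrate the shape of the model.

import math

def log_beta(a, b):
    # log of the Beta function via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_pmf(k, n, a, b):
    """P(X = k) for X ~ BetaBinomial(n, a, b)."""
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# pmf over all possible correct counts for a 10-item test
pmf = [beta_binom_pmf(k, 10, 2.0, 2.0) for k in range(11)]
```

With a = b the distribution is symmetric about n/2; skewed choices of (a, b) shift mass toward high or low scores, which is what lets the model separate detection ability from guessing.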
Wang, Jianjun – 1995
Effects of blind guessing on the success of passing true-false and multiple-choice tests are investigated under a stochastic binomial model. Critical values of guessing are thresholds that signify when the effect of guessing is negligible. By checking a table of critical values assembled in this paper, one can make a decision with 95% confidence…
Descriptors: Bayesian Statistics, Grading, Guessing (Tests), Models
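The binomial model in the entry above is easy to sketch (assumptions mine; this is not the paper's table of critical values): with n items and per-item guessing probability p (1/2 for true-false, 1/m for m-option multiple choice), the chance of reaching a passing cutoff c by blind guessing alone is a binomial tail probability.

```python
# Stochastic binomial model of blind guessing: the probability of
# scoring at least c correct out of n items by guessing alone, with
# per-item success probability p. Cutoff and item counts are hypothetical.

import math

def p_pass_by_guessing(n, c, p):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(c, n + 1))

# e.g., a 20-item true-false test with a pass mark of 14 correct
tail = p_pass_by_guessing(20, 14, 0.5)
```

A critical value in the paper's sense is then the smallest cutoff c for which this tail probability drops below a chosen significance level, so that passing by guessing alone becomes implausible.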
Peer reviewed
Jensema, Carl J. – Educational and Psychological Measurement, 1974
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Instruction, Computer Programs