Showing 1 to 15 of 35 results
Peer reviewed
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2023
Preknowledge cheating jeopardizes the validity of inferences based on test results. Many methods have been developed to detect preknowledge cheating by jointly analyzing item responses and response times. Gaze fixations, an essential eye-tracker measure, can be utilized to help detect aberrant testing behavior with improved accuracy beyond using…
Descriptors: Cheating, Reaction Time, Test Items, Responses
Peer reviewed
Zhu, Hongyue; Jiao, Hong; Gao, Wei; Meng, Xiangbin – Journal of Educational and Behavioral Statistics, 2023
Change-point analysis (CPA) is a method for detecting abrupt changes in the parameter(s) underlying a sequence of random variables. It has been applied to detect examinees' aberrant test-taking behavior by identifying abrupt changes in test performance. Previous studies utilized maximum likelihood estimates of ability parameters, focusing on detecting…
Descriptors: Bayesian Statistics, Test Wiseness, Behavior Problems, Reaction Time
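To make the CPA idea behind the entry above concrete, here is a minimal generic sketch (not the Bayesian estimator the paper develops; all names and parameters are illustrative): it scans every candidate split of a 0/1 response sequence and compares a single overall success rate against separate before/after rates via a log-likelihood ratio.

```python
import numpy as np

def cpa_likelihood_ratio(responses):
    """Scan candidate change points in a 0/1 sequence; return the split
    with the largest log-likelihood-ratio statistic."""
    y = np.asarray(responses, dtype=float)
    n = len(y)

    def loglik(seg):
        p = seg.mean()
        if p == 0.0 or p == 1.0:   # degenerate segment: likelihood is 1
            return 0.0
        return float(np.sum(seg * np.log(p) + (1 - seg) * np.log(1 - p)))

    base = loglik(y)               # single-rate (no-change) model
    stats = {k: 2 * (loglik(y[:k]) + loglik(y[k:]) - base)
             for k in range(2, n - 1)}
    k_hat = max(stats, key=stats.get)
    return k_hat, stats[k_hat]

# Example: success rate drops after item 30 (e.g., onset of rapid guessing).
rng = np.random.default_rng(1)
seq = np.concatenate([rng.binomial(1, 0.8, 30), rng.binomial(1, 0.3, 20)])
print(cpa_likelihood_ratio(seq))
```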
Peer reviewed
Fu, Qiang; Guo, Xin; Land, Kenneth C. – Sociological Methods & Research, 2020
Count responses with grouping and right censoring have long been used in surveys to study a variety of behaviors, statuses, and attitudes. Yet grouping and right-censoring decisions for count responses still rely on arbitrary choices made by researchers. We develop a new method for evaluating grouping and right-censoring decisions of count responses…
Descriptors: Surveys, Artificial Intelligence, Evaluation Methods, Probability
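As a concrete illustration of the modeling issue in the entry above, the sketch below writes down the log-likelihood of grouped and right-censored count responses under a simple Poisson model. It is generic, not the authors' method; the bin layout and names are assumptions.

```python
import numpy as np
from scipy.stats import poisson

def grouped_count_loglik(lam, bins):
    """Log-likelihood of grouped / right-censored counts under Poisson(lam).
    Each bin is (lo, hi); hi=None marks the open-ended top category."""
    ll = 0.0
    for lo, hi in bins:
        if hi is None:                 # right-censored: P(Y >= lo)
            ll += np.log(poisson.sf(lo - 1, lam))
        else:                          # grouped: P(lo <= Y <= hi)
            ll += np.log(poisson.cdf(hi, lam) - poisson.cdf(lo - 1, lam))
    return ll

# One respondent answered "3-5 times", another "10 or more".
print(grouped_count_loglik(4.0, [(3, 5), (10, None)]))
```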
Peer reviewed
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple-choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
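For intuition about the knowledge/guessing decomposition in the entry above, the classic knowledge-plus-guessing mixture below is a generic baseline for an m-option item, not the blunder-aware models the paper evaluates; the function names are illustrative.

```python
# P(correct) = k + (1 - k) / m, where k is the probability the examinee
# knows the answer and non-knowers guess uniformly among m options.
def p_correct(k, m):
    return k + (1.0 - k) / m

def k_from_score(p_hat, m):
    """Invert the mixture to estimate knowledge from an observed score."""
    return (p_hat - 1.0 / m) / (1.0 - 1.0 / m)

print(p_correct(0.6, 4))       # 0.7
print(k_from_score(0.7, 4))    # 0.6 (recovers k)
```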
Peer reviewed
Jin, Kuan-Yu; Wu, Yi-Jhen; Chen, Hui-Fang – Journal of Educational and Behavioral Statistics, 2022
For surveys of complex issues that entail multiple steps, multiple reference points, and nongradient attributes (e.g., social inequality), this study proposes a new multiprocess model that integrates ideal-point and dominance approaches into a treelike structure (IDtree). In the IDtree, an ideal-point approach describes an individual's attitude…
Descriptors: Likert Scales, Item Response Theory, Surveys, Responses
Peer reviewed
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2024
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
Changepoints are abrupt variations in a sequence of data in statistical inference. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
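A minimal conjugate sketch of the Bayesian changepoint idea in the entry above: assuming a Beta prior on each segment's success rate and a uniform prior over locations, the Bernoulli likelihood integrates out in closed form, giving a posterior over a single change point. This is a toy, not the paper's sequential algorithm.

```python
import numpy as np
from scipy.special import betaln

def changepoint_posterior(y, a=1.0, b=1.0):
    """Posterior over a single change point in a 0/1 sequence, using
    Beta(a, b) priors on segment success rates (marginalized in closed
    form) and a uniform prior over change-point locations."""
    y = np.asarray(y, dtype=int)
    n = len(y)

    def seg_marginal(seg):
        s, m = seg.sum(), len(seg)
        return betaln(a + s, b + m - s) - betaln(a, b)

    logpost = np.array([seg_marginal(y[:k]) + seg_marginal(y[k:])
                        for k in range(1, n)])
    logpost -= logpost.max()              # stabilize before exponentiating
    post = np.exp(logpost)
    return post / post.sum()              # posterior over k = 1..n-1

rng = np.random.default_rng(7)
seq = np.concatenate([rng.binomial(1, 0.85, 25), rng.binomial(1, 0.25, 25)])
post = changepoint_posterior(seq)
print("MAP change point:", int(post.argmax()) + 1)
```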
Peer reviewed
Lu, Jing; Wang, Chun – Journal of Educational Measurement, 2020
Item nonresponses are prevalent in standardized testing. They happen either when students fail to reach the end of a test due to a time limit or quitting, or when students choose to omit some items strategically. Oftentimes, item nonresponses are nonrandom, and hence, the missing data mechanism needs to be properly modeled. In this paper, we…
Descriptors: Item Response Theory, Test Items, Standardized Tests, Responses
Peer reviewed
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
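The testlet idea underlying the BRB IRT model in the entry above can be illustrated by simulation: a person-by-block random effect keeps responses within a block correlated even after conditioning on ability. The sketch below is generic (dichotomous items rather than the paper's forced-choice blocks); all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_blocks, items_per_block = 500, 10, 3

theta = rng.normal(0, 1, n_persons)                  # person ability
b = rng.normal(0, 1, (n_blocks, items_per_block))    # item difficulty
# Person-by-block random effect: shared within a block, so responses to
# items in the same block stay correlated even conditional on theta.
gamma = rng.normal(0, 0.7, (n_persons, n_blocks))

logits = theta[:, None, None] - b[None, :, :] + gamma[:, :, None]
p = 1 / (1 + np.exp(-logits))
responses = rng.binomial(1, p)       # persons x blocks x items
print(responses.shape, responses.mean().round(3))
```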
Suh, Youngsuk; Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2018
This article presents a multilevel longitudinal nested logit model for analyzing correct response and error types in multilevel longitudinal intervention data collected under a pretest-posttest, cluster randomized trial design. The use of the model is illustrated with a real data analysis, including a model comparison study regarding model…
Descriptors: Hierarchical Linear Modeling, Longitudinal Studies, Error Patterns, Change
Peer reviewed
Warren, Aaron R. – Physical Review Physics Education Research, 2020
The evaluation of hypotheses, and the ability to learn from critical reflection on experimental and theoretical tests of those hypotheses, is central to an authentic practice of physics. A large part of physics education therefore seeks to help students understand the significance of this kind of reflective practice and to develop the strategies…
Descriptors: Epistemology, Bayesian Statistics, Physics, Science Instruction
Peer reviewed
Golubickis, Marius; Falben, Johanna K.; Cunningham, William A.; Macrae, C. Neil – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2018
Although ownership is acknowledged to exert a potent influence on various aspects of information processing, the origin of these effects remains largely unknown. Based on the demonstration that self-relevance facilitates perceptual judgments (i.e., the self-prioritization effect), here we explored the possibility that ownership enhances object…
Descriptors: Ownership, Self Concept, Stimuli, Responses
Peer reviewed
Braem, Senne; Liefooghe, Baptist; De Houwer, Jan; Brass, Marcel; Abrahamse, Elger L. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2017
Unlike other animals, humans have the unique ability to share and use verbal instructions to prepare for upcoming tasks. Recent research showed that instructions are sufficient for the automatic, reflex-like activation of responses. However, systematic studies into the limits of these automatic effects of task instructions remain relatively…
Descriptors: Responses, Context Effect, Visual Stimuli, Performance
Peer reviewed; full text available on ERIC
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
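As a point of reference for the model class discussed in the entry above, the sketch below fits a small 2PL IRT model by joint maximum likelihood with scipy. It illustrates the likelihood only and is not the scalable approach the paper investigates; the simulated data and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate persons x items 0/1 responses from a 2PL model:
# P(correct) = sigmoid(a_j * (theta_i - b_j)).
rng = np.random.default_rng(3)
P, I = 200, 20
theta_true = rng.normal(0, 1, P)
a_true, b_true = rng.lognormal(0, 0.3, I), rng.normal(0, 1, I)
prob = 1 / (1 + np.exp(-a_true * (theta_true[:, None] - b_true)))
Y = rng.binomial(1, prob)

def negloglik(params):
    theta, a, b = params[:P], params[P:P + I], params[P + I:]
    logits = a * (theta[:, None] - b)
    # -log P(Y | logits) for Bernoulli outcomes, summed over all cells:
    # log(1 + e^logit) - y * logit
    return np.sum(np.logaddexp(0, logits) - Y * logits)

x0 = np.zeros(P + 2 * I)
x0[P:P + I] = 1.0                      # start discrimination slopes at 1
fit = minimize(negloglik, x0, method="L-BFGS-B")
print("converged:", fit.success)
```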
Peer reviewed
Eckes, Thomas; Baghaei, Purya – Applied Measurement in Education, 2015
C-tests are gap-filling tests widely used to assess general language proficiency for purposes of placement, screening, or provision of feedback to language learners. C-tests consist of several short texts in which parts of words are missing. We addressed the issue of local dependence in C-tests using an explicit modeling approach based on testlet…
Descriptors: Language Proficiency, Language Tests, Item Response Theory, Test Reliability