Showing 1 to 15 of 17 results
Peer reviewed
Direct link
Militsa G. Ivanova; Hanna Eklöf; Michalis P. Michaelides – Journal of Applied Testing Technology, 2025
Digital administration of assessments allows for the collection of process data indices, such as response time, which can serve as indicators of rapid guessing and examinee test-taking effort. Setting a time threshold is essential to distinguish effortful from effortless behavior using item response times. Threshold identification methods may…
Descriptors: Test Items, Computer Assisted Testing, Reaction Time, Achievement Tests
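As a rough, runnable illustration of the thresholding idea described in this abstract (a sketch under common conventions from the test-taking-effort literature, not the authors' procedure; the 3-second constant and the 10%-of-item-mean rule are assumptions here), flagging likely rapid guesses might look like this:

```python
# Illustrative sketch: flag responses faster than an item-level time
# threshold as likely rapid guesses. The constants are assumptions.
import numpy as np

def flag_rapid_guesses(rt, method="nt10", fixed_threshold=3.0):
    """Return a boolean (n_examinees, n_items) matrix of flagged responses.

    rt     : response-time matrix in seconds.
    method : "fixed" applies one constant threshold to every item;
             "nt10" sets each item's threshold to 10% of its mean RT
             (a common normative convention, assumed here).
    """
    rt = np.asarray(rt, dtype=float)
    if method == "fixed":
        thresholds = np.full(rt.shape[1], fixed_threshold)
    elif method == "nt10":
        thresholds = 0.10 * rt.mean(axis=0)
    else:
        raise ValueError(f"unknown method: {method}")
    return rt < thresholds  # True = likely effortless (rapid) response

def response_time_effort(rt, **kwargs):
    """Per-examinee share of responses *not* flagged as rapid guesses."""
    return 1.0 - flag_rapid_guesses(rt, **kwargs).mean(axis=1)
```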
Peer reviewed
Direct link
Melissa Dan Wang – Large-scale Assessments in Education, 2025
Although self-report surveys are widely used for data collection, data quality can vary across populations because certain groups are more likely to engage in insufficient effort responding (IER). Our study examined how different levels of the educational system (student groups, schools, and cultural contexts) affect data quality due to IER, using…
Descriptors: Responses, Response Style (Tests), Questionnaires, Surveys
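A simple screening index often used for IER (shown here as a generic illustration, not this study's multilevel analysis) is the "longstring": the longest run of identical consecutive answers per respondent.

```python
# Illustrative sketch: the "longstring" IER screen, i.e. the longest run
# of identical consecutive answers per respondent.
import numpy as np

def longstring(responses):
    """Longest run of identical consecutive values in each row."""
    out = []
    for row in np.asarray(responses):
        best = run = 1
        for prev, cur in zip(row[:-1], row[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        out.append(best)
    return np.array(out)

# The second (straight-lining) respondent stands out.
print(longstring([[1, 2, 4, 3, 5, 2],
                  [3, 3, 3, 3, 3, 1]]))  # -> [1 5]
```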
Peer reviewed
Direct link
Ivanova, Militsa; Michaelides, Michalis; Eklöf, Hanna – Educational Research and Evaluation, 2020
Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. In this context, the number of actions students take while interacting with an item is a variable that has been gaining attention. The present study aims to investigate how the number of actions performed on…
Descriptors: Foreign Countries, Secondary School Students, Achievement Tests, International Assessment
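For readers unfamiliar with process data, counting actions per item typically reduces to a group-by over an event-level log. The sketch below assumes a toy log format, not the actual PISA log-file schema:

```python
# Illustrative sketch: derive a per-student, per-item action count from an
# event-level process-data log (toy schema, assumed for illustration).
import pandas as pd

log = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2"],
    "item":    ["i1", "i1", "i2", "i1", "i1"],
    "event":   ["click", "drag", "click", "click", "submit"],
})

n_actions = (log.groupby(["student", "item"])
                .size()
                .rename("n_actions")
                .reset_index())
print(n_actions)  # s1/i1 -> 2, s1/i2 -> 1, s2/i1 -> 2
```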
Peer reviewed
Direct link
Ulitzsch, Esther; Domingue, Benjamin W.; Kapoor, Radhika; Kanopka, Klint; Rios, Joseph A. – Educational Measurement: Issues and Practice, 2023
Common response-time-based approaches for non-effortful response behavior (NRB) in educational achievement tests filter responses that are associated with response times below some threshold. These approaches are, however, limited in that they require a binary decision on whether a response is classified as stemming from NRB, thus ignoring…
Descriptors: Reaction Time, Responses, Behavior, Achievement Tests
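One generic way to move past a hard threshold (a sketch of the general mixture idea, not the authors' specific model) is to fit a two-component mixture to log response times and treat each response's posterior probability of belonging to the slower component as a soft effort classification:

```python
# Illustrative sketch: a soft (probabilistic) alternative to hard RT
# thresholds via a two-component mixture on log response times.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated log-RTs: a fast non-effortful cluster and a slow effortful one.
log_rt = np.concatenate([rng.normal(0.8, 0.3, 60),
                         rng.normal(3.0, 0.5, 240)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(log_rt)
slow = int(gm.means_.argmax())              # index of the effortful component
p_effortful = gm.predict_proba(log_rt)[:, slow]
# p_effortful can down-weight responses in scoring instead of filtering them.
```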
Peer reviewed
Direct link
Lu, Jing; Wang, Chun – Journal of Educational Measurement, 2020
Item nonresponses are prevalent in standardized testing. They occur either when students fail to reach the end of a test, due to a time limit or quitting, or when students choose to omit some items strategically. Oftentimes, item nonresponses are nonrandom, and hence the missing data mechanism needs to be properly modeled. In this paper, we…
Descriptors: Item Response Theory, Test Items, Standardized Tests, Responses
Peer reviewed
Direct link
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
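The core recoding step behind tree-based IRT models of omissions can be sketched as follows (a minimal two-node tree, assumed here for illustration; the article's framework is more general): each observed outcome is expanded into binary pseudo-items, one per tree node.

```python
# Illustrative sketch: expand omitted/incorrect/correct outcomes into two
# binary pseudo-items, one per node of a minimal response-omission tree.
import numpy as np

# Observed categories: 0 = omitted, 1 = incorrect, 2 = correct.
NODE1 = {0: 0, 1: 1, 2: 1}       # node 1: did the examinee respond?
NODE2 = {0: np.nan, 1: 0, 2: 1}  # node 2: if so, was the response correct?

def to_pseudo_items(responses):
    """Map an (n_persons, n_items) category matrix to two node matrices;
    NaN marks nodes that were never reached (structurally missing)."""
    responses = np.asarray(responses)
    node1 = np.vectorize(NODE1.get, otypes=[float])(responses)
    node2 = np.vectorize(NODE2.get, otypes=[float])(responses)
    return node1, node2

node1, node2 = to_pseudo_items([[2, 0, 1],
                                [1, 2, 0]])
# Each node matrix can then be fit with a binary IRT model; correlating the
# person parameters across nodes captures nonignorable missingness.
```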
Peer reviewed
PDF on ERIC
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
In statistical inference, changepoints are abrupt variations in a sequence of data. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
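As a toy illustration of changepoint detection on response times (a generic single-changepoint posterior under a known-variance normal model and a flat prior, both assumptions; not the paper's sequential Bayesian algorithm):

```python
# Illustrative sketch: posterior over a single mean-shift point in one
# examinee's log response times (known sigma, flat prior; both assumptions).
import numpy as np
from scipy.stats import norm

def changepoint_posterior(log_rt, sigma=0.5):
    x = np.asarray(log_rt, dtype=float)
    n = len(x)
    loglik = np.full(n, -np.inf)
    for k in range(2, n - 1):  # require at least 2 points per segment
        loglik[k] = (norm.logpdf(x[:k], x[:k].mean(), sigma).sum()
                     + norm.logpdf(x[k:], x[k:].mean(), sigma).sum())
    post = np.exp(loglik - loglik.max())
    return post / post.sum()

# Simulated examinee: effortful responding, then a switch to rapid guessing.
rng = np.random.default_rng(0)
log_rt = np.concatenate([rng.normal(3.0, 0.5, 25),   # solution behavior
                         rng.normal(1.0, 0.5, 15)])  # rapid guessing
print(changepoint_posterior(log_rt).argmax())        # likely near item 25
```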
Peer reviewed
Direct link
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
Peer reviewed
Direct link
Ercikan, Kadriye; Guo, Hongwen; He, Qiwei – Educational Assessment, 2020
Comparing groups is one of the key uses of large-scale assessment results, which are used to gain insights that inform policy and practice and to examine the comparability of scores and score meaning. Such comparisons typically focus on examinees' final answers and responses to test questions, ignoring response process differences groups may engage…
Descriptors: Data Use, Responses, Comparative Analysis, Test Bias
Peer reviewed
Direct link
Zehner, Fabian; Goldhammer, Frank; Lubaway, Emily; Sälzer, Christine – Education Inquiry, 2019
In 2015, the "Programme for International Student Assessment" (PISA) introduced multiple changes in its study design, the most extensive being the transition from paper- to computer-based assessment. We investigated the differences between German students' text responses to eight reading items from the paper-based study in 2012 and text…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
PDF on ERIC
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias – ETS Research Report Series, 2017
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Lu, Yi – ProQuest LLC, 2012
Cross-national comparisons of responses to survey items are often affected by response style, particularly extreme response style (ERS). ERS varies across cultures, and has the potential to bias inferences in cross-national comparisons. For example, in both PISA and TIMSS assessments, it has been documented that when examined within countries,…
Descriptors: Item Response Theory, Attitude Measures, Response Style (Tests), Cultural Differences
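A simple descriptive index of ERS (an illustration only; model-based approaches such as the IRT methods studied in this dissertation go further) is the share of a respondent's answers falling in the endpoint categories:

```python
# Illustrative sketch: descriptive extreme-response-style index, the share
# of each respondent's answers in the endpoint categories of a Likert scale.
import numpy as np

def ers_index(responses, n_categories=5):
    r = np.asarray(responses)
    extreme = (r == 1) | (r == n_categories)
    return extreme.mean(axis=1)

# Six 5-point items; the second respondent shows much stronger ERS.
print(ers_index([[2, 3, 4, 3, 2, 4],
                 [1, 5, 5, 1, 2, 5]]))  # -> [0.     0.8333]
```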
Sahin, Füsun – ProQuest LLC, 2017
Examining the testing processes, as well as the scores, is needed for a complete understanding of validity and fairness of computer-based assessments. Examinees' rapid-guessing and insufficient familiarity with computers have been found to be major issues that weaken the validity arguments of scores. This study has three goals: (a) improving…
Descriptors: Computer Assisted Testing, Evaluation Methods, Student Evaluation, Guessing (Tests)
Peer reviewed
Direct link
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank – Educational and Psychological Measurement, 2016
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
Descriptors: Educational Assessment, Coding, Automation, Responses
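A minimal open-source baseline in this spirit (an assumption for illustration, not the authors' exact pipeline) is TF-IDF features plus logistic regression trained on human-coded responses, e.g., with scikit-learn:

```python
# Illustrative baseline: TF-IDF features plus logistic regression trained on
# human-coded short responses (toy stand-in data, not the PISA items).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["the character feels lonely and isolated",
         "he is sad because nobody visits him",
         "i do not know",
         "something about a dog"]
codes = [1, 1, 0, 0]  # human codes: 1 = credited, 0 = not credited

coder = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
coder.fit(texts, codes)
print(coder.predict(["the man is lonely"]))  # -> [1] on these toy data
```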