Showing 1 to 15 of 461 results
Peer reviewed
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, few studies have examined the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
Peer reviewed
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straight lining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
Peer reviewed
Militsa G. Ivanova; Hanna Eklöf; Michalis P. Michaelides – Journal of Applied Testing Technology, 2025
Digital administration of assessments allows for the collection of process data indices, such as response time, which can serve as indicators of rapid-guessing and examinee test-taking effort. Setting a time threshold is essential to distinguish effortful from effortless behavior using item response times. Threshold identification methods may…
Descriptors: Test Items, Computer Assisted Testing, Reaction Time, Achievement Tests
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Peer reviewed
Ute Mertens; Marlit A. Lindner – Journal of Computer Assisted Learning, 2025
Background: Educational assessments increasingly shift towards computer-based formats. Many studies have explored how different types of automated feedback affect learning. However, few studies have investigated how digital performance feedback affects test takers' ratings of affective-motivational reactions during a testing session. Method: In…
Descriptors: Educational Assessment, Computer Assisted Testing, Automation, Feedback (Response)
Peer reviewed
Sedigheh Karimpour; Ehsan Namaziandost; Hossein Kargar Behbahani – Journal of Educational Computing Research, 2025
As an integral part of dynamic assessment, computerized dynamic assessment (CDA) offers learners computer-assisted automated mediation. Accordingly, the possible efficacy of corrective feedback seems to be enhanced with new technologies, such as artificial intelligence tools, that offer automatic corrective feedback. Using technology-enhanced…
Descriptors: Computer Assisted Testing, Feedback (Response), Language Acquisition, Electronic Learning
Peer reviewed
Shujun Liu; Azzeddine Boudouaia; Xinya Chen; Yan Li – Asia-Pacific Education Researcher, 2025
The application of Automated Writing Evaluation (AWE) has recently gained researchers' attention worldwide. However, the impact of AWE feedback on student writing, particularly in languages other than English, remains controversial. This study aimed to compare the impacts of Chinese AWE feedback and teacher feedback on Chinese writing revision,…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Writing Evaluation
Peer reviewed
Guozhu Ding; Mailin Li; Shan Li; Hao Wu – Asia Pacific Education Review, 2025
This study investigated the optimal feedback intervals for tasks of varying difficulty levels in online testing and whether task difficulty moderates the effect of feedback intervals on student performance. A pre-experimental study with 36 students was conducted to determine the delayed time for providing feedback based on student behavioral data.…
Descriptors: Feedback (Response), Academic Achievement, Computer Assisted Testing, Intervals
Peer reviewed
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and educational sciences. Response bias may pose an additional source of variation between respondents that threatens validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
Peer reviewed
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
Peer reviewed
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
Peer reviewed
Yi-Ling Wu; Yao-Hsuan Huang; Chia-Wen Chen; Po-Hsi Chen – Journal of Educational Measurement, 2025
Multistage testing (MST), a variant of computerized adaptive testing (CAT), differs from conventional CAT in that it is adapted at the module level rather than at the individual item level. Typically, all examinees begin the MST with a linear test form in the first stage, commonly known as the routing stage. In 2020, Han introduced an innovative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Measurement
Peer reviewed
Ishaya Gambo; Faith-Jane Abegunde; Omobola Gambo; Roseline Oluwaseun Ogundokun; Akinbowale Natheniel Babatunde; Cheng-Chi Lee – Education and Information Technologies, 2025
The current educational system relies heavily on manual grading, posing challenges such as delayed feedback and grading inaccuracies. Automated grading tools (AGTs) offer solutions but come with limitations. To address this, "GRAD-AI" is introduced, an advanced AGT that combines automation with teacher involvement for precise grading,…
Descriptors: Automation, Grading, Artificial Intelligence, Computer Assisted Testing
Peer reviewed
Beyza Aksu Dünya; Stefanie A. Wind; Mehmet Can Demir – SAGE Open, 2025
The purpose of this study was to generate an item bank for assessing faculty members' assessment literacy and to examine the applicability and feasibility of a Computerized Adaptive Test (CAT) approach to monitor assessment literacy among faculty members. In developing this assessment using a sequential mixed-methods research design, our goal was…
Descriptors: Assessment Literacy, Item Banks, College Faculty, Adaptive Testing
Peer reviewed
Barno Sayfutdinovna Abdullaeva; Diyorjon Abdullaev; Feruza Abulkosimovna Rakhmatova; Laylo Djuraeva; Nigora Asqaraliyevna Sulaymonova; Zebo Fazliddinovna Shamsiddinova; Oynisa Khamraeva – Language Testing in Asia, 2024
Acquiring technological literacy and acceptance has a significant influence on academic emotion regulation (AER), academic resilience (AR), willingness to communicate (WTC), and academic enjoyment (AE), which are crucial for the success of university students. However, this area has not been adequately explored in research, particularly in the…
Descriptors: Technological Literacy, Emotional Response, Self Control, Resilience (Psychology)