Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 8 |
| Since 2017 (last 10 years) | 11 |
| Since 2007 (last 20 years) | 17 |
Descriptor
| Term | Results |
| --- | --- |
| Comparative Analysis | 19 |
| Test Items | 19 |
| Reaction Time | 11 |
| Scores | 8 |
| Foreign Countries | 7 |
| Difficulty Level | 5 |
| Item Response Theory | 5 |
| Scoring | 5 |
| Test Bias | 5 |
| Achievement Tests | 4 |
| Computer Assisted Testing | 4 |
Author
| Author | Results |
| --- | --- |
| Akhtar, Hanif | 1 |
| Ali, Usama S. | 1 |
| Alpayar, Cagla | 1 |
| Ames, Allison J. | 1 |
| Anastasia Pattemore | 1 |
| Ann Arthur | 1 |
| Babcock, Ben | 1 |
| Bowden, Harriet Wood | 1 |
| Bridgeman, Brent | 1 |
| Carmen Muñoz | 1 |
| Chang, Hua-Hua | 1 |
Publication Type
| Type | Results |
| --- | --- |
| Reports - Research | 16 |
| Journal Articles | 13 |
| Reports - Descriptive | 2 |
| Guides - Non-Classroom | 1 |
| Numerical/Quantitative Data | 1 |
| Speeches/Meeting Papers | 1 |
Education Level
| Level | Results |
| --- | --- |
| Higher Education | 9 |
| Postsecondary Education | 9 |
| Secondary Education | 4 |
| Grade 3 | 2 |
| High Schools | 2 |
| Middle Schools | 2 |
| Early Childhood Education | 1 |
| Elementary Education | 1 |
| Grade 4 | 1 |
| Grade 5 | 1 |
| Grade 6 | 1 |
Assessments and Surveys
| Assessment | Results |
| --- | --- |
| SAT (College Admission Test) | 2 |
| ACT Assessment | 1 |
| Measures of Academic Progress | 1 |
| Program for International… | 1 |
| Test of English for… | 1 |
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Babcock, Ben; Siegel, Zachary D. – Practical Assessment, Research & Evaluation, 2022
Research about repeated testing has revealed that retaking the same exam form generally does not advantage or disadvantage failing candidates in selected response-style credentialing exams. Feinberg, Raymond, and Haist (2015) found a contributing factor to this phenomenon: people answering items incorrectly on both attempts give the same incorrect…
Descriptors: Multiple Choice Tests, Item Analysis, Test Items, Response Style (Tests)
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
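Several of the entries on this page flag disengaged responses from item response times. As a minimal, hypothetical sketch of that general idea (not the procedure used by Kuang and Sahin), responses faster than some per-item threshold can be treated as rapid guesses; the fixed 3-second cutoff and function names below are illustrative assumptions only.

```python
# Illustrative threshold-based rapid-guess flagging (hypothetical sketch, not the
# authors' procedure). Responses faster than `threshold_seconds` are treated as
# disengaged; the 3-second threshold is an assumed placeholder.

def flag_rapid_guesses(response_times, threshold_seconds=3.0):
    """Mark each response time (in seconds) that falls below the threshold."""
    return [rt < threshold_seconds for rt in response_times]

def response_time_effort(response_times, threshold_seconds=3.0):
    """Proportion of responses judged engaged (1.0 = no rapid guesses)."""
    flags = flag_rapid_guesses(response_times, threshold_seconds)
    return 1.0 - sum(flags) / len(flags)

# Example: one examinee's item-level response times in seconds
times = [12.4, 2.1, 45.0, 1.8, 30.2]
print(flag_rapid_guesses(times))    # [False, True, False, True, False]
print(response_time_effort(times))  # 0.6
```

Flagged responses are then commonly excluded or down-weighted before item parameters are estimated, which is the source of the potential bias the abstract above investigates.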
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed only briefly rather than read and engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Akhtar, Hanif – International Association for Development of the Information Society, 2022
When examinees perceive a test as low stakes, it is reasonable to assume that some of them will not put forth their maximum effort. This complicates the validity of the test results. Although many studies have investigated motivational fluctuation across tests during a testing session, only a small number of studies have…
Descriptors: Intelligence Tests, Student Motivation, Test Validity, Student Attitudes
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Susan Kowalski; Megan Kuhfeld; Scott Peters; Gustave Robinson; Karyn Lewis – NWEA, 2024
The purpose of this technical appendix is to share detailed results and more fully describe the sample and methods used to produce the research brief, "COVID's Impact on Science Achievement: Trends from 2019 through 2024." We investigated three main research questions in this brief: 1) How did science achievement in 2021 and 2024 compare to…
Descriptors: COVID-19, Pandemics, Science Achievement, Trend Analysis
Carmen Muñoz; Anastasia Pattemore; Daniela Avello – Computer Assisted Language Learning, 2024
Repeated viewing of the same video is a common strategy among autonomous language learners as well as a widely used pedagogical strategy among foreign language (FL) teachers. Learners may watch the same video more than once, to increase global comprehension of the target language or to focus their attention on linguistic aspects, such as new…
Descriptors: Captions, Vocabulary Development, Second Language Learning, Second Language Instruction
Türkoguz, Suat – Anatolian Journal of Education, 2020
This study aimed to investigate item "Response Time Fidelity scores" ("RTFs"), "Kuder-Richardson Reliability" ("KR20"), and "Cronbach's Alpha Reliability" ("alpha") coefficients, and to calculate "KR20" coefficients with "RTFs" for 30 threshold…
Descriptors: Comparative Analysis, Reaction Time, Multiple Choice Tests, Scores
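For reference, the standard textbook definitions of the two reliability coefficients named in this abstract are given below, where $k$ is the number of items, $p_i$ the proportion answering item $i$ correctly, $\sigma_i^2$ the variance of item $i$, and $\sigma_X^2$ the variance of total scores; how the study combines them with RTFs is not reproduced here.

$$\mathrm{KR}_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\,(1 - p_i)}{\sigma_X^{2}}\right),
\qquad
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right)$$

KR20 is the special case of Cronbach's alpha for dichotomously scored items.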
Alpayar, Cagla; Gulleroglu, H. Deniz – Educational Research and Reviews, 2017
The aim of this research is to determine whether students' test performance and approaches to test questions change based on the type of mathematics questions (visual or verbal) administered to them. This research is based on a mixed-design model. The quantitative data are gathered from 297 seventh grade students, attending seven different middle…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Student Evaluation
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Sieh, Yu-cheng – Taiwan Journal of TESOL, 2016
In an attempt to compare how orthography and phonology interact in EFL learners with different reading abilities, online measures were administered in this study to two groups of university learners, indexed by their reading scores on the Test of English for International Communication (TOEIC). In terms of "accuracy," the less-skilled…
Descriptors: Comparative Analysis, Word Recognition, Phonology, English (Second Language)
Jensen, Nate; Rice, Andrew; Soland, James – Educational Evaluation and Policy Analysis, 2018
While most educators assume that not all students try their best on achievement tests, no current research examines if behaviors associated with low test effort, like rapidly guessing on test items, affect teacher value-added estimates. In this article, we examined the prevalence of rapid guessing to determine if this behavior varied by grade,…
Descriptors: Item Response Theory, Value Added Models, Achievement Tests, Test Items
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
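As general background for why adaptive testing can produce efficient ability estimates with fewer items (the report's own suitability index is not reproduced here), adaptive algorithms commonly administer the item with the greatest Fisher information at the current ability estimate. Under the standard two-parameter logistic model, with discrimination $a_i$, difficulty $b_i$, and ability $\theta$, that information is:

$$P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}},
\qquad
I_i(\theta) = a_i^{2}\, P_i(\theta)\,\bigl(1 - P_i(\theta)\bigr)$$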
Lado, Beatriz; Bowden, Harriet Wood; Stafford, Catherine A.; Sanz, Cristina – Language Teaching Research, 2014
The current study compared the effectiveness of computer-delivered task-essential practice coupled with feedback consisting of (1) negative evidence with metalinguistic information (NE+MI) or (2) negative evidence without metalinguistic information (NE-MI) in promoting absolute beginners' (n = 58) initial learning of aspects of Latin…
Descriptors: Second Language Learning, Accuracy, Morphology (Languages), Syntax
