Publication Date

| Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 6 |
| Since 2022 (last 5 years) | 42 |
| Since 2017 (last 10 years) | 90 |
| Since 2007 (last 20 years) | 141 |
Author

| Author | Records |
| --- | --- |
| Baghaei, Purya | 3 |
| McLean, Stuart | 3 |
| O'Grady, Stefan | 3 |
| Batty, Aaron Olaf | 2 |
| Brownell, Sara E. | 2 |
| DiBattista, David | 2 |
| Gierl, Mark J. | 2 |
| Gu, Lin | 2 |
| Höhne, Jan Karem | 2 |
| Katz, Irvin R. | 2 |
| Krebs, Dagmar | 2 |
Education Level

| Education Level | Records |
| --- | --- |
| Higher Education | 151 |
| Postsecondary Education | 128 |
| Secondary Education | 14 |
| High Schools | 6 |
| Elementary Secondary Education | 4 |
| Elementary Education | 3 |
| Two Year Colleges | 1 |
Location

| Location | Records |
| --- | --- |
| Japan | 9 |
| Turkey | 9 |
| Canada | 6 |
| Iran | 5 |
| South Korea | 5 |
| Germany | 4 |
| China | 3 |
| Netherlands | 3 |
| Philippines | 3 |
| United Kingdom | 3 |
| Australia | 2 |
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Hung Tan Ha; Duyen Thi Bich Nguyen; Tim Stoeckel – Language Assessment Quarterly, 2025
This article compares two methods for detecting local item dependence (LID): residual correlation examination and Rasch testlet modeling (RTM), in a commonly used 3:6 matching format and an extended matching test (EMT) format. The two formats are hypothesized to facilitate different levels of item dependency due to differences in the number of…
Descriptors: Comparative Analysis, Language Tests, Test Items, Item Analysis
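The entry above contrasts residual-correlation examination with Rasch testlet modeling for detecting local item dependence (LID). As a rough, hypothetical sketch of the residual-correlation idea only (not the authors' procedure, which uses fitted Rasch models), one can partial each item's rest-score out of its responses and correlate the residuals; large positive off-diagonal values flag possibly dependent item pairs:

```python
import numpy as np

def residual_correlations(scores: np.ndarray) -> np.ndarray:
    """Inter-item correlations of residuals after partialling out the rest-score.

    A crude stand-in for model-based LID residuals: each item is regressed on
    its rest-score (total score minus the item itself), and the regression
    residuals are correlated across items.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    resid = np.empty_like(scores)
    for j in range(k):
        rest = scores.sum(axis=1) - scores[:, j]      # rest-score for item j
        slope, intercept = np.polyfit(rest, scores[:, j], 1)
        resid[:, j] = scores[:, j] - (slope * rest + intercept)
    return np.corrcoef(resid, rowvar=False)

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(50, 5)).astype(float)  # 50 examinees x 5 items
R = residual_correlations(data)
```

In actual LID analyses the residuals would come from a fitted Rasch or testlet model in dedicated IRT software; this sketch only conveys the "correlate what the trait doesn't explain" logic.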
Pentecost, Thomas C.; Raker, Jeffery R.; Murphy, Kristen L. – Practical Assessment, Research & Evaluation, 2023
Using multiple versions of an assessment has the potential to introduce item environment effects. These types of effects result in version dependent item characteristics (i.e., difficulty and discrimination). Methods to detect such effects and resulting implications are important for all levels of assessment where multiple forms of an assessment…
Descriptors: Item Response Theory, Test Items, Test Format, Science Tests
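The entry above concerns version-dependent item characteristics, i.e., difficulty and discrimination that shift across forms of an assessment. Purely as an illustrative, hypothetical sketch (not the authors' IRT-based method), the classical analogues of those two characteristics can be computed for each test version and compared item by item:

```python
import numpy as np

def item_stats(scores: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Classical difficulty (proportion correct) and discrimination
    (corrected item-total point-biserial) for an examinees-x-items 0/1 matrix."""
    scores = np.asarray(scores, dtype=float)
    p = scores.mean(axis=0)                    # difficulty: proportion correct
    disc = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = scores.sum(axis=1) - scores[:, j]   # total excluding item j
        disc[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return p, disc

# Toy responses: 5 examinees x 3 items
data = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 1, 1],
                 [1, 1, 1],
                 [0, 0, 0]], dtype=float)
p, disc = item_stats(data)
```

Running this separately on each version's response matrix and comparing the resulting statistics gives a quick, model-free screen for version-dependent item behavior before fitting an IRT model.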
Necati Taskin – International Journal of Technology in Education, 2025
This study examines the effect of item order (random, increasingly difficult, and decreasingly difficult) on student performance, test parameters, and student perceptions in multiple-choice tests administered in a paper-and-pencil format after online learning. In the research conducted using an explanatory sequential mixed methods design,…
Descriptors: Test Items, Difficulty Level, Online Courses, College Freshmen
Corrin Moss; Sharon Kwabi; Scott P. Ardoin; Katherine S. Binder – Reading and Writing: An Interdisciplinary Journal, 2024
The ability to form a mental model of a text is an essential component of successful reading comprehension (RC), and purpose for reading can influence mental model construction. Participants were assigned to one of two conditions during an RC test to alter their purpose for reading: concurrent (texts and questions were presented simultaneously)…
Descriptors: Eye Movements, Reading Comprehension, Test Format, Short Term Memory
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. The report describes the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Srikanth Allamsetty; M. V. S. S. Chandra; Neelima Madugula; Byamakesh Nayak – IEEE Transactions on Learning Technologies, 2024
The present study addresses the problem of assessing students through online examinations at higher educational institutions (HEIs). During the COVID-19 outbreak, the majority of educational institutions conducted online examinations to assess their students, where there is always a chance that the students go for…
Descriptors: Computer Assisted Testing, Accountability, Higher Education, Comparative Analysis
Filipe Manuel Vidal Falcão; Daniela S.M. Pereira; José Miguel Pêgo; Patrício Costa – Education and Information Technologies, 2024
Progress tests (PT) are a popular type of longitudinal assessment used for evaluating clinical knowledge retention and lifelong learning in health professions education. Most PTs consist of multiple-choice questions (MCQs) whose development is costly and time-consuming. Automatic Item Generation (AIG) generates test items through algorithms,…
Descriptors: Automation, Test Items, Progress Monitoring, Medical Education
Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE) using both real and simulated data. Chained kernel equating, Poststratification kernel equating, and Circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
Sen, Sedat – Creativity Research Journal, 2022
The purpose of this study was to estimate the overall reliability values for the scores produced by Runco Ideational Behavior Scale (RIBS) and explore the variability of RIBS score reliability across studies. To achieve this, a reliability generalization meta-analysis was carried out using the 86 Cronbach's alpha estimates obtained from 77 studies…
Descriptors: Generalization, Creativity, Meta Analysis, Higher Education
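The entry above pools Cronbach's alpha estimates across 77 studies. As a minimal, self-contained reminder of what each pooled estimate measures (a hypothetical sketch, not the meta-analytic method itself), alpha for a single examinees-by-items score matrix is:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 5 examinees x 4 items
data = np.array([[1, 1, 1, 0],
                 [1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 0]], dtype=float)
print(round(cronbach_alpha(data), 3))  # → 0.8
```

A reliability generalization meta-analysis, as in the study above, then treats many such alphas (one per primary study) as the data and models their variability.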
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
Zhao, Wenbo; Li, Jiaojiao; Shanks, David R.; Li, Baike; Hu, Xiao; Yang, Chunliang; Luo, Liang – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
Making metamemory judgments reactively changes item memory itself. Here we report the first investigation of reactive influences of making judgments of learning (JOLs) on interitem relational memory--specifically, temporal (serial) order memory. Experiment 1 found that making JOLs impaired order reconstruction. Experiment 2 observed minimal…
Descriptors: Metacognition, Memory, Meta Analysis, Recall (Psychology)
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors an array of options for varying testing parameters, but the pedagogical impacts of some of these variations has yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Jang, Jung Un; Kim, Eun Joo – Journal of Curriculum and Teaching, 2022
This study examines the validity of pen-and-paper and smart-device-based versions of the optician's examination. The questions developed for each medium were based on the national optician's simulation test. The subjects of this study were 60 students enrolled in E University. Data analysis was performed to verify the equivalence of the two…
Descriptors: Optometry, Licensing Examinations (Professions), Test Format, Test Validity

