Publication Date
In 2025: 24
Since 2024: 118
Since 2021 (last 5 years): 456
Since 2016 (last 10 years): 861
Since 2006 (last 20 years): 1341
Audience
Practitioners: 195
Teachers: 159
Researchers: 92
Administrators: 49
Students: 34
Policymakers: 14
Parents: 12
Counselors: 2
Community: 1
Media Staff: 1
Support Staff: 1
Location
Canada: 62
Turkey: 57
Germany: 40
Australia: 35
United Kingdom: 35
Japan: 34
China: 32
United States: 32
California: 25
United Kingdom (England): 25
Netherlands: 24
Davis, Sara D.; Chan, Jason C. K. – Educational Psychology Review, 2023
Prior testing can facilitate subsequent learning, a phenomenon termed the forward testing effect (FTE). We examined a metacognitive account of this effect, which proposes that the FTE occurs because retrieval leads to strategy optimizations during later learning. One prediction of this account is that tests that require less retrieval effort…
Descriptors: Metacognition, Futures (of Society), Tests, Difficulty Level
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Vahe Permzadian; Kit W. Cho – Teaching in Higher Education, 2025
When administering an in-class exam, a common decision that confronts every instructor is whether the exam format should be closed book or open book. The present review synthesizes research examining the effect of administering closed-book or open-book assessments on long-term learning. Although the overall effect of assessment format on learning…
Descriptors: College Students, Tests, Test Format, Long Term Memory
Nana Kim; Daniel M. Bolt – Journal of Educational and Behavioral Statistics, 2024
Some previous studies suggest that response times (RTs) on rating scale items can be informative about the content trait, but a more recent study suggests they may also be reflective of response styles. The latter result raises questions about the possible consideration of RTs for content trait estimation, as response styles are generally viewed…
Descriptors: Item Response Theory, Reaction Time, Response Style (Tests), Psychometrics
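For context on how response times are typically tied to person parameters in this literature, the sketch below implements the lognormal response-time model (van der Linden, 2006), a common baseline. This is not the model examined by Kim and Bolt; all parameter values are simulated purely for illustration.

```python
# Minimal sketch of the lognormal response-time model (van der Linden, 2006),
# a common baseline for relating RTs to person parameters in IRT settings.
# This is NOT the model from the article; parameter values are simulated
# purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_items = 500, 20

tau = rng.normal(0.0, 0.5, n_persons)    # person speed
beta = rng.normal(1.0, 0.3, n_items)     # item time intensity
alpha = rng.uniform(1.5, 2.5, n_items)   # item discrimination for RTs

# log T_ij ~ Normal(beta_j - tau_i, 1 / alpha_j**2)
log_t = beta - tau[:, None] + rng.normal(0.0, 1.0 / alpha, (n_persons, n_items))

# Crude moment-based speed estimate: person effect after removing item means
tau_hat = -(log_t - log_t.mean(axis=0)).mean(axis=1)
print("corr(tau, tau_hat) =", round(np.corrcoef(tau, tau_hat)[0, 1], 3))
```

A full treatment would estimate item and person parameters jointly (e.g., by marginal maximum likelihood); the point here is only the structure through which RTs carry information about a person parameter.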
Berenbon, Rebecca F.; McHugh, Bridget C. – Educational Measurement: Issues and Practice, 2023
To assemble a high-quality test, psychometricians rely on subject matter experts (SMEs) to write high-quality items. However, SMEs are not typically given the opportunity to provide input on which content standards are most suitable for multiple-choice questions (MCQs). In the present study, we explored the relationship between perceived MCQ…
Descriptors: Test Items, Multiple Choice Tests, Standards, Difficulty Level
Choe, Edison M.; Han, Kyung T. – Journal of Educational Measurement, 2022
In operational testing, item response theory (IRT) models for dichotomous responses are popular for measuring a single latent construct θ, such as cognitive ability in a content domain. Estimates of θ, also called IRT scores or θ̂, can be computed using estimators based on the likelihood function, such as maximum likelihood…
Descriptors: Scores, Item Response Theory, Test Items, Test Format
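The abstract above refers to likelihood-based estimators of θ. As a concrete illustration, the sketch below computes the maximum likelihood estimate θ̂ for a single examinee under a 2PL model; the item parameters and response pattern are made up, and operational scoring would use a calibrated item bank.

```python
# Minimal sketch: maximum likelihood estimation of theta for one examinee
# under a 2PL IRT model. Item parameters (a, b) are assumed known here
# (e.g., from a calibrated bank); the values below are illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # difficulties
u = np.array([1, 1, 1, 0, 0])              # observed 0/1 responses

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(correct | theta) per item
    return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded")
print(f"theta_hat (MLE) = {res.x:.3f}")
```

Note that the ML estimate is undefined for all-correct or all-incorrect response patterns, which is one reason Bayesian alternatives such as MAP and EAP scoring are also common in practice.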
Emma Pritchard-Rowe; Carmen de Lemos; Katie Howard; Jenny Gibson – Autism: The International Journal of Research and Practice, 2025
Play is often included in autism diagnostic assessments. These tend to focus on 'deficits' and non-autistic interpretation of observable behaviours. In contrast, a neurodiversity-affirmative assessment approach involves centring autistic perspectives and focusing on strengths, differences and needs. Accordingly, this study was designed to focus on…
Descriptors: Foreign Countries, Adults, Autism Spectrum Disorders, Play
Hung Tan Ha; Duyen Thi Bich Nguyen; Tim Stoeckel – Language Assessment Quarterly, 2025
This article compares two methods for detecting local item dependence (LID): residual correlation examination and Rasch testlet modeling (RTM), in a commonly used 3:6 matching format and an extended matching test (EMT) format. The two formats are hypothesized to facilitate different levels of item dependency due to differences in the number of…
Descriptors: Comparative Analysis, Language Tests, Test Items, Item Analysis
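One of the two methods named above, residual correlation examination, can be illustrated briefly. The sketch below computes Yen's Q3-style residual correlations under a Rasch model; item and person parameters are simulated rather than estimated, and the mean-plus-0.2 cutoff is a commonly cited rule of thumb, not a value taken from the article.

```python
# Minimal sketch of residual-correlation screening for local item dependence,
# in the spirit of Yen's Q3. Person and item parameters would normally be
# estimated from data; here they are simulated for illustration.
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items = 1000, 12
theta = rng.normal(0, 1, n_persons)          # person abilities
b = np.linspace(-1.5, 1.5, n_items)          # Rasch item difficulties

p = 1 / (1 + np.exp(-(theta[:, None] - b)))  # model-implied P(correct)
u = (rng.random((n_persons, n_items)) < p).astype(float)  # simulated responses

resid = (u - p) / np.sqrt(p * (1 - p))       # standardized residuals
q3 = np.corrcoef(resid, rowvar=False)        # item-by-item residual correlations

off_diag = q3[~np.eye(n_items, dtype=bool)]
cutoff = off_diag.mean() + 0.2               # Q3* rule-of-thumb cutoff
pairs = np.argwhere(np.triu(q3 > cutoff, k=1))
print("flagged item pairs:", pairs.tolist())
```

With genuinely dependent items (e.g., items sharing a stem or passage, as in the matching formats studied here), flagged pairs would show residual correlations well above the off-diagonal mean.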
Jacqueline E. McLaughlin; Kathryn Morbitzer; Margaux Meilhac; Natalie Poupart; Rebekah L. Layton; Michael B. Jarstfer – Studies in Graduate and Postdoctoral Education, 2024
Purpose: While known by many names, qualifying exams function as gatekeepers to graduate student advancement to PhD candidacy, yet there has been little formal study of best qualifying exam practices, particularly in biomedical and related STEM PhD programs. The purpose of this study is to examine the current state of qualifying exams through an…
Descriptors: Doctoral Programs, Best Practices, STEM Education, Biological Sciences
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges for their broad-scale adoption: A technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
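As a rough illustration of what a short-answer grading model does, the sketch below implements a deliberately simple similarity baseline: score a student answer by TF-IDF cosine similarity to a reference answer. This is not the model studied in the article, which concerns trained SAG models; the example texts and the 0.5 credit threshold are invented.

```python
# Minimal sketch of a similarity-based short-answer grading baseline:
# score a student answer by TF-IDF cosine similarity to a reference answer.
# Modern SAG systems typically use fine-tuned transformer encoders instead;
# the texts and cutoff below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy."
answers = [
    "Plants turn light energy into chemical energy.",
    "It is how plants breathe at night.",
]

vec = TfidfVectorizer().fit([reference] + answers)
ref_v = vec.transform([reference])

for ans in answers:
    sim = cosine_similarity(ref_v, vec.transform([ans]))[0, 0]
    label = "credit" if sim >= 0.5 else "no credit"  # made-up cutoff
    print(f"{sim:.2f}  {label}  {ans}")
```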
Jing Miao; Yi Cao; Michael E. Walker – ETS Research Report Series, 2024
Studies of test score comparability have been conducted at different stages in the history of testing to ensure that test results carry the same meaning regardless of test conditions. The expansion of at-home testing via remote proctoring sparked another round of interest. This study uses data from three licensure tests to assess potential mode…
Descriptors: Testing, Test Format, Computer Assisted Testing, Home Study
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
The current literature on test equating generally defines it as the process necessary to obtain score comparability between different test forms. This definition contrasts with Lord's foundational paper, which viewed equating as the process required to obtain comparability of measurement scale between forms. The distinction between the notions…
Descriptors: Equated Scores, Test Items, Scores, Probability
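Score comparability in the sense discussed above is usually operationalized by equipercentile equating, which maps a form-X score to the form-Y score with the same percentile rank, e_Y(x) = F_Y^{-1}(F_X(x)). A minimal sketch with simulated score distributions, purely for illustration:

```python
# Minimal sketch of equipercentile equating: map a form-X score to the
# form-Y scale via matching percentile ranks, e_Y(x) = F_Y^{-1}(F_X(x)).
# The score distributions below are simulated, not real test data.
import numpy as np

rng = np.random.default_rng(0)
x_scores = rng.binomial(40, 0.55, 5000)  # observed scores on form X
y_scores = rng.binomial(40, 0.60, 5000)  # observed scores on form Y

def equate(x, x_dist, y_dist):
    """Map a form-X score to the form-Y scale via percentile ranks."""
    p = (x_dist <= x).mean()             # F_X(x), empirical CDF
    return np.quantile(y_dist, p)        # F_Y^{-1}(p)

for x in (15, 20, 25, 30):
    print(f"X={x} -> Y={equate(x, x_scores, y_scores):.1f}")
```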
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about reliability and academic honesty regarding multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Ben Backes; James Cowan – Grantee Submission, 2024
We investigate two research questions using a recent statewide transition from paper to computer-based testing: first, the extent to which test mode effects found in prior studies can be eliminated in large-scale administration; and second, the degree to which online and paper assessments offer different information about underlying student…
Descriptors: Computer Assisted Testing, Test Format, Differences, Academic Achievement
Ben Backes; James Cowan – Applied Measurement in Education, 2024
We investigate two research questions using a recent statewide transition from paper to computer-based testing: first, the extent to which test mode effects found in prior studies can be eliminated; and second, the degree to which online and paper assessments offer different information about underlying student ability. We first find very small…
Descriptors: Computer Assisted Testing, Test Format, Differences, Academic Achievement