Showing all 11 results
Peer reviewed | PDF on ERIC (full text available)
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
Peer reviewed | Direct link
Gustafsson, Martin; Barakat, Bilal Fouad – Comparative Education Review, 2023
International assessments inform education policy debates, yet little is known about their floor effects: To what extent do they fail to differentiate between the lowest performers, and what are the implications of this? TIMSS, SACMEQ, and LLECE data are analyzed to answer this question. In TIMSS, floor effects have been reduced through the…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
Peer reviewed | Direct link
van den Broek, Gesa S. E.; Gerritsen, Suzanne L.; Oomen, Iris T. J.; Velthoven, Eva; van Boxtel, Femke H. J.; Kester, Liesbeth; van Gog, Tamara – Journal of Educational Psychology, 2023
Multiple-choice questions (MCQs) are popular in vocabulary software because they can be scored automatically and are compatible with many input devices (e.g., touchscreens). Answering MCQs is beneficial for learning, especially when learners retrieve knowledge from memory to evaluate plausible answer alternatives. However, such retrieval may not…
Descriptors: Multiple Choice Tests, Vocabulary Development, Test Format, Cues
Peer reviewed | Direct link
Read, John – Language Testing, 2023
Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them…
Descriptors: Language Tests, Vocabulary Development, Second Language Learning, Test Format
Peer reviewed | Direct link
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
Peer reviewed | PDF on ERIC (full text available)
Joseph, Dane Christian – Journal of Effective Teaching in Higher Education, 2019
Multiple-choice testing is a staple within the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers utilize a variety of validated and heuristic-driven item-writing guidelines. One such guideline that has been given recent attention is to randomize the position of…
Descriptors: Test Construction, Multiple Choice Tests, Guessing (Tests), Test Wiseness
Peer reviewed | Direct link
Brassil, Chad E.; Couch, Brian A. – International Journal of STEM Education, 2019
Background: Within undergraduate science courses, instructors often assess student thinking using closed-ended question formats, such as multiple-choice (MC) and multiple-true-false (MTF), where students provide answers with respect to predetermined response options. While MC and MTF questions both consist of a question stem followed by a series…
Descriptors: Multiple Choice Tests, Objective Tests, Student Evaluation, Thinking Skills
Peer reviewed | Direct link
Zhang, Xian; Liu, Jianda; Ai, Haiyang – Language Testing, 2020
The main purpose of this study is to investigate guessing in the Yes/No (YN) format vocabulary test. One hundred and five university students took a YN test, a translation task, and a multiple-choice vocabulary size test (MC VST). With lexical properties matched between the real words and the pseudowords, pseudowords could index guessing in the YN…
Descriptors: Vocabulary Development, Language Tests, Test Format, College Students
Peer reviewed | PDF on ERIC (full text available)
Otoyo, Lucia; Bush, Martin – Practical Assessment, Research & Evaluation, 2018
This article presents the results of an empirical study of "subset selection" tests, which are a generalisation of traditional multiple-choice tests in which test takers are able to express partial knowledge. Similar previous studies have mostly been supportive of subset selection, but the deduction of marks for incorrect responses has…
Descriptors: Multiple Choice Tests, Grading, Test Reliability, Test Format
Peer reviewed | Direct link
Masrai, Ahmed – SAGE Open, 2022
Vocabulary size measures serve important functions, not only with respect to placing learners at appropriate levels on language courses but also with a view to examining the progress of learners. One of the widely reported formats suitable for these purposes is the Yes/No vocabulary test. The primary aim of this study was to introduce and provide…
Descriptors: Vocabulary Development, Language Tests, English (Second Language), Second Language Learning
Peer reviewed | PDF on ERIC (full text available)
Storme, Martin; Myszkowski, Nils; Baron, Simon; Bernard, David – Journal of Intelligence, 2019
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in…
Descriptors: Intelligence Tests, Item Response Theory, Comparative Analysis, Test Reliability