| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 28 |
| Since 2022 (last 5 years) | 146 |
| Since 2017 (last 10 years) | 359 |
| Since 2007 (last 20 years) | 586 |
| Descriptor | Records |
| --- | --- |
| Multiple Choice Tests | 1156 |
| Test Items | 1156 |
| Test Construction | 416 |
| Foreign Countries | 337 |
| Difficulty Level | 298 |
| Test Format | 260 |
| Item Analysis | 244 |
| Item Response Theory | 177 |
| Test Reliability | 172 |
| Higher Education | 162 |
| Test Validity | 161 |
| Author | Records |
| --- | --- |
| Haladyna, Thomas M. | 14 |
| Plake, Barbara S. | 8 |
| Samejima, Fumiko | 8 |
| Downing, Steven M. | 7 |
| Bennett, Randy Elliot | 6 |
| Cheek, Jimmy G. | 6 |
| Huntley, Renee M. | 6 |
| Katz, Irvin R. | 6 |
| Kim, Sooyeon | 6 |
| McGhee, Max B. | 6 |
| Suh, Youngsuk | 6 |
| Audience | Records |
| --- | --- |
| Practitioners | 40 |
| Students | 30 |
| Teachers | 28 |
| Researchers | 26 |
| Administrators | 5 |
| Counselors | 1 |
| Location | Records |
| --- | --- |
| Canada | 62 |
| Australia | 37 |
| Turkey | 29 |
| Indonesia | 22 |
| Germany | 14 |
| Iran | 11 |
| Nigeria | 11 |
| Malaysia | 10 |
| China | 9 |
| Taiwan | 9 |
| United Kingdom | 9 |
| Laws, Policies, & Programs | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 4 |
| National Defense Education Act | 1 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Does not meet standards | 1 |
Guven Demir, Elif; Öksuz, Yücel – Participatory Educational Research, 2022
This research aimed to investigate animation-based achievement tests according to the item format, psychometric features, students' performance, and gender. The study sample consisted of 52 fifth-grade students in Samsun/Turkey in 2017-2018. Measures of the research were open-ended (OE), animation-based open-ended (AOE), multiple-choice (MC), and…
Descriptors: Animation, Achievement Tests, Test Items, Psychometrics
Lions, Séverin; Dartnell, Pablo; Toledo, Gabriela; Godoy, María Inés; Córdova, Nora; Jiménez, Daniela; Lemarié, Julie – Educational and Psychological Measurement, 2023
Even though the impact of the position of response options on answers to multiple-choice items has been investigated for decades, it remains debated. Research on this topic is inconclusive, perhaps because too few studies have obtained experimental data from large-sized samples in a real-world context and have manipulated the position of both…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Responses
Rafi, Ibnu; Retnawati, Heri; Apino, Ezi; Hadiana, Deni; Lydiati, Ida; Rosyada, Munaya Nikma – Pedagogical Research, 2023
This study describes the characteristics of the test and its items used in the national-standardized school examination by applying classical test theory and focusing on item difficulty, item discrimination, test reliability, and distractor analysis. We analyzed response data of 191 12th graders from one of the public senior high schools in…
Descriptors: Foreign Countries, National Competency Tests, Standardized Tests, Mathematics Tests
Rintayati, Peduk; Lukitasari, Hafizhah; Syawaludin, Ahmad – International Journal of Instruction, 2021
Assessment of higher-order thinking skills (HOTS) provides opportunities for students to develop more in-depth knowledge, supporting their ability to identify and solve problems. One type of instrument for measuring HOTS objectively is the two-tier multiple-choice test (TTMCT). This research is part of the research and development…
Descriptors: Foreign Countries, Elementary School Students, Thinking Skills, Multiple Choice Tests
Klender, Sara; Ferriby, Andrew; Notebaert, Andrew – HAPS Educator, 2019
Multiple-choice questions (MCQ) are commonly used on histology examinations. There are many guidelines for how to properly write MCQ and many of them recommend avoiding negatively worded stems. The current study aims to investigate differences between positively and negatively worded stems in a medical histology course by comparing the item…
Descriptors: Multiple Choice Tests, Science Tests, Biology, Test Construction
O'Grady, Stefan – Language Teaching Research, 2023
The current study explores the impact of varying multiple-choice question preview and presentation formats in a test of second language listening proficiency targeting different levels of text comprehension. In a between-participant design, participants completed a 30-item test of listening comprehension featuring implicit and explicit information…
Descriptors: Language Tests, Multiple Choice Tests, Scores, Second Language Learning
Cheewasukthaworn, Kanchana – PASAA: Journal of Language Teaching and Learning in Thailand, 2022
In 2016, the Office of the Higher Education Commission issued a directive requiring all higher education institutions in Thailand to have their students take a standardized English proficiency test. According to the directive, the test's results had to align with the Common European Framework of Reference for Languages (CEFR). In response to this…
Descriptors: Test Construction, Standardized Tests, Language Tests, English (Second Language)
Becker, Anthony; Nekrasova-Beker, Tatiana – Educational Assessment, 2018
While previous research has identified numerous factors that contribute to item difficulty, studies involving large-scale reading tests have provided mixed results. This study examined five selected-response item types used to measure reading comprehension in the Pearson Test of English Academic: a) multiple-choice (choose one answer), b)…
Descriptors: Reading Comprehension, Test Items, Reading Tests, Test Format
Jia, Bing; He, Dan; Zhu, Zhemin – Problems of Education in the 21st Century, 2020
The quality of multiple-choice questions (MCQs) as well as the student's solve behavior in MCQs are educational concerns. MCQs cover wide educational content and can be immediately and accurately scored. However, many studies have found some flawed items in this exam type, thereby possibly resulting in misleading insights into students'…
Descriptors: Foreign Countries, Multiple Choice Tests, Test Items, Item Response Theory
Holzknecht, Franz; McCray, Gareth; Eberharter, Kathrin; Kremmel, Benjamin; Zehentner, Matthias; Spiby, Richard; Dunlea, Jamie – Language Testing, 2021
Studies from various disciplines have reported that spatial location of options in relation to processing order impacts the ultimate choice of the option. A large number of studies have found a primacy effect, that is, the tendency to prefer the first option. In this paper we report on evidence that position of the key in four-option…
Descriptors: Language Tests, Test Items, Multiple Choice Tests, Listening Comprehension Tests
Steven Moore; Huy Anh Nguyen; John Stamper – Grantee Submission, 2021
While generating multiple-choice questions has been shown to promote deep learning, students often fail to realize this benefit and do not willingly participate in this activity. Additionally, the quality of the student-generated questions may be influenced by both their level of engagement and familiarity with the learning materials. Towards…
Descriptors: Multiple Choice Tests, Learning Processes, Learner Engagement, Familiarity
Jay Parkes – Journal of Faculty Development, 2021
Brief multiple-choice question workshops are a prevalent part of the faculty development landscape. But do they work? Studies have documented that faculty member-written multiple-choice questions (fMCQs) are frequently flawed and do not live up to quality standards. Poor fMCQs have real consequences for students beyond annoyance. Fourteen studies…
Descriptors: Teacher Workshops, Multiple Choice Tests, Faculty Development, Program Effectiveness
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
The evidence is mounting regarding the guidance to employ more three-option multiple-choice items. From theoretical analyses, empirical results, and practical considerations, such items are of equal or higher quality than four- or five-option items, and more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
Schilling, Jim F. – Athletic Training Education Journal, 2019
Context: The accuracy of summative assessment scoring and discriminating the level of knowledge in subject matter is critical in fairness to learners in health care professional programs and to ensure stakeholders of competent providers. An evidence-based approach to determine examination quality for the assessment of applied knowledge is…
Descriptors: Athletics, Allied Health Occupations Education, Test Items, Questioning Techniques
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests