Publication Date
In 2025 | 5
Since 2024 | 16
Since 2021 (last 5 years) | 66
Since 2016 (last 10 years) | 162
Since 2006 (last 20 years) | 250
Descriptor
Multiple Choice Tests | 250
Test Format | 250
Test Items | 121
Foreign Countries | 104
Comparative Analysis | 58
Scores | 58
Item Response Theory | 47
Difficulty Level | 45
Computer Assisted Testing | 43
Undergraduate Students | 43
Language Tests | 39
Author
Kim, Sooyeon | 7
Walker, Michael E. | 6
DeBoer, George E. | 3
Hardcastle, Joseph | 3
Herrmann-Abell, Cari F. | 3
Katz, Irvin R. | 3
Keehner, Madeleine | 3
McHale, Frederick | 3
Moon, Jung Aa | 3
Bande, Rhodora A. | 2
Bao, Lei | 2
Audience
Teachers | 4
Practitioners | 2
Administrators | 1
Location
Turkey | 12
Canada | 8
Germany | 7
Australia | 5
Iran | 5
Japan | 5
United Kingdom | 5
China | 4
Indonesia | 4
Netherlands | 4
Saudi Arabia | 4
Stefanie A. Wind; Yuan Ge – Measurement: Interdisciplinary Research and Perspectives, 2024
Mixed-format assessments made up of multiple-choice (MC) items and constructed-response (CR) items scored using rater judgments involve unique psychometric considerations. When these item types are combined to estimate examinee achievement, information about the psychometric quality of each component can depend on that of the other. For…
Descriptors: Interrater Reliability, Test Bias, Multiple Choice Tests, Responses
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Berenbon, Rebecca F.; McHugh, Bridget C. – Educational Measurement: Issues and Practice, 2023
To assemble a high-quality test, psychometricians rely on subject matter experts (SMEs) to write high-quality items. However, SMEs are not typically given the opportunity to provide input on which content standards are most suitable for multiple-choice questions (MCQs). In the present study, we explored the relationship between perceived MCQ…
Descriptors: Test Items, Multiple Choice Tests, Standards, Difficulty Level
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about reliability and academic honesty regarding multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Chunyan Liu; Raja Subhiyah; Richard A. Feinberg – Applied Measurement in Education, 2024
Mixed-format tests that include both multiple-choice (MC) and constructed-response (CR) items have become widely used in many large-scale assessments. When an item response theory (IRT) model is used to score a mixed-format test, the unidimensionality assumption may be violated if the CR items measure a different construct from that measured by MC…
Descriptors: Test Format, Response Style (Tests), Multiple Choice Tests, Item Response Theory
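Since this entry turns on how an IRT model scores a mixed-format test, a brief sketch may help. The formulas below show one common pairing, assumed here for illustration and not confirmed as the cited study's specification: a three-parameter logistic (3PL) model for the MC items and a generalized partial credit model (GPCM) for the CR items, with the unidimensionality assumption visible in the single latent trait theta_j shared by both item types.

```latex
% One common (assumed) model pairing for mixed-format tests; the cited
% study's exact specification is not given in the abstract.
% 3PL for a dichotomous MC item i and examinee j:
P(X_{ij}=1 \mid \theta_j) = c_i + (1 - c_i)\,
  \frac{\exp[a_i(\theta_j - b_i)]}{1 + \exp[a_i(\theta_j - b_i)]}
% GPCM for a CR item with score categories k = 0, 1, \dots, m_i
% (using the convention \sum_{v=0}^{0} a_i(\theta_j - b_{iv}) \equiv 0):
P(X_{ij}=k \mid \theta_j) =
  \frac{\exp \sum_{v=0}^{k} a_i(\theta_j - b_{iv})}
       {\sum_{h=0}^{m_i} \exp \sum_{v=0}^{h} a_i(\theta_j - b_{iv})}
% Unidimensionality: the same single trait \theta_j drives both item
% types; if the CR items tap a second trait, the assumption is violated.
```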
Herwin, Herwin; Pristiwaluyo, Triyanto; Ruslan, Ruslan; Dahalan, Shakila Che – Cypriot Journal of Educational Sciences, 2022
The application of multiple-choice tests often does not consider the scoring technique or the number of answer choices. This study describes the effect of scoring technique and number of options on the reliability of multiple-choice objective tests in elementary school social studies. The study is quantitative research with…
Descriptors: Scoring, Multiple Choice Tests, Test Reliability, Elementary School Students
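As a concrete illustration of how scoring technique and number of options interact, here is a minimal Python sketch (invented data and function names, not the cited study's procedure) contrasting number-right scoring with formula scoring, where the number of options k sets the guessing penalty W / (k - 1).

```python
# Illustrative only: two common scoring techniques for a k-option
# multiple-choice test. The number of options enters formula scoring
# through the guessing penalty wrong / (k - 1).

def number_right_score(responses, key):
    """Count correct answers; wrong answers and omissions score 0."""
    return sum(a == c for a, c in zip(responses, key) if a is not None)

def formula_score(responses, key, n_options):
    """Correction for guessing: right minus wrong / (k - 1); omissions ignored."""
    right = sum(a == c for a, c in zip(responses, key) if a is not None)
    wrong = sum(a != c for a, c in zip(responses, key) if a is not None)
    return right - wrong / (n_options - 1)

key = ["A", "C", "B", "D", "A"]
answers = ["A", "C", "D", None, "A"]          # one wrong, one omitted
print(number_right_score(answers, key))        # 3
print(formula_score(answers, key, 4))          # 3 - 1/3 ~= 2.67
```

Under formula scoring, the same response pattern yields a lower score on a test with fewer options, which is one mechanism by which scoring technique and option count can jointly affect reliability.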
Yavuz Akbulut – European Journal of Education, 2024
The testing effect refers to the gains in learning and retention that result from taking practice tests before the final test. Understanding the conditions under which practice tests improve learning is crucial, so four experiments were conducted with a total of 438 undergraduate students in Turkey. In the first study, students who took graded…
Descriptors: Foreign Countries, Undergraduate Students, Student Evaluation, Testing
Lim, Alliyza; Brewer, Neil; Aistrope, Denise; Young, Robyn L. – Autism: The International Journal of Research and Practice, 2023
The Reading the Mind in the Eyes Test (RMET) is a purported theory of mind measure and one that reliably differentiates autistic and non-autistic individuals. However, concerns have been raised about the validity of the measure, with some researchers suggesting that the multiple-choice format of the RMET makes it susceptible to the undue influence…
Descriptors: Theory of Mind, Autism Spectrum Disorders, Test Validity, Multiple Choice Tests
Narnaware, Yuwaraj; Cuschieri, Sarah – HAPS Educator, 2023
The visualizing effect of images on improving anatomical knowledge is evident in medical and allied health students, but this phenomenon has rarely been assessed in nursing students. To assess the visualizing effect of images on improving anatomical knowledge and to use images as one method of assessing gross anatomical knowledge in nursing…
Descriptors: Nursing Students, Multiple Choice Tests, Anatomy, Science Tests
Sebastian Moncaleano – ProQuest LLC, 2021
The growth of computer-based testing over the last two decades has motivated the creation of innovative item formats. It is often argued that technology-enhanced items (TEIs) provide better measurement of test-takers' knowledge, skills, and abilities by increasing the authenticity of tasks presented to test-takers (Sireci & Zenisky, 2006)…
Descriptors: Computer Assisted Testing, Test Format, Test Items, Classification
Neto, Joana; Neto, Félix; Furnham, Adrian – Assessment & Evaluation in Higher Education, 2023
The aim of this study is to examine whether a preference for specific assessment methods in higher education is associated with personality and character strengths. Two hundred and seventy Portuguese students completed a survey of character strengths, a Big Five personality test and their preference for each of six higher education assessment…
Descriptors: Predictor Variables, Student Attitudes, Preferences, Evaluation Methods
Albert Weideman; Tobie van Dyk – Language Teaching Research Quarterly, 2023
This contribution investigates gains in technical economy in measuring language ability by considering one recurrent interest of JD Brown: cloze tests. In the various versions of the Test of Academic Literacy Levels (TALL), its Sesotho and Afrikaans (Toets van Akademiese Geletterdheidsvlakke, TAG) counterparts, as well as other related tests…
Descriptors: Language Skills, Language Aptitude, Cloze Procedure, Reading Tests
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
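For readers unfamiliar with how prediction and postdiction accuracy is typically quantified, the Python sketch below (invented data and hypothetical variable names, not McGuire's actual analysis) computes a signed calibration bias, predicted minus actual percent correct, for each question format.

```python
# Hypothetical sketch of a common calibration analysis: judgment accuracy
# as the signed gap between predicted and actual percent correct, by
# question format. All data values are invented for illustration.

from statistics import mean

# (predicted % correct, actual % correct) per student, by format
judgments = {
    "true-false":        [(80, 70), (90, 85)],
    "multiple-choice":   [(75, 72), (60, 65)],
    "fill-in-the-blank": [(50, 40), (55, 45)],
}

for fmt, pairs in judgments.items():
    bias = mean(p - a for p, a in pairs)  # > 0 indicates overconfidence
    print(f"{fmt}: mean calibration bias = {bias:+.1f} points")
```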
Yilmaz, Erdi Okan; Toker, Türker – International Journal of Psychology and Educational Studies, 2022
This study examines online assessment and evaluation activities in distance education. The effects of different online exam administration styles on online assessment and evaluation in distance education, across all programs of a higher education institution, were documented. The population for online…
Descriptors: Foreign Countries, Computer Assisted Testing, Test Format, Distance Education