Publication Date
| Date Range | Results |
| In 2026 | 0 |
| Since 2025 | 8 |
| Since 2022 (last 5 years) | 20 |
| Since 2017 (last 10 years) | 20 |
| Since 2007 (last 20 years) | 20 |
Author
| Author | Results |
| Abdullah Al Fraidan | 1 |
| Abdullah Ali Khan | 1 |
| Adam C. Sales | 1 |
| Amy Morales | 1 |
| Andrew A. McReynolds | 1 |
| Ashish Gurung | 1 |
| Bernhard Ertl | 1 |
| Brian E. Clauser | 1 |
| Byungmin Lee | 1 |
| Chandima Daskon | 1 |
| Chandralekha Singh | 1 |
Publication Type
| Publication Type | Results |
| Reports - Research | 18 |
| Journal Articles | 17 |
| Tests/Questionnaires | 3 |
| Guides - Classroom - Teacher | 1 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
Audience
| Audience | Results |
| Teachers | 1 |
Location
| Location | Results |
| Europe | 1 |
| Germany | 1 |
| Indonesia | 1 |
| New York | 1 |
| New Zealand | 1 |
| Saudi Arabia | 1 |
| South Korea | 1 |
| Taiwan | 1 |
| Turkey | 1 |
| United Kingdom | 1 |
| United Kingdom (England) | 1 |
Assessments and Surveys
| Assessment or Survey | Results |
| International English… | 1 |
| National Assessment of… | 1 |
| Trends in International… | 1 |
Stefanie A. Wind; Yuan Ge – Measurement: Interdisciplinary Research and Perspectives, 2024
Mixed-format assessments made up of multiple-choice (MC) items and constructed-response (CR) items scored using rater judgments involve unique psychometric considerations. When these item types are combined to estimate examinee achievement, information about the psychometric quality of each component can depend on that of the other. For…
Descriptors: Interrater Reliability, Test Bias, Multiple Choice Tests, Responses
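The interrater reliability theme in the Wind and Ge entry above lends itself to a small worked example. The sketch below is purely illustrative and not from the article: it simulates two hypothetical raters scoring the same constructed-response items on a 0-3 scale and computes Cohen's kappa, a standard chance-corrected agreement index. All names and data here are assumptions.

import numpy as np

def cohens_kappa(r1, r2, k):
    """Cohen's kappa for two raters assigning integer scores 0..k-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    observed = np.mean(r1 == r2)                  # raw agreement rate
    p1 = np.bincount(r1, minlength=k) / len(r1)   # rater 1 score distribution
    p2 = np.bincount(r2, minlength=k) / len(r2)   # rater 2 score distribution
    expected = np.sum(p1 * p2)                    # agreement expected by chance
    return (observed - expected) / (1 - expected)

rng = np.random.default_rng(1)
true_scores = rng.integers(0, 4, size=200)        # latent 0-3 CR score levels
rate = lambda s: np.clip(s + rng.integers(-1, 2, s.size), 0, 3)  # noisy rater
print(round(cohens_kappa(rate(true_scores), rate(true_scores), 4), 2))

Kappa near 1 indicates near-perfect agreement; values near 0 indicate agreement no better than chance.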
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and the types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to explore further the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about reliability and academic honesty regarding multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Chunyan Liu; Raja Subhiyah; Richard A. Feinberg – Applied Measurement in Education, 2024
Mixed-format tests that include both multiple-choice (MC) and constructed-response (CR) items have become widely used in many large-scale assessments. When an item response theory (IRT) model is used to score a mixed-format test, the unidimensionality assumption may be violated if the CR items measure a different construct from that measured by MC…
Descriptors: Test Format, Response Style (Tests), Multiple Choice Tests, Item Response Theory
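The unidimensionality issue raised in the Liu, Subhiyah, and Feinberg entry above can be illustrated with a quick empirical check often run before fitting a unidimensional IRT model: correlate the MC and CR section scores and disattenuate by each section's reliability. The sketch below simulates data under two correlated latent traits; it is a rough heuristic under stated assumptions, not the article's method, and every name in it is invented.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Two latent traits with correlation 0.8: one drives the MC items, one the CR items.
theta = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=n)
mc = (theta[:, [0]] + rng.normal(size=(n, 20)) > 0).astype(int)  # 20 dichotomous MC items
cr = np.clip(np.round(2 + theta[:, [1]] + rng.normal(size=(n, 5))), 0, 4)  # 5 polytomous CR items

def split_half(x):
    # Spearman-Brown-corrected split-half reliability of a section score.
    a, b = x[:, ::2].sum(axis=1), x[:, 1::2].sum(axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

r_obs = np.corrcoef(mc.sum(axis=1), cr.sum(axis=1))[0, 1]
r_true = r_obs / np.sqrt(split_half(mc) * split_half(cr))  # disattenuated correlation
# A disattenuated correlation well below 1 hints that the two sections
# measure partly distinct constructs, making a unidimensional model suspect.
print(f"observed r = {r_obs:.2f}, disattenuated r = {r_true:.2f}")

With the trait correlation set to 0.8 in this simulation, the disattenuated correlation should land near 0.8 rather than 1, flagging the violation.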
Yavuz Akbulut – European Journal of Education, 2024
The testing effect refers to the gains in learning and retention that result from taking practice tests before the final test. Understanding the conditions under which practice tests improve learning is crucial, so four experiments were conducted with a total of 438 undergraduate students in Turkey. In the first study, students who took graded…
Descriptors: Foreign Countries, Undergraduate Students, Student Evaluation, Testing
Maximilian C. Fink; Larissa J. Kaltefleiter; Isabell Reis; Bernhard Ertl – Assessment & Evaluation in Higher Education, 2025
This study examines students' format-specific expectations and their preferences toward (1) written multiple-choice examinations, (2) written constructed-response examinations, (3) oral examinations, and (4) standardized practical examinations. N = 509 medical students completed a web-based survey, rating all four examination formats. Preferences…
Descriptors: Foreign Countries, Medical Students, Preferences, Multiple Choice Tests
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
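The pairing DeCarlo describes can be illustrated with two standard textbook forms; the equations below are generic (a 2PL item response function and an m-alternative forced-choice SDT rule), not the article's exact parameterization. Both are increasing functions of a single latent strength parameter, which is the shared conceptualization the abstract alludes to.

P(\text{correct} \mid \theta) = \frac{\exp\{a(\theta - b)\}}{1 + \exp\{a(\theta - b)\}} \quad \text{(2PL IRT, open-ended item)}

P(\text{correct} \mid d) = \int_{-\infty}^{\infty} \phi(x - d)\,\Phi(x)^{m-1}\,dx \quad \text{(SDT choice among } m \text{ MC alternatives)}

Here \phi and \Phi are the standard normal density and distribution function, d is the examinee's strength of evidence for the correct option, and m is the number of response options.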
Stephane E. Collignon; Josey Chacko; Salman Nazir – Journal of Information Systems Education, 2024
Most business schools require students to take at least one technical Management Information System (MIS) course. Due to the technical nature of the material, the course and the assessments tend to be anxiety-inducing. With over three out of every five students in US colleges suffering from "overwhelming anxiety" in some form, we study…
Descriptors: Multiple Choice Tests, Test Format, Business Schools, Information Systems
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and close-ended activities. The growth and scalability of Computer-Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize close-ended activities (i.e., multiple-choice questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Qian Liu; Navé Wald; Chandima Daskon; Tony Harland – Innovations in Education and Teaching International, 2024
This qualitative study looks at multiple-choice questions (MCQs) in examinations and their effectiveness in testing higher-order cognition. While there are claims that MCQs can do this, we consider many assertions problematic because of the difficulty in interpreting what higher-order cognition consists of and whether or not assessment tasks…
Descriptors: Multiple Choice Tests, Critical Thinking, College Faculty, Student Evaluation
Chieh-Ju Tsao; Chun-Sheng Chang; Chih-Hung Chen – Journal of Educational Computing Research, 2025
Prior research has emphasized that two-tier tests are an effective approach to guiding students through game tasks; furthermore, self-efficacy, which is potentially related to students' surroundings and academic performance, has a positive impact on students' cognitive outcomes in games. This underscores the value of a study to probe…
Descriptors: Game Based Learning, Elementary School Students, Grade 6, Foreign Countries
Taehyeong Kim; Byungmin Lee – Language Assessment Quarterly, 2025
The Korean College Scholastic Ability Test (CSAT) aims to assess Korean high school students' scholastic ability required for college readiness. As a high-stakes test, the examination serves as a pivotal hurdle for university admission and exerts a strong washback effect on the educational system in Korea. The present study set out to investigate…
Descriptors: Reading Comprehension, Reading Tests, Language Tests, Multiple Choice Tests
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities with new task formats and, equally, a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Fitria Lafifa; Dadan Rosana – Turkish Online Journal of Distance Education, 2024
This research aimed to develop a closed-ended multiple-choice test to assess and evaluate students' digital literacy skills. The sample in this study comprised students at MTsN 1 Blitar City who were selected using a purposive sampling technique. The test was also validated by experts, namely two doctors of physics and science from Yogyakarta State…
Descriptors: Educational Innovation, Student Evaluation, Digital Literacy, Multiple Choice Tests
