| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 12 |
| Descriptor | Count |
| --- | --- |
| Multiple Choice Tests | 22 |
| Test Construction | 22 |
| Test Content | 22 |
| Test Items | 11 |
| Student Evaluation | 7 |
| Test Format | 6 |
| Test Validity | 5 |
| Evaluation Methods | 4 |
| Foreign Countries | 4 |
| Test Reliability | 4 |
| Academic Achievement | 3 |
| Author | Count |
| --- | --- |
| Azevedo, Jose | 1 |
| Babo, Lurdes | 1 |
| Bello, Samira Abdullahi | 1 |
| Bichi, Ado Abdu | 1 |
| Boothroyd, Roger A. | 1 |
| Courchene, Robert | 1 |
| Davis, Doris Bitler | 1 |
| Ewing, Maureen | 1 |
| Geisinger, Kurt F. | 1 |
| Gonsalves, Chahna | 1 |
| Hafiz, Hadiza | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 14 |
| Reports - Descriptive | 8 |
| Reports - Research | 7 |
| Reports - Evaluative | 5 |
| Guides - Non-Classroom | 2 |
| Speeches/Meeting Papers | 2 |
| Books | 1 |
| Opinion Papers | 1 |
| Tests/Questionnaires | 1 |
| Education Level | Count |
| --- | --- |
| Higher Education | 10 |
| Postsecondary Education | 7 |
| Elementary Secondary Education | 2 |
| Grade 12 | 1 |
| Grade 4 | 1 |
| Grade 8 | 1 |
| Secondary Education | 1 |
| Audience | Count |
| --- | --- |
| Teachers | 5 |
| Practitioners | 2 |
| Policymakers | 1 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| National Assessment of Educational Progress | 1 |
Stevens, Scott P.; Palocsay, Susan W.; Novoa, Luis J. – INFORMS Transactions on Education, 2023
Test writing is a fundamental component of teaching. With increasing pressure to teach larger groups of students, conduct formal assessment of learning outcomes, and offer online and hybrid classes, there is a need for alternatives to constructed response problem-solving test questions. We believe that appropriate use of multiple-choice (MC)…
Descriptors: Multiple Choice Tests, Introductory Courses, Test Construction, Content Validity
Chahna Gonsalves – Journal of Learning Development in Higher Education, 2023
Multiple-choice quizzes (MCQs) are a popular form of assessment. A rapid shift to online assessment during the COVID-19 pandemic in 2020 drove the uptake of MCQs, yet limited invigilation and wide access to material on the internet allow students to solve the questions via internet search. ChatGPT, an artificial intelligence (AI) agent trained on…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Multiple Choice Tests
Haladyna, Thomas M. – IDEA Center, Inc., 2018
Writing multiple-choice test items to measure student learning in higher education is a challenge. Based on extensive scholarly research and experience, the author describes various item formats, offers guidelines for creating these items, and provides many examples of both good and bad test items. He also suggests some shortcuts for developing…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Higher Education
Leber, Jasmin; Renkl, Alexander; Nückles, Matthias; Wäschle, Kristin – Learning: Research and Practice, 2018
According to the model of constructive alignment, learners adjust their learning strategies to the announced assessment (backwash effect). Hence, when teaching for understanding, the assessment method should be aligned with this teaching goal to ensure that learners engage in corresponding learning strategies. A quasi-experimental field study with…
Descriptors: Learning Strategies, Testing Problems, Educational Objectives, Learning Motivation
Davis, Doris Bitler – Teaching of Psychology, 2017
Providing two or more versions of multiple-choice exams has long been a popular strategy for reducing the opportunity for students to engage in academic dishonesty. While the results of studies comparing exam scores under different question-order conditions have been inconclusive, the potential importance of contextual cues to aid student recall…
Descriptors: Test Construction, Multiple Choice Tests, Sequential Approach, Cues
Peter, Johannes; Leichner, Nikolas; Mayer, Anne-Kathrin; Krampen, Günter – Psychology Learning and Teaching, 2015
This paper reports the development of a fixed-choice test for the assessment of basic knowledge in psychology, for use with undergraduate as well as graduate students. Test content is selected based on a core concepts approach and includes a sample of concepts which are indexed most frequently in common introductory psychology textbooks. In a…
Descriptors: Tests, Psychology, Knowledge Level, Scores
Towns, Marcy H. – Journal of Chemical Education, 2014
Chemistry faculty members are highly skilled in obtaining, analyzing, and interpreting physical measurements, but often they are less skilled in measuring student learning. This work provides guidance for chemistry faculty from the research literature on multiple-choice item development in chemistry. Areas covered include content, stem, and…
Descriptors: Multiple Choice Tests, Test Construction, Psychometrics, Test Items
Bichi, Ado Abdu; Hafiz, Hadiza; Bello, Samira Abdullahi – International Journal of Evaluation and Research in Education, 2016
High-stakes testing is used to provide results that have important consequences, and validity is the cornerstone upon which all measurement systems are built. This study applied Item Response Theory principles to analyse Northwest University Kano Post-UTME Economics test items. The fifty (50) developed economics test items were…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Statistical Analysis
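The kind of IRT item analysis this entry describes can be sketched briefly. The snippet below fits a Rasch (1PL) model to a simulated 500-person by 50-item response matrix using joint maximum likelihood, the simplest (if statistically crude) estimation approach; the data, sample sizes, and update scheme are all assumptions made for illustration, with no connection to the study's actual dataset or software.

```python
import numpy as np

# Illustrative Rasch (1PL) item analysis on simulated 0/1 response data.
# Nothing here reproduces the Bichi, Hafiz & Bello study; it only shows
# the general shape of an IRT difficulty-estimation workflow.
rng = np.random.default_rng(0)
n_persons, n_items = 500, 50                       # 50 items, as in the abstract
theta_true = rng.normal(0.0, 1.0, n_persons)       # person abilities
b_true = rng.normal(0.0, 1.0, n_items)             # item difficulties
p_true = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < p_true).astype(float)

# Joint maximum likelihood cannot estimate ability for perfect or zero
# scores, so drop those respondents first.
keep = (X.sum(axis=1) > 0) & (X.sum(axis=1) < n_items)
X = X[keep]

theta = np.zeros(X.shape[0])
b = np.zeros(n_items)
for _ in range(50):                                # alternating Newton updates
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    resid = X - p                                  # score residuals
    info = p * (1.0 - p)                           # Fisher information weights
    theta += resid.sum(axis=1) / info.sum(axis=1)
    b -= resid.sum(axis=0) / info.sum(axis=0)
    b -= b.mean()                                  # identifiability constraint

print("first five estimated item difficulties:", np.round(b[:5], 2))
```

In practice an analyst would use a dedicated IRT package and inspect item fit, difficulty, and discrimination statistics rather than hand-rolling the estimator; the sketch only makes the underlying model concrete.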
Torres, Cristina; Lopes, Ana Paula; Babo, Lurdes; Azevedo, Jose – Online Submission, 2011
An MC (multiple-choice) question can be defined as a question in which students are asked to select one alternative from a given set of alternatives in response to a question stem. The objective of this paper is to analyse whether MC questions may be considered an interesting alternative for assessing knowledge, particularly in the mathematics area,…
Descriptors: Multiple Choice Tests, Alternative Assessment, Evaluation Methods, Questioning Techniques
Hendrickson, Amy; Patterson, Brian; Ewing, Maureen – College Board, 2010
The psychometric considerations and challenges associated with including constructed response items on tests are discussed along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…
Descriptors: Multiple Choice Tests, Test Format, Test Construction, Test Validity
National Assessment Governing Board, 2012
As the ongoing national indicator of what American students know and can do, the National Assessment of Educational Progress (NAEP) in Reading regularly collects achievement information on representative samples of students in grades 4, 8, and 12. Through The Nation's Report Card, the NAEP Reading Assessment reports how well students perform in…
Descriptors: Reading Achievement, National Competency Tests, Reading Comprehension, Grade 4
Hertenstein, Matthew J.; Wayand, Joseph F. – Journal of Instructional Psychology, 2008
Many psychology instructors present videotaped examples of behavior at least occasionally during their courses. However, few include video clips during examinations. We provide examples of video-based questions, offer guidelines for their use, and discuss their benefits and drawbacks. In addition, we provide empirical evidence to support the use…
Descriptors: Student Evaluation, Video Technology, Evaluation Methods, Test Construction
Marrelli, Anne F. – Performance and Instruction, 1995
Discusses the advantages of using multiple choice questions, highlighting the flexibility of using different variations of questions. Item writing guidelines include information on content, sensitivity, difficulty, irrelevant sources of difficulty, order, misleads, avoidance of clues, and exercises in the application of guidelines. (JKP)
Descriptors: Distractors (Tests), Guidelines, Multiple Choice Tests, Questioning Techniques
Sireci, Stephen G.; Geisinger, Kurt F. – Applied Psychological Measurement, 1992
A new method for evaluating the content representation of a test is illustrated. Item similarity ratings were obtained from three content domain experts to assess whether ratings corresponded to item groupings specified in the test blueprint. Multidimensional scaling and cluster analysis provided substantial information about the test's content…
Descriptors: Cluster Analysis, Content Analysis, Multidimensional Scaling, Multiple Choice Tests
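The method this entry describes, multidimensional scaling plus cluster analysis of expert item-similarity ratings, can be illustrated with standard tools. The sketch below substitutes simulated dissimilarity ratings for real expert data and uses scikit-learn's MDS with SciPy's hierarchical clustering; it is a schematic of the general technique, not the authors' procedure.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Simulated stand-in for averaged expert dissimilarity ratings: items
# 0-9 and 10-19 are rated similar within their block, dissimilar across
# blocks, mimicking two content areas in a test blueprint.
rng = np.random.default_rng(1)
n_items = 20
d = rng.uniform(0.6, 1.0, (n_items, n_items))
d[:10, :10] = rng.uniform(0.0, 0.3, (10, 10))
d[10:, 10:] = rng.uniform(0.0, 0.3, (10, 10))
d = (d + d.T) / 2                  # symmetrize
np.fill_diagonal(d, 0.0)           # an item is identical to itself

# Two-dimensional MDS map of the items from the precomputed dissimilarities.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)

# Hierarchical clustering of the same dissimilarities, cut at two clusters.
clusters = fcluster(linkage(squareform(d), method="average"),
                    t=2, criterion="maxclust")
print(np.round(coords[:3], 2))     # map coordinates for the first items
print(clusters)                    # should recover the two blueprint groups
```

Agreement between the recovered clusters and the blueprint's intended item groupings is the evidence of content representation the study examines.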
Pommerich, Mary – Journal of Technology, Learning, and Assessment, 2004
As testing moves from paper-and-pencil administration toward computerized administration, how to present tests on a computer screen becomes an important concern. Of particular concern are tests that contain necessary information that cannot be displayed on screen all at once for an item. Ideally, the method of presentation should not interfere…
Descriptors: Test Content, Computer Assisted Testing, Multiple Choice Tests, Computer Interfaces
