Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 6 |
Descriptor
| Classification | 23 |
| Multiple Choice Tests | 23 |
| Test Construction | 23 |
| Test Items | 15 |
| Evaluation Methods | 4 |
| Item Analysis | 4 |
| Performance Based Assessment | 4 |
| Scoring | 4 |
| Test Format | 4 |
| Cognitive Processes | 3 |
| College Faculty | 3 |
Author
| Haladyna, Thomas M. | 4 |
| Downing, Steven M. | 3 |
| Argo, Jana K. | 1 |
| Azevedo, Jose | 1 |
| Açikgül Firat, Esra | 1 |
| Babo, Lurdes | 1 |
| Badgett, John L. | 1 |
| Bao, Lei | 1 |
| Bennett, Randy Elliot | 1 |
| Blumberg, Phyllis | 1 |
| Bogner, Franz X. | 1 |
Education Level
| Higher Education | 5 |
| Postsecondary Education | 4 |
| Elementary Secondary Education | 2 |
| Secondary Education | 2 |
| Grade 5 | 1 |
| High Schools | 1 |
| Middle Schools | 1 |
Audience
| Practitioners | 1 |
| Teachers | 1 |
Location
| China | 1 |
| Germany | 1 |
| Pennsylvania (Pittsburgh) | 1 |
| Texas | 1 |
Assessments and Surveys
| Advanced Placement… | 1 |
| National Assessment of… | 1 |
Chen, Qingwei; Zhu, Guangtian; Liu, Qiaoyi; Han, Jing; Fu, Zhao; Bao, Lei – Physical Review Physics Education Research, 2020
Problem-solving categorization tasks have been well studied and used as an effective tool for assessment of student knowledge structure. In this study, a traditional free-response categorization test has been modified into a multiple-choice format, and the effectiveness of this new assessment is evaluated. Through randomized testing with Chinese…
Descriptors: Foreign Countries, Test Construction, Multiple Choice Tests, Problem Solving
Açikgül Firat, Esra; Köksal, Mustafa S. – Biochemistry and Molecular Biology Education, 2019
In this study, a 'biotechnology literacy test' was developed to determine the biotechnology literacy of prospective science teachers, and its validity and reliability were determined. For this purpose, 42 items were prepared by considering Bybee's scientific literacy classifications (nominal, functional, procedural, and multidimensional). The…
Descriptors: Test Construction, Multiple Choice Tests, Science Teachers, Preservice Teachers
Young, Arthur; Shawl, Stephen J. – Astronomy Education Review, 2013
Professors who teach introductory astronomy to students not majoring in science desire them to comprehend the concepts and theories that form the basis of the science. They are usually less concerned about the myriad of detailed facts and information that accompanies the science. As such, professors prefer to test the students for such…
Descriptors: Multiple Choice Tests, Classification, Astronomy, Introductory Courses
Torres, Cristina; Lopes, Ana Paula; Babo, Lurdes; Azevedo, Jose – Online Submission, 2011
An MC (multiple-choice) question can be defined as a question in which students are asked to select one alternative from a given set of alternatives in response to a question stem. The objective of this paper is to analyse whether MC questions may be considered an interesting alternative for assessing knowledge, particularly in the mathematics area,…
Descriptors: Multiple Choice Tests, Alternative Assessment, Evaluation Methods, Questioning Techniques
Gerstner, Sabine; Bogner, Franz X. – Educational Research, 2009
Background: This study deals with the application of concept mapping to the teaching and learning of a science topic with secondary school students in Germany. Purpose: The main research questions were: (1) Do different teaching approaches affect concept map structure or students' learning success? (2) Is the structure of concept maps influenced…
Descriptors: Concept Mapping, Age, Maps, Classification
Longford, Nicholas T. – 1994
This study is a critical evaluation of the roles for coding and scoring of missing responses to multiple-choice items in educational tests. The focus is on tests in which the test-takers have little or no motivation; in such tests, omitting and not reaching (as classified by the currently adopted operational rules) are quite frequent. Data from the…
Descriptors: Algorithms, Classification, Coding, Models
Haladyna, Thomas M.; Downing, Steven M. – Applied Measurement in Education, 1989 (Peer reviewed)
Results of 96 theoretical/empirical studies were reviewed to see if they support a taxonomy of 43 rules for writing multiple-choice test items. The taxonomy is the result of an analysis of 46 textbooks dealing with multiple-choice item writing. For nearly half of the rules, no research was found. (SLD)
Descriptors: Classification, Literature Reviews, Multiple Choice Tests, Test Construction
Haladyna, Thomas M.; Downing, Steven M.; Rodriguez, Michael C. – Applied Measurement in Education, 2002 (Peer reviewed)
Validated a taxonomy of 31 multiple-choice item-writing guidelines through a logical process that included reviewing 27 textbooks on educational testing and the results of 27 studies and reviews published since 1990. Presents the taxonomy, which is intended for classroom assessment. (SLD)
Descriptors: Classification, Literature Reviews, Multiple Choice Tests, Student Evaluation
Badgett, John L.; Christmann, Edwin P. – Corwin, 2009
While today's curriculum is largely driven by standards, many teachers find the lack of specificity in the standards to be confounding and even intimidating. Now this practical book provides middle and high school teachers with explicit guidance on designing specific objectives and developing appropriate formative and summative assessments to…
Descriptors: Test Items, Student Evaluation, Knowledge Level, National Standards
Blumberg, Phyllis; And Others – Educational and Psychological Measurement, 1982 (Peer reviewed)
First year medical students answered parallel multiple-choice questions at different taxonomic levels as part of their diagnostic examinations. The results show that when content is held constant, students perform as well on interpretation and problem-solving questions as on recall questions. (Author/BW)
Descriptors: Classification, Cognitive Processes, Difficulty Level, Higher Education
Haladyna, Thomas M.; Downing, Steven M. – Applied Measurement in Education, 1989 (Peer reviewed)
A taxonomy of 43 rules for writing multiple-choice test items is presented, based on a consensus of 46 textbooks. These guidelines are presented as complete and authoritative, with solid consensus apparent for 33 of the rules. Four rules lack consensus, and 5 rules were cited fewer than 10 times. (SLD)
Descriptors: Classification, Interrater Reliability, Multiple Choice Tests, Objective Tests
McCowan, Richard J. – Online Submission, 1999
Item writing is a major responsibility of trainers. Too often, qualified staff who prepare lessons carefully and teach conscientiously use inadequate tests that do not validly reflect the true level of trainee achievement. This monograph describes techniques for constructing multiple-choice items that measure student performance accurately. It…
Descriptors: Multiple Choice Tests, Item Analysis, Test Construction, Test Items
Cox, Richard C. – 1965
The validity of an educational achievement test depends upon the correspondence between specified educational objectives and the extent to which these objectives are measured by the evaluation instrument. This study is designed to evaluate the effect of statistical item selection on the structure of the final evaluation instrument as compared with…
Descriptors: Achievement Tests, Classification, Educational Objectives, Item Analysis
Bennett, Randy Elliot; And Others – 1990
A framework for categorizing constructed-response items was developed in which items were ordered on a continuum from multiple-choice to presentation/performance according to the degree of constraint placed on the examinee's response. Two investigations were carried out to evaluate the validity of this framework. In the first investigation, 27…
Descriptors: Classification, Constructed Response, Models, Multiple Choice Tests
Kadhi, Tau – Online Submission, 2004
This paper addresses instrumentation design and an ongoing study of an online formative assessment instrument and its effects in pre-college (developmental/remedial) mathematics courses. The use of facet theory and instructional technologies are harmonized to construct and scale an online instrument designed to document student procedures while…
Descriptors: Formative Evaluation, Measures (Individuals), Computer Assisted Testing, Remedial Mathematics
