Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 14 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Multiple Choice Tests | 22 |
| Test Content | 22 |
| Test Items | 22 |
| Test Construction | 11 |
| Test Format | 9 |
| Mathematics Tests | 7 |
| Item Analysis | 6 |
| Difficulty Level | 5 |
| Student Evaluation | 4 |
| Achievement Tests | 3 |
| College Entrance Examinations | 3 |
Author
| Author | Results |
| --- | --- |
| Arneson, Amy | 1 |
| Atalmis, Erkan Hasan | 1 |
| Azevedo, Jose | 1 |
| Babo, Lurdes | 1 |
| Barrett, Richard S. | 1 |
| Bello, Samira Abdullahi | 1 |
| Benz, Christiane | 1 |
| Bichi, Ado Abdu | 1 |
| Brunner, Esther | 1 |
| Bruns, Julia | 1 |
| Buck, Gary | 1 |
Audience
| Audience | Results |
| --- | --- |
| Teachers | 2 |
| Practitioners | 1 |
Location
| Location | Results |
| --- | --- |
| Nigeria | 1 |
Assessments and Surveys
| Assessment | Results |
| --- | --- |
| SAT (College Admission Test) | 2 |
| National Assessment of… | 1 |
Atalmis, Erkan Hasan; Kingston, Neal Martin – SAGE Open, 2018
This study explored the impact of homogeneity of answer choices on item difficulty and discrimination. Twenty-two matched pairs of elementary and secondary mathematics items were administered to randomly equivalent samples of students. Each item pair comparison was treated as a separate study with the set of effect sizes analyzed using…
Descriptors: Test Items, Difficulty Level, Multiple Choice Tests, Mathematics Tests
Gasteiger, Hedwig; Bruns, Julia; Benz, Christiane; Brunner, Esther; Sprenger, Priska – ZDM: The International Journal on Mathematics Education, 2020
Measurement instruments of early childhood teachers' mathematical pedagogical content knowledge (MPCK) have to consider the special characteristics of early childhood teaching. Early childhood teaching includes some planned activities but in contrast to learning in school, it is often motivated and generated by situations which unfold…
Descriptors: Mathematics Instruction, Pedagogical Content Knowledge, Multiple Choice Tests, Kindergarten
Arneson, Amy – ProQuest LLC, 2019
This three-paper dissertation explores item cluster-based assessments, first in general as they relate to modeling, and then specific issues surrounding the design of a particular item cluster-based assessment. There should be a reasonable analogy between the structure of a psychometric model and the cognitive theory that the assessment is based upon…
Descriptors: Item Response Theory, Test Items, Critical Thinking, Cognitive Tests
Zhang, Xinxin; Gierl, Mark – Journal of Educational Issues, 2016
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Descriptors: Test Items, Automation, Content Validity, Test Validity
Haladyna, Thomas M. – IDEA Center, Inc., 2018
Writing multiple-choice test items to measure student learning in higher education is a challenge. Based on extensive scholarly research and experience, the author describes various item formats, offers guidelines for creating these items, and provides many examples of both good and bad test items. He also suggests some shortcuts for developing…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Higher Education
Davis, Doris Bitler – Teaching of Psychology, 2017
Providing two or more versions of multiple-choice exams has long been a popular strategy for reducing the opportunity for students to engage in academic dishonesty. While the results of studies comparing exam scores under different question-order conditions have been inconclusive, the potential importance of contextual cues to aid student recall…
Descriptors: Test Construction, Multiple Choice Tests, Sequential Approach, Cues
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
Towns, Marcy H. – Journal of Chemical Education, 2014
Chemistry faculty members are highly skilled in obtaining, analyzing, and interpreting physical measurements, but often they are less skilled in measuring student learning. This work provides guidance for chemistry faculty from the research literature on multiple-choice item development in chemistry. Areas covered include content, stem, and…
Descriptors: Multiple Choice Tests, Test Construction, Psychometrics, Test Items
Bichi, Ado Abdu; Hafiz, Hadiza; Bello, Samira Abdullahi – International Journal of Evaluation and Research in Education, 2016
High-stakes testing is used to provide results that have important consequences. Validity is the cornerstone upon which all measurement systems are built. This study applied Item Response Theory principles to analyse Northwest University Kano Post-UTME Economics test items. The developed fifty (50) economics test items were…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Statistical Analysis
Cawthon, Stephanie – American Annals of the Deaf, 2011
Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64…
Descriptors: Language Styles, Test Content, Syntax, Linguistics
Torres, Cristina; Lopes, Ana Paula; Babo, Lurdes; Azevedo, Jose – Online Submission, 2011
An MC (multiple-choice) question can be defined as a question in which students are asked to select one alternative from a given set of alternatives in response to a question stem. The objective of this paper is to analyse whether MC questions may be considered an interesting alternative for assessing knowledge, particularly in the mathematics area,…
Descriptors: Multiple Choice Tests, Alternative Assessment, Evaluation Methods, Questioning Techniques
Hendrickson, Amy; Patterson, Brian; Ewing, Maureen – College Board, 2010
The psychometric considerations and challenges associated with including constructed response items on tests are discussed along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…
Descriptors: Multiple Choice Tests, Test Format, Test Construction, Test Validity
Kobrin, Jennifer L.; Kim, Rachel; Sackett, Paul – College Board, 2011
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice versus constructed response), cognitive complexity, and content of these assessments (achievement versus aptitude) at the forefront of the discussion. This study addressed these questions by…
Descriptors: College Entrance Examinations, Mathematics Tests, Test Items, Predictive Validity
Hertenstein, Matthew J.; Wayand, Joseph F. – Journal of Instructional Psychology, 2008
Many psychology instructors present videotaped examples of behavior at least occasionally during their courses. However, few include video clips during examinations. We provide examples of video-based questions, offer guidelines for their use, and discuss their benefits and drawbacks. In addition, we provide empirical evidence to support the use…
Descriptors: Student Evaluation, Video Technology, Evaluation Methods, Test Construction
Barrett, Richard S. – Public Personnel Management, 1992 (peer reviewed)
The Content Validation Form is presented as a means of demonstrating that occupational tests provide a representative sample of the work, or of the knowledge, skills, and abilities, necessary for a job. It is best used during test construction by a panel of subject matter experts. (SK)
Descriptors: Content Validity, Item Analysis, Multiple Choice Tests, Occupational Tests