Publication Date
| Period | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 85 |
| Since 2022 (last 5 years) | 453 |
| Since 2017 (last 10 years) | 1241 |
| Since 2007 (last 20 years) | 2515 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 122 |
| Teachers | 105 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 14 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |
Location
| Location | Records |
| --- | --- |
| Canada | 134 |
| Turkey | 131 |
| Australia | 123 |
| Iran | 66 |
| Indonesia | 61 |
| United Kingdom | 51 |
| Germany | 50 |
| Taiwan | 46 |
| United States | 43 |
| China | 39 |
| California | 35 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |
A Comparison of the Effects of Practice Tests and Traditional Review on Performance and Calibration.
Bol, Linda; Hacker, Douglas J. – Journal of Experimental Education, 2001 (peer reviewed)
Studied the impact of practice tests on students' calibration and examination performance for multiple-choice and essay examinations. Results for 59 graduate students show that practice tests were associated with significantly lower scores on the midterm multiple-choice items and less accurate predictions and postdictions on these items. Discusses…
Descriptors: Essay Tests, Graduate Students, Graduate Study, Multiple Choice Tests
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2001 (peer reviewed)
Describes four measures of test unreliability that quantify effects of question selection and guessing, both separately and together--three chosen for immediacy and one for greater mathematical elegance. Quantifies their dependence on test length and number of answer options per question. Concludes that many multiple choice tests are unreliable…
Descriptors: Guessing (Tests), Mathematical Models, Multiple Choice Tests, Objective Tests
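Burton's abstract does not spell out his four unreliability measures; as a rough sense of why guessing alone limits the reliability of short multiple-choice tests, here is a minimal sketch under a simple binomial guessing model (the function name, numbers, and model below are illustrative assumptions, not Burton's formulas):

```python
# Minimal sketch: how much score variation blind guessing alone introduces on a
# multiple-choice test, under a simple binomial model (not Burton's measures).
from math import sqrt

def guessing_noise(n_items: int, n_options: int) -> tuple:
    """Expected score and standard deviation from pure guessing on n_items
    questions with n_options answer choices each."""
    p = 1.0 / n_options                    # chance of guessing one item right
    expected = n_items * p                 # binomial mean
    sd = sqrt(n_items * p * (1.0 - p))     # binomial standard deviation
    return expected, sd

# A 40-item, 4-option test: a pure guesser scores about 10 +/- 2.7, so a
# sizeable share of the raw-score range can reflect chance rather than knowledge.
print(guessing_noise(40, 4))
```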
Lawrenz, Frances; Huffman, Douglas; Welch, Wayne – Journal of Research in Science Teaching, 2000 (peer reviewed)
Compares the costs of four assessment formats: (1) multiple choice; (2) open ended; (3) laboratory station; and (4) full investigation. Tracks the amount of time spent preparing the devices, developing scoring consistency for the devices, and scoring the devices as they were developed. Compares times as if 1,000 students completed each assessment.…
Descriptors: Academic Achievement, Cost Effectiveness, Evaluation Methods, High Schools
Williams, J. H. Sims; Barry, M. D. J. – Teaching Mathematics and Its Applications, 1999 (peer reviewed)
The potentially huge number of questions in a national database offers the opportunity for open testing where each student has his own test and can take the test in his own time, saving on supervision and PCs. (Author/ASK)
Descriptors: Computer Uses in Education, Educational Assessment, Evaluation Methods, Mathematics Education
Friedman, Stephen J. – Journal of Educational Measurement, 1999 (peer reviewed)
This volume describes the characteristics and functions of test items, presents editorial guidelines for writing them, offers methods for judging their quality, and compiles a set of important issues about test items. (SLD)
Descriptors: Constructed Response, Criteria, Evaluation Methods, Multiple Choice Tests
Taylor, Annette Kujawski – College Student Journal, 2005 (peer reviewed)
This research examined two elements of multiple-choice test construction: balancing the answer key and choosing the optimal number of options. In Experiment 1 the three conditions included a balanced key, overrepresentation of a and b responses, and overrepresentation of c and d responses. The results showed that error patterns were independent of the key, reflecting…
Descriptors: Comparative Analysis, Test Items, Multiple Choice Tests, Test Construction
Williams, Robert L.; Clark, Lloyd – Educational Research, 2004
In the class session following feedback regarding their scores on multiple-choice exams, undergraduate students in a large human development course rated the strength of possible contributors to their exam performance. Students rated items related to their personal effort in preparing for the exam (identified as student effort in the paper), their…
Descriptors: Multiple Choice Tests, Undergraduate Students, Student Attitudes, Scores
van der Linden, Wim J.; Sotaridona, Leonardo – Journal of Educational Measurement, 2004
A statistical test for the detection of answer copying on multiple-choice tests is presented. The test is based on the idea that the answers of examinees to test items may be the result of three possible processes: (1) knowing, (2) guessing, and (3) copying, but that examinees who do not have access to the answers of other examinees can arrive at…
Descriptors: Multiple Choice Tests, Test Items, Hypothesis Testing, Statistical Distributions
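The van der Linden and Sotaridona statistic is considerably more elaborate than this, but the knowing/guessing/copying idea behind copy detection can be illustrated with a toy screen that counts matching incorrect answers between two examinees and compares that count to a binomial chance model (the function names, the uniform-guessing assumption, and the example data are illustrative only, not the authors' method):

```python
# Toy answer-copying screen: count identical wrong answers shared by a suspected
# copier and a source, then ask how surprising that count would be if the copier
# were only guessing among the wrong options. Not the published statistic.
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def copy_screen(copier, source, key, n_options: int = 4) -> float:
    """Return a p-value for the number of matching incorrect answers."""
    both_wrong = [(c, s) for c, s, k in zip(copier, source, key)
                  if c != k and s != k]
    matches = sum(c == s for c, s in both_wrong)
    p_match = 1.0 / (n_options - 1)   # chance a guesser hits the same wrong option
    return binom_sf(matches, len(both_wrong), p_match)

key    = list("ABCDABCDAB")
source = list("ABCDABCDCC")   # source misses the last two items
copier = list("ABCDABCDCC")   # identical wrong answers on both missed items
print(copy_screen(copier, source, key))
```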
Lane, Andrew M.; Dale, Crispin; Horrell, Andrew – Journal of Further and Higher Education, 2006
The aims of the study were to use differentiated online learning material with a Level 1 statistics module for undergraduate sport students and to examine relationships between student performance on differentiated tests and module performance. We developed the differentiated material by writing easy and hard multiple choice tests, with the…
Descriptors: Online Courses, Instructional Materials, Physical Education, Undergraduate Students
Sanger, Michael J. – Journal of Chemical Education, 2005 (peer reviewed)
A total of 156 students were asked to provide free-response balanced chemical equations for a classic multiple-choice particulate-drawing question first used by Nurrenbern and Pickering. The balanced equations and the number of students providing each equation are reported in this study. The most common student errors included a confusion between…
Descriptors: Equations (Mathematics), Chemistry, Concept Formation, Student Evaluation
Toby, Sidney; Plano, Richard J. – Journal of Chemical Education, 2004 (peer reviewed)
The limitations of current teaching and assessment of knowledge are emphasized, and an improved method of testing students is suggested. This examination technique would replace multiple-choice questions with free-response questions for numerical problems, in which the numerical answers fed into the computer would be optically scanned and graded…
Descriptors: Testing, Multiple Choice Tests, Grading, Science Instruction
Belanich, James; Wisher, Robert A.; Orvis, Kara L. – American Journal of Distance Education, 2004
A Web-based tool that allows students to generate multiple-choice questions in a collaborative, distributed setting was evaluated through several comparisons. Students first completed a Web-based tutorial on writing effective multiple-choice questions and then authored questions on a given topic. Next, using the Web-based tool, groups of students…
Descriptors: Internet, Multiple Choice Tests, Web Based Instruction, Comparative Analysis
Mottet, Timothy P.; Beebe, Steven A. – Communication Education, 2006
The purpose of this study was to examine whether instructor perceptions of student responsive behaviors and student socio-communicative style were related to instructors' subjective (speech presentation) and objective (multiple-choice exam) assessments of student work. The results suggest that student nonverbal and verbal responsive behaviors…
Descriptors: Student Attitudes, Student Behavior, Teacher Attitudes, Multiple Choice Tests
Maki, Ruth H.; Shields, Micheal; Wheeler, Amanda Easton; Zacchilli, Tammy Lowery – Journal of Educational Psychology, 2005
The authors investigated absolute and relative metacomprehension accuracy as a function of verbal ability in college students. Students read hard texts, revised texts, or a mixed set of texts. They then predicted their performance, took a multiple-choice test on the texts, and made posttest judgments about their performance. With hard texts,…
Descriptors: Metacognition, Individual Differences, College Students, Verbal Ability
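Absolute and relative metacomprehension accuracy of the kind Maki et al. report are typically computed from a student's per-text predictions and test scores; a minimal sketch of the two quantities follows (the sample data are invented, and the Pearson correlation stands in for the gamma correlation often used in this literature, so treat it as an assumption rather than the authors' scoring):

```python
# Toy sketch of absolute vs. relative metacomprehension accuracy for one student
# who predicted, then obtained, a percent-correct score on several texts.
from statistics import mean, stdev

predicted = [70, 55, 80, 60]   # predicted percent correct per text
actual    = [60, 50, 85, 40]   # obtained percent correct per text

# Absolute accuracy: how far predictions sit from actual scores on average.
absolute_error = mean(abs(p - a) for p, a in zip(predicted, actual))

# Relative accuracy: do higher predictions go with higher scores across texts?
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

relative_accuracy = pearson(predicted, actual)
print(absolute_error, relative_accuracy)
```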
Gorin, Joanna S. – Journal of Educational Measurement, 2005
Based on a previously validated cognitive processing model of reading comprehension, this study experimentally examines potential generative components of text-based multiple-choice reading comprehension test questions. Previous research (Embretson & Wetzel, 1987; Gorin & Embretson, 2005; Sheehan & Ginther, 2001) shows text encoding and decision…
Descriptors: Reaction Time, Reading Comprehension, Difficulty Level, Test Items

