Showing 331 to 345 of 534 results
Schmitt, Alicia P.; Crocker, Linda – 1981
The effectiveness of a strategy for improving performance on multiple choice items for examinees with different levels of test anxiety was assessed. Undergraduate measurement students responded to the Mandler-Sarason Test Anxiety Scale and to an objective test covering course content. Results indicated that, for most examinees, generation of an…
Descriptors: Error of Measurement, Higher Education, Multiple Choice Tests, Response Style (Tests)
Weiten, Wayne – 1979
Two different formats for multiple-choice test items were compared in an experimental test given in a college class in introductory psychology. In one format, a question or incomplete statement was followed by four answers or completions, only one of which was correct. In the other format, the double multiple-choice version, the same questions…
Descriptors: Difficulty Level, Higher Education, Item Analysis, Multiple Choice Tests
Scheetz, James P.; vonFraunhofer, J. Anthony – 1980
Subkoviak suggested a technique for estimating both group reliability and the reliability associated with assigning a given individual to a mastery or non-mastery category based on a single test administration. Two assumptions underlie this model. First, it is assumed that had successive test administrations occurred, scores for each individual…
Descriptors: Criterion Referenced Tests, Cutting Scores, Error of Measurement, Higher Education
Ebel, Robert L. – 1973
True-false achievement test items written by typical classroom teachers show about two-thirds of the discrimination of their multiple-choice test items. This is about what should be expected in view of the higher probability of chance success on the true-false items. However, at least half again as many true-false items as multiple-choice items…
Descriptors: Guessing (Tests), Multiple Choice Tests, Objective Tests, Scoring
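A brief arithmetic aside on the chance-success point in the entry above (a background illustration, not drawn from the cited study): blind guessing earns an expected 1/2 per true-false item but only 1/k per k-option multiple-choice item, which is why chance success cuts into the discrimination of true-false items more. A minimal sketch in Python:

# Expected proportion correct from blind guessing (illustrative only)
def chance_score(num_options):
    return 1.0 / num_options

print(chance_score(2))  # true-false item: 0.5
print(chance_score(4))  # four-option multiple-choice item: 0.25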
Peer reviewed
Poizner, Sharon B.; And Others – Applied Psychological Measurement, 1978
Binary, probability, and ordinal scoring procedures for multiple-choice items were examined. In two situations, it was found that both the probability and ordinal scoring systems were more reliable than the binary scoring method. (Author/CTM)
Descriptors: Confidence Testing, Guessing (Tests), Higher Education, Multiple Choice Tests
Peer reviewed
Whitby, L. G. – Medical Education, 1977
Advantages and disadvantages of no-penalty and penalty marking systems are discussed. Ways in which examiners have attempted to correct for guessing by students are reviewed, along with the use of "don't know" options and confidence-weighting for attempting to assess the degree of certainty that candidates attach to their answers. (Author/LBH)
Descriptors: Grading, Guessing (Tests), Higher Education, Medical Education
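As background to the penalty-marking systems mentioned in the entry above, the conventional correction for guessing (formula scoring) deducts a fraction of the wrong answers so that blind guessing has an expected payoff of zero: corrected score = R - W/(k - 1) for k-option items, where R is the number right and W the number wrong. A minimal sketch, with names chosen here for illustration:

def formula_score(right, wrong, num_options):
    # Deduct 1/(k-1) point per wrong answer; omitted ("don't know") items cost nothing
    return right - wrong / (num_options - 1)

print(formula_score(right=40, wrong=12, num_options=4))  # 40 - 12/3 = 36.0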
Peer reviewed
Shively, Michael Jay – Journal of Veterinary Medical Education, 1978
Some of the merits and pitfalls of multiple choice examinations are outlined and ways of increasing reliability and feedback information are summarized. Included are discussions of basic format, examples of poor design, examples of augmentation, and feedback from computerized grading. (LBH)
Descriptors: Feedback, Grading, Higher Education, Instructional Improvement
Peer reviewed
Weiten, Wayne – Journal of Experimental Education, 1984
The effects of violating four item construction principles were examined to assess the validity of the principles and the importance of students' test wiseness. While flawed items were significantly less difficult than sound items, differences in item discrimination, test reliability, and concurrent validity were not observed. (Author/BW)
Descriptors: Difficulty Level, Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Cranney, A. Garr – Journal of Reading Behavior, 1972
Descriptors: Cloze Procedure, College Students, Item Analysis, Multiple Choice Tests
Peer reviewed
Costin, Frank – Educational and Psychological Measurement, 1972
This study confirmed the practical benefits of three-choice items. (Author)
Descriptors: Achievement Tests, Cues, Item Analysis, Multiple Choice Tests
Peer reviewed
Ilyin, Donna – TESOL Quarterly, 1970
This paper describes the development of EPT (English-Second-Language Placement Test) 100-200-300, which places adult students into the first three levels of English-Second-Language classes, and discusses work done on EPT 400-500-600, an experimental test to place students in the last three levels of ESL classes. (Author/FB)
Descriptors: Adult Education, English (Second Language), Language Learning Levels, Language Tests
Peer reviewed
Forsyth, Robert A.; Spratt, Kevin F. – Journal of Educational Measurement, 1980
The effects of two item formats on item difficulty and item discrimination indices for mathematics problem solving multiple-choice tests were investigated. One format required identifying the proper "set-up" for the item; the other format required complete solving of the item. (Author/JKS)
Descriptors: Difficulty Level, Junior High Schools, Multiple Choice Tests, Problem Solving
Peer reviewed
Yelvington, James Yowell; Brady, Raymond G. – Community/Junior College Research Quarterly, 1979
Assesses the applicability of corrective feedback (CF) testing, which allows multiple attempts to respond to a test item, to the community college classroom. Compares CF testing to single answer testing, especially with regard to reliability, equitability, and effect on student motivation. (DD)
Descriptors: Community Colleges, Educational Testing, Feedback, Multiple Choice Tests
Peer reviewed
Feldt, Leonard S. – Applied Measurement in Education, 1993
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models
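One way to see the 0.50 recommendation in the entry above (a standard psychometric result, not a formula quoted from the article): under 0/1 scoring, an item answered correctly by a proportion p of examinees has score variance p(1 - p), which peaks at p = 0.50; very easy or very hard items contribute little variance and therefore little to total-score reliability. A short sketch:

def item_variance(p):
    # Variance of a 0/1-scored item with proportion correct p
    return p * (1 - p)

for p in (0.5, 0.7, 0.9):
    print(p, item_variance(p))  # 0.25, 0.21 (approx.), 0.09 (approx.)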
Peer reviewed
Burton, Richard F. – Assessment and Evaluation in Higher Education, 2005
Examiners seeking guidance on multiple-choice and true/false tests are likely to encounter various faulty or questionable ideas. Twelve of these are discussed in detail, having to do mainly with the effects on test reliability of test length, guessing and scoring method (i.e. number-right scoring or negative marking). Some misunderstandings could…
Descriptors: Guessing (Tests), Multiple Choice Tests, Objective Tests, Test Reliability
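For the test-length effect mentioned in the entry above, the standard Spearman-Brown relation (general psychometric background, not quoted from the article) predicts the reliability of a test lengthened by a factor n from the reliability r of the original form: r_n = n*r / (1 + (n - 1)*r). A small sketch:

def spearman_brown(r, n):
    # Predicted reliability of a test lengthened by factor n (assumes parallel items)
    return n * r / (1 + (n - 1) * r)

print(round(spearman_brown(r=0.70, n=2), 2))  # doubling a 0.70-reliable test gives about 0.82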