Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Descriptor
| Multiple Choice Tests | 23 |
| Scoring | 23 |
| Scoring Formulas | 23 |
| Guessing (Tests) | 11 |
| Test Reliability | 10 |
| Test Validity | 10 |
| Confidence Testing | 6 |
| Item Analysis | 6 |
| Response Style (Tests) | 6 |
| Testing | 6 |
| Weighted Scores | 6 |
Author
| Frary, Robert B. | 3 |
| Echternacht, Gary | 2 |
| Baskin, David | 1 |
| Boldt, Robert F. | 1 |
| Bruno, James E. | 1 |
| Diamond, James J. | 1 |
| Echternacht, Gary J. | 1 |
| Essex, Diane L. | 1 |
| Ferreira, Maria Amélia | 1 |
| Gaio, A. Rita | 1 |
| Gilmer, Jerry S. | 1 |
Publication Type
| Reports - Research | 9 |
| Journal Articles | 4 |
| Speeches/Meeting Papers | 2 |
| Reports - Evaluative | 1 |
Audience
| Researchers | 1 |
Location
| Czech Republic | 1 |
Severo, Milton; Gaio, A. Rita; Povo, Ana; Silva-Pereira, Fernanda; Ferreira, Maria Amélia – Anatomical Sciences Education, 2015
In theory, formula scoring methods increase the reliability of multiple-choice tests relative to number-right scoring. This study aimed to evaluate the impact of the formula scoring method in clinical anatomy multiple-choice examinations and to compare it with the number-right scoring method, hoping to achieve an…
Descriptors: Anatomy, Multiple Choice Tests, Scoring, Decision Making
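For reference, the classical formula score contrasted with number-right scoring above penalizes wrong answers to offset expected guessing gains. A minimal sketch, with illustrative numbers not taken from the study:

```python
def formula_score(num_right: int, num_wrong: int, k: int) -> float:
    """Classical correction-for-guessing: FS = R - W / (k - 1), where
    R = right answers, W = wrong answers (omits excluded), and
    k = options per item. Under blind random guessing, the expected
    penalty exactly offsets the expected gain from guessing."""
    return num_right - num_wrong / (k - 1)

# Example: 40 right and 10 wrong on 4-option items.
print(formula_score(40, 10, 4))   # 36.67, vs. a number-right score of 40
```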
Jancarík, Antonín; Kostelecká, Yvona – Electronic Journal of e-Learning, 2015
Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. This paper focuses on one of these tools: matching questions in which one…
Descriptors: Online Courses, Computer Assisted Testing, Test Items, Scoring Formulas
Frary, Robert B. – Applied Measurement in Education, 1989 (peer reviewed)
Multiple-choice response and scoring methods that attempt to determine an examinee's degree of knowledge about each item in order to produce a total test score are reviewed. There is apparently little advantage to such schemes; however, they may have secondary benefits such as providing feedback to enhance learning. (SLD)
Descriptors: Knowledge Level, Multiple Choice Tests, Scoring, Scoring Formulas
Livingston, Samuel A.; Kastrinos, William – 1982
Leo Nedelsky developed a method for determining absolute grading standards for multiple-choice tests. His method required a group of judges to examine each test question and eliminate those responses that the lowest D- student should be able to reject as incorrect. The probabilities of answering correctly from the remaining responses were then used to compute an expected test…
Descriptors: Cutting Scores, Judges, Multiple Choice Tests, Real Estate
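A minimal sketch of the expected-score computation the abstract describes (illustrative code, not the authors'):

```python
def nedelsky_standard(remaining_options: list[int]) -> float:
    """Nedelsky absolute standard. For each item, the borderline
    examinee is assumed to guess at random among the options the
    judges did NOT eliminate, so the item's expected score is
    1 / (number of remaining options); the expected test score is
    the sum over items."""
    return sum(1.0 / r for r in remaining_options)

# Example: judges leave 2, 3, and 1 plausible options on three items.
print(nedelsky_standard([2, 3, 1]))   # 0.5 + 0.33 + 1.0 ≈ 1.83
```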
Waters, Brian K. – Journal of Educational Research, 1976 (peer reviewed)
This pilot study compared two empirically derived option-weighting methods and their effect on the reliability and validity of multiple-choice test scores, relative to conventional rights-only scoring. (MM)
Descriptors: Guessing (Tests), Measurement, Multiple Choice Tests, Scoring
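One simple empirically derived option-weighting scheme, sketched under the assumption (not stated in the abstract) that weights are set from a criterion such as total test score:

```python
import numpy as np

def empirical_option_weights(choices: np.ndarray, criterion: np.ndarray) -> dict:
    """Weight each option of an item by the mean criterion score
    (e.g., total test score) of the examinees who chose it, so
    responses favored by more able examinees earn more credit than
    under rights-only scoring. Illustrative, not the study's method."""
    return {opt: float(criterion[choices == opt].mean())
            for opt in np.unique(choices)}

# choices[i] = option examinee i picked on one item.
choices = np.array(["A", "B", "A", "C", "A", "B"])
criterion = np.array([38.0, 22.0, 35.0, 18.0, 40.0, 25.0])
print(empirical_option_weights(choices, criterion))
# {'A': 37.67, 'B': 23.5, 'C': 18.0} (rounded)
```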
Lord, Frederic M. – Journal of Educational Measurement, 1975 (peer reviewed)
The assumption that examinees either know the answer to a test item or else guess at random is usually totally implausible. A different assumption is outlined, under which formula scoring is found to be clearly superior to number right scoring. (Author)
Descriptors: Guessing (Tests), Multiple Choice Tests, Response Style (Tests), Scoring
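For context, under the classical assumption the abstract calls implausible (an examinee either knows an item or guesses blindly, never omitting), the formula score recovers the number of items actually known in expectation, while number-right is inflated by chance successes. A small simulation, with all numbers illustrative:

```python
import random

def simulate(n_items=100, k=4, known=60, trials=2000):
    """Each trial: the examinee knows `known` items and guesses at
    random on the rest. Returns mean number-right and mean formula
    score R - W/(k-1) across trials."""
    nr = fs = 0.0
    for _ in range(trials):
        right = known + sum(random.random() < 1 / k
                            for _ in range(n_items - known))
        wrong = n_items - right
        nr += right
        fs += right - wrong / (k - 1)
    return nr / trials, fs / trials

print(simulate())   # roughly (70.0, 60.0): formula score ≈ items known
```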
Frary, Robert B. – Applied Psychological Measurement, 1980 (peer reviewed)
Six scoring methods for assigning weights to right or wrong responses according to various instructions given to test takers are analyzed with respect to expected chance scores and the effect of various levels of information and misinformation. Three of the methods provide feedback to the test taker. (Author/CTM)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scores
Reid, Frank J. – Journal of Economic Education, 1976 (peer reviewed)
Examines the conventional scoring formula for multiple-choice tests and proposes an alternative scoring formula which takes into account the situation in which the student does not know the right answer but is able to eliminate one or more of the incorrect alternatives. (Author/AV)
Descriptors: Economics Education, Guessing (Tests), Higher Education, Multiple Choice Tests
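The situation the abstract describes can be quantified: under the conventional formula, a student who eliminates d of the k - 1 distractors and then guesses already earns a positive expected score, but receives no explicit credit for the elimination. A sketch (illustrative, not Reid's proposed formula, which is truncated above):

```python
def expected_conventional_score(k: int, d: int) -> float:
    """Expected conventional formula score (+1 if right, -1/(k-1) if
    wrong) on one k-option item when the examinee eliminates d
    distractors and guesses uniformly among the remaining k - d options."""
    p = 1.0 / (k - d)                      # chance of guessing right
    return p - (1.0 - p) / (k - 1)

for d in range(4):                         # a 4-option item
    print(d, round(expected_conventional_score(4, d), 3))
# 0 -> 0.0 (blind guessing is score-neutral), 1 -> 0.111,
# 2 -> 0.333, 3 -> 1.0 (full knowledge)
```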
Frary, Robert B. – 1980
Ordinal response modes for multiple-choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs's elimination of choices that examinees identify as wrong, were analyzed for scoring…
Descriptors: Guessing (Tests), Multiple Choice Tests, Responses, Scoring
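Common scoring conventions for the two response modes named above, sketched here under standard assumptions (the paper's exact scoring rules are truncated above):

```python
def answer_until_correct_score(attempts: int, k: int) -> float:
    """Answer-until-correct: full credit for first-try success, scaled
    linearly down to zero when all k options must be tried. One common
    convention, not necessarily the paper's."""
    return (k - attempts) / (k - 1)

def coombs_elimination_score(wrong_eliminated: int,
                             correct_eliminated: bool, k: int) -> int:
    """Coombs elimination scoring: +1 per distractor correctly
    eliminated, -(k - 1) if the correct answer is eliminated."""
    return wrong_eliminated - (k - 1) * int(correct_eliminated)

print(answer_until_correct_score(attempts=2, k=4))   # 0.667
print(coombs_elimination_score(2, False, k=4))       # 2
print(coombs_elimination_score(2, True, k=4))        # -1
```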
Essex, Diane L. – Journal of Medical Education, 1976 (peer reviewed)
Two multiple-choice scoring schemes--a partial-credit scheme and a dichotomous approach--were compared by analyzing means, variances, and reliabilities on alternate measures, along with student reactions. Students preferred the partial-credit approach, which is recommended if rewarding partial knowledge is an important concern. (Editor/JT)
Descriptors: Higher Education, Medical Students, Multiple Choice Tests, Reliability
Baskin, David – Journal of Educational Measurement, 1975 (peer reviewed)
Traditional test scoring does not allow the examination of differences among subjects obtaining identical raw scores on the same test. A configuration scoring paradigm for identical raw scores, which provides for such comparisons, is developed and illustrated. (Author)
Descriptors: Elementary Secondary Education, Individual Differences, Mathematical Models, Multiple Choice Tests
Boldt, Robert F. – 1974
One formulation of confidence scoring requires the examinee to report, as a number, his personal probability that each alternative in a multiple-choice test is correct. Under this formulation, the expected value of a linear transformation of the logarithm of the probability assigned to the correct response is maximized when the examinee accurately reports his personal probability. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability
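The key property behind this formulation is that the logarithmic rule is "proper": the expected score is maximized by reporting one's true personal probabilities. A small check, where a and b are illustrative scaling constants:

```python
import math

def log_confidence_score(reported, correct_idx, a=1.0, b=1.0):
    """Score = a + b * log of the probability the examinee assigned
    to the correct alternative."""
    return a + b * math.log(reported[correct_idx])

def expected_score(true_p, reported):
    """Expected score when true beliefs are true_p but the examinee
    reports the (possibly distorted) vector `reported`."""
    return sum(p * log_confidence_score(reported, i)
               for i, p in enumerate(true_p))

true_p = [0.6, 0.2, 0.1, 0.1]   # honest personal probabilities
hedged = [0.4, 0.3, 0.2, 0.1]   # a distorted, hedged report
print(expected_score(true_p, true_p) > expected_score(true_p, hedged))  # True
```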
Diamond, James J. – Journal of Educational Measurement, 1975 (peer reviewed)
Investigates the reliability and validity of scores yielded by a new scoring formula. (Author/DEP)
Descriptors: Guessing (Tests), Multiple Choice Tests, Objective Tests, Scoring
Poizner, Sharon B.; And Others – Applied Psychological Measurement, 1978 (peer reviewed)
Binary, probability, and ordinal scoring procedures for multiple-choice items were examined. In two situations, it was found that both the probability and ordinal scoring systems were more reliable than the binary scoring method. (Author/CTM)
Descriptors: Confidence Testing, Guessing (Tests), Higher Education, Multiple Choice Tests
Suhadolnik, Debra; Weiss, David J. – 1983
The present study was an attempt to alleviate some of the difficulties inherent in multiple-choice items by having examinees respond to them in a probabilistic manner. Using this format, examinees are able to respond to each alternative and to indicate any partial knowledge they may possess concerning the item. The…
Descriptors: Confidence Testing, Multiple Choice Tests, Probability, Response Style (Tests)
