Peer reviewed: Holmes, Roy A.; And Others – Educational and Psychological Measurement, 1974
Descriptors: Chemistry, Multiple Choice Tests, Scoring Formulas, Test Reliability
Peer reviewed: Reams, Redmond; And Others – Gifted Child Quarterly, 1990
The study evaluated speed as a factor in Wechsler Intelligence Scale for Children-Revised performance with 66 high scoring and 36 average scoring children (ages 3-15 years). Results cast doubt on the utility of speed bonuses in tests of general intelligence with gifted children. (Author/DB)
Descriptors: Gifted, Intelligence Tests, Scoring Formulas, Talent Identification
Peer reviewed: Zimmerman, Donald W. – Journal of Experimental Education, 1977
Derives formulas for the validity of predictor-criterion tests that hold for all test scores constructed according to the expected-value concept of true score. These more general formulas disclose some paradoxical properties of test validity under conditions where errors are correlated and have some implications for practical testing situations…
Descriptors: Correlation, Criterion Referenced Tests, Scoring Formulas, Tables (Data)
Peer reviewed: Aamodt, Michael G.; Pierce, Walter L., Jr. – Educational and Psychological Measurement, 1987
Data from five separate samples were weighted using the vertical percent method (England) and the rare response method (Telenson, Alexander, and Barrett) to investigate their relative effectiveness for scoring biographical information blanks. Vertical percent scoring yielded significant validity coefficients for all samples, while rare response…
Descriptors: Biographical Inventories, Employees, Job Performance, Predictive Validity
Peer reviewed: Waters, Brian K. – Journal of Educational Research, 1976
This pilot study compared two empirically derived option-weighting methods and their effect on the reliability and validity of multiple choice test scores relative to conventional rights-only scoring. (MM)
Descriptors: Guessing (Tests), Measurement, Multiple Choice Tests, Scoring
Peer reviewed: Gordon, Leonard V. – Educational and Psychological Measurement, 1971
Results indicate that extremeness response sets at the two ends of the continuum differentially contribute to scale validity. (MS)
Descriptors: Attitude Measures, Rating Scales, Response Style (Tests), Scoring Formulas
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David M. – ETS Research Report Series, 2008
This report presents the results of a research and development effort for SpeechRater℠ Version 1.0 (v1.0), an automated scoring system for the spontaneous speech of English language learners used operationally in the Test of English as a Foreign Language™ (TOEFL®) Practice Online assessment (TPO). The report includes a summary of the validity…
Descriptors: Speech, Scoring, Scoring Rubrics, Scoring Formulas
Peer reviewed: Reilly, Richard R. – Educational and Psychological Measurement, 1975
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by a capitalization of the keying procedures on omitting tendencies, a procedure was devised to key options empirically with a "correction-for-guessing" constraint. (Author)
Descriptors: Achievement Tests, Graduate Study, Guessing (Tests), Scoring Formulas
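The "correction-for-guessing" constraint Reilly refers to is conventionally the classic formula S = R − W/(k − 1), where R is the number right, W the number wrong (omits excluded), and k the number of options per item. A minimal sketch, assuming that standard formula rather than Reilly's specific keying procedure:

```python
def corrected_score(rights: int, wrongs: int, k: int) -> float:
    """Classic correction-for-guessing formula: S = R - W/(k - 1).

    rights: items answered correctly
    wrongs: items answered incorrectly (omitted items are not counted)
    k:      number of options per item
    The penalty per wrong answer is chosen so that, under pure random
    guessing, expected lucky rights and expected penalties cancel.
    """
    if k < 2:
        raise ValueError("items must have at least two options")
    return rights - wrongs / (k - 1)


# Example: 30 right, 9 wrong, 1 omit on a 40-item, four-option test
print(corrected_score(30, 9, 4))  # 27.0
```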
Peer reviewed: Watkins, Julia M.; Watkins, Dennis A. – Journal of Clinical Psychology, 1975
This study researched the Plenk scoring system more thoroughly to see whether it could be used with older children and whether it could differentiate normal from emotionally disturbed Ss. (Author)
Descriptors: Data Collection, Emotional Disturbances, Handicapped Children, Research Methodology
Peer reviewed: Sattler, Jerome M.; And Others – Psychology in the Schools, 1978
Fabricated test protocols were used to study how effectively examiners agree in scoring ambiguous WISC-R responses. The results suggest that, even with the improved WISC-R manual, scoring remains a difficult and challenging task. (Author)
Descriptors: Comparative Analysis, Intelligence Tests, Research Projects, Scoring Formulas
Validity and Likability Ratings for Three Scoring Instructions for a Multiple-Choice Vocabulary Test
Peer reviewed: Waters, Carrie Wherry; Waters, Lawrence K. – Educational and Psychological Measurement, 1971
Descriptors: Guessing (Tests), Multiple Choice Tests, Response Style (Tests), Scoring Formulas
Peer reviewed: Allen, Mary J.; And Others – Perceptual and Motor Skills, 1982
Adults took the Rod and Frame, Portable Rod and Frame, and Embedded Figures Tests. Absolute and algebraic frame-effect scores were more reliable and valid than rod-effect algebraic scores. Correlations with the Embedded Figures Test were so low that the interchangeability of these field articulation measures is questionable. (Author/RD)
Descriptors: Adults, Cognitive Style, Correlation, Measurement Techniques
Peer reviewed: Austin, Joe Dan – Psychometrika, 1981
On distractor-identification tests, students mark as many distractors as possible on each test item. A grading scale is developed for this type of testing. The score is optimal in that it yields an unbiased estimate of the student's score as if no guessing had occurred. (Author/JKS)
Descriptors: Guessing (Tests), Item Analysis, Measurement Techniques, Scoring Formulas
Peer reviewed: Frary, Robert B. – Applied Psychological Measurement, 1980
Six scoring methods for assigning weights to right or wrong responses according to various instructions given to test takers are analyzed with respect to expected chance scores and the effect of various levels of information and misinformation. Three of the methods provide feedback to the test taker. (Author/CTM)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scores
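For context on the chance-score analyses Frary describes: under formula scoring with a penalty of 1/(k − 1) per wrong answer, the expected score of an examinee who guesses blindly on every item is exactly zero, whereas number-right scoring has a positive chance expectation. An illustrative computation (function name and framing are mine, not Frary's):

```python
def expected_guessing_scores(n_items: int, k: int) -> tuple[float, float]:
    """Expected scores under pure random guessing on n_items k-option items.

    Returns (expected number-right score, expected formula score),
    where the formula score is R - W/(k - 1).
    """
    exp_right = n_items / k          # each item right with probability 1/k
    exp_wrong = n_items - exp_right  # every other item is wrong
    return exp_right, exp_right - exp_wrong / (k - 1)


# 40 four-option items: chance expectation is 10 right but a formula score of 0
print(expected_guessing_scores(40, 4))  # (10.0, 0.0)
```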
Frary, Robert B. – 1980
Ordinal response modes for multiple choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs' elimination of choices which examinees identify as wrong, were analyzed for scoring…
Descriptors: Guessing (Tests), Multiple Choice Tests, Responses, Scoring


