Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 1 |
Descriptor
| Scoring Formulas | 18 |
| Test Reliability | 18 |
| Multiple Choice Tests | 8 |
| Test Validity | 7 |
| Guessing (Tests) | 6 |
| Factor Analysis | 3 |
| Test Construction | 3 |
| Weighted Scores | 3 |
| Correlation | 2 |
| Cutting Scores | 2 |
| Grade 8 | 2 |
Source
| Educational and Psychological… | 18 |
Author
| Hambleton, Ronald K. | 2 |
| Traub, Ross E. | 2 |
| Abu-Sayf, F. K. | 1 |
| Aiken, Lewis R. | 1 |
| Bejar, Isaac I. | 1 |
| Cureton, Edward E. | 1 |
| Eakin, Richard R. | 1 |
| Echternacht, Gary | 1 |
| Gorsuch, Richard L. | 1 |
| Hocevar, Dennis | 1 |
| Holmes, Roy A. | 1 |
Publication Type
| Journal Articles | 6 |
| Reports - Research | 5 |
| Reports - Evaluative | 2 |
Assessments and Surveys
| Graduate Record Examinations | 1 |
Kroc, Edward; Olvera Astivia, Oscar L. – Educational and Psychological Measurement, 2022
Setting cutoff scores is one of the most common practices when using scales to aid in classification. This process is usually done univariately, with each optimal cutoff value decided sequentially, subscale by subscale. While it is widely known that this process necessarily reduces the probability of "passing" such a test,…
Descriptors: Multivariate Analysis, Cutting Scores, Classification, Measurement
Peer reviewed – Gorsuch, Richard L. – Educational and Psychological Measurement, 1980
Kaiser and Michael reported a formula for factor scores giving an internal consistency reliability and its square root, the domain validity. This formula is inappropriate when the included variables have trivial rather than salient weights for the factor for which the score is being computed. (Author/RL)
Descriptors: Factor Analysis, Factor Structure, Scoring Formulas, Test Reliability
Peer reviewed – Holmes, Roy A.; And Others – Educational and Psychological Measurement, 1974
Descriptors: Chemistry, Multiple Choice Tests, Scoring Formulas, Test Reliability
Peer reviewed – Zimmerman, Donald W. – Educational and Psychological Measurement, 1972
Although a great deal of attention has been devoted over a period of years to the estimation of reliability from item statistics, there are still gaps in the mathematical derivation of the Kuder-Richardson results. The main purpose of this paper is to fill some of these gaps, using language consistent with modern probability theory. (Author)
Descriptors: Mathematical Applications, Probability, Scoring Formulas, Statistical Analysis
Peer reviewed – Bejar, Isaac I.; Weiss, David J. – Educational and Psychological Measurement, 1977
The reliabilities yielded by several differential option weighting scoring procedures were compared among themselves as well as against conventional testing. It was found that increases in reliability due to differential option weighting were a function of inter-item correlations. Suggestions for the implementation of differential option weighting…
Descriptors: Correlation, Forced Choice Technique, Item Analysis, Scoring Formulas
Peer reviewed – Raju, Nambury S. – Educational and Psychological Measurement, 1982
Rajaratnam, Cronbach and Gleser's generalizability formula for stratified-parallel tests and Raju's coefficient beta are generalized to estimate the reliability of a composite of criterion-referenced tests, where the parts have different cutting scores. (Author/GK)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mathematical Formulas, Scoring Formulas
Peer reviewed – Reilly, Richard R. – Educational and Psychological Measurement, 1975
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by the keying procedures capitalizing on omitting tendencies, a procedure was devised to key options empirically under a "correction-for-guessing" constraint. (Author)
Descriptors: Achievement Tests, Graduate Study, Guessing (Tests), Scoring Formulas
Peer reviewed – Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests
Peer reviewed – Cureton, Edward E. – Educational and Psychological Measurement, 1971
A rebuttal of Frary's 1969 article in Educational and Psychological Measurement. (MS)
Descriptors: Error of Measurement, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
Peer reviewed – Willson, Victor L. – Educational and Psychological Measurement, 1982
The Serlin-Kaiser procedure is used to complete a principal components solution for scoring weights for all options of a given item. Coefficient alpha is maximized for a given multiple choice test. (Author/GK)
Descriptors: Analysis of Covariance, Factor Analysis, Multiple Choice Tests, Scoring Formulas
Peer reviewed – Abu-Sayf, F. K. – Educational and Psychological Measurement, 1977
A new formula for the correction for chance success due to guessing is advanced and investigated; its mathematical solution has the property of equating the scores of fast and slow examinees of equal ability. (Author/JKS)
Descriptors: Academic Ability, Equated Scores, Guessing (Tests), Scoring Formulas
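The correction-for-guessing formulas discussed in several of these entries build on the classical k-option correction. A minimal sketch of that standard formula (not Abu-Sayf's variant, which the abstract does not reproduce):

```python
def corrected_score(right, wrong, options):
    """Classical correction for guessing: S = R - W/(k - 1).

    Omitted items are neither rewarded nor penalized, and an
    examinee who guesses blindly on k-option items has an
    expected corrected score of zero on those items.
    """
    return right - wrong / (options - 1)

# 40 right, 12 wrong on 4-option items: 40 - 12/3 = 36.0
print(corrected_score(40, 12, 4))
```

Under this rule, answering only when one knows the answer and guessing at random yield the same expected score, which is the rationale for formula scoring.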
Peer reviewed – Eakin, Richard R.; Long, Clifford A. – Educational and Psychological Measurement, 1977
A scoring technique for true-false tests is presented. The technique, paired item scoring, involves combining two statements and having the student select one of the four resultants possible: true-true, false-true, true-false, and false-false. The combined item is treated as a multiple choice item. (Author/JKS)
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Objective Tests
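The paired item scoring described in this abstract can be sketched directly; this hypothetical helper treats a pair of true-false statements as one four-option multiple-choice item:

```python
def score_paired(key, response):
    """Score one paired true-false item.

    `key` and `response` are (bool, bool) tuples for the two
    combined statements. Credit is given only when both members
    match the key, so a blind guesser succeeds 1 time in 4
    rather than 1 in 2 per individual statement.
    """
    return int(tuple(response) == tuple(key))

print(score_paired((True, False), (True, False)))  # keyed pair matched -> 1
print(score_paired((True, False), (True, True)))   # one member wrong -> 0
```

The four joint responses (true-true, true-false, false-true, false-false) play the role of the four options of a conventional multiple-choice item, which is what lowers the guessing rate.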
Peer reviewed – Hocevar, Dennis; Michael, William B. – Educational and Psychological Measurement, 1979
Two multitrait-multimethod studies were conducted to investigate the effects of two scoring formulas. The study demonstrates that tests of divergent thinking lack discriminant validity when scored in the usual manner. A percentage formula did enhance discriminant validity when originality ratings were subjectively determined. (Author/CTM)
Descriptors: Creativity, Creativity Tests, Divergent Thinking, Grade 5
Peer reviewed – Stauffer, A. J. – Educational and Psychological Measurement, 1974
Descriptors: Attitude Change, Attitude Measures, Comparative Analysis, Educational Research
Peer reviewed – Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
