Showing 1 to 15 of 18 results
Peer reviewed
Kroc, Edward; Olvera Astivia, Oscar L. – Educational and Psychological Measurement, 2022
Setting cutoff scores is one of the most common practices when using scales for classification purposes. This process is usually done univariately, with each optimal cutoff value decided sequentially, subscale by subscale. While it is widely known that this process necessarily reduces the probability of "passing" such a test,…
Descriptors: Multivariate Analysis, Cutting Scores, Classification, Measurement
Peer reviewed
Gorsuch, Richard L. – Educational and Psychological Measurement, 1980
Kaiser and Michael reported a formula for factor scores giving an internal consistency reliability and its square root, the domain validity. Using this formula is inappropriate if variables are included which have trivial weights rather than salient weights for the factor for which the score is being computed. (Author/RL)
Descriptors: Factor Analysis, Factor Structure, Scoring Formulas, Test Reliability
Peer reviewed
Holmes, Roy A.; And Others – Educational and Psychological Measurement, 1974
Descriptors: Chemistry, Multiple Choice Tests, Scoring Formulas, Test Reliability
Peer reviewed
Zimmerman, Donald W. – Educational and Psychological Measurement, 1972
Although a great deal of attention has been devoted over a period of years to the estimation of reliability from item statistics, there are still gaps in the mathematical derivation of the Kuder-Richardson results. The main purpose of this paper is to fill some of these gaps, using language consistent with modern probability theory. (Author)
Descriptors: Mathematical Applications, Probability, Scoring Formulas, Statistical Analysis
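The Zimmerman entry above concerns the derivation of the Kuder-Richardson reliability results from item statistics. As context only (this sketch is not drawn from the article, and the function name and sample data are hypothetical), the familiar KR-20 coefficient estimates reliability from item difficulties and the total-score variance:

```python
import numpy as np

def kr20(responses):
    """KR-20 reliability from a 0/1 item-response matrix (examinees x items).

    KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total scores)),
    where p_i is the proportion answering item i correctly and q_i = 1 - p_i.
    """
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                    # number of items
    p = responses.mean(axis=0)                # item difficulties (proportion correct)
    item_var = (p * (1 - p)).sum()            # sum of binary item variances p*q
    total_var = responses.sum(axis=1).var()   # population variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)
```

For a four-examinee, three-item matrix such as `[[1,1,0],[1,0,0],[1,1,1],[0,0,0]]`, the function returns 0.75. Note the use of the population variance (NumPy's default, `ddof=0`), consistent with the classical derivation.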
Peer reviewed
Bejar, Issac I.; Weiss, David J. – Educational and Psychological Measurement, 1977
The reliabilities yielded by several differential option weighting scoring procedures were compared among themselves as well as against conventional testing. It was found that increases in reliability due to differential option weighting were a function of inter-item correlations. Suggestions for the implementation of differential option weighting…
Descriptors: Correlation, Forced Choice Technique, Item Analysis, Scoring Formulas
Peer reviewed
Raju, Nambury S. – Educational and Psychological Measurement, 1982
Rajaratnam, Cronbach and Gleser's generalizability formula for stratified-parallel tests and Raju's coefficient beta are generalized to estimate the reliability of a composite of criterion-referenced tests, where the parts have different cutting scores. (Author/GK)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mathematical Formulas, Scoring Formulas
Peer reviewed
Reilly, Richard R. – Educational and Psychological Measurement, 1975
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by the keying procedures capitalizing on omitting tendencies, a procedure was devised to key options empirically with a "correction-for-guessing" constraint. (Author)
Descriptors: Achievement Tests, Graduate Study, Guessing (Tests), Scoring Formulas
Peer reviewed
Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests
Peer reviewed
Cureton, Edward E. – Educational and Psychological Measurement, 1971
A rebuttal of Frary's 1969 article in Educational and Psychological Measurement. (MS)
Descriptors: Error of Measurement, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
Peer reviewed
Willson, Victor L. – Educational and Psychological Measurement, 1982
The Serlin-Kaiser procedure is used to complete a principal components solution for scoring weights for all options of a given item. Coefficient alpha is maximized for a given multiple choice test. (Author/GK)
Descriptors: Analysis of Covariance, Factor Analysis, Multiple Choice Tests, Scoring Formulas
Peer reviewed
Abu-Sayf, F. K. – Educational and Psychological Measurement, 1977
A new formula for the correction for chance success due to guessing is advanced and investigated; its mathematical solution has the property of equating the scores of fast and slow examinees of equal ability. (Author/JKS)
Descriptors: Academic Ability, Equated Scores, Guessing (Tests), Scoring Formulas
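The Abu-Sayf abstract above does not reproduce the new formula itself. For context only, the classical correction for guessing that such proposals extend subtracts a penalty proportional to the number of wrong answers (the function name below is hypothetical):

```python
def corrected_score(right, wrong, options):
    """Classical correction-for-guessing: S = R - W / (k - 1).

    R = number right, W = number wrong (omits excluded),
    k = number of options per item. Under random guessing, expected
    gains from guessing are cancelled out on average.
    """
    return right - wrong / (options - 1)
```

For example, 30 right and 10 wrong on five-option items yields a corrected score of 27.5.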
Peer reviewed
Eakin, Richard R.; Long, Clifford A. – Educational and Psychological Measurement, 1977
A scoring technique for true-false tests is presented. The technique, paired item scoring, involves combining two statements and having the student select one of the four possible combinations: true-true, false-true, true-false, and false-false. The combined item is treated as a multiple choice item. (Author/JKS)
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Objective Tests
Peer reviewed
Hocevar, Dennis; Michael, William B. – Educational and Psychological Measurement, 1979
Two multitrait-multimethod studies were conducted to investigate the effects of two scoring formulas. The study demonstrates that tests of divergent thinking lack discriminant validity when scored in the usual manner. A percentage formula did enhance discriminant validity when originality ratings were subjectively determined. (Author/CTM)
Descriptors: Creativity, Creativity Tests, Divergent Thinking, Grade 5
Peer reviewed
Stauffer, A. J. – Educational and Psychological Measurement, 1974
Descriptors: Attitude Change, Attitude Measures, Comparative Analysis, Educational Research
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas