Showing 16 to 30 of 53 results
Green, Bert F., Jr. – 1972
The use of Guttman weights in scoring tests is discussed. Scores of 2,500 men on one subtest of the CEEB SAT-Verbal Test were examined using cross-validated Guttman weights. Several scores were compared, as follows: scores obtained from cross-validated Guttman weights; scores obtained by rounding the Guttman weights to one digit, ranging from 0 to…
Descriptors: Comparative Analysis, Reliability, Scoring Formulas, Test Results
Peer reviewed
Olejnik, Stephen; Porter, Andrew C. – Educational and Psychological Measurement, 1975
The four scoring strategies compared were: lambda coefficients, chi-square weights, and two applications of multiple discriminant analysis. No significant differences were found when the strategies were applied to the Kuder Occupational Interest Survey. (RC)
Descriptors: Analysis of Variance, Comparative Analysis, Discriminant Analysis, Interest Inventories
Marco, Gary L. – 1975
A method of interpolation has been derived that should be superior to linear interpolation in computing the percentile ranks of test scores for unimodal score distributions. The superiority of the logistic interpolation over the linear interpolation is most noticeable for distributions consisting of only a small number of score intervals (say…
Descriptors: Comparative Analysis, Intervals, Mathematical Models, Percentage
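The abstract names linear interpolation as the baseline the proposed logistic method improves on. A minimal sketch of that baseline, assuming a grouped score distribution given as (lower bound, upper bound, frequency) intervals; the function name and data layout are illustrative, not from the paper:

```python
def percentile_rank_linear(x, intervals):
    """Percentile rank of score x by linear interpolation within a
    grouped distribution. intervals: ascending, non-overlapping
    (lower, upper, frequency) triples."""
    n = sum(f for _, _, f in intervals)
    cum_freq = 0  # frequency below the current interval
    for lo, hi, f in intervals:
        if x < lo:
            break
        if x < hi:
            # interpolate linearly within the interval containing x
            return 100.0 * (cum_freq + f * (x - lo) / (hi - lo)) / n
        cum_freq += f
    return 100.0 * cum_freq / n
```

With few, wide intervals, this linear assumption is crudest, which is where the abstract says the logistic interpolation shows the largest advantage.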
Fox, Kathleen V. – 1988
A comparison was made between scores and grades of college students taking a development and learning course, using either a modified mastery grading system (MMGS) or a modified norm-referenced grading system (MNRGS). Under the MMGS, students could take each unit exam up to three times to meet minimum or higher criteria levels. Under the MNRGS,…
Descriptors: Comparative Analysis, Grading, Higher Education, Mastery Tests
Peer reviewed
Sattler, Jerome M.; And Others – Psychology in the Schools, 1978
Fabricated test protocols were used to study how effectively examiners agree in scoring ambiguous WISC-R responses. The results suggest that, even with the improved WISC-R manual, scoring remains a difficult and challenging task. (Author)
Descriptors: Comparative Analysis, Intelligence Tests, Research Projects, Scoring Formulas
Peer reviewed
Garcia-Perez, Miguel A.; Frary, Robert B. – Applied Psychological Measurement, 1989
Simulation techniques were used to generate conventional test responses and track the proportion of alternatives examinees could classify independently before and after taking the test. Finite-state scores were compared with these actual values and with number-correct and formula scores. Finite-state scores proved useful. (TJH)
Descriptors: Comparative Analysis, Computer Simulation, Guessing (Tests), Mathematical Models
Wilcox, Rand R. – 1978
A mastery test is frequently described as follows: an examinee responds to n dichotomously scored test items. Depending upon the examinee's observed (number correct) score, a mastery decision is made and the examinee is advanced to the next level of instruction. Otherwise, a nonmastery decision is made and the examinee is given remedial work. This…
Descriptors: Comparative Analysis, Cutting Scores, Factor Analysis, Mastery Tests
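The decision rule the abstract describes can be sketched directly; the function and parameter names below are illustrative, not Wilcox's notation:

```python
def mastery_decision(num_correct, n_items, cutting_score):
    """Mastery decision from an observed number-correct score on
    n dichotomously scored items: advance if the proportion correct
    meets the cutting score, otherwise assign remedial work."""
    if num_correct / n_items >= cutting_score:
        return "advance"
    return "remediate"
```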
Ehri, Linnea C.; Ammon, Paul R. – 1972
The purpose of this project was to explore, and more carefully design studies of, adjective-related structures and processes as they emerge in children between the ages of 4 and 8, since a salient characteristic of speech at this age is the tendency to compare and contrast objects encountered in the environment. A group of 40 black and…
Descriptors: Comparative Analysis, Language Acquisition, Listening Comprehension, Scoring Formulas
Peer reviewed
Gleser, Leon Jay – Educational and Psychological Measurement, 1972
The paper is concerned with the effect that ipsative scoring has on a commonly used index of between-subtest correlation. (Author)
Descriptors: Comparative Analysis, Forced Choice Technique, Mathematical Applications, Measurement Techniques
van den Brink, Wulfert – Evaluation in Education: International Progress, 1982
Binomial models for domain-referenced testing are compared, emphasizing the assumptions underlying the beta-binomial model. Advantages and disadvantages are discussed. A proposed item sampling model is presented which takes the effect of guessing into account. (Author/CM)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Sampling, Measurement Techniques
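For context, the beta-binomial model the abstract centers on treats each examinee's per-item success probability as a draw from a Beta(a, b) distribution, which yields the marginal score distribution below. This is the standard formulation, sketched with assumed parameter names rather than the paper's notation:

```python
from math import comb, exp, lgamma

def beta_binomial_pmf(x, n, a, b):
    """Probability of x successes on n items when the per-item
    success probability is drawn from a Beta(a, b) distribution.
    Uses log-gamma for numerical stability."""
    def lbeta(p, q):
        # log of the Beta function B(p, q)
        return lgamma(p) + lgamma(q) - lgamma(p + q)
    return comb(n, x) * exp(lbeta(x + a, n - x + b) - lbeta(a, b))
```

With a = b = 1 (a uniform prior) every score 0..n is equally likely, which illustrates how strongly the prior assumptions the abstract questions shape the predicted score distribution.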
Felsenthal, Norman A.; Felsenthal, Helen – 1972
A computer program called TEXAN (Textual Analysis of Language Samples) was developed for use in calculating frequency of characters, words, punctuation units, and stylistic variables. Its usefulness in determining readability levels was examined in an analysis of language samples from 20 elementary tradebooks used as supplementary reading…
Descriptors: Automatic Indexing, Comparative Analysis, Computational Linguistics, Information Processing
Peer reviewed
Penfield, Douglas A.; Koffler, Stephen L. – Journal of Experimental Education, 1978
Three nonparametric alternatives to the parametric Bartlett test are presented for handling the K-sample equality of variance problem. The two-sample Siegel-Tukey test, Mood test, and Klotz test are extended to the multisample situation by Puri's methods. These K-sample scale tests are illustrated and compared. (Author/GDC)
Descriptors: Comparative Analysis, Guessing (Tests), Higher Education, Mathematical Models
Peer reviewed
Stauffer, A. J. – Educational and Psychological Measurement, 1974
Descriptors: Attitude Change, Attitude Measures, Comparative Analysis, Educational Research
Tollefson, Nona; Chung, Jing-Mei – 1986
Procedures for correcting for guessing and for assessing partial knowledge (correction-for-guessing, three-decision scoring, elimination/inclusion scoring, and confidence or probabilistic scoring) are discussed. Mean scores and internal consistency reliability estimates were compared across three administration and scoring procedures for…
Descriptors: Achievement Tests, Comparative Analysis, Evaluation Methods, Graduate Students
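Of the procedures listed, correction-for-guessing has a standard closed form, S = R - W/(k - 1), where R is rights, W is wrongs, and k the number of answer choices per item. A minimal sketch of that formula only (the paper's other scoring variants are not reproduced here):

```python
def formula_score(num_right, num_wrong, num_choices):
    """Classical correction-for-guessing formula score:
    rights minus wrongs divided by (choices - 1), so that
    blind guessing yields an expected score of zero."""
    return num_right - num_wrong / (num_choices - 1)
```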
Rest, James R. – 1975
This paper describes the rationale for the Defining Issues Test (DIT), an objective test of moral judgment which attempts to improve upon three aspects of Kohlberg's research: data collection, categorization of moral judgments (the scoring system), and method of indexing a subject's progress in a developmental sequence. In each case, the way in…
Descriptors: Comparative Analysis, Data Analysis, Data Collection, Human Development