| Descriptor | Records |
| --- | --- |
| Cutting Scores | 7 |
| Scoring Formulas | 7 |
| Testing Problems | 7 |
| Higher Education | 2 |
| Mastery Tests | 2 |
| Mathematical Models | 2 |
| Standard Setting (Scoring) | 2 |
| Test Construction | 2 |
| Test Reliability | 2 |
| True Scores | 2 |
| Child Abuse | 1 |
| Source | Records |
| --- | --- |
| Child Abuse and Neglect: The International Journal | 1 |
| Educational and Psychological Measurement | 1 |
| Evaluation in Education: International Progress | 1 |
| Journal of Educational Measurement | 1 |
| Author | Records |
| --- | --- |
| Berger, Dale E. | 1 |
| Harris, Chester W. | 1 |
| Legg, Sue M. | 1 |
| Melican, Gerald | 1 |
| Moy, Raymond H. | 1 |
| Plake, Barbara S. | 1 |
| Raju, Nambury S. | 1 |
| Tsujimoto, Richard N. | 1 |
| Wilcox, Rand R. | 1 |
| van der Linden, Wim J. | 1 |
| Publication Type | Records |
| --- | --- |
| Reports - Research | 4 |
| Journal Articles | 3 |
| Reports - Evaluative | 3 |
| Speeches/Meeting Papers | 3 |
| Opinion Papers | 1 |
| Audience | Records |
| --- | --- |
| Researchers | 1 |
Raju, Nambury S. – Educational and Psychological Measurement, 1982 (peer reviewed)
Rajaratnam, Cronbach and Gleser's generalizability formula for stratified-parallel tests and Raju's coefficient beta are generalized to estimate the reliability of a composite of criterion-referenced tests, where the parts have different cutting scores. (Author/GK)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mathematical Formulas, Scoring Formulas
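For orientation, Raju's coefficient beta, the formula generalized in this article, is usually written as follows for a composite of $k$ parts with proportional part lengths $\lambda_j$; this is a sketch of the standard form from the psychometric literature, not the criterion-referenced generalization the article derives:

$$\beta = \frac{1}{1 - \sum_{j=1}^{k} \lambda_j^{2}} \left( 1 - \frac{\sum_{j=1}^{k} \sigma_j^{2}}{\sigma_X^{2}} \right), \qquad \sum_{j=1}^{k} \lambda_j = 1,$$

where $\sigma_j^2$ is the variance of part $j$ and $\sigma_X^2$ the variance of the composite. With equal part lengths ($\lambda_j = 1/k$), beta reduces to Cronbach's alpha.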
Wilcox, Rand R.; Harris, Chester W. – Journal of Educational Measurement, 1977 (peer reviewed)
Emrick's proposed method for determining a mastery level cut-off score is questioned. Emrick's method is shown to be useful only in limited situations. (JKS)
Descriptors: Correlation, Cutting Scores, Mastery Tests, Mathematical Models
Moy, Raymond H. – 1981
The problem of standard setting on language proficiency tests is often approached by the use of norms derived from the group being tested, a process commonly known as "grading on the curve." One particular problem with this ad hoc method of standard setting is that it will usually result in a fluctuating standard dependent on the particular group…
Descriptors: Cutting Scores, Higher Education, Language Proficiency, Norm Referenced Tests
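A minimal sketch of the fluctuation problem the abstract describes: the same norm-referenced rule yields a different cutting score for each group tested. The cohorts, pass rate, and score distributions below are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def curve_cutoff(scores, pass_rate=0.70):
    """Norm-referenced ("grading on the curve") standard: the score
    below which (1 - pass_rate) of the tested group falls."""
    return np.percentile(scores, (1 - pass_rate) * 100)

# Two hypothetical cohorts taking the same proficiency test.
cohort_a = rng.normal(65, 10, size=200)  # weaker group
cohort_b = rng.normal(75, 10, size=200)  # stronger group

# The "standard" drifts with the ability of the group tested:
print(round(curve_cutoff(cohort_a), 1))  # roughly 60
print(round(curve_cutoff(cohort_b), 1))  # roughly 70
```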
Tsujimoto, Richard N.; Berger, Dale E. – Child Abuse and Neglect: The International Journal, 1988
Two criteria are discussed for determining cutting scores on a predictor variable for identifying cases of likely child abuse--utility maximizing and error minimizing. Utility maximizing is the preferable criterion, as it optimizes the balance between the costs of incorrect decisions and the benefits of correct decisions. (Author/JDD)
Descriptors: Child Abuse, Cost Effectiveness, Cutting Scores, Error of Measurement
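The two criteria can be made concrete in a few lines of Python. The sketch below assumes screening scores plus known outcomes; the utility weights are placeholder assumptions, not values from the article.

```python
import numpy as np

def error_minimizing_cutoff(scores, abuse, cutoffs):
    """Cutting score that minimizes the total number of
    misclassified cases, counting every error equally."""
    scores, abuse = np.asarray(scores), np.asarray(abuse, dtype=bool)
    errors = [np.sum((scores >= c) != abuse) for c in cutoffs]
    return cutoffs[int(np.argmin(errors))]

def utility_maximizing_cutoff(scores, abuse, cutoffs,
                              u_hit=1.0, u_cr=0.2,
                              c_fa=-1.0, c_miss=-5.0):
    """Cutting score that maximizes expected utility: hits, correct
    rejections, false alarms, and misses each carry an (assumed)
    benefit or cost instead of counting equally."""
    scores, abuse = np.asarray(scores), np.asarray(abuse, dtype=bool)

    def expected_utility(c):
        flagged = scores >= c
        hits = np.sum(flagged & abuse)
        false_alarms = np.sum(flagged & ~abuse)
        misses = np.sum(~flagged & abuse)
        correct_rejections = np.sum(~flagged & ~abuse)
        return (hits * u_hit + correct_rejections * u_cr
                + false_alarms * c_fa + misses * c_miss)

    return max(cutoffs, key=expected_utility)
```

With a heavy cost on misses relative to false alarms, the utility-maximizing cutoff generally sits below the error-minimizing one, flagging more borderline cases.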
van der Linden, Wim J. – Evaluation in Education: International Progress, 1982
In mastery testing, a linear relationship between the optimal passing score and test length is presented, together with a new optimization criterion. The usual indifference-zone approach, a binomial error model, decision errors, and corrections for guessing are discussed. Related results in sequential testing and the latent class approach are included. (CM)
Descriptors: Cutting Scores, Educational Testing, Mastery Tests, Mathematical Models
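Under a binomial error model, decision errors at a given passing score follow directly from the binomial tail, which is what ties the optimal cutting score to test length. A brief sketch; the examinee's true score and the 80% standard below are invented for illustration.

```python
from scipy.stats import binom

def pass_probability(true_score, n_items, cutoff):
    """P(observed number-correct >= cutoff) for an examinee whose
    true proportion-correct is true_score, assuming n_items
    independent items (the binomial error model)."""
    return binom.sf(cutoff - 1, n_items, true_score)

# A non-master (true score 0.75) facing an 80%-correct standard:
for n in (10, 20, 40):
    cutoff = int(0.8 * n)
    print(n, cutoff, round(pass_probability(0.75, n, cutoff), 3))
# The chance of passing by luck shrinks as the test grows longer.
```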
Melican, Gerald; Plake, Barbara S. – 1984
The validity of combining a correction for guessing with the Nedelsky-based cutscore was investigated. A five option multiple choice Mathematics Achievement Test was used in the study. Items were selected to meet several criteria. These included: the capability of measuring mathematics concepts related to performance in introductory statistics;…
Descriptors: Cutting Scores, Guessing (Tests), Higher Education, Multiple Choice Tests
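For readers unfamiliar with the Nedelsky procedure, a minimal sketch of the base cutscore computation. The judge data are hypothetical, and the correction for guessing investigated in the study (presumably the usual formula-scoring adjustment, R - W/(k-1)) is not shown.

```python
def nedelsky_cutscore(remaining_options):
    """Nedelsky minimum pass level: judges strike the options a
    minimally competent examinee would recognize as wrong; the
    item's expected chance score is 1 / (options remaining), and
    the cutscore is the sum of these values over items."""
    return sum(1.0 / r for r in remaining_options)

# Ten five-option items; judges eliminated 0-3 distractors each.
remaining = [5, 4, 2, 3, 5, 2, 4, 3, 2, 5]  # hypothetical judgments
print(round(nedelsky_cutscore(remaining), 2))  # 3.27 out of 10 items
```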
Legg, Sue M. – 1982
A case study of the Florida Teacher Certification Examination (FTCE) program was described to assist others launching the development of large scale item banks. FTCE has four subtests: Mathematics, Reading, Writing, and Professional Education. Rasch calibrated item banks have been developed for all subtests except Writing. The methods used to…
Descriptors: Cutting Scores, Difficulty Level, Field Tests, Item Analysis
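The Rasch model behind such item banks expresses the probability of a correct response as a logistic function of the gap between person ability and item difficulty, which is what lets items calibrated on different field-test groups sit on one common scale. A minimal sketch:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability that a person of ability theta
    answers an item of difficulty b correctly (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Items banked on a common scale can be compared directly:
print(round(rasch_p(theta=0.5, b=-0.2), 3))  # easier item -> 0.668
print(round(rasch_p(theta=0.5, b=1.0), 3))   # harder item -> 0.378
```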


