Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 3 |
Descriptor
| Scoring Formulas | 29 |
| Test Interpretation | 29 |
| Statistical Analysis | 10 |
| Multiple Choice Tests | 7 |
| Scores | 7 |
| Testing Problems | 6 |
| Guessing (Tests) | 5 |
| Item Analysis | 5 |
| Test Construction | 5 |
| Test Reliability | 5 |
| Test Theory | 4 |
Author
| Frary, Robert B. | 3 |
| Powell, J. C. | 2 |
| Berk, Ronald A. | 1 |
| Bormuth, John R. | 1 |
| Campbell, Brian | 1 |
| Church, Austin T. | 1 |
| Clampit, M. K. | 1 |
| Dimoliatis, Ioannis D. K. | 1 |
| Divgi, D. R. | 1 |
| Donlon, Thomas F. | 1 |
| Dorans, Neil J. | 1 |
Publication Type
| Reports - Research | 29 |
| Journal Articles | 13 |
| Speeches/Meeting Papers | 7 |
| Reports - Evaluative | 2 |
| Information Analyses | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Higher Education | 2 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 7 | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
Audience
| Researchers | 1 |
Location
| New York | 1 |
| United Kingdom (England) | 1 |
Assessments and Surveys
| Wechsler Intelligence Scale… | 3 |
| California Achievement Tests | 1 |
| Minnesota Multiphasic… | 1 |
| Strong Vocational Interest… | 1 |
Plucker, Jonathan A.; Qian, Meihua; Schmalensee, Stephanie L. – Creativity Research Journal, 2014
In recent years, the social sciences have seen a resurgence in the study of divergent thinking (DT) measures. However, many of these recent advances have focused on abstract, decontextualized DT tasks (e.g., list as many things as you can think of that have wheels). This study provides a new perspective by exploring the reliability and validity…
Descriptors: Creative Thinking, Creativity Tests, Scoring Formulas, Evaluation Methods
Dimoliatis, Ioannis D. K.; Jelastopulu, Eleni – Universal Journal of Educational Research, 2013
The surgical theatre educational environment measures STEEM, OREEM, and mini-STEEM for students (student-STEEM) comprise a previously disregarded systematic overestimation (OE) due to inaccurate percentage calculation. The aim of the present study was to investigate the magnitude of, and suggest a correction for, this systematic bias. After an…
Descriptors: Educational Environment, Scores, Grade Prediction, Academic Standards
Dorans, Neil J.; Liang, Longjuan; Puhan, Gautam – Educational Testing Service, 2010
Scores are the most visible and widely used products of a testing program. The choice of score scale has implications for test specifications, equating, and test reliability and validity, as well as for test interpretation. At the same time, the score scale should be viewed as infrastructure likely to require repair at some point. In this report…
Descriptors: Testing Programs, Standard Setting (Scoring), Test Interpretation, Certification
Peer reviewed
Campbell, Brian; Wilson, Bradley J. – Journal of School Psychology, 1986
Investigated Kaufman's procedures for determining intersubtest scatter on the Wechsler Intelligence Scale for Children-Revised by means of Sattler's revised tables for determining significant subtest fluctuations. Results indicated that Sattler's revised tables yielded more conservative estimates of subtest scatter than those originally reported…
Descriptors: Intelligence Tests, Scoring Formulas, Statistical Analysis, Statistical Distributions
Peer reviewed
Ward, L. Charles – Journal of Clinical Psychology, 1986
Equations were derived for estimating MMPI (Minnesota Multiphasic Personality Inventory) scores from a short form developed for cognitively impaired individuals. Multiple regression analyses demonstrated that prediction from a single short-form scale was acceptable and was little improved by the addition of other scales or sex of subject to the…
Descriptors: Mental Retardation, Personality Measures, Predictive Validity, Scoring Formulas
Peer reviewed
Clampit, M. K.; Silver, Stephen J. – Journal of School Psychology, 1986
Presents four tables for the statistical interpretation of factor scores on the Wechsler Intelligence Scale for Children-Revised. Provides the percentile equivalents of factor scores; the significance of differences between factor scores; the frequency with which specified discrepancies occur; the significance of differences between a factor score…
Descriptors: Factor Analysis, Intelligence Tests, Scores, Scoring Formulas
Peer reviewed
Duthie, Bruce; Vincent, Ken R. – Journal of Clinical Psychology, 1986
Diagnostic hit rates for the Diagnostic Inventory of Personality and Symptoms were compared to diagnosis by psychiatrists of the same patients. The Probability Scale employing Bayesian concepts and base rates correctly classified 70% of the patients and was more accurate by far than the other two methods used. (Author/ABB)
Descriptors: Bayesian Statistics, Identification, Personality Measures, Psychological Testing
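The probability-scale approach described above rests on Bayes' rule with class base rates as priors. A minimal sketch of that idea, with illustrative numbers only (the inventory's actual scales, base rates, and likelihoods are not given in the abstract):

```python
# Hypothetical sketch of classification via Bayes' rule with base rates.
# All numeric values below are illustrative assumptions, not the study's data.

def posterior(base_rates, likelihoods):
    """Return normalized posteriors P(class | evidence) via Bayes' rule."""
    joint = {c: base_rates[c] * likelihoods[c] for c in base_rates}
    total = sum(joint.values())
    return {c: p / total for c, p in joint.items()}

# Illustrative base rates for two hypothetical diagnostic classes:
base_rates = {"A": 0.3, "B": 0.7}
# Illustrative likelihoods P(observed test profile | class):
likelihoods = {"A": 0.8, "B": 0.2}

post = posterior(base_rates, likelihoods)
best = max(post, key=post.get)
print(best, round(post[best], 3))
```

The point of the sketch: even with a lower base rate, class A wins here because its likelihood dominates; with weaker evidence, the base rates would carry the decision.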
Peer reviewedGreen, J. R.; And Others – British Journal of Educational Psychology, 1981
A simple unbalanced block model is proposed for examination marks, as an improvement on the usual implicit model. The new model is applied to some real data and is found, by the usual normal linear theory F test, to give a highly significant improvement. Some alternative models are also considered. (Author)
Descriptors: Achievement Rating, Achievement Tests, Models, Scoring Formulas
Peer reviewed
Dundon, William D.; And Others – Learning Disability Quarterly, 1986
Results of recategorizing the Wechsler Intelligence Scale for Children (Revised) subtest scores of 159 black learning disabled primary grade children into spatial, conceptual, and sequential scales as recommended by A. Bannatyne led to the conclusion that the diagnostic utility of the Bannatyne recategorization is questionable. (Author/DB)
Descriptors: Black Youth, Disability Identification, Learning Disabilities, Primary Education
Berk, Ronald A. – 1980
Seventeen statistics for measuring the reliability of criterion-referenced tests were critically reviewed. The review was organized into two sections: (1) a discussion of preliminary considerations to provide a foundation for choosing the appropriate category of "reliability" (threshold loss function, squared-error loss-function, or…
Descriptors: Criterion Referenced Tests, Cutting Scores, Scoring Formulas, Statistical Analysis
Peer reviewed
Frary, Robert B. – Journal of Educational Statistics, 1982
Six different approaches to scoring test data, including number right, correction for guessing, and answer-until-correct, were investigated using Monte Carlo techniques. Modes permitting multiple response showed higher internal consistency, but there was little difference among modes for a validity measure. (JKS)
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Scoring Formulas
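Two of the scoring modes compared above have standard textbook forms: number-right scoring and the classical correction for guessing (formula scoring), S = R - W/(k - 1), where R is the number right, W the number wrong, and k the options per item; omitted items are not penalized. A minimal sketch with illustrative numbers (the study's own data and remaining modes are not reproduced here):

```python
# Classical scoring formulas; example values are illustrative assumptions.

def number_right(right, wrong, omitted):
    """Number-right score: only correct answers count."""
    return right

def formula_score(right, wrong, k):
    """Correction-for-guessing score S = R - W/(k - 1).

    Omitted items neither add nor subtract, so omitting beats
    blind guessing in expectation for a k-option item.
    """
    return right - wrong / (k - 1)

# Hypothetical 40-item, 4-option test: 28 right, 8 wrong, 4 omitted.
print(number_right(28, 8, 4))     # 28
print(formula_score(28, 8, k=4))  # 28 - 8/3
```

Under random guessing the expected formula score for a guessed item is zero, which is the rationale for the W/(k - 1) penalty.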
Frary, Robert B.; And Others – 1985
Students in an introductory college course (n=275) responded to equivalent 20-item halves of a test under number-right and formula-scoring instructions. Formula scores of those who omitted items averaged about one point lower than their comparable (formula-adjusted) scores on the test half administered under number-right instructions. In contrast,…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Questionnaires
Purves, Alan C.; And Others – 1990
After establishing a theoretical depiction of the domain of literature learning, a study developed test packages which examined: (1) the relationship among multiple choice, short open-ended, and long open-ended responses; (2) whether there would be differences according to the genres; (3) the relationship between literary and non-literary texts,…
Descriptors: Educational Research, Evaluation Methods, High Schools, Literary Genres
Divgi, D. R. – 1980
A method is proposed for providing an absolute, in contrast to comparative, evaluation of how well two tests are equated by transforming their raw scores into a particular common scale. The method is direct, not requiring creation of a standard for comparison; expresses its results in scaled rather than raw scores, and allows examination of the…
Descriptors: Equated Scores, Evaluation Criteria, Item Analysis, Latent Trait Theory
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1984
Four methods are outlined for estimating or approximating from a single test administration the standard error of measurement of number-right test score at specified ability levels or cutting scores. The methods are illustrated and compared on one set of real test data. (Author)
Descriptors: Academic Ability, Cutting Scores, Error of Measurement, Scoring Formulas
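One well-known single-administration approximation in this family is the binomial-error-model formula often attributed to Lord: for an n-item number-right score x, SEM(x) ≈ sqrt(x(n - x)/(n - 1)). A sketch of that one approximation only, not of the paper's full set of four methods; the test length used is an assumption:

```python
import math

def sem_binomial(x, n):
    """Binomial-error-model SEM at raw score x on an n-item test:
    sqrt(x * (n - x) / (n - 1)). Largest near mid-range scores,
    zero at the floor and ceiling."""
    return math.sqrt(x * (n - x) / (n - 1))

# Illustrative: SEM at three score levels on a hypothetical 50-item test.
for x in (10, 25, 40):
    print(x, round(sem_binomial(x, n=50), 2))
```

Note the symmetry: scores of 10 and 40 on a 50-item test yield the same SEM, with the maximum at x = n/2.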
