Showing 1 to 15 of 17 results
Peer reviewed
Cetin, Bayram; Guler, Nese; Sarica, Rabia – Eurasian Journal of Educational Research, 2016
Problem Statement: In addition to being teaching tools, concept maps can be used as effective assessment tools. The use of concept maps for assessment has raised the issue of scoring them. Concept maps generated and used in different ways can be scored via various methods. Holistic and relational scoring methods are two of them. Purpose of the…
Descriptors: Generalizability Theory, Concept Mapping, Scoring, Scoring Formulas
Northwest Evaluation Association, 2016
Northwest Evaluation Association™ (NWEA™) is committed to providing partners with useful tools to help make inferences from Measures of Academic Progress® (MAP®) interim assessment scores. One important tool is the concordance table between MAP and state summative assessments. Concordance tables have been used for decades to relate scores on…
Descriptors: Tables (Data), Benchmarking, Scoring Formulas, Scores
Peer reviewed
Ravesloot, C. J.; Van der Schaaf, M. F.; Muijtjens, A. M. M.; Haaring, C.; Kruitwagen, C. L. J. J.; Beek, F. J. A.; Bakker, J.; Van Schaik, J.P.J.; Ten Cate, Th. J. – Advances in Health Sciences Education, 2015
Formula scoring (FS) is the use of a don't know option (DKO) with subtraction of points for wrong answers. Its effect on the construct validity and reliability of progress test scores is a subject of discussion. Choosing a DKO may be affected not only by knowledge level but also by risk-taking tendency, and may thus introduce construct-irrelevant…
Descriptors: Scoring Formulas, Tests, Scores, Construct Validity
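A minimal sketch of the formula-scoring rule described in the entry above, assuming the conventional penalty of 1/(k-1) per wrong answer on k-option items and no penalty for items answered with the don't-know option; the function name and data layout are hypothetical, not taken from the study.

```python
def formula_score(responses, n_options):
    """Formula score: +1 per correct answer, -1/(k-1) per wrong answer,
    0 for items answered with the 'don't know' option (DKO).

    responses: iterable of 'correct', 'wrong', or 'dko', one entry per item
    n_options: number of answer options k per item (assumed constant)
    """
    penalty = 1.0 / (n_options - 1)
    score = 0.0
    for r in responses:
        if r == "correct":
            score += 1.0
        elif r == "wrong":
            score -= penalty
        # 'dko' items contribute nothing
    return score


# Example: 10 correct, 4 wrong, 6 DKO on 4-option items
print(formula_score(["correct"] * 10 + ["wrong"] * 4 + ["dko"] * 6, n_options=4))
# -> 10 - 4/3, roughly 8.67
```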
Peer reviewed
Plucker, Jonathan A.; Qian, Meihua; Schmalensee, Stephanie L. – Creativity Research Journal, 2014
In recent years, the social sciences have seen a resurgence in the study of divergent thinking (DT) measures. However, many of these recent advances have focused on abstract, decontextualized DT tasks (e.g., list as many things as you can think of that have wheels). This study provides a new perspective by exploring the reliability and validity…
Descriptors: Creative Thinking, Creativity Tests, Scoring Formulas, Evaluation Methods
Peer reviewed
Kaiser, Henry F.; Michael, William B. – Educational and Psychological Measurement, 1977
A formula is derived for ascertaining factor scores for the factor analytic method: Little Jiffy, Mark IV. This formula is then employed to derive a second formula giving an exact determination of the generalized Kuder-Richardson estimate of the reliability of scores on a Little Jiffy factor. (Author/JKS)
Descriptors: Factor Analysis, Reliability, Scores, Scoring Formulas
Peer reviewed
Haberman, Shelby J. – ETS Research Report Series, 2008
In educational testing, subscores may be provided based on a portion of the items from a larger test. One consideration in evaluation of such subscores is their ability to predict a criterion score. Two limitations on prediction exist. The first, which is well known, is that the coefficient of determination for linear prediction of the criterion…
Descriptors: Scores, Validity, Educational Testing, Correlation
Green, Bert F., Jr. – 1972
The use of Guttman weights in scoring tests is discussed. Scores of 2,500 men on one subtest of the CEEB SAT-Verbal Test were examined using cross-validated Guttman weights. Several scores were compared: scores obtained from cross-validated Guttman weights; scores obtained by rounding the Guttman weights to one digit, ranging from 0 to…
Descriptors: Comparative Analysis, Reliability, Scoring Formulas, Test Results
Peer reviewed
Naglieri, Jack A.; Maxwell, Susanna – Perceptual and Motor Skills, 1981
Inter-rater reliability of the Goodenough-Harris and McCarthy Draw-A-Child scoring systems was examined for a sample of 60 children, including 20 school-labeled learning disabled, 20 mentally retarded, and 20 normal children between the ages of six and eight-and-one-half years. (Author)
Descriptors: Correlation, Intelligence Tests, Learning Disabilities, Mental Retardation
Follman, John; Panther, Edward – Child Study Journal Monographs, 1974
Empirically examines the efficacy of using Olympic diving and gymnastics scoring systems to grade graduate students' English compositions. Results indicated that such scoring rules do not produce ratings that differ in reliability or in level from conventional letter grades. (ED)
Descriptors: English Curriculum, Evaluation Methods, Grading, Graduate Students
Peer reviewed
Cross, Lawrence H.; And Others – Journal of Experimental Education, 1980
Use of choice-weighted scores as a basis for assigning grades in college courses was investigated. Reliability and validity indices offer little to recommend either type of choice-weighted scoring over number-right scoring. The potential for choice-weighted scoring to enhance the teaching/testing process is discussed. (Author/GK)
Descriptors: Credit Courses, Grading, Higher Education, Multiple Choice Tests
Doppelt, Jerome E. – Test Service Bulletin, 1956
The standard error of measurement as a means for estimating the margin of error that should be allowed for in test scores is discussed. The true score measures the performance that is characteristic of the person tested; the variations, plus and minus, around the true score describe a characteristic of the test. When the standard deviation is used…
Descriptors: Bulletins, Error of Measurement, Measurement Techniques, Reliability
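A short illustration of the relationship Doppelt describes, using the standard formula SEM = SD * sqrt(1 - reliability) and a band of one SEM on either side of an observed score; the test statistics below are made up for the example.

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r_xx): the typical spread of observed scores
    around a person's true score."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical test: SD = 15, reliability = 0.91
sem = standard_error_of_measurement(sd=15, reliability=0.91)
observed = 104
print(f"SEM = {sem:.1f}")                                   # -> 4.5
print(f"~68% band: {observed - sem:.1f} to {observed + sem:.1f}")  # -> 99.5 to 108.5
```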
Peer reviewed
Essex, Diane L. – Journal of Medical Education, 1976
Two multiple-choice scoring schemes--a partial-credit scheme and a dichotomous approach--were compared by analyzing means, variances, and reliabilities on alternate measures, along with student reactions. Students preferred the partial-credit approach, which is recommended if rewarding partial knowledge is an important concern. (Editor/JT)
Descriptors: Higher Education, Medical Students, Multiple Choice Tests, Reliability
Peer reviewed
Spencer, Ernest – Scottish Educational Review, 1981
Using data from the SCRE Criterion Test composition papers, the author tests the hypothesis that the bulk of inter-marker unreliability is caused by inter-marker inconsistency--which is not correctable statistically. He suggests that a shift to "consensus" standards will realize greater improvements than statistical standardizing alone.…
Descriptors: Achievement Tests, English Instruction, Essay Tests, Reliability
Peer reviewed
Kleven, Thor Arnfinn – Scandinavian Journal of Educational Research, 1979
Assuming different values of the standard error of measurement, the relation of scale coarseness to the total amount of error is studied on the basis of the probability distribution of error. The analyses are performed within two models of error and with two criteria of amount of error. (Editor/SJL)
Descriptors: Cutting Scores, Error of Measurement, Goodness of Fit, Grading
Tollefson, Nona; Chung, Jing-Mei – 1986
Procedures for correcting for guessing and for assessing partial knowledge (correction-for-guessing, three-decision scoring, elimination/inclusion scoring, and confidence or probabilistic scoring) are discussed. Mean scores and internal consistency reliability estimates were compared across three administration and scoring procedures for…
Descriptors: Achievement Tests, Comparative Analysis, Evaluation Methods, Graduate Students
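As a rough companion to the comparison described in the entry above, a sketch that scores the same hypothetical response matrix under number-right scoring and correction-for-guessing and reports the mean of each. It assumes right/wrong/omit responses and the usual R - W/(k-1) correction; it does not reproduce the specific procedures or data of the study.

```python
def number_right(responses):
    """Number-right score: count of correct answers; wrong and omitted items score 0."""
    return sum(1 for r in responses if r == 1)

def corrected_for_guessing(responses, n_options):
    """Correction-for-guessing score: R - W/(k-1); omitted items (None) are ignored."""
    right = sum(1 for r in responses if r == 1)
    wrong = sum(1 for r in responses if r == 0)
    return right - wrong / (n_options - 1)

# Hypothetical examinees: 1 = right, 0 = wrong, None = omitted (5-option items)
examinees = [
    [1, 1, 0, None, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [1, 1, 1, None, None, 0],
]

nr = [number_right(e) for e in examinees]
cg = [corrected_for_guessing(e, n_options=5) for e in examinees]
print("number-right mean:", sum(nr) / len(nr))
print("corrected mean:   ", sum(cg) / len(cg))
```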