| Publication Date | Records |
| --- | --- |
| In 2026 | 2 |
| Since 2025 | 441 |
| Since 2022 (last 5 years) | 1920 |
| Since 2017 (last 10 years) | 4492 |
| Since 2007 (last 20 years) | 6977 |
| Audience | Records |
| --- | --- |
| Researchers | 454 |
| Practitioners | 319 |
| Teachers | 128 |
| Administrators | 73 |
| Policymakers | 33 |
| Counselors | 31 |
| Students | 17 |
| Parents | 10 |
| Community | 6 |
| Support Staff | 5 |
| Location | Records |
| --- | --- |
| Turkey | 831 |
| Australia | 239 |
| China | 211 |
| Canada | 207 |
| Indonesia | 161 |
| Spain | 129 |
| United States | 123 |
| United Kingdom | 121 |
| Germany | 111 |
| Taiwan | 108 |
| Netherlands | 102 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 2 |
| Meets WWC Standards with or without Reservations | 2 |
| Does not meet standards | 1 |
Kimmell, Geraldine M.; Wahl, Jack – Acad Therap Quart, 1970
Descriptors: Auditory Perception, Exceptional Child Research, Screening Tests, Test Reliability
Marks, Edmond; Lindsay, Carl A. – Journal of Educational Measurement, 1972 (Peer reviewed)
Examines the effects of four parameters on the accuracy of test equating under a relaxed definition of test form equivalence. The four parameters studied were sample size, test form length, test form reliability, and the correlation between true scores of the test forms to be equated. (CK)
Descriptors: Scores, Test Interpretation, Test Reliability, Test Results
Ramsay, J. O. – Educational and Psychological Measurement, 1971 (Peer reviewed)
The consequences of the assumption that the expected score is equal to the true score are shown, and alternatives are discussed. (MS)
Descriptors: Psychological Testing, Statistical Analysis, Test Reliability, Testing
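For reference, the assumption examined here is the classical test theory decomposition, stated below in conventional notation (not taken from the article):

$$X = T + E, \qquad \mathbb{E}[E] = 0 \;\Longrightarrow\; \mathbb{E}[X] = T,$$

i.e., the observed score $X$ is the true score $T$ plus an error term with zero expectation, so the expected observed score equals the true score.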
Doppelt, Jerome E. – Educational and Psychological Measurement, 1971 (Peer reviewed)
Descriptors: Aptitude Tests, Scores, Statistical Analysis, Test Reliability
Follman, John; And Others – Educational and Psychological Measurement, 1971 (Peer reviewed)
Descriptors: College Students, Critical Thinking, Statistical Analysis, Test Reliability
Bowers, John – Educational and Psychological Measurement, 1971 (Peer reviewed)
Descriptors: Error of Measurement, Mathematical Models, Test Reliability, True Scores
Friedman, Ronald – Journal of Genetic Psychology, 1971 (Peer reviewed)
Descriptors: Cognitive Processes, Intelligence, Performance Factors, Test Interpretation
Gardner, P. L. – Journal of Educational Measurement, 1970 (Peer reviewed)
Descriptors: Error of Measurement, Mathematical Models, Statistical Analysis, Test Reliability
Ebel, Robert L. – J Eng Educ, 1970
A good test used to measure students' knowledge and abilities is useful educationally even if it does not teach directly. Article based on an address at the American Society for Engineering Education's North Midwest Section Effective Teaching Institute, March 1969. (IR)
Descriptors: Achievement Tests, Higher Education, Test Reliability, Testing Problems
Berger, Richard A.; Medlin, Robert L. – Res Quart AAHPER, 1969
This study was submitted in partial fulfillment of the requirements for the M.Ed. degree at Texas Technological College under the supervision of R.A. Berger, 1968.
Descriptors: Junior High School Students, Males, Muscular Strength, Test Reliability
Watson, Betty U. – Journal of Speech and Hearing Disorders, 1983 (Peer reviewed)
The test-retest stability of the Hiskey-Nebraska Test of Learning Aptitude (H-NTLA) was examined in a group of 43 hearing-impaired children and adolescents. Test-retest correlations for subjects retested after approximately one, three, and five years were .79, .85, and .62, respectively. (Author)
Descriptors: Academic Aptitude, Elementary Secondary Education, Hearing Impairments, Test Reliability
Zimmerman, Donald W.; Williams, Richard H. – Journal of Educational Measurement, 1982 (Peer reviewed)
Differences between pretest and posttest measures with an intervening experimental treatment can be highly reliable under some conditions. If the error components of pretest and posttest measures are correlated, the classical formula for the reliability of differences underestimates the actual reliability of gains. (Author/CM)
Descriptors: Achievement Gains, Correlation, Pretests Posttests, Research Problems
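For reference, the "classical formula for the reliability of differences" mentioned in the abstract is usually written in the following textbook form, which assumes uncorrelated pretest and posttest errors (notation is conventional, not taken from the article):

$$\rho_{DD'} \;=\; \frac{\sigma_X^2\,\rho_{XX'} + \sigma_Y^2\,\rho_{YY'} - 2\,\rho_{XY}\,\sigma_X\,\sigma_Y}{\sigma_X^2 + \sigma_Y^2 - 2\,\rho_{XY}\,\sigma_X\,\sigma_Y},$$

where $X$ and $Y$ are the pretest and posttest scores, $\rho_{XX'}$ and $\rho_{YY'}$ their reliabilities, and $\rho_{XY}$ their correlation; the abstract's point is that this expression understates gain-score reliability when the error components are correlated.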
Bruininks-Oseretsky Test of Motor Proficiency: Further Verification with 3- to 5-Yr.-Old Children.
Beitel, Patricia A.; Mead, Barbara J. – Perceptual and Motor Skills, 1982 (Peer reviewed)
The Bruininks-Oseretsky Test of Motor Proficiency was evaluated to determine test-retest reliability and whether there were presensitizing effects at retest for four- to five-year-olds. Test reliability was significantly high. No significant test sensitization of the short form to retesting with the short form or subtests was found. (Author/RD)
Descriptors: Early Childhood Education, Motor Development, Test Reliability, Testing Problems
Huynh, Huynh – Journal of Educational Statistics, 1982 (Peer reviewed)
Two indices for assessing the efficiency of decisions in mastery testing are proposed. The indices are generalizations of the raw agreement index and the kappa index. Empirical examples of these indices are given. (Author/JKS)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mastery Tests, Test Reliability
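For reference, the raw agreement and kappa indices that the article generalizes are conventionally defined for a master/nonmaster decision made on two administrations of a mastery test (standard definitions, not the article's generalized forms):

$$p_o = p_{11} + p_{00}, \qquad p_e = p_{1\cdot}\,p_{\cdot 1} + p_{0\cdot}\,p_{\cdot 0}, \qquad \kappa = \frac{p_o - p_e}{1 - p_e},$$

where $p_{11}$ and $p_{00}$ are the proportions of examinees classified consistently (master both times, nonmaster both times) and $p_e$ is the agreement expected by chance from the marginal classification rates.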
Raju, Nambury S. – Educational and Psychological Measurement, 1982 (Peer reviewed)
A necessary and sufficient condition for a perfectly homogeneous test in the sense of Loevinger is stated and proved. Using this result, a formula for computing the maximum possible KR-20 when the test variance is assumed fixed is presented. A new index of test homogeneity is also presented and discussed. (Author/BW)
Descriptors: Mathematical Formulas, Mathematical Models, Multiple Choice Tests, Test Reliability
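For reference, KR-20 in its usual form for a test of $k$ dichotomous items (standard notation; the article's maximum-value and homogeneity results are not reproduced here):

$$\mathrm{KR\text{-}20} \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\,(1 - p_i)}{\sigma_X^2}\right),$$

where $p_i$ is the proportion of examinees answering item $i$ correctly and $\sigma_X^2$ is the total-score variance, which the abstract says is assumed fixed.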


