Author
| Huynh, Huynh | 5 |
| Saunders, Joseph C. | 1 |

Publication Type
| Reports - Research | 4 |
| Journal Articles | 2 |

Location
| South Carolina | 1 |

Assessments and Surveys
| Comprehensive Tests of Basic… | 1 |
Huynh, Huynh – 1977
The kappamax reliability index of domain-referenced tests is defined as the upper bound of kappa when all possible cutoff scores are considered. Computational procedures for kappamax are described, as well as its approximation for long tests, based on Kuder-Richardson formula 21. The sampling error of kappamax, and the effects of test length and…
Descriptors: Criterion Referenced Tests, Mathematical Models, Statistical Analysis, Test Reliability
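The KR-21 shortcut the abstract mentions needs only the number of items, the mean, and the variance of the total scores. A minimal sketch (illustrative values only; the kappamax computation itself, which maximizes kappa over all cutoffs, is not reproduced here):

```python
def kr21(k, mean, variance):
    """Kuder-Richardson formula 21 for a k-item test of dichotomous items,
    given the mean and variance of total scores."""
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * variance))

# Hypothetical summary statistics: a 20-item test, mean 12, variance 16.
r = kr21(20, 12.0, 16.0)
print(round(r, 4))  # 0.7368
```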
Peer reviewed
Huynh, Huynh – Journal of Educational Statistics, 1981
Simulated data based on five test score distributions indicate that a slight modification of the asymptotic normal theory for the estimation of the p and kappa indices in mastery testing will provide results which are in close agreement with those based on small samples from the beta-binomial distribution. (Author/BW)
Descriptors: Error of Measurement, Mastery Tests, Mathematical Models, Test Reliability
Peer reviewed
Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
Huynh, Huynh – 1977
Three techniques for estimating Kuder-Richardson reliability (KR-20) coefficients for incomplete data are contrasted. The methods are: (1) Henderson's Method 1 (analysis of variance, or ANOVA); (2) Henderson's Method 3 (FITCO); and (3) Koch's method of symmetric sums (SYSUM). A Monte Carlo simulation was used to assess the precision of the three…
Descriptors: Analysis of Variance, Comparative Analysis, Mathematical Models, Monte Carlo Methods
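For reference, the complete-data form of the KR-20 coefficient the three estimators target can be sketched as follows (the ANOVA, FITCO, and SYSUM estimators for incomplete data are not reproduced; the data matrix below is illustrative):

```python
def kr20(matrix):
    """KR-20 for a complete matrix of 0/1 item scores:
    matrix is a list of examinee rows, each a list of k item scores."""
    n = len(matrix)
    k = len(matrix[0])
    totals = [sum(row) for row in matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    # Sum of item variances p_i * (1 - p_i)
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_t)

# Hypothetical responses: 6 examinees, 5 dichotomous items.
data = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 0, 1],
]
print(round(kr20(data), 3))  # 0.283
```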
Peer reviewed
Huynh, Huynh; Saunders, Joseph C. – Journal of Educational Measurement, 1980
Single administration (beta-binomial) estimates for the raw agreement index p and the corrected-for-chance kappa index in mastery testing are compared with those based on two test administrations in terms of estimation bias and sampling variability. Bias is about 2.5 percent for p and 10 percent for kappa. (Author/RL)
Descriptors: Comparative Analysis, Error of Measurement, Mastery Tests, Mathematical Models
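The two-administration baseline the abstract compares against can be sketched directly: classify each examinee as master or nonmaster on each form using a cutoff, then compute the raw agreement index p and the chance-corrected kappa. Scores and cutoff below are illustrative, not from the study:

```python
def agreement_indices(scores1, scores2, cutoff):
    """Raw agreement p and kappa for mastery classifications
    from two test administrations, given a mastery cutoff score."""
    m1 = [s >= cutoff for s in scores1]
    m2 = [s >= cutoff for s in scores2]
    n = len(m1)
    p = sum(a == b for a, b in zip(m1, m2)) / n           # raw agreement
    q1 = sum(m1) / n                                      # mastery rate, form 1
    q2 = sum(m2) / n                                      # mastery rate, form 2
    p_chance = q1 * q2 + (1 - q1) * (1 - q2)              # chance agreement
    kappa = (p - p_chance) / (1 - p_chance)
    return p, kappa

# Hypothetical paired scores for 6 examinees, cutoff of 6 points.
p, kappa = agreement_indices([8, 5, 9, 3, 7, 6], [7, 6, 9, 4, 8, 5], 6)
print(round(p, 3), round(kappa, 3))  # 0.667 0.25
```

The single-administration (beta-binomial) estimators studied in the article infer these same indices from one set of scores by modeling the score distribution, trading a second administration for a distributional assumption.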


