Foster, Robert C. – Educational and Psychological Measurement, 2021
This article presents some equivalent forms of the common Kuder-Richardson Formula 21 and 20 estimators for nondichotomous data belonging to certain other exponential families, such as Poisson count data, exponential data, or geometric counts of trials until failure. Using the generalized framework of Foster (2020), an equation for the reliability…
Descriptors: Test Reliability, Data, Computation, Mathematical Formulas
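The Kuder-Richardson estimators named in this abstract can be sketched for the classical dichotomous case. This is the standard KR-20/KR-21 for 0/1 item scores, not Foster's exponential-family generalization; the data here are simulated for illustration.

```python
# Classical KR-20 and KR-21 for dichotomous (0/1) item scores;
# population variances are used throughout, as in the original formulas.
import numpy as np

def kr20(items: np.ndarray) -> float:
    """KR-20 for an (examinees x items) matrix of 0/1 scores."""
    k = items.shape[1]
    p = items.mean(axis=0)                # proportion correct per item
    total_var = items.sum(axis=1).var()   # variance of summed scores
    return k / (k - 1) * (1.0 - (p * (1 - p)).sum() / total_var)

def kr21(items: np.ndarray) -> float:
    """KR-21: like KR-20, but assumes all items share one difficulty."""
    k = items.shape[1]
    total = items.sum(axis=1)
    m, var = total.mean(), total.var()
    return k / (k - 1) * (1.0 - m * (k - m) / (k * var))

# Simulated responses: 200 examinees, 10 items of varying difficulty.
rng = np.random.default_rng(0)
scores = (rng.random((200, 10)) < rng.uniform(0.3, 0.9, 10)).astype(int)
print(round(kr20(scores), 3), round(kr21(scores), 3))
```

Because KR-21 pools the item difficulties into their mean, its subtracted error term is at least as large as KR-20's, so KR-21 never exceeds KR-20 on the same data.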
Almehrizi, Rashid S. – Applied Measurement in Education, 2021
KR-21 reliability and its extension (coefficient alpha) give the reliability estimate of test scores under the assumption of tau-equivalent forms. KR-21 reliability gives the reliability estimate for summed scores for dichotomous items when items are randomly sampled from an infinite pool of similar items (randomly parallel forms). The article…
Descriptors: Test Reliability, Scores, Scoring, Computation
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2014
A method for medical screening is adapted to differential item functioning (DIF). Its essential elements are explicit declarations of the level of DIF that is acceptable and of the loss function that quantifies the consequences of the two kinds of inappropriate classification of an item. Instead of a single level and a single function, sets of…
Descriptors: Test Items, Test Bias, Simulation, Hypothesis Testing
Peer reviewed: Vegelius, Jan – Educational and Psychological Measurement, 1980
One argument against the G index is that, unlike phi, it is not a correlation coefficient; yet, G conforms to the Kendall and E-coefficient definitions. The G index is also equal to the Pearson product moment correlation coefficient obtained from double scoring. (Author/CP)
Descriptors: Correlation, Mathematical Formulas, Test Reliability
Peer reviewed: Uebersax, John S. – Educational and Psychological Measurement, 1982
A more general method for calculating the Kappa measure of nominal rating agreement among multiple raters is presented. It can be used across a broad range of rating designs, including those in which raters vary with respect to their base rates and how many subjects they rate in common. (Author/BW)
Descriptors: Mathematical Formulas, Statistical Significance, Test Reliability
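The abstract's multi-rater kappa can be contrasted with the fixed-design baseline it generalizes. Below is a sketch of Fleiss' kappa, which assumes every subject is rated by the same number of raters; that is exactly the restriction Uebersax's method relaxes (illustrative code, not his method).

```python
# Fleiss' kappa: chance-corrected agreement for m raters per subject.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts: (subjects x categories) matrix; counts[i, j] is the number
    of raters who placed subject i in category j (row sums all equal m)."""
    n = counts.shape[0]
    m = counts[0].sum()
    p_j = counts.sum(axis=0) / (n * m)    # marginal category proportions
    P_i = ((counts ** 2).sum(axis=1) - m) / (m * (m - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1.0 - P_e)

# 3 raters, 2 categories: perfect agreement on every subject gives kappa = 1.
perfect = np.array([[3, 0], [0, 3], [3, 0], [0, 3]])
print(fleiss_kappa(perfect))
```

Designs where raters judge different numbers of subjects, or differ in base rates, break the equal-m assumption baked into this formula, which motivates the more general method described in the abstract.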
Peer reviewed: Cliff, Norman – Journal of Educational Statistics, 1984
The proposed coefficient is derived by assuming that the average Goodman-Kruskal gamma between items of identical difficulty would be the same for items of different difficulty. An estimate of covariance between items of identical difficulty leads to an estimate of the correlation between two tests with identical distributions of difficulty.…
Descriptors: Difficulty Level, Mathematical Formulas, Test Items, Test Reliability
Peer reviewed: Jackson, Paul H. – Psychometrika, 1979
Use of the same term "split-half" for division of an n-item test into two subtests containing equal (Cronbach), and possibly unequal (Guttman), numbers of items sometimes leads to a misunderstanding about the relation between Guttman's maximum split-half bound and Cronbach's coefficient alpha. This distinction is clarified. (Author/JKS)
Descriptors: Item Analysis, Mathematical Formulas, Technical Reports, Test Reliability
Gaylord, Richard H. – Educational and Psychological Measurement, 1969
Descriptors: Correlation, Item Analysis, Mathematical Formulas, Test Construction
Peer reviewed: Raju, Nambury S. – Educational and Psychological Measurement, 1982
A necessary and sufficient condition for a perfectly homogeneous test in the sense of Loevinger is stated and proved. Using this result, a formula for computing the maximum possible KR-20 when the test variance is assumed fixed is presented. A new index of test homogeneity is also presented and discussed. (Author/BW)
Descriptors: Mathematical Formulas, Mathematical Models, Multiple Choice Tests, Test Reliability
Peer reviewed: ten Berge, Jos M. F.; Zegers, Frits E. – Psychometrika, 1978
Two lower bounds to reliability in classical test theory, Guttman's lambda and Cronbach's alpha, are shown to be terms of an infinite series of lower bounds. All terms of this series are equal to reliability if and only if the test contains items which are tau-equivalent. (Author/JKS)
Descriptors: Mathematical Formulas, Psychometrics, Technical Reports, Test Interpretation
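The lower-bound claim can be checked numerically. This is a sketch of coefficient alpha (the same quantity as Guttman's lambda-3) computed from raw item scores, with simulated tau-equivalent items for which alpha should approach the true reliability of the summed score (here 16/17, about 0.94).

```python
# Coefficient alpha (= Guttman's lambda-3): a lower bound to reliability,
# attained with equality only for tau-equivalent items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (examinees x items) score matrix."""
    c = np.cov(items, rowvar=False)      # item covariance matrix
    k = c.shape[0]
    return k / (k - 1) * (1.0 - np.trace(c) / c.sum())

# Tau-equivalent simulation: one shared true score, independent noise
# (variance 0.25) per item, so reliability of the sum is 16/17.
rng = np.random.default_rng(1)
true_score = rng.normal(size=(500, 1))
scores = true_score + rng.normal(scale=0.5, size=(500, 4))
print(round(cronbach_alpha(scores), 2))   # close to 16/17 ≈ 0.94
```

With items that are not tau-equivalent, the same computation still runs but systematically understates reliability, which is the sense in which alpha is only a lower bound.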
Peer reviewed: Silverstein, A. B. – Educational and Psychological Measurement, 1980
An alternative derivation was given of Gaylord's formulas showing the relationships among the average item intercorrelation, the average item-test correlation, and test reliability. Certain parallels were also noted in analysis of variance and principal component analysis. (Author)
Descriptors: Analysis of Variance, Item Analysis, Mathematical Formulas, Test Reliability
Peer reviewed: Conger, Anthony J. – Educational and Psychological Measurement, 1980
Reliability maximizing weights are related to theoretically specified true score scaling weights to show a constant relationship that is invariant under separate linear transformations on each variable in the system. Test theoretic relations should be derived for the most general model available and not for unnecessarily constrained models.…
Descriptors: Mathematical Formulas, Scaling, Test Reliability, Test Theory
Peer reviewed: Kuder, Frederic – Educational and Psychological Measurement, 1991
Recommendations are made for the appropriate use and identification of traditional Kuder-Richardson formulas for the estimation of reliability. "Alpha" should be used for reliabilities estimated for tests or scales composed of items yielding scores distributed on more than two points. (SLD)
Descriptors: Estimation (Mathematics), Evaluation Methods, Mathematical Formulas, Scores
Wilson, Pamela W.; And Others – 1979
The purpose of this study was to present an empirical correction of the KR-21 (Kuder-Richardson test reliability) formula that not only yields a closer approximation to the numerical value of the KR-20 without overestimation, but also simplifies computation. This correction was accomplished by introducing several correction factors to the numerator…
Descriptors: Higher Education, Item Analysis, Mathematical Formulas, Research Reports
Peer reviewed: Raju, Nambury S. – Psychometrika, 1979
An important relationship is given for two generalizations of coefficient alpha: (1) Rajaratnam, Cronbach, and Gleser's generalizability formula for stratified-parallel tests, and (2) Raju's coefficient beta. (Author/CTM)
Descriptors: Item Analysis, Mathematical Formulas, Test Construction, Test Items