Showing 646 to 660 of 1,074 results
Peer reviewed
Dreger, Ralph Mason – Educational and Psychological Measurement, 1973
Study refers to J. A. Bowers' "A Note on Gaylord's 'Estimating Test Reliability from the Item-Test Correlations,'" EJ 041 295. (CB)
Descriptors: Correlation, Item Analysis, Mathematical Applications, Statistical Analysis
Peer reviewed
Ramsay, J. O. – Educational and Psychological Measurement, 1971
The consequences of the assumption that the expected score is equal to the true score are shown and alternatives discussed. (MS)
Descriptors: Psychological Testing, Statistical Analysis, Test Reliability, Testing
Peer reviewed
Doppelt, Jerome E. – Educational and Psychological Measurement, 1971
Descriptors: Aptitude Tests, Scores, Statistical Analysis, Test Reliability
Peer reviewed
Follman, John; And Others – Educational and Psychological Measurement, 1971
Descriptors: College Students, Critical Thinking, Statistical Analysis, Test Reliability
Peer reviewed
Gardner, P. L. – Journal of Educational Measurement, 1970
Descriptors: Error of Measurement, Mathematical Models, Statistical Analysis, Test Reliability
Pepin, Michel – 1983
This paper presents three different ways of computing the internal consistency coefficient alpha for the same set of data. The main objective of the paper is the illustration of a method for maximizing coefficient alpha. The maximization of alpha can be achieved with the aid of a principal component analysis. The relation between alpha max. and the…
Descriptors: Research Methodology, Research Problems, Statistical Analysis, Test Items
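The Pepin entry above concerns coefficient alpha. For orientation, the standard (non-maximized) coefficient can be sketched as follows; this is a generic illustration, not the paper's maximization procedure, and the function name and data layout are assumptions.

```python
def cronbach_alpha(items):
    """Coefficient alpha for scores given as a list of per-examinee
    item-score lists (rows = examinees, columns = items)."""
    k = len(items[0])              # number of items

    def var(xs):                   # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

With perfectly consistent items the coefficient reaches 1; with items whose errors cancel in the total score it drops toward 0.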
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1975
Estimates for the variances of empirically determined scoring weights are given. It is also shown that test item writers should write distractors that discriminate on the criterion variable when this type of scoring is used. (Author)
Descriptors: Scoring, Statistical Analysis, Test Construction, Test Reliability
Ferguson, Mary Ann – 1981
The professional orientation difference approach was operationalized as a stress on the importance of intellectual activity, autonomy, service and influence, with de-emphasis on the monetary, prestige, security and human relations aspects of a job. As a difference score, the index was subject to special reliability problems. This analysis…
Descriptors: Measures (Individuals), Professional Personnel, Statistical Analysis, Test Reliability
Thostenson, Marvin S. – 1966
This investigation dealt with the development and evaluation of both a music dictation test (PRM78 Dictation Test) and a sightsinging test (CSS76 Criterion Sightsinging Test). It was hoped that the dictation test could eventually be developed to serve as an adequate replacement for the latter. Thirteen samples participated in this project--7…
Descriptors: Auditory Training, Comparative Analysis, Music Reading, Statistical Analysis
Kleinke, David J. – 1976
Data from 200 college-level tests were used to compare three reliability approximations (two of Saupe and one of Cureton) to Kuder-Richardson Formula 20 (KR20). While the approximations correlated highly (about .9) with the reliability estimate, they tended to be underapproximations. The explanation lies in an apparent bias of Lord's approximation…
Descriptors: Comparative Analysis, Correlation, Error of Measurement, Statistical Analysis
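The Kleinke entry compares approximations against Kuder-Richardson Formula 20. A minimal sketch of KR20 itself (the baseline, not Saupe's or Cureton's approximations) follows; the function name and data layout are illustrative.

```python
def kr20(items):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item scores,
    rows = examinees.  Population (ddof = 0) variances are used so the
    item term p*q and the total-score variance are on the same footing."""
    n = len(items)                 # examinees
    k = len(items[0])              # items
    p = [sum(row[j] for row in items) / n for j in range(k)]
    pq_sum = sum(pj * (1 - pj) for pj in p)
    totals = [sum(row) for row in items]
    mean_t = sum(totals) / n
    total_var = sum((t - mean_t) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq_sum / total_var)
```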
Peer reviewed
Millsap, Roger E. – Applied Psychological Measurement, 1988
Two new methods for constructing a credibility interval (CI)--an interval containing a specified proportion of the true validity distribution--are discussed, from a frequentist perspective. Tolerance intervals, unlike the current method of constructing the CI, have performance characteristics across repeated applications and may be useful in validity…
Descriptors: Bayesian Statistics, Meta Analysis, Statistical Analysis, Test Reliability
Peer reviewed
Martois, John S. – Educational and Psychological Measurement, 1973
Copies of this program may be obtained from the author at the University of Southern California, School of Pharmacy, University Park, Los Angeles 90007. (CB)
Descriptors: Comparative Analysis, Computer Programs, Input Output, Statistical Analysis
Peer reviewed
Stafford, Richard E. – Journal of Educational Measurement, 1971
Descriptors: Correlation, Statistical Analysis, Test Interpretation, Test Reliability
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1979
The classical approach to estimating a binomial probability function is to estimate its mean in the usual manner and substitute the result in the appropriate expression. Two alternative estimation procedures are described and examined. Emphasis is given to the single administration estimate of the mastery test reliability. (Author/CTM)
Descriptors: Cutting Scores, Mastery Tests, Probability, Scores
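The "classical estimate" mentioned in the Wilcox abstract--plugging the usual estimate of the success probability into the binomial expression--can be sketched as below. The function name and argument layout are hypothetical; this is not Wilcox's alternative procedure.

```python
from math import comb

def classical_binomial_estimate(observed_score, n_items, cutting_score):
    """Estimate the probability of scoring at or above the cutting score
    by substituting p_hat = observed proportion correct into the
    binomial probability function."""
    p_hat = observed_score / n_items
    return sum(comb(n_items, x) * p_hat ** x * (1 - p_hat) ** (n_items - x)
               for x in range(cutting_score, n_items + 1))
```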
Peer reviewed
Brennan, Robert L.; Lockwood, Robert E. – Applied Psychological Measurement, 1980
Generalizability theory is used to characterize and quantify expected variance in cutting scores and to compare the Nedelsky and Angoff procedures for establishing a cutting score. Results suggest that the restricted nature of the Nedelsky (inferred) probability scale may limit its applicability in certain contexts. (Author/BW)
Descriptors: Cutting Scores, Generalization, Statistical Analysis, Test Reliability
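The Brennan and Lockwood entry compares the Angoff and Nedelsky standard-setting procedures. A bare sketch of the Angoff calculation (not the generalizability analysis the paper performs) is shown below; the function name and data layout are assumptions.

```python
def angoff_cut_score(ratings):
    """Angoff procedure sketch: each judge rates, per item, the probability
    that a minimally competent examinee answers correctly.  Each judge's
    ratings sum to an implied cut score; the test cut score is the mean
    across judges.  Layout: ratings[judge][item]."""
    judge_cuts = [sum(judge) for judge in ratings]
    return sum(judge_cuts) / len(judge_cuts)
```

Variance in these judge-level cut scores across judges (and items) is what a generalizability analysis would decompose.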