Showing all 8 results
Peer reviewed
Reckase, Mark D.; And Others – Journal of Educational Measurement, 1988
It is demonstrated, theoretically and empirically, that item sets can be selected that meet the unidimensionality assumption of most item response theory models, even though they require more than one ability for a correct response. A method for identifying such item sets for test development purposes is presented. (SLD)
Descriptors: Computer Simulation, Item Analysis, Latent Trait Theory, Mathematical Models
Peer reviewed
Whitely, Susan E. – Journal of Educational Measurement, 1977
A debate concerning specific issues and the general usefulness of the Rasch latent trait test model is continued. Methods of estimation, necessary sample size, and the applicability of the model are discussed. (JKS)
Descriptors: Error of Measurement, Item Analysis, Mathematical Models, Measurement
Peer reviewed
Wright, Benjamin D. – Journal of Educational Measurement, 1977
Statements made in a previous article of this journal concerning the Rasch latent trait test model are questioned. Methods of estimation, necessary sample sizes, several formulae, and the general usefulness of the Rasch model are discussed. (JKS)
Descriptors: Computers, Error of Measurement, Item Analysis, Mathematical Models
Peer reviewed
Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
Peer reviewed
Thissen, David M. – Journal of Educational Measurement, 1976
Where estimation of abilities in the lower half of the ability distribution for the Raven Progressive Matrices is important, or increased accuracy of ability estimation is needed, multiple-category latent trait estimation provides a rational procedure for realizing gains in accuracy from the use of information in wrong responses.…
Descriptors: Intelligence Tests, Item Analysis, Junior High Schools, Mathematical Models
Peer reviewed
Zwick, Rebecca – Journal of Educational Measurement, 1987
National Assessment of Educational Progress reading data were scaled using a unidimensional item response theory model. Bock's full-information factor analysis and Rosenbaum's test of unidimensionality were applied. Conclusions about unidimensionality for balanced incomplete block spiralled data were the same as for complete data. (Author/GDC)
Descriptors: Factor Analysis, Item Analysis, Latent Trait Theory, Mathematical Models
Peer reviewed
Secolsky, Charles – Journal of Educational Measurement, 1983
A model is presented that uses examinee judgements to detect ambiguous or misinterpreted items on teacher-made criterion-referenced tests. A computational example and guidelines for constructing domain categories and interpreting the indices are presented. (Author/PN)
Descriptors: Criterion Referenced Tests, Higher Education, Item Analysis, Mathematical Models
Peer reviewed
Sheehan, Kathleen; Mislevy, Robert J. – Journal of Educational Measurement, 1990
The 63 items contained in the Survey of Young Adult Literacy in the 1985 National Assessment of Educational Progress, covering skills in acquiring and using information from written documents, are analyzed. The analyses are based on a qualitative cognitive model and an item-response theory model. (TJH)
Descriptors: Adult Literacy, Cognitive Processes, Diagnostic Tests, Elementary Secondary Education