Showing all 12 results
Eignor, Daniel R.; Douglass, James B. – 1982
This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from items selected under several different IRT models and their associated parameter-estimation programs. These…
Descriptors: Comparative Analysis, Latent Trait Theory, Mathematical Models, Multiple Choice Tests
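The information curves compared in this paper are the standard IRT item and test information functions; the abstract does not restate them, but the usual definitions are

    I_i(\theta) = \frac{[P_i'(\theta)]^2}{P_i(\theta)\,[1 - P_i(\theta)]}, \qquad I(\theta) = \sum_i I_i(\theta),

where P_i(\theta) is the model-implied probability of a correct response to item i at ability \theta, and the test information curve I(\theta) sums over the selected items. Different IRT models imply different P_i, and hence different curves for the same set of selected items.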
Peer reviewed
Garcia-Perez, Miguel A.; Frary, Robert B. – Applied Psychological Measurement, 1989
Simulation techniques were used to generate conventional test responses and track the proportion of alternatives examinees could classify independently before and after taking the test. Finite-state scores were compared with these actual values and with number-correct and formula scores. Finite-state scores proved useful. (TJH)
Descriptors: Comparative Analysis, Computer Simulation, Guessing (Tests), Mathematical Models
Koch, Bill R.; Reckase, Mark D. – 1978
A live tailored testing study was conducted to compare the results of using either the one-parameter logistic model or the three-parameter logistic model to measure the performance of college students on multiple choice vocabulary items. The results of the study showed the three-parameter tailored testing procedure to be superior to the…
Descriptors: Adaptive Testing, Comparative Analysis, Goodness of Fit, Higher Education
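For reference, the two models compared in this study have the standard logistic forms (usual definitions, not quoted from the report):

    P_i^{1PL}(\theta) = \frac{1}{1 + e^{-(\theta - b_i)}}, \qquad P_i^{3PL}(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-D a_i(\theta - b_i)}},

where b_i is item difficulty, a_i is item discrimination, c_i is the lower asymptote, and D ≈ 1.7 is the conventional scaling constant. The extra a_i and c_i parameters are what allow the three-parameter model to accommodate guessing on multiple-choice items.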
Peer reviewed
Zimmerman, Donald W.; And Others – Journal of Experimental Education, 1984
Three types of test were compared: a completion test, a matching test, and a multiple-choice test. The completion test was more reliable than the matching test, and the matching test was more reliable than the multiple-choice test. (Author/BW)
Descriptors: Comparative Analysis, Error of Measurement, Higher Education, Mathematical Models
Peer reviewed
Bock, R. Darrell – Psychometrika, 1972
Descriptors: Ability Identification, Comparative Analysis, Item Analysis, Mathematical Models
Gugel, John F. – 1990
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
Descriptors: Ability Identification, Aptitude Tests, Chi Square, Comparative Analysis
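The normal ogive three-parameter model named above has the standard form (usual definition; notation is mine, not the paper's):

    P_i(\theta) = c_i + (1 - c_i)\,\Phi\!\big(a_i(\theta - b_i)\big),

where \Phi is the standard normal distribution function and a_i, b_i, and c_i are the discrimination, difficulty, and pseudo-guessing parameters that NDIR and LOGIST each estimate.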
Nicewander, W. Alan; And Others – 1980
Two interactive, computer-assisted testing methods for multiple-choice items were compared with each other and with conventional multiple-choice tests. The methods compared were tailored testing and the respond-until-correct (RUC) item response method. In tailored testing, examinee ability is successively estimated…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Guessing (Tests)
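As a concrete illustration of the tailored-testing loop described above (successive ability estimation followed by item selection), here is a minimal sketch assuming a one-parameter logistic model; the item pool, the stochastic-approximation step size, and all names are illustrative, not taken from the study.

import math

def p_correct(theta, b):
    # 1PL (Rasch) probability of a correct response at ability theta,
    # for an item of difficulty b.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    # 1PL item information is p(1 - p).
    p = p_correct(theta, b)
    return p * (1.0 - p)

def tailored_test(pool, respond, n_items=10):
    # pool: dict mapping item id -> difficulty; respond(item_id) -> bool.
    # Repeatedly give the most informative remaining item at the current
    # ability estimate, then step the estimate up or down (a simple
    # stochastic-approximation update; one of several possible rules).
    theta, total_info = 0.0, 1.0
    remaining = dict(pool)
    for _ in range(min(n_items, len(remaining))):
        item = max(remaining, key=lambda i: item_information(theta, remaining[i]))
        step = 1.0 / math.sqrt(total_info)
        theta += step if respond(item) else -step
        total_info += item_information(theta, remaining.pop(item))
    return theta

# e.g. tailored_test({1: -1.0, 2: 0.0, 3: 1.0}, lambda i: True, n_items=3)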
Hanson, Bradley A. – 1990
Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
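Of the three methods, the kernel method is the simplest to show concretely. Below is a minimal sketch of Gaussian-kernel smoothing of observed score frequencies; the bandwidth and example counts are illustrative, and the paper's actual kernel and bandwidth choices may differ.

import math

def kernel_smooth(freqs, h=1.5):
    # Smooth observed score frequencies (scores 0..len(freqs)-1) with a
    # Gaussian kernel of bandwidth h, then renormalize over the discrete
    # score points so the result is a proper distribution.
    n = sum(freqs)
    scores = range(len(freqs))
    out = []
    for x in scores:
        density = sum(
            f * math.exp(-0.5 * ((x - s) / h) ** 2)
            for s, f in zip(scores, freqs)
        ) / (n * h * math.sqrt(2.0 * math.pi))
        out.append(density)
    total = sum(out)
    return [v / total for v in out]

observed = [2, 5, 9, 14, 20, 17, 11, 6, 3, 1]  # OBFs on a 0-9 score scale
print(kernel_smooth(observed))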
Kingsbury, G. Gage – 1985
A procedure that uses response function discrepancies (RFD) to assess content-area and total-test dimensionality was studied. Three different versions of the RFD procedure were compared to Bejar's principal axis content-area procedure and Indow and Samejima's exploratory factor analytic technique. The procedures were compared in terms of the…
Descriptors: Achievement Tests, Comparative Analysis, Elementary Education, Estimation (Mathematics)
Peer reviewed
Miller, Timothy R.; Hirsch, Thomas M. – Applied Measurement in Education, 1992
A procedure for interpreting multiple-discrimination indices from a multidimensional item-response theory analysis is described and demonstrated with responses of 1,635 high school students to a multiple-choice test. The procedure consists of converting discrimination parameter estimates to direction cosines and analyzing the angular distances…
Descriptors: Ability, Cluster Analysis, Comparative Analysis, Estimation (Mathematics)
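The conversion step named in this abstract is standard in multidimensional IRT: an item's vector of discrimination estimates (a_{i1}, …, a_{im}) is normalized to unit length, giving its direction cosines (usual definition; subscripts are mine):

    \cos \alpha_{ik} = \frac{a_{ik}}{\sqrt{\sum_{j=1}^{m} a_{ij}^{2}}}, \qquad k = 1, \dots, m,

so the angle between two items' directions of best measurement follows from the inner product of their cosine vectors, which is what the analysis of angular distances operates on.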
Sherman, Susan W. – 1976
The multiple-choice science exercises used by the National Assessment of Educational Progress include an "I Don't Know" (IDK) alternative to more accurately estimate the knowledge of groups of respondents. Group percentages of IDK responses were examined and compared with correct responses to see whether the IDK option introduces bias. Variance common…
Descriptors: Age Differences, Comparative Analysis, Groups, Guessing (Tests)
Weiss, David J. – 1976
Three and one-half years of research on computerized ability testing are summarized. The original objectives of the research were: (1) to develop and implement the stratified computer-based ability test; (2) to compare, on psychometric criteria, the various approaches to computer-based ability testing, including the stratified computerized test,…
Descriptors: Adaptive Testing, Bayesian Statistics, Branching, Comparative Analysis