Showing all 10 results
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
A formal framework is presented for determining which of the distractors of multiple-choice test items has a small probability of being chosen by a typical examinee. The framework is based on a procedure similar to an indifference zone formulation of a ranking and selection problem. (Author/BW)
Descriptors: Mathematical Models, Multiple Choice Tests, Probability, Test Items
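For context, the "indifference zone" language comes from ranking-and-selection theory. In the generic formulation (a sketch only; Wilcox's exact conditions may differ), a selection rule must satisfy

\Pr(\text{correct selection}) \ge P^{*} \quad \text{whenever} \quad p_{[k]} - p_{[k-1]} \ge \delta^{*},

where p_{[1]} \le \cdots \le p_{[k]} are the ordered distractor-selection probabilities and \delta^{*} > 0 defines the indifference zone within which no guarantee is required.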
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1979
Wilcox has described three probability models which characterize a single test item in terms of a population of examinees (ED 156 718). This note indicates that similar models can be derived which characterize a single examinee in terms of an item domain. A numerical illustration is given. (Author/JKS)
Descriptors: Achievement Tests, Item Analysis, Mathematical Models, Probability
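The simplest model of this kind, sketched here under the standard strong true-score setup (not necessarily one of the three models cited), takes an examinee with domain score \zeta answering n items sampled from the domain, so that the number-correct score X is binomial:

\Pr(X = x \mid \zeta) = \binom{n}{x} \zeta^{x} (1 - \zeta)^{n-x}, \qquad x = 0, 1, \ldots, n.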
Peer reviewed
Graham, James M. – Educational and Psychological Measurement, 2006
Coefficient alpha, the most commonly used estimate of internal consistency, is often considered a lower bound estimate of reliability, though the extent of its underestimation is not typically known. Many researchers are unaware that coefficient alpha is based on the essentially tau-equivalent measurement model. It is the violation of the…
Descriptors: Models, Test Theory, Reliability, Structural Equation Models
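For reference, coefficient alpha for a k-item test with item variances \sigma_{i}^{2} and total-score variance \sigma_{X}^{2} is

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_{i}^{2}}{\sigma_{X}^{2}} \right).

Alpha equals the reliability only when the items are essentially tau-equivalent (equal true-score loadings); when that assumption is violated, alpha underestimates reliability, which is the article's focus.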
Peer reviewed
Reuterberg, Sven-Eric; Gustafsson, Jan-Eric – Educational and Psychological Measurement, 1992
The use of confirmatory factor analysis by the LISREL program is demonstrated as an assumption-testing method when computing reliability coefficients under different model assumptions. Results indicate that reliability estimates are robust against departure from the assumption of parallelism of test items. (SLD)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Mathematical Models, Robustness (Statistics)
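A common confirmatory-factor-analysis reliability coefficient of the kind such LISREL analyses yield (a sketch of the congeneric case; the article's exact estimators may differ) is built from the loadings \lambda_{i} and error variances \theta_{i} of a one-factor model:

\rho = \frac{\left( \sum_{i} \lambda_{i} \right)^{2}}{\left( \sum_{i} \lambda_{i} \right)^{2} + \sum_{i} \theta_{i}}.

Parallel items constrain all \lambda_{i} and all \theta_{i} to be equal; essential tau-equivalence constrains only the \lambda_{i}.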
Peer reviewed
Feldt, Leonard S. – Educational and Psychological Measurement, 1984
The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR 20 as the reliability estimate. (Author/BW)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas
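For a k-item test with item difficulties p_{i} (q_{i} = 1 - p_{i}), total-score mean \bar{X}, and total-score variance \sigma_{X}^{2}, the two estimates are

\text{KR-20: } r = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} p_{i} q_{i}}{\sigma_{X}^{2}} \right), \qquad \text{KR-21: } r = \frac{k}{k-1} \left( 1 - \frac{\bar{X}(k - \bar{X})}{k \sigma_{X}^{2}} \right).

KR-21 uses only the test mean and variance, in effect treating variation in item difficulty as error, which is consistent with the binomial-model account above.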
Peer reviewed
Smith, Richard M. – Educational and Psychological Measurement, 1991
This study reports the results of an investigation, based on simulated data, of the distributional properties of the item fit statistics commonly used in Rasch model calibration programs as indices of how well responses to individual items fit the measurement model. (SLD)
Descriptors: Computer Simulation, Equations (Mathematics), Goodness of Fit, Item Response Theory
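The statistics in question are typically the infit and outfit mean squares. With observed response x_{ni}, model expectation E_{ni}, and model variance W_{ni} for person n on item i under the Rasch model,

z_{ni} = \frac{x_{ni} - E_{ni}}{\sqrt{W_{ni}}}, \qquad \text{Outfit}_{i} = \frac{1}{N} \sum_{n=1}^{N} z_{ni}^{2}, \qquad \text{Infit}_{i} = \frac{\sum_{n} W_{ni} z_{ni}^{2}}{\sum_{n} W_{ni}},

both of which have expectation near 1 when responses fit the model.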
Peer reviewed
Albanese, Mark A.; Forsyth, Robert A. – Educational and Psychological Measurement, 1984
The purpose of this study was to compare the relative robustness of the one-, two-, and modified two-parameter latent trait logistic models for the Iowa Tests of Educational Development. Results suggest that the modified two-parameter model may provide the best representation of the data. (Author/BW)
Descriptors: Achievement Tests, Comparative Analysis, Goodness of Fit, Item Analysis
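For reference, the two-parameter logistic model gives the probability of a correct response to item i as

P_{i}(\theta) = \frac{1}{1 + \exp[-D a_{i}(\theta - b_{i})]},

with discrimination a_{i}, difficulty b_{i}, and scaling constant D \approx 1.7; the one-parameter model fixes a common discrimination for all items. The "modified two-parameter" variant compared in the study is not specified in the abstract.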
Peer reviewed
Huck, Schuyler W.; And Others – Educational and Psychological Measurement, 1981
Believing that examinee-by-item interaction should be conceptualized as true score variability rather than as a result of errors of measurement, Lu proposed a modification of Hoyt's analysis of variance reliability procedure. Via a computer simulation study, it is shown that Lu's approach does not separate interaction from error. (Author/RL)
Descriptors: Analysis of Variance, Comparative Analysis, Computer Programs, Difficulty Level
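Hoyt's procedure estimates reliability from a persons-by-items analysis of variance as

r_{xx} = \frac{MS_{\text{persons}} - MS_{\text{residual}}}{MS_{\text{persons}}},

where MS_{\text{residual}} confounds the person-by-item interaction with random error; whether that interaction belongs in true-score or error variance is precisely the question Lu's modification raised.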
Peer reviewed
Dodd, Barbara G.; And Others – Educational and Psychological Measurement, 1993
Effects of the following variables on performance of computerized adaptive testing (CAT) procedures for the partial credit model (PCM) were studied: (1) stopping rule for terminating CAT; (2) item pool size; and (3) distribution of item difficulties. Implications of findings for CAT systems based on the PCM are discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Difficulty Level
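Under the partial credit model, the probability that person n with ability \theta_{n} scores x on the m_{i} + 1 ordered categories of item i is

P(X_{ni} = x) = \frac{\exp \sum_{j=0}^{x} (\theta_{n} - \delta_{ij})}{\sum_{h=0}^{m_{i}} \exp \sum_{j=0}^{h} (\theta_{n} - \delta_{ij})}, \qquad \sum_{j=0}^{0} (\theta_{n} - \delta_{ij}) \equiv 0,

where the \delta_{ij} are item step difficulties; the CAT procedures studied vary the stopping rule, pool size, and difficulty distribution around this model.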
Peer reviewed
Fowler, Robert L.; Clingman, Joy M. – Educational and Psychological Measurement, 1992
Monte Carlo techniques are used to examine the power of the "B" statistic of R. L. Brennan (1972) to detect negatively discriminating items drawn from a variety of nonnormal population distributions. A simplified procedure is offered for conducting an item-discrimination analysis on typical classroom objective tests. (SLD)
Descriptors: Classroom Techniques, Elementary Secondary Education, Equations (Mathematics), Item Analysis
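Brennan's B index, as usually defined, is the difference in proportion correct between an upper criterion group of size n_{U} and a lower group of size n_{L}:

B = \frac{U}{n_{U}} - \frac{L}{n_{L}},

where U and L count correct responses in each group; a negative B flags the negatively discriminating items that the power study targets.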