Showing all 10 results
Xu, Xueli; Jia, Yue – Educational Testing Service, 2011
Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…
Descriptors: Test Items, Statistical Analysis, Computation, Item Response Theory
Guo, Hongwen; Sinharay, Sandip – Educational Testing Service, 2011
Nonparametric, or kernel, estimation of the item response curve (IRC) is of both theoretical and operational concern. This estimation, often used in item analysis in testing programs, is biased when observed scores are used as the regressor, because observed scores are contaminated by measurement error. In this study, we investigate…
Descriptors: Error of Measurement, Nonparametric Statistics, Item Response Theory, Computation
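The kernel estimation the abstract refers to is, in its basic form, a Nadaraya-Watson smoother of item responses against a score variable. The sketch below is illustrative only (toy data, illustrative function name), not the authors' implementation:

```python
import numpy as np

def kernel_irc(scores, responses, grid, bandwidth=2.0):
    """Nadaraya-Watson estimate of an item response curve: smoothed
    probability of a correct response at each score on the grid."""
    # Gaussian kernel weights: one row per grid point, one column per examinee
    w = np.exp(-0.5 * ((grid[:, None] - scores[None, :]) / bandwidth) ** 2)
    return w @ responses / w.sum(axis=1)

# Toy data: higher total scores imply a higher chance of a correct response
rng = np.random.default_rng(0)
scores = rng.integers(0, 31, size=500).astype(float)
p_true = 1 / (1 + np.exp(-(scores - 15) / 4))
responses = (rng.random(500) < p_true).astype(float)

grid = np.linspace(0, 30, 7)
irc = kernel_irc(scores, responses, grid)
# The estimated curve rises with score, roughly tracking the true logistic
```

Using error-prone observed scores as the `scores` regressor is exactly what induces the bias the abstract describes; the curve is smoothed over a noisy x-axis.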
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan – Educational Testing Service, 2011
Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Various weighting methods have been proposed (Korn & Graubard, 2003; Pfeffermann, Skinner,…
Descriptors: Computation, Statistical Bias, Sampling, Statistical Analysis
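The basic idea behind the weighting methods the abstract cites can be shown with an inverse-probability (Horvitz-Thompson style) weighted mean; the population, selection probabilities, and estimator choice here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Population with two strata: group 0 (mean 50) and group 1 (mean 70)
y = np.concatenate([rng.normal(50, 5, 9000), rng.normal(70, 5, 1000)])
# Unequal selection: the high-scoring group is sampled ten times as often
p_select = np.concatenate([np.full(9000, 0.01), np.full(1000, 0.10)])

sampled = rng.random(y.size) < p_select
y_s, p_s = y[sampled], p_select[sampled]

naive = y_s.mean()                               # biased toward group 1
weighted = np.sum(y_s / p_s) / np.sum(1 / p_s)   # sampling-weighted (Hájek) mean

pop_mean = y.mean()
```

Weighting each sampled unit by the inverse of its selection probability undoes the over-representation, which is the bias-reduction mechanism the abstract describes.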
Hsieh, Chueh-an; Xu, Xueli; von Davier, Matthias – Educational Testing Service, 2010
This paper presents an application of a jackknifing approach to variance estimation of ability inferences for groups of students, using a multidimensional discrete model for item response data. The data utilized to demonstrate the approach come from the National Assessment of Educational Progress (NAEP). In contrast to the operational approach…
Descriptors: National Competency Tests, Reading Tests, Grade 4, Computation
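The jackknife variance estimation the abstract applies to group-level ability inferences reduces, in its simplest delete-one form, to the sketch below (toy data; the NAEP application is far more involved):

```python
import numpy as np

def jackknife_variance(x, stat=np.mean):
    """Delete-one jackknife variance estimate for a statistic."""
    n = len(x)
    # Recompute the statistic with each observation left out in turn
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

rng = np.random.default_rng(2)
x = rng.normal(250, 30, size=200)   # e.g. scale scores for one group

jk_var = jackknife_variance(x)
analytic = x.var(ddof=1) / len(x)   # textbook variance of the mean
# For the sample mean, the jackknife reproduces the analytic formula exactly
```

The value of the jackknife is that the same recipe works for statistics with no convenient analytic variance formula, which is the situation for model-based group ability estimates.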
Rose, Norman; von Davier, Matthias; Xu, Xueli – Educational Testing Service, 2010
Large-scale educational surveys are low-stakes assessments of educational outcomes conducted using nationally representative samples. In these surveys, students do not receive individual scores, and the outcome of the assessment is inconsequential for respondents. The low-stakes nature of these surveys, as well as variations in average performance…
Descriptors: Item Response Theory, Educational Assessment, Data Analysis, Case Studies
Rijmen, Frank – Educational Testing Service, 2009
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
Descriptors: Maximum Likelihood Statistics, Item Response Theory, Computation, Models
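The integral that hampers multidimensional estimation is typically handled per dimension by Gauss-Hermite quadrature; when the hypothesized (in)dependence structure lets the integral factor, a D-dimensional grid of Q**D nodes collapses toward D separate one-dimensional quadratures of Q nodes each. A one-dimensional building block, for an illustrative 1PL item (not the paper's model), looks like this:

```python
import numpy as np

# Probabilists' Gauss-Hermite nodes/weights: weight function exp(-x**2 / 2),
# i.e. an unnormalized standard normal ability distribution
nodes, weights = np.polynomial.hermite_e.hermegauss(21)

def marginal_p(b):
    """P(correct) for a 1PL item with difficulty b, marginalized over
    a standard normal ability distribution via quadrature."""
    p = 1 / (1 + np.exp(-(nodes - b)))
    return np.sum(weights * p) / np.sum(weights)

# By symmetry, an item of difficulty 0 has marginal probability 0.5,
# and harder items (larger b) have lower marginal probabilities
```

In a full D-dimensional model without exploitable structure, this sum would have to be taken over every combination of nodes across dimensions, which is exactly the cost the abstract says the conditional independence relations can reduce.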
Xu, Xueli; von Davier, Matthias – Educational Testing Service, 2010
One of the major objectives of large-scale educational surveys is reporting trends in academic achievement. For this purpose, a substantial number of items are carried from one assessment cycle to the next. The linking process that places academic abilities measured in different assessments on a common scale is usually based on a concurrent…
Descriptors: Case Studies, Trend Analysis, Computation, Educational Assessment
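As a point of reference for the linking problem, here is mean-sigma linking, a standard IRT linking method (the paper itself concerns concurrent calibration, not this method; the difficulty values are invented). Common-item parameter estimates from two cycles determine a linear transformation onto the old scale:

```python
import numpy as np

b_old = np.array([-1.2, -0.4, 0.3, 0.9, 1.6])   # cycle-1 difficulty estimates
b_new = np.array([-0.7, 0.1, 0.8, 1.4, 2.1])    # same items, cycle-2 scale

A = b_old.std() / b_new.std()        # slope of the linking transformation
B = b_old.mean() - A * b_new.mean()  # intercept
b_linked = A * b_new + B             # cycle-2 estimates on the cycle-1 scale
```

In this toy example the second cycle's scale is a pure shift of the first, so the recovered transformation has slope 1 and the linked difficulties match the cycle-1 estimates.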
Livingston, Samuel A.; Lewis, Charles – Educational Testing Service, 2009
This report proposes an empirical Bayes approach to the problem of equating scores on test forms taken by very small numbers of test takers. The equated score is estimated separately at each score point, making it unnecessary to model either the score distribution or the equating transformation. Prior information comes from equatings of other…
Descriptors: Test Length, Equated Scores, Bayesian Statistics, Sample Size
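The empirical Bayes idea at one score point is precision-weighted shrinkage of a noisy small-sample estimate toward prior information from other equatings. The numbers and function name below are illustrative, not the report's actual model:

```python
import numpy as np

def eb_estimate(direct, var_direct, prior_mean, var_prior):
    """Precision-weighted combination of a direct estimate and a prior."""
    w = var_prior / (var_prior + var_direct)   # weight on the direct estimate
    return w * direct + (1 - w) * prior_mean

# Small-sample equating at one score point: noisy direct estimate of 62
# with variance 9, prior mean of 58 (from other forms) with variance 3
est = eb_estimate(direct=62.0, var_direct=9.0, prior_mean=58.0, var_prior=3.0)
# est lies between the two, pulled toward the more precise prior
```

Because the combination is done score point by score point, no model for the score distribution or the equating transformation is needed, which is the feature the abstract highlights.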
von Davier, Matthias; Sinharay, Sandip – Educational Testing Service, 2009
This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…
Descriptors: Item Response Theory, Regression (Statistics), Models, Methods
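The Metropolis-Hastings sampling step inside such a stochastic EM scheme can be sketched for a single examinee under an illustrative 1PL model with a standard normal prior (toy settings throughout; not the paper's latent regression model):

```python
import numpy as np

rng = np.random.default_rng(3)
b = np.array([-1.0, 0.0, 1.0])   # item difficulties
x = np.array([1, 1, 0])          # one examinee's item responses

def log_post(theta):
    """Unnormalized log posterior of ability: 1PL likelihood + N(0,1) prior."""
    p = 1 / (1 + np.exp(-(theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)) - 0.5 * theta**2

theta, draws = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 1.0)   # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                    # accept; otherwise keep current value
    draws.append(theta)

posterior_mean = np.mean(draws[1000:])  # discard burn-in draws
```

In the stochastic approximation EM setting, draws like these stand in for the intractable expectation over the latent abilities at each iteration.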
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect