Publication Type
  Reports - Evaluative (63)
  Journal Articles (48)
  Speeches/Meeting Papers (9)
Audience
  Practitioners (1)
Location
  Germany (1)
Laws, Policies, & Programs
  No Child Left Behind Act 2001 (2)
Showing 1 to 15 of 63 results
Peer reviewed
Hans-Peter Piepho; Johannes Forkman; Waqas Ahmed Malik – Research Synthesis Methods, 2024
Checking for possible inconsistency between direct and indirect evidence is an important task in network meta-analysis. Recently, an evidence-splitting (ES) model has been proposed that allows direct and indirect evidence in a network to be separated and hence inconsistency to be assessed. A salient feature of this model is that the variance for…
Descriptors: Maximum Likelihood Statistics, Evidence, Networks, Meta Analysis
Peer reviewed
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
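As a toy illustration of the distinction the abstract draws (not Monroe's method), one can compare the analytic expected information for the latent trait with an observed information obtained by numerically differentiating the log-likelihood. For the 2PL model the two happen to coincide for theta, so the check should agree up to finite-difference error; all item parameters and the response pattern below are made up.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def expected_info(theta, a, b):
    """Analytic Fisher (expected) test information for theta."""
    p = p2pl(theta, a, b)
    return np.sum(a**2 * p * (1 - p))

def observed_info(theta, a, b, u, h=1e-5):
    """Observed information: negative numerical second derivative
    of the log-likelihood of response pattern u at theta."""
    def loglik(t):
        p = p2pl(t, a, b)
        return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    return -(loglik(theta + h) - 2 * loglik(theta) + loglik(theta - h)) / h**2

a = np.array([1.2, 0.8, 1.5])   # hypothetical discriminations
b = np.array([-0.5, 0.0, 1.0])  # hypothetical difficulties
u = np.array([1, 0, 1])         # an arbitrary response pattern
theta = 0.3
# For the 2PL, observed and expected information for theta coincide,
# so these two numbers should agree up to finite-difference error.
print(expected_info(theta, a, b), observed_info(theta, a, b, u))
```

For the 3PL model (which Monroe also considers) the two quantities genuinely differ, which is where the choice between them matters.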
Peer reviewed
Zhang, Zhiyong; Wang, Lijuan – Psychometrika, 2013
Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including listwise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Simulation, Measurement Techniques
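To make the simplest of the four approaches concrete (a minimal sketch, not the authors' implementation): the indirect effect a*b can be estimated from two OLS regressions after listwise deletion, i.e. after dropping every case with any missing value. The simulated data, effect sizes, and MCAR missingness rate below are illustrative assumptions.

```python
import numpy as np

def indirect_effect_listwise(x, m, y):
    """Estimate the indirect effect a*b from two OLS regressions,
    using listwise deletion (drop any case with a missing value)."""
    keep = ~(np.isnan(x) | np.isnan(m) | np.isnan(y))
    x, m, y = x[keep], m[keep], y[keep]
    a_hat = np.polyfit(x, m, 1)[0]            # a path: M on X
    X = np.column_stack([m, x, np.ones_like(x)])
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]  # b path: Y on M, X
    return a_hat * b_hat

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # true a = 0.5
y = 0.7 * m + 0.2 * x + rng.normal(size=n)   # true b = 0.7, indirect = 0.35
m[rng.random(n) < 0.2] = np.nan              # 20% MCAR missingness on M
est = indirect_effect_listwise(x, m, y)
print(est)  # roughly 0.35 under MCAR
```

Under MCAR, listwise deletion remains consistent (it only wastes data); the paper's point is that under less benign missingness mechanisms the approaches diverge.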
Peer reviewed
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao – Educational and Psychological Measurement, 2013
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of the Supplemented EM algorithm for…
Descriptors: Item Response Theory, Computation, Matrices, Statistical Inference
Peer reviewed
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
Peer reviewed
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
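As a rough sketch of the MCMC side of such a comparison (a simplified stand-in, not the Gibbs scheme the authors study): a random-walk Metropolis sampler can draw from the posterior of a single examinee's trait under a Rasch likelihood and a standard normal prior. The item difficulties, response pattern, and tuning constants below are assumptions for illustration.

```python
import numpy as np

def rasch_loglik(theta, b, u):
    """Rasch log-likelihood of response pattern u given difficulties b."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

def metropolis_theta(b, u, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for theta under a N(0,1) prior."""
    rng = np.random.default_rng(seed)
    theta, draws = 0.0, []
    lp = rasch_loglik(theta, b, u) - 0.5 * theta**2
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = rasch_loglik(prop, b, u) - 0.5 * prop**2
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws[n_iter // 2:])          # discard burn-in

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # hypothetical difficulties
u = np.array([1, 1, 1, 0, 1])              # 4 of 5 items correct
draws = metropolis_theta(b, u)
print(draws.mean())                         # posterior mean of theta
```

MML, by contrast, integrates theta out of the likelihood when calibrating items; the paper compares how well each route recovers graded response model parameters.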
Peer reviewed
Woods, Carol M.; Lin, Nan – Applied Psychological Measurement, 2009
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
Descriptors: Item Response Theory, Personality Measures, Computation, Simulation
Peer reviewed
Suh, Youngsuk; Bolt, Daniel M. – Psychometrika, 2010
Nested logit item response models for multiple-choice data are presented. Relative to previous models, the new models are suggested to provide a better approximation to multiple-choice items where the application of a solution strategy precedes consideration of response options. In practice, the models also accommodate collapsibility across all…
Descriptors: Computation, Simulation, Psychometrics, Models
Peer reviewed
Furgol, Katherine E.; Ho, Andrew D.; Zimmerman, Dale L. – Educational and Psychological Measurement, 2010
Under the No Child Left Behind Act, large-scale test score trend analyses are widespread. These analyses often gloss over interesting changes in test score distributions and involve unrealistic assumptions. Further complications arise from analyses of unanchored, censored assessment data, or proportions of students lying within performance levels…
Descriptors: Trend Analysis, Sample Size, Federal Legislation, Simulation
Peer reviewed
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
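The stopping logic can be sketched for the simplest case (a number-correct mastery test, not Finkelman's full procedure): halt with a decision when passing is already guaranteed or impossible, and curtail stochastically when, under an assumed per-item success probability, the chance of the final decision differing is negligible. The test length, cut score, and probability model below are illustrative assumptions.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def curtail_decision(s, j, K, cut, p, eps=0.05):
    """Stochastic curtailment for a number-correct mastery test.

    s: correct so far; j: items administered; K: total test length;
    cut: passing score; p: assumed success probability per remaining
    item (a modelling assumption). Returns 'pass', 'fail', or 'continue'.
    """
    need = cut - s                       # correct answers still required
    if need <= 0:
        return "pass"                    # passing is already guaranteed
    remaining = K - j
    if need > remaining:
        return "fail"                    # passing is already impossible
    p_pass = binom_tail(need, remaining, p)
    if p_pass < eps:
        return "fail"                    # passing is nearly impossible
    if p_pass > 1 - eps:
        return "pass"                    # failing is nearly impossible
    return "continue"

# 40-item test, cut score 30; 10 correct after 25 items: 20 needed of 15 left.
print(curtail_decision(10, 25, 40, 30, p=0.6))  # prints 'fail'
```

The first two branches are deterministic curtailment; the binomial tail check is the stochastic part, which trades a small risk of a changed decision for a shorter test.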
Peer reviewed
Meyer, J. Patrick; Setzer, J. Carl – Journal of Educational Measurement, 2009
Recent changes to federal guidelines for the collection of data on race and ethnicity allow respondents to select multiple race categories. Redefining race subgroups in this manner poses problems for research spanning both sets of definitions. NAEP long-term trends have used the single-race subgroup definitions for over thirty years. Little is…
Descriptors: Elementary Secondary Education, Federal Legislation, Simulation, Maximum Likelihood Statistics
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2009
Differential item functioning (DIF) occurs when items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Methods for testing DIF require matching members of different groups on an estimate of the construct. Preferably, the estimate is…
Descriptors: Test Results, Testing, Item Response Theory, Test Bias
Peer reviewed
PDF available on ERIC
Puma, Michael J.; Olsen, Robert B.; Bell, Stephen H.; Price, Cristofer – National Center for Education Evaluation and Regional Assistance, 2009
This NCEE Technical Methods report examines how to address the problem of missing data in the analysis of data in Randomized Controlled Trials (RCTs) of educational interventions, with a particular focus on the common educational situation in which groups of students such as entire classrooms or schools are randomized. Missing outcome data are a…
Descriptors: Educational Research, Research Design, Research Methodology, Control Groups
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2008
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…
Descriptors: Test Length, Computation, Item Response Theory, Maximum Likelihood Statistics
Peer reviewed
Bernaards, Coen A.; Sijtsma, Klaas – Multivariate Behavioral Research, 2000
Using simulation, studied the influence of each of 12 imputation methods and 2 methods using the EM algorithm on the results of maximum likelihood factor analysis as compared with results from the complete data factor analysis (no missing scores). Discusses why EM methods recovered complete data factor loadings better than imputation methods. (SLD)
Descriptors: Factor Analysis, Maximum Likelihood Statistics, Questionnaires, Simulation
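To fix ideas on what "imputation method" means in the simplest case (one of the naive baselines such studies compare against EM, not the authors' full design): item mean imputation replaces each missing score with its column mean before the factor analysis is run. The small data matrix below is made up.

```python
import numpy as np

def mean_impute(X):
    """Replace each missing entry with its item (column) mean --
    a simple baseline imputation strategy."""
    X = np.asarray(X, dtype=float).copy()
    col_means = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_means, idx[1])
    return X

X = np.array([[1.0,    2.0],
              [np.nan, 4.0],
              [3.0,    np.nan]])
print(mean_impute(X))
# missing entries become the column means: 2.0 (item 1) and 3.0 (item 2)
```

Mean imputation shrinks item variances and attenuates covariances, which is one intuition for why the EM-based approaches in the study recovered the complete-data factor loadings better.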