Showing 61 to 75 of 404 results
Peer reviewed
Direct link
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
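A minimal sketch of the regression discontinuity idea behind the framework Koziol et al. describe, assuming a hypothetical screening cutoff that assigns the accommodated form and simulated item scores (all variable names and values are illustrative, not taken from the article):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical setup: students below a screening cutoff receive the accommodated form.
n = 2000
screen = rng.normal(0, 1, n)          # running variable (screening score)
cutoff = -0.5
accommodated = (screen < cutoff).astype(int)

# Simulated item score with a small discontinuity at the cutoff (illustrative DIF effect).
item = 0.5 * screen + 0.3 * accommodated + rng.normal(0, 1, n)

# Local linear regression within a bandwidth around the cutoff:
# item ~ treatment + centered running variable + their interaction.
bw = 0.75
near = np.abs(screen - cutoff) < bw
X = np.column_stack([accommodated, screen - cutoff, accommodated * (screen - cutoff)])
fit = sm.OLS(item[near], sm.add_constant(X[near])).fit()
print(fit.params[1])   # estimated discontinuity attributable to the accommodated form
```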
Peer reviewed
PDF on ERIC: Download full text
Saatcioglu, Fatima Munevver; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2022
This study aims to examine the effects of mixture item response theory (IRT) models on item parameter estimation and classification accuracy under different conditions. The manipulated variables of the simulation study are set as mixture IRT models (Rasch, 2PL, 3PL); sample size (600, 1000); the number of items (10, 30); the number of latent…
Descriptors: Accuracy, Classification, Item Response Theory, Programming Languages
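A small sketch of one simulation condition of the kind Saatcioglu and Atar manipulate, generating responses from a hypothetical two-class mixture 2PL model (class proportions and parameter ranges are illustrative assumptions, not the study's design):

```python
import numpy as np

rng = np.random.default_rng(1)

n_persons, n_items = 600, 10
# Two latent classes with different item parameters (values are illustrative).
class_prop = [0.6, 0.4]
a = rng.uniform(0.8, 2.0, (2, n_items))     # discriminations per class
b = rng.normal(0, 1, (2, n_items))          # difficulties per class

cls = rng.choice(2, size=n_persons, p=class_prop)
theta = rng.normal(0, 1, n_persons)

# 2PL response probabilities for each person's class, then Bernoulli draws.
p = 1 / (1 + np.exp(-a[cls] * (theta[:, None] - b[cls])))
responses = rng.binomial(1, p)
print(responses.shape)   # (600, 10) item-response matrix for one mixture-2PL condition
```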
Peer reviewed
Direct link
Goodman, Joshua T.; Dallas, Andrew D.; Fan, Fen – Applied Measurement in Education, 2020
Recent research has suggested that re-setting the standard for each administration of a small-sample examination, in addition to its high cost, does not adequately maintain similar performance expectations year after year. Small-sample equating methods have shown promise with samples between 20 and 30. For groups that have fewer than 20 students,…
Descriptors: Equated Scores, Sample Size, Sampling, Weighted Scores
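For context, a sketch of one common small-sample option, mean equating, which estimates only a mean shift between forms; it is not necessarily among the methods Goodman et al. compare, and the scores below are hypothetical:

```python
import numpy as np

def mean_equate(scores_x, scores_y):
    """Map scores on form X to the scale of form Y by aligning the means.

    A rough small-sample option: with very few examinees, only a mean shift
    is estimated, which is less noisy than full linear or equipercentile
    equating.
    """
    shift = np.mean(scores_y) - np.mean(scores_x)
    return np.asarray(scores_x) + shift

# Hypothetical tiny administrations (fewer than 20 examinees per form).
form_x = np.array([12, 15, 18, 14, 16, 13, 17, 15, 14, 16])
form_y = np.array([14, 17, 19, 15, 18, 16, 18, 17])
print(mean_equate(form_x, form_y)[:3])
```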
Beth A. Perkins – ProQuest LLC, 2021
In educational contexts, students often self-select into specific interventions (e.g., courses, majors, extracurricular programming). When students self-select into an intervention, systematic group differences may impact the validity of inferences made regarding the effect of the intervention. Propensity score methods are commonly used to reduce…
Descriptors: Probability, Causal Models, Evaluation Methods, Control Groups
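A brief sketch of the propensity score workflow the abstract refers to, using inverse-probability weighting on simulated self-selection data (the covariates, effect size, and weighting choice are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical observational data: covariates, self-selected intervention, outcome.
n = 1000
X = rng.normal(0, 1, (n, 3))
treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
y = 2.0 * treat + X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 1, n)

# Step 1: estimate propensity scores from the covariates.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: inverse-probability weights to balance the self-selected groups.
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

# Step 3: weighted difference in means as the treatment-effect estimate.
ate = (np.average(y[treat == 1], weights=w[treat == 1])
       - np.average(y[treat == 0], weights=w[treat == 0]))
print(round(ate, 2))   # should recover roughly the simulated effect of 2.0
```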
Peer reviewed
Direct link
Li, Xinru; Dusseldorp, Elise; Meulman, Jacqueline J. – Research Synthesis Methods, 2019
In meta-analytic studies, there are often multiple moderators available (e.g., study characteristics). In such cases, traditional meta-analysis methods often lack sufficient power to investigate interaction effects between moderators, especially high-order interactions. To overcome this problem, meta-CART was proposed: an approach that applies…
Descriptors: Correlation, Meta Analysis, Identification, Testing
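A rough sketch of the core idea behind meta-CART, fitting a precision-weighted regression tree to study moderators so that moderator interactions emerge from the splits; this is not the published meta-CART algorithm, and the data are simulated:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

# Hypothetical meta-analytic data set: per-study effect sizes, variances, moderators.
k = 120
mod = rng.binomial(1, 0.5, (k, 3))                      # three binary moderators
true_effect = 0.2 + 0.4 * (mod[:, 0] * mod[:, 1])       # interaction between moderators 0 and 1
var = rng.uniform(0.01, 0.05, k)
effect = true_effect + rng.normal(0, np.sqrt(var))

# Weight each study by its precision (inverse variance) and let the tree
# search for moderator interactions by recursive partitioning.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=20)
tree.fit(mod, effect, sample_weight=1 / var)
print(tree.feature_importances_)   # moderators 0 and 1 should dominate
```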
Peer reviewed
Direct link
Mundorf, Abigail M. D.; Lazarus, Linh T. T.; Uitvlugt, Mitchell G.; Healey, M. Karl – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2021
The temporal contiguity effect (TCE) is the tendency for the recall of one event to cue recall of other events originally experienced nearby in time. Retrieved context theory proposes that the TCE results from fundamental properties of episodic memory: binding of events to a drifting context representation during encoding and the reinstatement of…
Descriptors: Incidental Learning, Correlation, Recall (Psychology), Cues
Peer reviewed
Direct link
Liang, Xinya; Kamata, Akihito; Li, Ji – Educational and Psychological Measurement, 2020
One important issue in Bayesian estimation is the determination of an effective informative prior. In hierarchical Bayes models, the uncertainty of hyperparameters in a prior can be further modeled via their own priors, namely, hyper priors. This study introduces a framework to construct hyper priors for both the mean and the variance…
Descriptors: Bayesian Statistics, Randomized Controlled Trials, Effect Size, Sampling
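A prior-predictive sketch of the hierarchical setup described above, placing hyper priors on both the mean and the variance of an effect-size prior (the specific normal and inverse-gamma choices are illustrative, not the article's framework):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hyper priors (illustrative choices): the prior mean gets a normal hyper prior,
# the prior variance gets an inverse-gamma hyper prior.
n_draws = 10000
mu = rng.normal(0.3, 0.2, n_draws)                              # hyper prior on the prior mean
tau2 = 1 / rng.gamma(shape=3.0, scale=1 / 2.0, size=n_draws)    # inverse-gamma on the prior variance

# Prior-predictive draws of the effect size induced by the hyper priors.
delta = rng.normal(mu, np.sqrt(tau2))
print(delta.mean(), delta.std())   # shows how hyper-prior uncertainty widens the effective prior
```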
Peer reviewed
Direct link
Bradley, Alex; Quigley, Martyn – Studies in Higher Education, 2023
Mass participation in higher education has led to greater spending by governments and students, which has increased the focus on graduate outcomes. In England, the Office for Students (OfS) is planning to take regulatory action, using the Proceed metric, against universities and their courses which do not have 60% of students with positive…
Descriptors: Foreign Countries, Higher Education, Education Work Relationship, Outcomes of Education
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) are often from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth ability across attempts, leading to a complex scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Peer reviewed
PDF on ERIC: Download full text
Köse, Alper; Dogan, C. Deha – International Journal of Evaluation and Research in Education, 2019
The aim of this study was to examine the precision of item parameter estimation across different sample sizes and test lengths under the three-parameter logistic (3PL) item response theory (IRT) model, where the trait measured by a test was not normally distributed or had a skewed distribution. In the study, the number of categories (1-0), and item…
Descriptors: Statistical Bias, Item Response Theory, Simulation, Accuracy
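A short sketch of the kind of data-generating condition the study describes, simulating 3PL responses when the latent trait is skewed rather than normal (the skewness value and parameter ranges are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n_persons, n_items = 1000, 30
# Skewed latent trait instead of the usual standard normal (skewness is illustrative).
theta = stats.skewnorm.rvs(a=4, size=n_persons, random_state=5)
theta = (theta - theta.mean()) / theta.std()

# 3PL item parameters (illustrative ranges).
disc = rng.uniform(0.8, 2.0, n_items)
diff = rng.normal(0, 1, n_items)
guess = rng.uniform(0.1, 0.25, n_items)

# 3PL model: P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b))).
p = guess + (1 - guess) / (1 + np.exp(-disc * (theta[:, None] - diff)))
responses = rng.binomial(1, p)
print(responses.mean(axis=0)[:5])   # observed item p-values under the skewed trait
```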
Peer reviewed
Direct link
Zhang, Zhonghua; Zhao, Mingren – Journal of Educational Measurement, 2019
The present study evaluated the multiple imputation method, a procedure that is similar to the one suggested by Li and Lissitz (2004), and compared the performance of this method with that of the bootstrap method and the delta method in obtaining the standard errors for the estimates of the parameter scale transformation coefficients in item…
Descriptors: Item Response Theory, Error Patterns, Item Analysis, Simulation
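A sketch of the bootstrap comparison method mentioned in the abstract, applied to mean-sigma scale transformation coefficients for a hypothetical set of common-item difficulties (the mean-sigma choice and all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical common-item difficulty estimates on the old and new scales.
b_old = rng.normal(0.0, 1.0, 20)
b_new = 1.1 * b_old + 0.25 + rng.normal(0, 0.1, 20)   # true A = 1.1, B = 0.25 plus noise

def mean_sigma(b_from, b_to):
    """Mean-sigma scale transformation coefficients (theta_to = A * theta_from + B)."""
    A = np.std(b_to, ddof=1) / np.std(b_from, ddof=1)
    B = np.mean(b_to) - A * np.mean(b_from)
    return A, B

# Bootstrap over common items to get standard errors for A and B.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(b_old), len(b_old))
    boot.append(mean_sigma(b_old[idx], b_new[idx]))
boot = np.array(boot)
print(boot.std(axis=0, ddof=1))   # bootstrap SEs of the A and B coefficients
```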
Peer reviewed
Direct link
Doleman, Brett; Freeman, Suzanne C.; Lund, Jonathan N.; Williams, John P.; Sutton, Alex J. – Research Synthesis Methods, 2020
This study aimed to determine, for continuous outcomes dependent on baseline risk, whether funnel plot asymmetry may be due to statistical artefact rather than publication bias, and to evaluate a novel test to resolve this. First, we assessed publication bias in nine meta-analyses of postoperative analgesics (344 trials with 25 348…
Descriptors: Outcomes of Treatment, Risk, Publications, Bias
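For reference, a sketch of the classical Egger regression test for funnel plot asymmetry on simulated trial data; the article's novel test is not reproduced here, and the data are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Hypothetical meta-analysis: per-trial effect estimates and standard errors.
k = 50
se = rng.uniform(0.1, 0.5, k)
effect = 0.4 + rng.normal(0, se)          # no true asymmetry built into this simulation

# Classical Egger test: regress the standardized effect on precision;
# an intercept far from zero signals funnel plot asymmetry.
z = effect / se
precision = 1 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()
print(fit.params[0], fit.pvalues[0])      # intercept estimate and its p-value
```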
Peer reviewed
PDF on ERIC: Download full text
Lu, Ru; Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2021
Two families of analysis methods can be used for differential item functioning (DIF) analysis. One family is DIF analysis based on observed scores, such as the Mantel-Haenszel (MH) and the standardized proportion-correct metric for DIF procedures; the other is analysis based on latent ability, in which the statistic is a measure of departure from…
Descriptors: Robustness (Statistics), Weighted Scores, Test Items, Item Analysis
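A compact sketch of the observed-score Mantel-Haenszel DIF statistic referenced in the abstract, computed from per-stratum counts and reported on the ETS delta metric (the counts are hypothetical):

```python
import numpy as np

def mh_ddif(correct_ref, n_ref, correct_foc, n_foc):
    """Mantel-Haenszel common odds ratio and ETS D-DIF for one studied item.

    Inputs are per-score-stratum counts: number correct and group size for the
    reference and focal groups (arrays of equal length, one entry per stratum).
    """
    A = np.asarray(correct_ref, float)            # reference correct
    B = np.asarray(n_ref, float) - A              # reference incorrect
    C = np.asarray(correct_foc, float)            # focal correct
    D = np.asarray(n_foc, float) - C              # focal incorrect
    N = A + B + C + D
    alpha_mh = np.sum(A * D / N) / np.sum(B * C / N)
    return alpha_mh, -2.35 * np.log(alpha_mh)     # common odds ratio, ETS delta metric

# Hypothetical counts across five total-score strata.
alpha, d_dif = mh_ddif(correct_ref=[30, 45, 60, 70, 80], n_ref=[50, 60, 70, 75, 82],
                       correct_foc=[20, 35, 50, 62, 75], n_foc=[40, 50, 62, 70, 80])
print(round(alpha, 2), round(d_dif, 2))
```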
Fan Pan – ProQuest LLC, 2021
This dissertation informed researchers about the performance of different level-specific and target-specific model fit indices in the Multilevel Latent Growth Model (MLGM) using an unbalanced design and different trajectories. As the use of MLGMs is relatively new, this study helped further the field by informing researchers interested in using…
Descriptors: Goodness of Fit, Item Response Theory, Growth Models, Monte Carlo Methods
Peer reviewed
Direct link
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating results in accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
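A rough sketch of checking item parameter drift between two very small Rasch administrations, using a PROX-style log-odds approximation to the difficulties; this is an illustrative check, not the authors' procedure, and the 0.5-logit flag is an assumed cutoff:

```python
import numpy as np

def prox_difficulty(responses):
    """Rough Rasch item difficulties via centered log-odds of item p-values (PROX-style)."""
    p = np.clip(responses.mean(axis=0), 0.02, 0.98)   # guard against 0/1 proportions
    b = np.log((1 - p) / p)
    return b - b.mean()                                # center to fix the scale origin

rng = np.random.default_rng(8)
n_items = 20
true_b = rng.normal(0, 1, n_items)

def simulate(n_persons, b):
    theta = rng.normal(0, 1, n_persons)
    return rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - b))))

# Two small administrations of the same form (n = 25 each); flag large shifts as drift.
b1 = prox_difficulty(simulate(25, true_b))
b2 = prox_difficulty(simulate(25, true_b))
drift = np.abs(b1 - b2)
print((drift > 0.5).sum(), "items flagged by a 0.5-logit drift rule (illustrative cutoff)")
```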