Showing all 6 results
Peer reviewed
Direct link
Rickard, Timothy C.; Pan, Steven C.; Gupta, Mohan W. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
We explored the possibility of publication bias in the sleep and explicit motor sequence learning literature by applying precision-effect test (PET) and precision-effect estimate with standard error (PEESE) weighted regression analyses to the 88 effect sizes from a recent comprehensive literature review (Pan & Rickard, 2015). Basic PET analysis…
Descriptors: Publications, Bias, Sleep, Psychomotor Skills
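The PET and PEESE analyses named in the abstract above are weighted regressions of effect sizes on their standard errors (PET) or variances (PEESE), with the intercept serving as the bias-adjusted effect estimate. A minimal sketch, assuming inverse-variance weights and a hypothetical `pet_peese` helper (the function name and interface are illustrative, not from the study):

```python
import numpy as np

def pet_peese(effects, ses):
    """Illustrative PET/PEESE weighted regressions.

    PET regresses effect sizes on standard errors; PEESE regresses
    them on variances (squared standard errors). Both use
    inverse-variance weights; each intercept is the bias-adjusted
    mean effect under that model.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2  # inverse-variance weights

    def wls_intercept(predictor):
        # Weighted least squares via the sqrt-weight transformation.
        X = np.column_stack([np.ones_like(predictor), predictor])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], effects * sw, rcond=None)
        return beta[0]  # intercept = adjusted mean effect

    return {"PET": wls_intercept(ses), "PEESE": wls_intercept(ses**2)}
```

With no funnel asymmetry (e.g., identical effects across precision levels) both intercepts recover the common effect; a nonzero slope would signal small-study effects consistent with publication bias.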
Peer reviewed
Direct link
Park, Sunyoung; Beretvas, S. Natasha – Journal of Experimental Education, 2019
The log-odds ratio (ln[OR]) is commonly used to quantify treatments' effects on dichotomous outcomes and then pooled across studies using inverse-variance (1/v) weights. Calculation of the ln[OR]'s variance requires four cell frequencies for two groups crossed with values for dichotomous outcomes. While primary studies report the total sample size…
Descriptors: Sample Size, Meta Analysis, Statistical Analysis, Efficiency
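The abstract above describes the standard pipeline: compute ln(OR) and its variance from the four 2x2 cell frequencies, then pool across studies with 1/v weights. A minimal fixed-effect sketch (the function names and cell labels are illustrative assumptions, not from the paper):

```python
import math

def log_odds_ratio(a, b, c, d):
    """ln(OR) from a 2x2 table and its large-sample variance.

    Hypothetical cell labels: a = treatment events, b = treatment
    non-events, c = control events, d = control non-events.
    """
    ln_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d  # sum of reciprocal cell counts
    return ln_or, var

def pool_inverse_variance(estimates):
    """Fixed-effect pooled ln(OR) from (ln_or, variance) pairs,
    using inverse-variance (1/v) weights."""
    total_weight = sum(1 / v for _, v in estimates)
    return sum(y / v for y, v in estimates) / total_weight
```

Note that this variance formula is exactly why the missing-cell problem in the abstract matters: all four frequencies are required, so a study reporting only total sample size cannot be weighted without further assumptions.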
Peer reviewed
Direct link
Oshima, T. C.; Wright, Keith; White, Nick – International Journal of Testing, 2015
Raju, van der Linden, and Fleer (1995) introduced a framework for differential functioning of items and tests (DFIT) for unidimensional dichotomous models. Since then, DFIT has proven to be quite a versatile framework, as it can handle polytomous as well as multidimensional models at both the item and test levels. However, DFIT is still limited…
Descriptors: Test Bias, Item Response Theory, Test Items, Simulation
Peer reviewed
PDF on ERIC (full text)
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
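To give a sense of the scale the abstract above alludes to, the usual normal-approximation formula for a two-arm comparison of means shows how quickly required sample sizes grow as the target impact shrinks below Cohen's 0.20 SD benchmark. A rough illustrative sketch (this formula is the textbook approximation, not the report's own power calculation):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect a standardized
    mean difference `delta` in a two-arm trial (two-sided test,
    normal approximation; illustrative only)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # quantile corresponding to target power
    return ceil(2 * (z_alpha + z_beta) ** 2 / delta ** 2)
```

Halving the target impact roughly quadruples the required sample per group, which is the practical pressure behind designing studies to detect impacts well below 0.20 SD.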
Garrett, Phyllis – ProQuest LLC, 2009
The use of polytomous items in assessments has increased over the years, and as a result, the validity of these assessments has been a concern. Differential item functioning (DIF) and missing data are two factors that may adversely affect assessment validity. Both factors have been studied separately, but DIF and missing data are likely to occur…
Descriptors: Sample Size, Monte Carlo Methods, Test Validity, Effect Size
Peer reviewed
Direct link
Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna – Educational and Psychological Measurement, 2006
The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…
Descriptors: Comparative Analysis, Meta Analysis, Monte Carlo Methods, Correlation
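Of the four bias-detection methods compared in the entry above, Egger's regression is the most compact to sketch: regress each study's standardized effect on its precision, and test whether the intercept differs from zero (asymmetry). A minimal sketch, assuming ordinary least squares and a hypothetical `egger_intercept` helper (names are illustrative, not from the article):

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger's regression test, sketch form: regress standardized
    effects (y_i / se_i) on precisions (1 / se_i). A nonzero
    intercept suggests funnel-plot asymmetry; the slope estimates
    the underlying pooled effect."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses        # standardized effects
    x = 1.0 / ses            # precisions
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]  # intercept; a significance test would use its SE
```

For a perfectly symmetric set of studies (a common true effect at every precision level) the intercept is zero; small-study effects pull it away from zero, which is what the Monte Carlo design in the abstract evaluates across conditions.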