Showing 1 to 15 of 36 results
Peer reviewed
Wendy Chan; Jimin Oh; Katherine Wilson – Society for Research on Educational Effectiveness, 2022
Background: Over the past decade, research on the development and assessment of tools to improve the generalizability of experimental findings has grown extensively (Tipton & Olsen, 2018). However, many experimental studies in education are based on small samples, which may include 30-70 schools while inference populations to which…
Descriptors: Educational Research, Research Problems, Sample Size, Research Methodology
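As context for the entry above, the sketch below shows one common diagnostic from the generalizability literature, comparing covariate means of an experimental sample against an inference population via standardized mean differences. This is generic background rather than the authors' own method, and all covariates, sample sizes, and data are hypothetical.

```python
import numpy as np

def standardized_mean_differences(sample_X, population_X):
    """Compare covariate means of an experimental sample against an
    inference population, scaled by the population standard deviation.
    Large absolute values flag covariates on which the sample is
    unrepresentative (a common generalizability diagnostic)."""
    sample_X = np.asarray(sample_X, dtype=float)
    population_X = np.asarray(population_X, dtype=float)
    pop_sd = population_X.std(axis=0, ddof=1)
    return (sample_X.mean(axis=0) - population_X.mean(axis=0)) / pop_sd

# Hypothetical data: 40 sample schools vs. 2,000 population schools,
# each described by 3 covariates (e.g., enrollment, % FRPL, prior scores).
rng = np.random.default_rng(0)
sample = rng.normal(loc=[0.3, 0.1, 0.0], scale=1.0, size=(40, 3))
population = rng.normal(loc=0.0, scale=1.0, size=(2000, 3))
print(standardized_mean_differences(sample, population))
```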
Peer reviewed
Dongho Shin – Grantee Submission, 2024
We consider Bayesian estimation of a hierarchical linear model (HLM) from small sample sizes. The continuous response Y and covariates C are partially observed and assumed missing at random. With C having linear effects, the HLM may be efficiently estimated by available methods. When C includes cluster-level covariates having interactive or other…
Descriptors: Bayesian Statistics, Computation, Hierarchical Linear Modeling, Data Analysis
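A minimal complete-data sketch of Bayesian estimation for a random-intercept hierarchical linear model via a Gibbs sampler, included to illustrate the general setting of the entry above. For brevity it treats the variance components as known and does not implement the paper's handling of partially observed covariates; all simulated values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated two-level data: 20 clusters, 10 observations each.
J, n_j = 20, 10
true_mu, tau, sigma = 2.0, 1.0, 0.5
u_true = rng.normal(0.0, tau, size=J)
y = true_mu + np.repeat(u_true, n_j) + rng.normal(0.0, sigma, size=J * n_j)
cluster = np.repeat(np.arange(J), n_j)
ybar = np.array([y[cluster == j].mean() for j in range(J)])  # cluster means

# Gibbs sampler for the grand mean mu and random intercepts u_j,
# with the variance components fixed at their true values (a simplification).
sigma2, tau2 = sigma**2, tau**2
mu, u = 0.0, np.zeros(J)
draws = []
for it in range(2000):
    # u_j | mu, y  (conjugate normal update per cluster)
    prec = n_j / sigma2 + 1.0 / tau2
    u = rng.normal((n_j / sigma2) * (ybar - mu) / prec, np.sqrt(1.0 / prec))
    # mu | u, y  (flat prior on mu)
    resid = y - u[cluster]
    mu = rng.normal(resid.mean(), np.sqrt(sigma2 / y.size))
    if it >= 500:
        draws.append(mu)

print("posterior mean of mu:", np.mean(draws))
```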
Peer reviewed
Tarray, Tanveer A.; Singh, Housila P.; Yan, Zaizai – Sociological Methods & Research, 2017
This article addresses the problem of estimating the proportion Pi_S of the population belonging to a sensitive group using an optional randomized response technique in stratified sampling based on the Mangat model, with proportional and Neyman allocation and a larger gain in efficiency. Numerically, it is found that the suggested model is…
Descriptors: Models, Efficiency, Sampling, Research Problems
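The sketch below illustrates the two allocation rules named in the entry above, proportional and Neyman allocation, which split a fixed total sample size across strata. It does not implement the optional randomized response estimator itself, and the stratum sizes and standard deviations are hypothetical.

```python
import numpy as np

def proportional_allocation(n, N_h):
    """Allocate total sample size n across strata in proportion to
    stratum population sizes N_h."""
    N_h = np.asarray(N_h, dtype=float)
    return n * N_h / N_h.sum()

def neyman_allocation(n, N_h, S_h):
    """Neyman allocation: n_h proportional to N_h * S_h, where S_h is the
    stratum standard deviation; minimizes the variance of the stratified
    estimator for a fixed total sample size."""
    N_h, S_h = np.asarray(N_h, dtype=float), np.asarray(S_h, dtype=float)
    w = N_h * S_h
    return n * w / w.sum()

# Hypothetical strata: sizes and standard deviations of the sensitive trait.
N_h = [5000, 3000, 2000]
S_h = [0.45, 0.30, 0.15]
print(proportional_allocation(500, N_h))  # [250. 150. 100.]
print(neyman_allocation(500, N_h, S_h))   # more sample to high-variance strata
```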
Peer reviewed
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
Large-scale assessments (LSAs) use Mislevy's "plausible value" (PV) approach to relate student proficiency to noncognitive variables administered in a background questionnaire. This method requires background variables to be completely observed, a requirement that is seldom fulfilled. In this article, we evaluate and compare the…
Descriptors: Data Analysis, Error of Measurement, Research Problems, Statistical Inference
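As background to the entry above, a minimal sketch of how an analysis is typically pooled across plausible values using Rubin's combining rules. This is standard practice for plausible values and multiple imputations generally, not the authors' proposed method; the coefficient estimates below are hypothetical.

```python
import numpy as np

def pool_plausible_values(estimates, variances):
    """Combine an analysis repeated over M plausible values (or multiple
    imputations) using Rubin's rules: pooled point estimate, total
    variance = within-PV variance + (1 + 1/M) * between-PV variance."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    M = estimates.size
    qbar = estimates.mean()
    within = variances.mean()
    between = estimates.var(ddof=1)
    total_var = within + (1.0 + 1.0 / M) * between
    return qbar, np.sqrt(total_var)

# Hypothetical regression coefficient estimated once per plausible value.
est = [0.42, 0.39, 0.45, 0.41, 0.44]
se2 = [0.010, 0.011, 0.009, 0.010, 0.012]
coef, se = pool_plausible_values(est, se2)
print(f"pooled coefficient {coef:.3f}, pooled SE {se:.3f}")
```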
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
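To ground the entry above, here is a generic sharp regression discontinuity estimator: a local linear fit on each side of the cutoff within a bandwidth, with the treatment effect read off as the jump at the cutoff. Bandwidth selection and inference are omitted, the data are simulated, and this is a textbook sketch rather than the authors' procedure.

```python
import numpy as np

def rdd_estimate(running, outcome, cutoff, bandwidth):
    """Sharp RDD sketch: local linear regression on each side of the
    cutoff within a bandwidth; the treatment effect is the jump in the
    fitted outcome at the cutoff."""
    running = np.asarray(running, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    keep = np.abs(running - cutoff) <= bandwidth
    x = running[keep] - cutoff
    y = outcome[keep]
    d = (x >= 0).astype(float)              # treatment indicator
    X = np.column_stack([np.ones_like(x), d, x, d * x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                          # discontinuity at the cutoff

# Simulated data with a true jump of 0.5 at cutoff 0.
rng = np.random.default_rng(2)
r = rng.uniform(-1, 1, size=4000)
y = 1.0 + 0.8 * r + 0.5 * (r >= 0) + rng.normal(0, 0.3, size=4000)
print(rdd_estimate(r, y, cutoff=0.0, bandwidth=0.25))  # ≈ 0.5
```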
Peer reviewed
Klausch, Thomas; Schouten, Barry; Hox, Joop J. – Sociological Methods & Research, 2017
This study evaluated three types of bias--total, measurement, and selection bias (SB)--in three sequential mixed-mode designs of the Dutch Crime Victimization Survey: telephone, mail, and web, where nonrespondents were followed up face-to-face (F2F). In the absence of true scores, all biases were estimated as mode effects against two different…
Descriptors: Evaluation Methods, Statistical Bias, Sequential Approach, Benchmarking
Peer reviewed
Zimmer, Ron; Engberg, John – Journal of School Choice, 2016
School choice programs continue to be controversial, spurring a number of researchers into evaluating them. When possible, researchers evaluate the effect of attending a school of choice using randomized designs to eliminate possible selection bias. Randomized designs are often thought of as the gold standard for research, but many circumstances…
Descriptors: Inferences, School Choice, Educational Vouchers, Charter Schools
Reardon, Sean F. – Society for Research on Educational Effectiveness, 2010
Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…
Descriptors: Social Science Research, Least Squares Statistics, Computation, Correlation
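For readers unfamiliar with the estimator discussed in the entry above, the sketch below shows a just-identified two-stage least squares fit with one endogenous regressor and one instrument, such as a randomized offer instrumenting actual take-up. The data-generating process is hypothetical and serves only to show why the naive regression of y on x would be biased while the instrumented estimate is not.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """2SLS with a single endogenous regressor x and instrument z:
    stage 1 regresses x on z, stage 2 regresses y on the fitted x-hat.
    Both stages include an intercept."""
    y, x, z = (np.asarray(a, dtype=float) for a in (y, x, z))
    Z = np.column_stack([np.ones_like(z), z])
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)     # first stage
    x_hat = Z @ gamma
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)  # second stage
    return beta[1]                                     # effect of x on y

# Simulated example: z is a randomized offer, x is actual take-up,
# u is an unobserved confounder that biases naive OLS.
rng = np.random.default_rng(3)
n = 5000
z = rng.binomial(1, 0.5, n)
u = rng.normal(0, 1, n)
x = 0.6 * z + 0.5 * u + rng.normal(0, 1, n)
y = 2.0 * x + 1.0 * u + rng.normal(0, 1, n)
print(two_stage_least_squares(y, x, z))  # ≈ 2.0, unlike naive OLS
```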
Rice, Jennifer King – National Education Policy Center, 2012
Schools and school systems throughout the nation are increasingly experimenting with using various instructional technologies to improve productivity and decrease costs, but evidence on both the effectiveness and the costs of education technology is limited. A recent report published by the Thomas B. Fordham Institute sets out to describe "the…
Descriptors: Evidence, Electronic Learning, Distance Education, Online Courses
Peer reviewed
Dobrescu, Emilian – Journal of Applied Quantitative Methods, 2008
This article presents the author's address at the 2007 "Journal of Applied Quantitative Methods" ("JAQM") prize-awarding ceremony, held during the opening of the 4th International Conference on Applied Statistics, November 22, 2008, in Bucharest, Romania. In the address, the author reflects on three theses that…
Descriptors: Research Methodology, Epistemology, Statistical Analysis, Statistical Studies
Baker, Bruce – Education and the Public Interest Center, 2009
The new "Weighted Student Formula Yearbook 2009" from the Reason Foundation provides a simple framework for touting the successes of states and urban school districts that grant greater fiscal autonomy to schools. The report defines the Weighted Student Formula (WSF) reform extremely broadly, presenting a variety of reforms under the WSF umbrella.…
Descriptors: Evidence, Urban Schools, Research Reports, Change Strategies
Carnoy, Martin – Education and the Public Interest Center, 2009
The third-year evaluation of the federally funded Washington, D.C. voucher program shows that low-income students offered vouchers in the first two years of the program had modestly higher reading scores after three years but showed no significant difference in mathematics. Students were randomly assigned to treatment and control groups, and the…
Descriptors: Control Groups, Private Schools, Program Effectiveness, Scoring
Peer reviewed
Mark, Melvin M. – Evaluation Review, 1983
The purposes of this article are, first, to argue that analyses based on level of treatment implementation can lead to biased estimates of treatment effects, and second, to discuss alternatives to this approach. (PN)
Descriptors: Evaluation Methods, Program Implementation, Research Problems, Statistical Analysis
Miron, Gary; Applegate, Brooks – Education and the Public Interest Center, 2009
The Center for Research on Education Outcomes (CREDO) at Stanford University conducted a large-scale analysis of the impact of charter schools on student performance. The center's data covered 65-70% of the nation's charter schools. Although results varied by state, 17% of the charter school students had significantly higher math results than …
Descriptors: Evidence, Traditional Schools, Charter Schools, Program Effectiveness
Peer reviewed
Ritter, Lois A., Ed.; Sue, Valerie M., Ed. – New Directions for Evaluation, 2007
This chapter provides an overview of sampling methods that are appropriate for conducting online surveys. The authors review some of the basic concepts relevant to online survey sampling, present some probability and nonprobability techniques for selecting a sample, and briefly discuss sample size determination and nonresponse bias. Although some…
Descriptors: Sampling, Probability, Evaluation Methods, Computer Assisted Testing
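Since the entry above mentions sample size determination, the sketch below shows one common calculation for estimating a proportion: the standard z-based formula with a finite population correction. The frame size and margins of error are hypothetical, and nonresponse adjustments, which the chapter also discusses, are not shown.

```python
import math

def sample_size(population_size, margin_of_error=0.05, confidence_z=1.96, p=0.5):
    """Sample size for estimating a proportion: the usual
    n0 = z^2 * p * (1 - p) / e^2, then a finite population correction
    n = n0 / (1 + (n0 - 1) / N)."""
    n0 = (confidence_z ** 2) * p * (1.0 - p) / (margin_of_error ** 2)
    n = n0 / (1.0 + (n0 - 1.0) / population_size)
    return math.ceil(n)

# Hypothetical frame of 12,000 email addresses for an online survey.
print(sample_size(12_000))                        # ~373 at a 5% margin of error
print(sample_size(12_000, margin_of_error=0.03))  # ~981 at a 3% margin of error
```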