Publication Date
| Period | Results |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 4 |
| Since 2007 (last 20 years) | 12 |
Descriptor
| Descriptor | Results |
| Evaluation Methods | 15 |
| Probability | 15 |
| Sample Size | 15 |
| Simulation | 8 |
| Comparative Analysis | 5 |
| Models | 5 |
| Computation | 4 |
| Sampling | 4 |
| Statistical Bias | 4 |
| Educational Assessment | 3 |
| Effect Size | 3 |
Author
| Author | Results |
| Amemiya, Yasuo | 1 |
| Andrea C. Burrows Borowczak | 1 |
| Barr, James | 1 |
| Beretvas, S. Natasha | 1 |
| Beth A. Perkins | 1 |
| Forsberg, Ole J. | 1 |
| Guo, Jia | 1 |
| Huang, Hung-Yu | 1 |
| Hung, Su-Pin | 1 |
| Kevin T. Kilty | 1 |
| Kistner, Emily O. | 1 |
Publication Type
| Publication Type | Results |
| Journal Articles | 11 |
| Reports - Research | 7 |
| Reports - Descriptive | 4 |
| Dissertations/Theses -… | 2 |
| Reports - Evaluative | 2 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Results |
| High Schools | 1 |
| Secondary Education | 1 |
Location
| Location | Results |
| Australia | 1 |
Assessments and Surveys
| Assessment | Results |
| Program for International… | 1 |
Trina Johnson Kilty; Kevin T. Kilty; Andrea C. Burrows Borowczak; Mike Borowczak – Problems of Education in the 21st Century, 2024
A computer science camp for pre-collegiate students was operated during the summers of 2022 and 2023. The camp's effect on attitudes was quantitatively assessed using a survey instrument. However, enrollment at the summer camp was small, which meant the well-known Pearson's chi-squared test could not be applied to measure the significance of the results.…
Descriptors: Summer Programs, Camps, Computer Science Education, 21st Century Skills
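The abstract above breaks off before naming the small-sample alternative the authors chose; the standard substitute for Pearson's chi-squared when expected cell counts are small is Fisher's exact test. A stdlib-only sketch with made-up attitude counts (not the camp's actual data):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same margins
    whose probability does not exceed that of the observed table. Unlike
    the chi-squared approximation, this is valid at any sample size.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # Include every table no more probable than the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts: rows = agree/disagree, columns = pre/post camp.
p = fisher_exact_2x2(9, 14, 6, 1)
print(round(p, 4))
```

For larger tables than 2x2, the same exact-test idea applies but the enumeration grows quickly, which is where simulation-based (Monte Carlo) p-values come in.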
Hung, Su-Pin; Huang, Hung-Yu – Journal of Educational and Behavioral Statistics, 2022
To address response style or bias in rating scales, forced-choice items are often used to request that respondents rank their attitudes or preferences among a limited set of options. The rating scales used by raters to render judgments on ratees' performance also contribute to rater bias or errors; consequently, forced-choice items have recently…
Descriptors: Evaluation Methods, Rating Scales, Item Analysis, Preferences
Beth A. Perkins – ProQuest LLC, 2021
In educational contexts, students often self-select into specific interventions (e.g., courses, majors, extracurricular programming). When students self-select into an intervention, systematic group differences may impact the validity of inferences made regarding the effect of the intervention. Propensity score methods are commonly used to reduce…
Descriptors: Probability, Causal Models, Evaluation Methods, Control Groups
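The dissertation's specific estimators are not shown in this snippet; the core move in propensity score methods, though, is matching treated and comparison students on their estimated probability of self-selecting into the intervention. A toy sketch of greedy 1:1 nearest-neighbor matching, with scores assumed to come from a previously fit logistic model (all numbers hypothetical):

```python
# Each tuple is (propensity score, outcome) for one student.
treated = [(0.81, 24.0), (0.62, 19.5), (0.55, 21.0)]
controls = [(0.78, 20.0), (0.60, 17.0), (0.35, 15.5), (0.52, 18.0)]

matched = []
available = list(controls)
for score, outcome in treated:
    # Greedy match: nearest control by propensity score, without replacement.
    best = min(available, key=lambda c: abs(c[0] - score))
    available.remove(best)
    matched.append((outcome, best[1]))

# The mean treated-minus-control difference over matched pairs estimates
# the effect among the treated, assuming no unmeasured confounding and
# adequate overlap in the score distributions.
att = sum(t - c for t, c in matched) / len(matched)
print(round(att, 2))  # → 3.17
```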
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
Shieh, Gwowen – Journal of Experimental Education, 2015
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Descriptors: Statistical Analysis, Sample Size, Computation, Effect Size
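The article's procedures target ANOVA effect sizes, which require the noncentral F distribution; the underlying precision logic is easier to see for a single mean. A minimal sketch of that simpler case (the z-based normal approximation, not the article's actual method):

```python
from math import ceil

def n_for_precision(sigma, half_width, z=1.96):
    """Smallest n so a 95% CI for a mean has at most the given half-width.

    Solves z * sigma / sqrt(n) <= half_width for n. Precision-based
    planning like this targets interval width rather than power.
    """
    return ceil((z * sigma / half_width) ** 2)

print(n_for_precision(sigma=10.0, half_width=2.0))  # → 97
```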
Solomon, Benjamin G.; Forsberg, Ole J. – School Psychology Quarterly, 2017
Bayesian techniques have become increasingly present in the social sciences, fueled by advances in computer speed and the development of user-friendly software. In this paper, we forward the use of Bayesian Asymmetric Regression (BAR) to monitor intervention responsiveness when using Curriculum-Based Measurement (CBM) to assess oral reading…
Descriptors: Bayesian Statistics, Regression (Statistics), Least Squares Statistics, Evaluation Methods
Orcan, Fatih – ProQuest LLC, 2013
Parceling refers to a procedure for computing sums or average scores across multiple items. Parcels, rather than individual items, are then used as indicators of latent factors in structural equation modeling analysis (Bandalos, 2002, 2008; Little et al., 2002; Yang, Nay, & Hoyle, 2010). Item parceling may be applied to alleviate some…
Descriptors: Structural Equation Models, Evaluation Methods, Simulation, Sample Size
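The parceling step itself is simple arithmetic; the dissertation's questions concern what it does to downstream SEM results. A minimal sketch of forming parcel scores from item responses (the item-to-parcel assignment here is arbitrary, and the data are invented):

```python
# Item parceling: average fixed subsets of items into parcel scores that
# then serve as indicators of a latent factor in an SEM.
responses = [  # rows = respondents, columns = 6 items
    [1, 2, 2, 3, 4, 4],
    [2, 2, 3, 3, 3, 4],
    [1, 1, 2, 2, 3, 3],
]
parcels = [(0, 3), (1, 4), (2, 5)]  # item indices pooled into each parcel

parcel_scores = [[sum(row[i] for i in p) / len(p) for p in parcels]
                 for row in responses]
print(parcel_scores[0])  # → [2.0, 3.0, 3.0]
```

The simulation literature the abstract cites turns on how such pooling hides item-level misspecification, so the assignment scheme is a modeling decision, not a mechanical one.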
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
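The Mantel-Haenszel test the abstract names as the conventional baseline reduces, for uniform DIF, to a common odds ratio pooled across matched ability strata. A stdlib-only sketch with illustrative counts (not from the article):

```python
# Each stratum of test takers matched on total score:
# (ref_correct, ref_wrong, focal_correct, focal_wrong)
strata = [
    (30, 10, 25, 15),
    (40, 20, 30, 30),
    (20, 30, 10, 40),
]

# Mantel-Haenszel common odds ratio: pools the per-stratum odds ratios,
# weighting each stratum by its size.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = num / den  # near 1 means no uniform DIF; far from 1 flags the item
print(round(or_mh, 3))  # → 2.124
```

The article's point is that this statistic can only condition on observed groups (gender, ethnicity), whereas the true source of DIF may be a latent class.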
Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo – Multivariate Behavioral Research, 2012
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…
Descriptors: Sample Size, Simulation, Form Classes (Languages), Diseases
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
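For cluster samples of the kind state assessments use, the textbook form of the design effect makes the abstract's warning concrete. A sketch with hypothetical numbers (the article's own figures are not reproduced here):

```python
def design_effect(cluster_size, icc):
    """Design effect for cluster sampling: DEFF = 1 + (m - 1) * ICC.

    Treating a clustered sample as if it were a simple random sample
    understates standard errors by a factor of sqrt(DEFF).
    """
    return 1 + (cluster_size - 1) * icc

# e.g. 100 schools of 25 students each, modest within-school correlation
deff = design_effect(cluster_size=25, icc=0.10)
effective_n = 2500 / deff  # 2,500 sampled students act like far fewer
print(round(deff, 2), round(effective_n))  # → 3.4 735
```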
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
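The ISI's exact formula is in the article; the ingredient it compares, the Rasch item information function, is simple to state. A minimal sketch of that ingredient only (the similarity index itself is not implemented here):

```python
from math import exp

def rasch_info(theta, b):
    """Item information under the Rasch model: I(theta) = p * (1 - p),
    where p is the probability of a correct response for ability theta
    and item difficulty b."""
    p = 1.0 / (1.0 + exp(-(theta - b)))
    return p * (1.0 - p)

# Information peaks where ability matches item difficulty (theta == b);
# DIF methods like ISI compare such curves across examinee groups.
print(round(rasch_info(0.0, 0.0), 2))  # → 0.25
```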
Ritter, Lois A., Ed.; Sue, Valerie M., Ed. – New Directions for Evaluation, 2007
This chapter provides an overview of sampling methods that are appropriate for conducting online surveys. The authors review some of the basic concepts relevant to online survey sampling, present some probability and nonprobability techniques for selecting a sample, and briefly discuss sample size determination and nonresponse bias. Although some…
Descriptors: Sampling, Probability, Evaluation Methods, Computer Assisted Testing
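Of the probability techniques the chapter reviews, proportionate stratified sampling is the easiest to sketch. A stdlib-only toy example with an invented respondent frame (group names and sizes are hypothetical):

```python
import random

random.seed(7)  # reproducible illustration

# Hypothetical frame for an online survey, one stratum per student group.
frame = {"freshman": list(range(100)), "senior": list(range(100, 160))}

# Proportionate stratified probability sample: each stratum is sampled at
# the same rate, so every frame member has a known, equal inclusion
# probability (unlike opt-in, nonprobability web panels).
rate = 0.10
sample = {g: random.sample(ids, round(rate * len(ids)))
          for g, ids in frame.items()}
print({g: len(s) for g, s in sample.items()})  # → {'freshman': 10, 'senior': 6}
```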
Kistner, Emily O.; Muller, Keith E. – Psychometrika, 2004
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact…
Descriptors: Correlation, Test Reliability, Test Results, Probability
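The article derives exact distributions for these reliability coefficients; the point estimate of Cronbach's alpha itself is a one-line computation. A stdlib-only sketch with an invented 3-item, 5-examinee score matrix:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (each a list of scores).

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    using sample variances throughout.
    """
    k = len(items)
    totals = [sum(person) for person in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item test scored by 5 examinees.
scores = [
    [2, 3, 4, 4, 5],
    [1, 3, 3, 4, 5],
    [2, 2, 4, 5, 5],
]
print(round(cronbach_alpha(scores), 3))  # → 0.949
```

What the article adds is the sampling distribution of this statistic, which is what one needs for exact confidence intervals rather than the point estimate alone.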
Ross, Kenneth N. – International Journal of Educational Research, 1987
This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, the emphasis is on how a sample is chosen: size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of careful design. (Author/JAZ)
Descriptors: Educational Assessment, Elementary Secondary Education, Evaluation Methods, Experimental Groups
Rasor, Richard E.; Barr, James – 1998
This paper provides an overview of common sampling methods (both the good and the bad) likely to be used in community college self-evaluations and presents the results from several simulated trials. The report begins by reviewing various survey techniques, discussing the negative and positive aspects of each method. The increased accuracy and…
Descriptors: Community Colleges, Comparative Analysis, Cost Effectiveness, Data Collection

