Showing 1 to 15 of 163 results
Elizabeth Talbott; Andres De Los Reyes; Devin M. Kearns; Jeannette Mancilla-Martinez; Mo Wang – Exceptional Children, 2023
Evidence-based assessment (EBA) requires that investigators employ scientific theories and research findings to guide decisions about what domains to measure, how and when to measure them, and how to make decisions and interpret results. To implement EBA, investigators need high-quality assessment tools along with evidence-based processes. We…
Descriptors: Evidence Based Practice, Evaluation Methods, Special Education, Educational Research
Peer reviewed
PDF on ERIC Download full text
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
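The Bayesian interpretation the BASIE guide describes can be illustrated with a minimal sketch (not taken from the guide itself): combining prior evidence about impact estimates with a new study's estimate via a conjugate normal-normal update. All numbers below are hypothetical.

```python
# Minimal sketch of Bayesian updating of an impact estimate, assuming a
# normal prior (from past evidence) and a normal likelihood (the new
# study's estimate and standard error). Illustrative values only.

def posterior_normal(prior_mean, prior_var, est, est_var):
    """Posterior mean and variance for a normal prior and normal likelihood."""
    w = prior_var / (prior_var + est_var)  # weight placed on the new estimate
    post_mean = prior_mean + w * (est - prior_mean)
    post_var = prior_var * est_var / (prior_var + est_var)
    return post_mean, post_var

# Hypothetical prior evidence: past interventions averaged 0.05 SD, spread 0.10 SD.
# Hypothetical new study: estimated impact 0.20 SD, standard error 0.08 SD.
mean, var = posterior_normal(0.05, 0.10**2, 0.20, 0.08**2)
print(mean, var)
```

The posterior mean lands between the prior mean and the new estimate, and the posterior variance is smaller than either source alone — the sense in which prior evidence tempers a single study's result.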
Peer reviewed
Direct link
Barbana, Samir; Dumay, Xavier; Dupriez, Vincent – European Educational Research Journal, 2020
This article is an introduction to a European Educational Research Journal special issue on accountability policies and instruments in Europe. Two hypotheses grounded in the new institutionalist theory are presented to conceptualise and analyse the variety of national trajectories and forms of accountability in four European education systems…
Descriptors: Accountability, Governance, Educational Policy, Educational Research
Peer reviewed
Direct link
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
In this response, we first show that Simpson's proposed analysis answers a different and less interesting question than ours. We then justify the choice of prior for our Bayes factors calculations, but we also demonstrate that the substantive conclusions of our article are not substantially affected by varying this choice.
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
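The prior-sensitivity question at the center of this exchange can be sketched in a few lines (an illustration, not the authors' actual calculation): a Bayes factor comparing a point null (zero effect) against an alternative that places a normal prior on the effect size. The numbers and the prior standard deviations are hypothetical.

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_factor_10(est, se, prior_sd):
    """BF for H1 (effect ~ N(0, prior_sd^2)) versus H0 (effect = 0),
    given an observed impact estimate and its standard error.
    Under H1 the estimate is marginally N(0, se^2 + prior_sd^2);
    under H0 it is N(0, se^2)."""
    return normal_pdf(est, 0.0, se**2 + prior_sd**2) / normal_pdf(est, 0.0, se**2)

# Hypothetical trial result: estimate 0.05 SD, standard error 0.05 SD.
# Varying the prior SD changes the Bayes factor, which is the crux of
# the disagreement over fixed versus varied prior choices.
for prior_sd in (0.05, 0.20):
    print(prior_sd, bayes_factor_10(0.05, 0.05, prior_sd))
```

A diffuse prior (0.20) penalizes the alternative more heavily than a tight one (0.05), so the same trial result can look more or less "informative" depending on that choice.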
André A. Rupp; Laura Pinsonneault – National Center for the Improvement of Educational Assessment, 2025
State education agencies are sitting on rich repositories of quantitative and qualitative assessment data. This document is designed to provide a conceptual framework and implementation guidance that can help agency leadership leverage and interrogate student performance data in systematic ways for reporting, outreach, and planning purposes. The…
Descriptors: Evaluation Methods, Educational Assessment, Achievement Tests, College Entrance Examinations
Peer reviewed
Direct link
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue that a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
Peer reviewed
Direct link
Nguyen, Khuyen; McDaniel, Mark A. – Teaching of Psychology, 2015
Recently, the testing effect has received a great deal of attention from researchers and educators who are intrigued by its potential to enhance student learning in the classroom. However, effective incorporation of testing (as a learning tool) merits a close examination of the conditions under which testing can enhance student learning in an…
Descriptors: Testing, Student Evaluation, Summative Evaluation, Educational Benefits
Volungeviciene, Airina; Brown, Mark; Greenspon, Rasa; Gaebel, Michael; Morrisroe, Alison – European University Association, 2021
Digitally enhanced learning and teaching is widely used across the European Higher Education Area, with general acceptance growing over the years and institutions widely acknowledging the benefits it brings to the student experience. The strategic focus being placed on digitally enhanced learning and teaching has increased, undoubtedly accelerated…
Descriptors: Educational Technology, Technology Uses in Education, Program Evaluation, Self Evaluation (Groups)
Peer reviewed
Direct link
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
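The multiplicity problem this abstract describes can be illustrated with a minimal sketch of two widely used multiple testing procedures (the specific MTPs and p-values below are illustrative, not taken from the article): the Bonferroni correction, which controls the familywise error rate, and the Benjamini–Hochberg step-up rule, which controls the false discovery rate.

```python
# Minimal sketch of two multiple testing procedures (MTPs).
# Illustrative p-values only; not data from the article.

def bonferroni(pvals, alpha=0.05):
    """Reject H0_i when p_i <= alpha / m (controls familywise error rate)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Reject via the BH step-up rule (controls false discovery rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank  # largest rank whose p-value passes its threshold
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

# Hypothetical p-values for five outcomes of one intervention:
pvals = [0.001, 0.012, 0.024, 0.04, 0.30]
print(bonferroni(pvals))          # [True, False, False, False, False]
print(benjamini_hochberg(pvals))  # [True, True, True, True, False]
```

Bonferroni rejects only the smallest p-value here, while Benjamini–Hochberg rejects four of the five — the familiar trade-off between stricter error control and statistical power that motivates choosing an MTP deliberately.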
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Peer reviewed
Direct link
Lewis, Todd F. – Measurement and Evaluation in Counseling and Development, 2017
American Educational Research Association (AERA) standards stipulate that researchers show evidence of the internal structure of instruments. Confirmatory factor analysis (CFA) is one structural equation modeling procedure designed to assess construct validity of assessments that has broad applicability for counselors interested in instrument…
Descriptors: Educational Research, Factor Analysis, Structural Equation Models, Construct Validity
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Peer reviewed
Direct link
Camilli, Gregory – Educational Research and Evaluation, 2013
In the attempt to identify or prevent unfair tests, both quantitative analyses and logical evaluation are often used. For the most part, fairness evaluation is a pragmatic attempt at determining whether procedural or substantive due process has been accorded to either a group of test takers or an individual. In both the individual and comparative…
Descriptors: Alternative Assessment, Test Bias, Test Content, Test Format
Peer reviewed
Direct link
Hicks, Tyler A.; Knollman, Greg A. – Career Development and Transition for Exceptional Individuals, 2015
This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…
Descriptors: Longitudinal Studies, Disabilities, Special Education, Transitional Programs
Peer reviewed
Direct link
Mislevy, Robert J.; Haertel, Geneva; Cheng, Britte H.; Ructtinger, Liliana; DeBarger, Angela; Murray, Elizabeth; Rose, David; Gravel, Jenna; Colker, Alexis M.; Rutstein, Daisy; Vendlinski, Terry – Educational Research and Evaluation, 2013
Standardizing aspects of assessments has long been recognized as a tactic to help make evaluations of examinees fair. It reduces variation in irrelevant aspects of testing procedures that could advantage some examinees and disadvantage others. However, recent attention to making assessment accessible to a more diverse population of students…
Descriptors: Testing Accommodations, Access to Education, Testing, Psychometrics