Showing 1 to 15 of 77 results
Peer reviewed
Gersten, Russell; Jayanthi, Madhavi; Dimino, Joseph – Exceptional Children, 2017
The report of the national response to intervention (RTI) evaluation study, conducted during 2011-2012, was released in November 2015. Anyone who has read the lengthy report can attest to its complexity and the design used in the study. Both these factors can influence the interpretation of the results from this evaluation. In this commentary, we…
Descriptors: Response to Intervention, National Programs, Program Effectiveness, Program Evaluation
Peer reviewed
Koretz, Daniel – Assessment in Education: Principles, Policy & Practice, 2016
Daniel Koretz is the Henry Lee Shattuck Professor of Education at the Harvard Graduate School of Education. His research focuses on educational assessment and policy, particularly the effects of high-stakes testing on educational practice and the validity of score gains. He is the author of "Measuring Up: What Educational Testing Really Tells…
Descriptors: Test Validity, Definitions, Evidence, Relevance (Education)
Peer reviewed
Zumbo, Bruno D.; Hubley, Anita M. – Assessment in Education: Principles, Policy & Practice, 2016
Ultimately, measures in research, testing, assessment and evaluation are used, or have implications, for ranking, intervention, feedback, decision-making or policy purposes. Explicit recognition of this fact brings the often-ignored and sometimes maligned concept of consequences to the fore. Given that measures have personal and social…
Descriptors: Testing Programs, Testing Problems, Measurement Techniques, Student Evaluation
Peer reviewed
Koretz, Daniel – Measurement: Interdisciplinary Research and Perspectives, 2013
Haertel's argument is that one must "expand the scope of test validation to include indirect testing effects" because these effects are often the "rationale for the entire testing program." The author strongly agrees that this is essential. However, he maintains that Haertel's argument does not go far enough and that there are two additional…
Descriptors: Educational Testing, Test Validity, Test Results, Testing Programs
Peer reviewed
Dancis, Jerome – AASA Journal of Scholarship & Practice, 2014
The Organization for Economic Cooperation and Development [OECD] is a global policy organization that includes the United States and about half of the Western European countries. It administers an international comparison test, the Programme for International Student Assessment (PISA), to 15-year-old students in Mathematics and other subjects. I…
Descriptors: Mathematics Achievement, Mathematics Tests, Cross Cultural Studies, Comparative Education
Peer reviewed
Lane, Suzanne – Measurement: Interdisciplinary Research and Perspectives, 2012
Considering consequences in the evaluation of validity is not new, although it is still debated by Paul E. Newton and others. The argument-based approach to validity entails an interpretative argument that explicitly identifies the proposed interpretations and uses of test scores and a validity argument that provides a structure for evaluating the…
Descriptors: Educational Opportunities, Accountability, Validity, Inferences
Peer reviewed
Lam, Tony C. M. – American Journal of Evaluation, 2009
D'Eon et al. concluded that change in performance self-assessment means from before to after a workshop can detect workshop success in their and other situations. In this commentary, their recommendation is refuted by showing that (a) self-assessments with balanced over- and underestimations are still biased and should not be used to evaluate…
Descriptors: Workshops, Success, Self Evaluation (Individuals), Test Bias
Green, Donald Ross – 1997
It is argued that publishers of achievement tests, especially those who publish tests intended for use in many parts of the United States, are for the most part not in a position to obtain any decent evidence about the consequences of the uses that are made of their tests. What responsibilities and actions publishers can reasonably be expected to…
Descriptors: Achievement Tests, Standardized Tests, State Programs, Test Use
Peer reviewed
Briggs, Derek C. – Educational Researcher, 2008
When causal inferences are to be synthesized across multiple studies, efforts to establish the magnitude of a causal effect should be balanced by an effort to evaluate the generalizability of the effect. The evaluation of generalizability depends on two factors that are given little attention in current syntheses: construct validity and external…
Descriptors: Test Validity, Construct Validity, Inferences, Educational Policy
Peer reviewed
Quellmalz, Edys S. – Educational Measurement: Issues and Practice, 1984
A summary of the writing assessment programs reviewed in this journal is presented. The problems inherent in the programs are outlined. A coordinated research program on major problems in writing assessment is proposed as being beneficial and cost-effective. (DWH)
Descriptors: Essay Tests, Program Evaluation, Scoring, State Programs
Haney, Walt – 1982
A needs assessment of the National Assessment of Educational Progress (NAEP) is presented. It deals with cost, design and technical issues, and utility. Suggestions include cost reduction via assessment schedule cutbacks and re-use of released NAEP exercises; and a shift from federal to private funding by selling NAEP exercises with interpretative…
Descriptors: Change Strategies, Cost Effectiveness, Educational Assessment, Evaluation Methods
Yap, Kim Onn – 1984
Two separate sets of minimum standards designed to guide the evaluation of bilingual projects are proposed. The first set relates to the process in which the evaluation activities are conducted. They include: validity of assessment procedures, validity and reliability of evaluation instruments, representativeness of findings, use of procedures for…
Descriptors: Academic Achievement, Bilingual Education Programs, Elementary Secondary Education, Evaluation Methods
Haynes, Billie – 1985
Administering a large scale licensing examination program presents both technical and non-technical challenges. Five major areas are discussed in this paper: (1) ensuring test validity in relation to occupational entry standards; (2) developing test items from valid examination specifications; (3) establishing legally defensible passing scores;…
Descriptors: Certification, Cutting Scores, Occupational Tests, Program Implementation
Peer reviewed
Lapointe, Archie E.; Koffler, Stephen L. – Educational Researcher, 1982
Describes the creation and objectives of the National Assessment of Educational Progress (NAEP), and reviews the findings of an evaluation of the NAEP, published in "Measuring the Quality of Education," by Wirtz and Lapointe. Focuses on the role that the NAEP can play in establishing uniform, national educational standards. (GC)
Descriptors: Educational Assessment, Educational Objectives, Elementary Secondary Education, Evaluation Methods
Baker, Keith – Phi Delta Kappan, 2007
The idea that America was being harmed because its schools were not keeping up with those in other advanced nations emerged after Sputnik in 1957, took a firm hold on education policy when "A Nation at Risk" appeared in 1983, and continues today. Policy makers justify this concern by pointing to evidence showing that, for individuals…
Descriptors: Testing Programs, Academic Achievement, Achievement Tests, International Education