Showing all 11 results
Peer reviewed
Perry, Thomas – British Educational Research Journal, 2016
Value-added "Progress" measures are to be introduced for all English schools in 2016 as "headline" measures of school performance. This move comes despite research highlighting high levels of instability in value-added measures and concerns about the omission of contextual variables from the planned measure. This article studies…
Descriptors: Foreign Countries, Value Added Models, School Effectiveness, Performance Based Assessment
Peer reviewed
Parker, Richard I.; Vannest, Kimberly J.; Davis, John L. – Journal of School Psychology, 2013
The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although these scales require greater inference than traditional data counting, little is known about their inter-rater reliability. This simulation study examined the performance of nine…
Descriptors: Rating Scales, Scaling, Interrater Reliability, Test Reliability
Peer reviewed
Dhuey, Elizabeth; Lipscomb, Stephen – Economics of Education Review, 2010
This study extends recent findings of a relationship between the relative age of students among their peers and their probability of disability classification. Using three nationally representative surveys spanning 1988-2004 and grades K-10, we find that an additional month of relative age decreases the likelihood of receiving special education…
Descriptors: Learning Disabilities, Achievement Gains, Classification, Probability
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Worts, Diana; Sacker, Amanda; McDonough, Peggy – Social Indicators Research, 2010
This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…
Descriptors: Poverty, Measurement, Error of Measurement, Probability
Peer reviewed
Hawkins, Abigail; Barbour, Michael K. – American Journal of Distance Education, 2010
Variation in policies virtual schools use to calculate course completion and retention rates impacts the comparability of these quality metrics. This study surveyed 159 U.S. virtual schools examining the variability in trial period and course completion policies--two policies that affect course completion rates. Of the 86 respondents, almost 70%…
Descriptors: Academic Persistence, School Holding Power, Differences, Evaluation Methods
Peer reviewed
Bridges, David – British Educational Research Journal, 2009
For better or for worse, the assessment of research quality is one of the primary drivers of the behaviour of the academic community with all sorts of potential for distorting that behaviour. So, if you are going to assess research quality, how do you do it? This article explores some of the problems and possibilities, with particular reference to…
Descriptors: Educational Research, Humanities, Quality Control, Evaluation Research
Peer reviewed
Myford, Carol M.; Wolfe, Edward W. – Journal of Educational Measurement, 2009
In this study, we describe a framework for monitoring rater performance over time. We present several statistical indices to identify raters whose standards drift and explain how to use those indices operationally. To illustrate the use of the framework, we analyzed rating data from the 2002 Advanced Placement English Literature and Composition…
Descriptors: English Literature, Advanced Placement, Measures (Individuals), Writing (Composition)
Peer reviewed
Clauser, Brian E.; Mee, Janet; Baldwin, Su G.; Margolis, Melissa J.; Dillon, Gerard F. – Journal of Educational Measurement, 2009
Although the Angoff procedure is among the most widely used standard setting procedures for tests comprising multiple-choice items, research has shown that subject matter experts have considerable difficulty accurately making the required judgments in the absence of examinee performance data. Some authors have viewed the need to provide…
Descriptors: Standard Setting (Scoring), Program Effectiveness, Expertise, Health Personnel
Peer reviewed
Moen, Ross; Liu, Kristi; Thurlow, Martha; Lekwa, Adam; Scullin, Sarah; Hausmann, Kristin – Journal of Applied Testing Technology, 2009
Some students are less accurately measured by typical reading tests than other students. By asking teachers to identify students whose performance on state reading tests would likely underestimate their reading skills, this study sought to learn about characteristics of less accurately measured students while also evaluating how well teachers can…
Descriptors: Reading Tests, Academic Achievement, Interviews, Program Effectiveness
Peer reviewed
McCaffrey, Daniel F.; Sass, Tim R.; Lockwood, J. R.; Mihaly, Kata – Education Finance and Policy, 2009
The utility of value-added estimates of teachers' effects on student test scores depends on whether they can distinguish between high- and low-productivity teachers and predict future teacher performance. This article studies the year-to-year variability in value-added measures for elementary and middle school mathematics teachers from five large…
Descriptors: Teacher Characteristics, Mathematics Achievement, Sampling, Middle School Teachers