Showing all 10 results
Peer reviewed
Hall, Garret J.; Kaplan, David; Albers, Craig A. – Learning Disabilities Research & Practice, 2022
Bayesian latent change score modeling (LCSM) was used to compare models of triannual (fall, winter, spring) change on elementary math computation and concepts/applications curriculum-based measures. Data were collected from elementary students in Grades 2-5, approximately 700 to 850 students in each grade (47%-54% female; 78%-79% White, 10%-11%…
Descriptors: Learning Disabilities, Students with Disabilities, Elementary School Students, Mathematics Skills
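As background for this entry, a minimal univariate sketch of the latent change score structure is given below; the notation is generic and is not the authors' exact Bayesian specification.

```latex
% Measurement model: observed score of student i at occasion t
y_{ti} = \eta_{ti} + \varepsilon_{ti}
% Latent change between adjacent occasions (fall-winter, winter-spring)
\Delta\eta_{ti} = \eta_{ti} - \eta_{(t-1)i}
% Change driven by a constant (slope) component and by prior status
\Delta\eta_{ti} = s_i + \beta\,\eta_{(t-1)i} + \zeta_{ti}
```

In a Bayesian fit, priors are placed on the growth parameters, and competing change structures (e.g., constant change only vs. change proportional to prior status) can be compared across the fall, winter, and spring curriculum-based measures.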
Peer reviewed
Kim, Dan; Opfer, John E. – Developmental Psychology, 2017
Representations of numerical value have been assessed by using bounded (e.g., 0-1,000) and unbounded (e.g., 0-?) number-line tasks, with considerable debate regarding whether one or both tasks elicit unique cognitive strategies (e.g., addition or subtraction) and require unique cognitive models. To test this, we examined how well a mixed log-linear…
Descriptors: Computation, Numbers, Children, Cognitive Development
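As a rough illustration of the kind of model named here, the sketch below fits one common mixed log-linear parameterization (a weight lambda blending a rescaled logarithmic curve with a linear curve) to hypothetical estimates on a bounded 0-1,000 number line; the functional form, parameter names, and data are assumptions, not the authors' specification.

```python
# Minimal sketch, not the authors' code: fit a mixed log-linear model to
# hypothetical bounded number-line estimates. lam = 1 gives a purely
# logarithmic pattern, lam = 0 a purely linear one.
import numpy as np
from scipy.optimize import curve_fit

def mixed_log_linear(x, a, lam, upper=1000.0):
    log_part = upper * np.log(x) / np.log(upper)  # log curve rescaled to end at `upper`
    lin_part = x
    return a * (lam * log_part + (1.0 - lam) * lin_part)

# Hypothetical child placements of presented numerals on a 0-1000 line
presented = np.array([2, 5, 18, 34, 56, 122, 247, 390, 511, 725, 938], float)
estimates = np.array([90, 160, 300, 360, 420, 540, 640, 700, 760, 840, 960], float)

(a_hat, lam_hat), _ = curve_fit(mixed_log_linear, presented, estimates, p0=[1.0, 0.5])
print(f"scaling a = {a_hat:.2f}, logarithmicity lambda = {lam_hat:.2f}")
```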
Choi, Kilchan; Kim, Jinok – Journal of Educational and Behavioral Statistics, 2019
This article proposes a latent variable regression four-level hierarchical model (LVR-HM4) that uses a fully Bayesian approach. Using multisite multiple-cohort longitudinal data, for example, annual assessment scores over grades for students who are nested within cohorts within schools, the LVR-HM4 attempts to simultaneously model two types of…
Descriptors: Regression (Statistics), Hierarchical Linear Modeling, Longitudinal Studies, Cohort Analysis
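To make the nesting concrete, a sketch of the within-student (level-1) growth portion of such a four-level structure is shown below; the notation is assumed for illustration and is not the LVR-HM4 parameterization itself.

```latex
% Assessment score at occasion t for student i in cohort j in school k
y_{tijk} = \pi_{0ijk} + \pi_{1ijk}\,\mathrm{grade}_{tijk} + e_{tijk},
\qquad e_{tijk} \sim N(0,\sigma^2)
```

The intercepts and growth rates \pi_{0ijk} and \pi_{1ijk} are then allowed to vary randomly across students, cohorts, and schools at the higher levels, which is what lets a model of this kind separate student-, cohort-, and school-level sources of variation.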
Peer reviewed
Pohl, Steffi; Gräfe, Linda; Rose, Norman – Educational and Psychological Measurement, 2014
Data from competence tests usually show a number of missing responses on test items due to both omitted and not-reached items. Different approaches for dealing with missing responses exist, and there are no clear guidelines on which of those to use. While classical approaches rely on an ignorable missing data mechanism, the most recently developed…
Descriptors: Test Items, Achievement Tests, Item Response Theory, Models
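The classical treatments referred to can be shown on a toy scored-response matrix; the sketch below uses hypothetical data and simple proportion-correct scoring rather than the IRT-based approaches examined in the article.

```python
# Minimal sketch (hypothetical data): two classical treatments of missing
# item responses before scoring.
import numpy as np

# 1 = correct, 0 = incorrect, nan = omitted or not reached
resp = np.array([
    [1, 0,      1, np.nan, np.nan],
    [1, 1,      0, 1,      0],
    [0, np.nan, 1, 1,      np.nan],
], dtype=float)

score_missing_as_wrong = np.nan_to_num(resp, nan=0.0).mean(axis=1)  # missing counted as 0
score_missing_ignored  = np.nanmean(resp, axis=1)                   # missing dropped from the denominator

print("proportion correct, missing scored as incorrect:", score_missing_as_wrong)
print("proportion correct, missing ignored:            ", score_missing_ignored)
```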
Peer reviewed
May, Henry – Society for Research on Educational Effectiveness, 2014
Interest in variation in program impacts--How big is it? What might explain it?--has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random or fixed effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…
Descriptors: Educational Research, Program Effectiveness, Research Problems, Computation
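The shrinkage issue at stake can be seen in a small empirical Bayes calculation; the site estimates, standard errors, and assumed cross-site impact variance below are illustrative numbers, not results from the paper.

```python
# Minimal sketch: empirical Bayes shrinkage of site-level impact estimates.
# Noisier sites are pulled harder toward the overall mean, so the spread of
# the shrunken estimates understates the true cross-site variation.
import numpy as np

site_impacts = np.array([0.35, -0.10, 0.20, 0.05, 0.50])   # raw site impact estimates
site_se      = np.array([0.15,  0.25, 0.10, 0.30, 0.20])   # their standard errors
tau2 = 0.02                                                 # assumed true between-site variance

grand_mean = np.average(site_impacts, weights=1.0 / (tau2 + site_se**2))
shrinkage  = tau2 / (tau2 + site_se**2)
eb = shrinkage * site_impacts + (1.0 - shrinkage) * grand_mean

print("SD of raw site estimates:     ", round(float(site_impacts.std(ddof=1)), 3))
print("SD of shrunken (EB) estimates:", round(float(eb.std(ddof=1)), 3))
```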
Peer reviewed
Schochet, Peter Z.; Chiang, Hanley S. – Journal of Educational and Behavioral Statistics, 2013
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
Descriptors: Classification, Measurement, Elementary School Teachers, Elementary Schools
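A toy version of this kind of error-rate calculation is sketched below, for a system that flags a teacher whose estimated value-added is significantly below zero in a one-sided normal test; the standard error, true effect, and significance level are assumptions, not the authors' formulas.

```python
# Minimal sketch (illustrative numbers): misclassification rates under a
# hypothesis-testing performance system.
from scipy.stats import norm

alpha = 0.05              # one-sided significance level for flagging
se = 0.12                 # assumed SE of a teacher's value-added estimate (student SD units)
true_effect = -0.20       # a teacher who is truly 0.20 SD below average
z_crit = norm.ppf(alpha)  # negative critical value for the one-sided test

# False positive: a truly average teacher (true effect 0) is flagged anyway
false_positive = norm.cdf(z_crit)                          # equals alpha by construction
# False negative: the truly low-performing teacher is not flagged
false_negative = 1.0 - norm.cdf(z_crit - true_effect / se)

print(f"false positive rate: {false_positive:.3f}")
print(f"false negative rate: {false_negative:.3f}")
```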
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Computation, Test Reliability
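As a rough illustration of the model class described, the sketch below combines a standard two-parameter logistic item response function with a second-order (hierarchical) trait; the notation is assumed, and the authors' general class is broader.

```latex
% 2PL response function for item i and the first-order trait of person p in domain d
P(y_{pi} = 1 \mid \theta_{pd}) = \frac{\exp\{a_i(\theta_{pd} - b_i)\}}{1 + \exp\{a_i(\theta_{pd} - b_i)\}}
% First-order traits load on a second-order trait
\theta_{pd} = \lambda_d\,\xi_p + \varepsilon_{pd}
```

For multilevel data, the person-level trait \xi_p can additionally be decomposed into a cluster-level component and a within-cluster component when persons are nested in groups such as classes or schools.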
Peer reviewed
Petscher, Yaacov; Kershaw, Sarah; Koon, Sharon; Foorman, Barbara R. – Regional Educational Laboratory Southeast, 2014
Districts and schools use progress monitoring to assess student progress, to identify students who fail to respond to intervention, and to further adapt instruction to student needs. Researchers and practitioners often use progress monitoring data to estimate student achievement growth (slope) and evaluate changes in performance over time for…
Descriptors: Reading Comprehension, Reading Achievement, Elementary School Students, Secondary School Students
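The growth (slope) estimates referred to here are often simple ordinary least squares slopes of score on time; a minimal sketch with hypothetical weekly scores for one student:

```python
# Minimal sketch (hypothetical data): a student's progress monitoring growth
# rate estimated as the OLS slope of score on week.
import numpy as np

weeks  = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
scores = np.array([42, 45, 44, 49, 52, 51, 55, 58], dtype=float)  # e.g., words correct per minute

slope, intercept = np.polyfit(weeks, scores, deg=1)
print(f"estimated growth: {slope:.2f} points per week (intercept {intercept:.1f})")
```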
Peer reviewed
Petscher, Yaacov; Kershaw, Sarah; Koon, Sharon; Foorman, Barbara R. – Regional Educational Laboratory Southeast, 2014
Districts and schools use progress monitoring to assess student progress, to identify students who fail to respond to intervention, and to further adapt instruction to student needs. Researchers and practitioners often use progress monitoring data to estimate student achievement growth (slope) and evaluate changes in performance over time for…
Descriptors: Response to Intervention, Achievement Gains, High Stakes Tests, Prediction
Peer reviewed
Palardy, Gregory J. – Educational and Psychological Measurement, 2010
This article examines the multilevel linear crossed random effects growth model for estimating teacher and school effects from repeated measurements of student achievement. Results suggest that even a small degree of unmodeled nonlinearity can result in a substantial upward bias in the magnitude of the teacher effect, which raises concerns about…
Descriptors: Computation, Models, Statistical Analysis, Academic Achievement
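A sketch of the general form of a crossed random effects growth model of this type is given below (linear student growth, year-specific teacher effects, and a school effect; the notation is assumed for illustration):

```latex
% Achievement of student i at time t; w_{tij} = 1 if teacher j taught student i in year t
y_{ti} = \pi_{0i} + \pi_{1i}\,t + \sum_{j} w_{tij}\,u_j + v_{s(i)} + e_{ti}
```

Here u_j is the random teacher effect and v_{s(i)} the random effect of student i's school; the article's concern is that if the true trajectory is not linear in t, the unmodeled curvature can be absorbed into the teacher terms and inflate the estimated teacher effect.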