Showing all 7 results
Xin Qiao; Akihito Kamata; Yusuf Kara; Cornelis Potgieter; Joseph Nese – Grantee Submission, 2023
In this article, the beta-binomial model for count data is proposed and demonstrated in the context of oral reading fluency (ORF) assessment, where the number of words read correctly (WRC) is of interest. Existing studies have adopted the binomial model for count data in similar assessment scenarios. The beta-binomial model,…
Descriptors: Oral Reading, Reading Fluency, Bayesian Statistics, Markov Processes
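A minimal sketch of the contrast this abstract draws, assuming a standard beta-binomial parameterization rather than the authors' exact specification: the binomial model fixes the variance of WRC at np(1-p), while the beta-binomial mixes the success probability over a Beta distribution and so accommodates extra-binomial (overdispersed) variation.

\[
X \mid p \sim \mathrm{Binomial}(n, p), \qquad p \sim \mathrm{Beta}(\alpha, \beta),
\]
\[
P(X = x) = \binom{n}{x}\,\frac{B(x+\alpha,\; n-x+\beta)}{B(\alpha,\beta)},
\qquad
\mathrm{Var}(X) = n\mu(1-\mu)\bigl[1+(n-1)\rho\bigr],
\]
with \(\mu = \alpha/(\alpha+\beta)\) and overdispersion parameter \(\rho = 1/(\alpha+\beta+1)\); here \(n\) would be the number of words attempted in the passage and \(X\) the WRC count.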
Peer reviewed
Trendtel, Matthias; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
A multidimensional Bayesian item response model is proposed for modeling item position effects. The first dimension corresponds to the ability that is to be measured; the second dimension represents a factor that allows for individual differences in item position effects called persistence. This model allows for nonlinear item position effects on…
Descriptors: Bayesian Statistics, Item Response Theory, Test Items, Test Format
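As an illustration only (the article's exact specification may differ), a two-dimensional model of the kind described can be written with an ability dimension and a person-specific persistence dimension that enters through a function of item position:

\[
\operatorname{logit} P(X_{pi} = 1) = \theta_p - b_i + \zeta_p\, f(\mathrm{pos}_i),
\]
where \(\theta_p\) is the ability to be measured, \(\zeta_p\) is person \(p\)'s persistence (capturing individual differences in position effects), \(b_i\) is item difficulty, and \(f(\mathrm{pos}_i)\) is a possibly nonlinear function of item \(i\)'s position in the booklet.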
Peer reviewed
Hung, Lai-Fa – Applied Psychological Measurement, 2012
Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
Descriptors: Social Science Research, Markov Processes, Reading Tests, Social Sciences
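To make the overdispersion point concrete (a textbook sketch, not necessarily the hierarchical extension developed in the article): the Poisson model forces the mean and variance to coincide, whereas mixing the Poisson rate over a gamma distribution yields a negative binomial count model whose variance exceeds its mean.

\[
X \sim \mathrm{Poisson}(\lambda): \quad E(X) = \mathrm{Var}(X) = \lambda,
\]
\[
X \mid \lambda \sim \mathrm{Poisson}(\lambda),\; \lambda \sim \mathrm{Gamma}(r,\, r/\mu)
\;\Longrightarrow\;
E(X) = \mu, \quad \mathrm{Var}(X) = \mu + \mu^2/r > \mu.
\]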
Peer reviewed
Li, Hongli; Suen, Hoi K. – Educational Assessment, 2013
Cognitive diagnostic analyses have been advocated as methods that allow an assessment to function as a formative assessment to inform instruction. To use this approach, it is necessary to first identify the skills required for each item in the test, known as a Q-matrix. However, because the construct being tested and the underlying cognitive…
Descriptors: Reading Tests, Reading Comprehension, Cognitive Processes, Models
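For readers unfamiliar with the term, a Q-matrix is a binary item-by-skill incidence matrix; the toy example below uses three hypothetical reading skills (vocabulary, inference, summarizing) invented for illustration, not the skills identified in the study.

\[
Q =
\begin{pmatrix}
1 & 0 & 0 \\
1 & 1 & 0 \\
0 & 1 & 1
\end{pmatrix},
\]
where rows index items, columns index skills, and \(q_{jk} = 1\) indicates that item \(j\) requires skill \(k\).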
Peer reviewed
Ito, Kyoko; Sykes, Robert C.; Yao, Lihua – Applied Measurement in Education, 2008
Reading and Mathematics tests of multiple-choice items for grades Kindergarten through 9 were vertically scaled using the three-parameter logistic model and two different scaling procedures: concurrent and separate by grade groups. Item parameters were estimated using Markov chain Monte Carlo methodology while fixing the grade 4 population…
Descriptors: Grades (Scholastic), Markov Processes, Mathematics Tests, Item Response Theory
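The item model named in the abstract is the standard three-parameter logistic (3PL) model; the scaling and linking details beyond the formula are only sketched here.

\[
P(X_{ij} = 1 \mid \theta_i) = c_j + (1 - c_j)\,\frac{1}{1 + \exp\!\bigl[-a_j(\theta_i - b_j)\bigr]},
\]
with discrimination \(a_j\), difficulty \(b_j\), and pseudo-guessing \(c_j\). In concurrent calibration all grades are estimated in a single run on a common scale; in separate calibration each grade group is estimated on its own and the scales are then linked, with one grade's population distribution fixed as the reference (grade 4, as the abstract indicates).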
Peer reviewed; full text available on ERIC
Campuzano, Larissa; Dynarski, Mark; Agodini, Roberto; Rall, Kristina – National Center for Education Evaluation and Regional Assistance, 2009
In the No Child Left Behind Act (NCLB), Congress called for the U.S. Department of Education (ED) to conduct a rigorous study of the conditions and practices under which educational technology is effective in increasing student academic achievement. A 2007 report presenting study findings for the 2004-2005 school year indicated that, after one…
Descriptors: Teacher Characteristics, Federal Legislation, Academic Achievement, Computer Software
Peer reviewed; full text available on ERIC
Johnson, Matthew S.; Jenkins, Frank – ETS Research Report Series, 2005
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
Descriptors: Bayesian Statistics, Hierarchical Linear Modeling, National Competency Tests, Sampling
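One standard way to account for a non-simple-random sampling design in an estimating model (a generic pseudo-likelihood sketch, not necessarily the estimator examined in the report) is to weight each sampled examinee's contribution by the inverse of their inclusion probability:

\[
\ell_w(\boldsymbol{\gamma}) \;=\; \sum_{i \in s} w_i \,\log f(y_i \mid \boldsymbol{\gamma}),
\qquad w_i = 1/\pi_i,
\]
where \(s\) is the sample, \(\pi_i\) is examinee \(i\)'s probability of inclusion under the design, and \(\boldsymbol{\gamma}\) collects the model parameters.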