Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
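The simple-structure case described in this abstract — every item measures exactly one latent variable — can be illustrated with a small sketch. The 2PL response function and the per-dimension item assignment below are standard IRT notation, not taken from the paper itself; the item parameters are hypothetical.

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic (2PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Simple structure: each item loads on exactly one latent variable,
# so the MIRT loading matrix is block-diagonal and each block can be
# calibrated as a separate unidimensional (UIRT) model.
items = [
    {"dim": 0, "a": 1.2, "b": -0.5},  # measures dimension 0 only
    {"dim": 0, "a": 0.9, "b": 0.3},
    {"dim": 1, "a": 1.5, "b": 0.0},   # measures dimension 1 only
]
theta = [0.4, -0.2]  # one ability value per dimension

# Under simple structure, an item's probability depends only on the
# ability of its own dimension:
probs = [p_2pl(theta[it["dim"]], it["a"], it["b"]) for it in items]
```

Because no item's probability involves more than one ability, the likelihood factors by dimension, which is the intuition behind calibrating separate UIRT models.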
Camilli, Gregory; Fox, Jean-Paul – Journal of Educational and Behavioral Statistics, 2015
An aggregation strategy is proposed as a potential way to address practical limitations related to computing resources for two-level multidimensional item response theory (MIRT) models with large data sets. The aggregate model is derived by integration of the normal ogive model, and an adaptation of the stochastic approximation expectation maximization…
Descriptors: Factor Analysis, Item Response Theory, Grade 4, Simulation
Sen, Sedat – International Journal of Testing, 2018
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Maximum Likelihood Statistics
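The mixed Rasch model referenced in this abstract combines the Rasch response function with latent classes, each holding its own item difficulties. A minimal, simplified sketch of the marginal response probability is below; the class weights and difficulties are hypothetical, and this ignores the person-classification machinery of a full mixed Rasch estimation.

```python
import math

def rasch_p(theta, b):
    """Rasch model: success probability depends only on theta - b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mixed_rasch_p(theta, b_by_class, weights):
    """Marginal probability under a mixture of latent classes, each
    with its own difficulty for the same item (weights sum to 1)."""
    return sum(w * rasch_p(theta, b) for w, b in zip(weights, b_by_class))

# Two latent classes with different difficulties for one item:
p = mixed_rasch_p(0.0, b_by_class=[-0.5, 0.5], weights=[0.6, 0.4])
```

When the ability distribution is non-normal, the estimator can mistake distributional shape for extra classes, which is the over-extraction problem the study examines.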
Falk, Carl F.; Cai, Li – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2015
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Descriptors: Guessing (Tests), Item Response Theory, Mathematics Instruction, Mathematics Tests
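The model in this abstract replaces the linear term of the three-parameter logistic (3PL) with a monotonic polynomial while keeping a lower asymptote. The sketch below shows the general shape: a degree-1 polynomial recovers the familiar 3PL, and a cubic with a positive derivative stays a valid (monotone) item response function. Coefficient values are hypothetical, and this omits the paper's constrained parameterization that guarantees monotonicity during estimation.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_monopoly(theta, c, coefs):
    """Lower asymptote c plus a logistic of a polynomial in theta.
    coefs = [b0, b1, b2, ...]; they must be chosen so the polynomial
    is monotonically increasing for a valid item response function."""
    m = sum(b * theta**k for k, b in enumerate(coefs))
    return c + (1.0 - c) * logistic(m)

# Degree 1 recovers the 3PL with a = 1.1, location 0.2, asymptote 0.2:
p3pl = p_monopoly(0.2, 0.2, [-1.1 * 0.2, 1.1])  # at theta = location

# A cubic m(theta) = theta + 0.2*theta**3 has derivative 1 + 0.6*theta**2 > 0,
# so the resulting curve is monotone over the ability range:
grid = [x / 10.0 for x in range(-30, 31)]
probs = [p_monopoly(x, 0.15, [0.0, 1.0, 0.0, 0.2]) for x in grid]
increasing = all(p1 <= p2 for p1, p2 in zip(probs, probs[1:]))
```

At theta equal to the location parameter the degree-1 curve sits halfway between the asymptote and 1, i.e., c + (1 − c)/2.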
Ye, Meng; Xin, Tao – Educational and Psychological Measurement, 2014
The authors explored the effects of drifting common items on vertical scaling within the higher order framework of item parameter drift (IPD). The results showed that if IPD occurred between a pair of test levels, the scaling performance started to deviate from the ideal state, as indicated by bias of scaling. When there were two items drifting…
Descriptors: Scaling, Test Items, Equated Scores, Achievement Gains
Murphy, Daniel L.; Beretvas, S. Natasha – Applied Measurement in Education, 2015
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using classical test theory (CTT) versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Descriptors: Teacher Effectiveness, Comparative Analysis, Hierarchical Linear Modeling, Test Theory
Zhang, Jinming – Journal of Educational and Behavioral Statistics, 2012
The impact of uncertainty about item parameters on test information functions is investigated. The information function of a test is one of the most important tools in item response theory (IRT). Inaccuracy in the estimation of test information can have substantial consequences on data analyses based on IRT. In this article, the major part (called…
Descriptors: Item Response Theory, Tests, Accuracy, Data Analysis
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Computation, Test Reliability
Stacy, Brian; Reckase, Mark; Wooldridge, Jeffrey; Guarino, Cassandra – Education Policy Center at Michigan State University, 2013
This paper investigates how the precision and stability of a teacher's value-added estimate relates to the characteristics of the teacher's students. Using a large administrative data set and a variety of teacher value-added estimators, it finds that the stability over time of teacher value-added estimates can depend on the previous achievement…
Descriptors: Teacher Effectiveness, Teacher Competencies, Academic Achievement, Computation
von Davier, Matthias; Sinharay, Sandip – Educational Testing Service, 2009
This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…
Descriptors: Item Response Theory, Regression (Statistics), Models, Methods
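The latent regression structure described here treats each examinee's ability as a normal draw whose mean is a linear function of covariates. A minimal simulation sketch of that generating model is below; the covariate layout and coefficient values are hypothetical, and the estimation machinery (stochastic approximation EM with a Metropolis-Hastings sampler) is not reproduced.

```python
import random

random.seed(0)  # reproducible draws for the sketch

def simulate_latent_regression(x_rows, beta, sigma):
    """Latent regression: theta_i ~ Normal(x_i @ beta, sigma^2),
    i.e., covariates predict the mean of each person's ability."""
    thetas = []
    for x in x_rows:
        mean = sum(xi * bi for xi, bi in zip(x, beta))
        thetas.append(random.gauss(mean, sigma))
    return thetas

# Two covariates per person (intercept and a group indicator):
x_rows = [(1.0, 0.0), (1.0, 1.0), (1.0, 1.0)]
thetas = simulate_latent_regression(x_rows, beta=(0.0, 0.5), sigma=1.0)
```

In a full model the simulated abilities would then drive IRT item responses, making ability a second-level latent variable predicted by the covariates.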
Kahraman, Nilufer; De Boeck, Paul; Janssen, Rianne – International Journal of Testing, 2009
This study introduces an approach for modeling multidimensional response data with construct-relevant group and domain factors. The item-level parameter estimation process is extended to incorporate the refined effects of test dimension and group factors. Differences in item performance across groups are evaluated, distinguishing two levels of…
Descriptors: Test Bias, Test Items, Groups, Interaction
Liu, Kimy; Ketterlin-Geller, Leanne R.; Yovanoff, Paul; Tindal, Gerald – Behavioral Research and Teaching, 2008
BRT Math Screening Measures focus on students' mathematics performance in grade-level standards for students in grades 1-8. A total of 24 test forms are available with three test forms per grade corresponding to fall, winter, and spring testing periods. Each form contains computation problems and application problems. BRT Math Screening Measures…
Descriptors: Test Items, Test Format, Test Construction, Item Response Theory
Liu, Kimy; Sundstrom-Hebert, Krystal; Ketterlin-Geller, Leanne R.; Tindal, Gerald – Behavioral Research and Teaching, 2008
The purpose of this study was to document the instrument development of maze measures for grades 3-8. Each maze passage contained twelve omitted words that students filled in by choosing the best-fit word from among the provided options. In this technical report, we describe the process of creating, reviewing, and pilot testing the maze measures.…
Descriptors: Test Construction, Cloze Procedure, Multiple Choice Tests, Reading Tests