Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 4 |
| Since 2017 (last 10 years) | 13 |
| Since 2007 (last 20 years) | 21 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Data Analysis | 35 |
| Test Items | 35 |
| Simulation | 15 |
| Item Response Theory | 14 |
| Computation | 10 |
| Regression (Statistics) | 9 |
| Maximum Likelihood Statistics | 8 |
| Models | 8 |
| Error of Measurement | 7 |
| Measurement | 7 |
| Psychometrics | 7 |
Author
| Author | Count |
| --- | --- |
| Bolt, Daniel M. | 2 |
| Cho, Sun-Joo | 2 |
| De Boeck, Paul | 2 |
| Goodwin, Amanda | 2 |
| Karabatsos, George | 2 |
| Naveiras, Matthew | 2 |
| Arenson, Ethan A. | 1 |
| Ban, Jae-Chun | 1 |
| Carbonaro, Michael | 1 |
| Carlson, James E. | 1 |
| Keller, Brian T. | 1 |
Education Level
| Education level | Count |
| --- | --- |
| Secondary Education | 5 |
| Elementary Education | 4 |
| Junior High Schools | 4 |
| Middle Schools | 4 |
| Elementary Secondary Education | 3 |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Grade 12 | 1 |
| Grade 4 | 1 |
| Grade 8 | 1 |
| High Schools | 1 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 6 |
| Teachers | 2 |
| Practitioners | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| National Assessment of… | 3 |
| Trends in International… | 2 |
| ACT Assessment | 1 |
| Big Five Inventory | 1 |
| Computer Attitude Scale | 1 |
| National Longitudinal Study… | 1 |
Alacam, Egamaria; Enders, Craig K.; Du, Han; Keller, Brian T. – Grantee Submission, 2023
Composite scores are an exceptionally important psychometric tool for behavioral science research applications. A prototypical example occurs with self-report data, where researchers routinely use questionnaires with multiple items that tap into different features of a target construct. Item-level missing data are endemic to composite score…
Descriptors: Regression (Statistics), Scores, Psychometrics, Test Items
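The item-level missingness problem this abstract raises can be illustrated with a minimal sketch (illustrative only, not the authors' method): two naive ways of forming a composite from a questionnaire with a skipped item. The function names and the five-item Likert example are hypothetical.

```python
# Illustrative sketch: handling item-level missing data when forming a
# composite score. None marks a skipped questionnaire item.

def composite_listwise(responses):
    """Sum score; any missing item invalidates the whole case."""
    if any(r is None for r in responses):
        return None
    return sum(responses)

def composite_prorated(responses, min_items=1):
    """Prorated sum: mean of observed items scaled to the full test length."""
    observed = [r for r in responses if r is not None]
    if len(observed) < min_items:
        return None
    return sum(observed) / len(observed) * len(responses)

# A respondent who skipped one of five items on a 1-5 scale:
answers = [4, 5, None, 3, 4]
print(composite_listwise(answers))   # None: the case is dropped entirely
print(composite_prorated(answers))   # 20.0: mean of the 4 observed items x 5
```

Proration is the common quick fix; the regression-based approaches the entry alludes to aim to do better than both of these baselines.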
Cho, Sun-Joo; Goodwin, Amanda; Naveiras, Matthew; De Boeck, Paul – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Cho, Sun-Joo; Goodwin, Amanda; Naveiras, Matthew; De Boeck, Paul – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Cui, Chengyu; Wang, Chun; Xu, Gongjun – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Bolch, Charlotte A.; Jacobbe, Tim – Numeracy, 2019
Statistical literacy refers to two interrelated components: people's ability to interpret and critically evaluate statistical information, and their ability to discuss or communicate their reactions to statistical information. The ability to read and interpret graphical displays is part of statistical literacy because much of the statistical…
Descriptors: Visual Aids, Data Analysis, Statistics Education, Introductory Courses
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
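The iterate-and-cleanse idea described above can be sketched in a few lines (an illustration of the general strategy, not the authors' implementation): flag respondents whose patterns conflict with the estimated item difficulties, drop them, re-estimate, and repeat until no one is flagged. The Guttman-error count stands in here for a proper person-fit statistic, and the cutoff is arbitrary.

```python
# Illustrative sketch of iterative careless-responder cleansing using a
# crude person-fit measure (Guttman error count). Rows are 0/1 item scores.

def item_pvalues(data):
    """Proportion correct per item (a crude difficulty estimate)."""
    n = len(data)
    return [sum(row[j] for row in data) / n for j in range(len(data[0]))]

def guttman_errors(row, pvals):
    """Count item pairs where an easier item is missed but a harder one passed."""
    order = sorted(range(len(pvals)), key=lambda j: -pvals[j])  # easy -> hard
    errs = 0
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            if row[order[a]] == 0 and row[order[b]] == 1:
                errs += 1
    return errs

def iterative_cleanse(data, cutoff=3):
    """Repeatedly remove respondents whose misfit exceeds the cutoff."""
    kept = list(data)
    while True:
        pvals = item_pvalues(kept)
        flagged = [r for r in kept if guttman_errors(r, pvals) > cutoff]
        if not flagged:
            return kept
        kept = [r for r in kept if guttman_errors(r, pvals) <= cutoff]

# Three coherent patterns plus one reversed (careless-looking) pattern:
responses = [
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 1, 1],  # fails easy items, passes hard ones
]
print(len(iterative_cleanse(responses)))  # 3: the reversed pattern is removed
```

The point of iterating is that difficulties are re-estimated on the cleansed data, so aberrant respondents stop contaminating the very statistics used to detect them.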
Arenson, Ethan A.; Karabatsos, George – Grantee Submission, 2017
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, both of which are strictly monotone functions of person test ability. Such assumptions can be overly restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Descriptors: Bayesian Statistics, Item Response Theory, Nonparametric Statistics, Models
Drabinová, Adéla; Martinková, Patrícia – Journal of Educational Measurement, 2017
In this article we present a general approach, not relying on item response theory models (non-IRT), to detect differential item functioning (DIF) in dichotomous items in the presence of guessing. The proposed nonlinear regression (NLR) procedure for DIF detection is an extension of a method based on logistic regression. As a non-IRT approach, NLR can…
Descriptors: Test Items, Regression (Statistics), Guessing (Tests), Identification
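The kind of guessing-augmented regression model this entry describes can be sketched as follows (a plausible illustration, not the paper's exact specification): a logistic curve in the total score with a lower asymptote c for guessing, plus group terms whose nonzero estimates would signal uniform or non-uniform DIF. All parameter names here are hypothetical.

```python
import math

# Illustrative nonlinear-regression item model for DIF screening:
# P(correct) rises logistically with total score, with lower asymptote c
# for guessing. group: 0 = reference, 1 = focal; b2 and b3 shift the
# focal group's intercept and slope (uniform / non-uniform DIF).

def nlr_prob(score, group, b0, b1, b2=0.0, b3=0.0, c=0.0):
    """P(correct | total score, group) under the guessing-augmented model."""
    logit = b0 + b1 * score + (b2 + b3 * score) * group
    return c + (1.0 - c) / (1.0 + math.exp(-logit))

# With b2 = b3 = 0 the two groups share a single curve, i.e. no DIF:
p_ref = nlr_prob(score=0.5, group=0, b0=-1.0, b1=2.0, c=0.2)
p_foc = nlr_prob(score=0.5, group=1, b0=-1.0, b1=2.0, c=0.2)
print(p_ref, p_foc)  # both 0.6
```

DIF detection then amounts to testing whether the group terms improve fit, e.g. by a likelihood-ratio comparison of the constrained and unconstrained models.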
Sinharay, Sandip – Journal of Educational Measurement, 2017
Person-fit assessment (PFA) is concerned with uncovering atypical test performance as reflected in the pattern of scores on individual items on a test. Existing person-fit statistics (PFSs) include both parametric and nonparametric statistics. Comparison of PFSs has been a popular research topic in PFA, but almost all comparisons have employed…
Descriptors: Goodness of Fit, Testing, Test Items, Scores
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis
Fife, James H.; James, Kofi; Peters, Stephanie – ETS Research Report Series, 2020
The concept of variability is central to statistics. In this research report, we review mathematics education research on variability and, based on that review and on feedback from an expert panel, propose a learning progression (LP) for variability. The structure of the proposed LP consists of 5 levels of sophistication in understanding…
Descriptors: Mathematics Education, Statistics Education, Feedback (Response), Research Reports
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
Mohr, Doris, Ed.; Walcott, Crystal, Ed.; Kloosterman, Peter, Ed. – National Council of Teachers of Mathematics, 2019
"Mathematical Thinking: From Assessment Items to Challenging Tasks" is a compilation of 36 problem-based lessons that encourage students to engage in productive struggle and deep thinking. Its 36 full-length lessons for grades 2-8 are each inspired by an actual test item from the National Assessment of Educational Progress (NAEP).…
Descriptors: Problem Based Learning, Test Items, Elementary School Mathematics, Middle School Mathematics
Maydeu-Olivares, Alberto; Montano, Rosa – Psychometrika, 2013
We investigate the performance of three statistics, R₁, R₂ (Glas in "Psychometrika" 53:525-546, 1988), and M₂ (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005, "Psychometrika" 71:713-732, 2006) to assess the overall fit of a one-parameter logistic model…
Descriptors: Foreign Countries, Item Response Theory, Statistics, Data Analysis
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
