Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
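For context on the criteria the abstract names, the sketch below shows the standard closed forms of AIC, AICc, BIC, and CAIC computed from a fitted model's maximized log-likelihood, and how minimizing each criterion would pick a number of latent classes. The log-likelihoods, parameter counts, and sample size are hypothetical placeholders, not values from the study, and the other indices compared in the article (e.g., Draper's) are not reproduced here.

```python
import math

def fit_indices(log_lik, n_params, n_obs):
    """Standard information criteria from a model's maximized log-likelihood."""
    aic = -2 * log_lik + 2 * n_params
    aicc = aic + (2 * n_params * (n_params + 1)) / (n_obs - n_params - 1)
    bic = -2 * log_lik + n_params * math.log(n_obs)
    caic = -2 * log_lik + n_params * (math.log(n_obs) + 1)
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}

# Hypothetical log-likelihood and parameter count for 1-, 2-, and 3-class
# mixture IRT solutions fit to n = 1000 examinees (illustrative values only).
candidates = {1: (-10450.2, 40), 2: (-10275.8, 81), 3: (-10268.9, 122)}
n_obs = 1000

for criterion in ("AIC", "AICc", "BIC", "CAIC"):
    best = min(candidates, key=lambda g: fit_indices(*candidates[g], n_obs)[criterion])
    print(f"{criterion} selects the {best}-class solution")
```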
Mark L. Davison; David J. Weiss; Ozge Ersan; Joseph N. DeWeese; Gina Biancarosa; Patrick C. Kennedy – Grantee Submission, 2021
MOCCA is an online assessment of inferential reading comprehension for students in 3rd through 6th grades. It can be used to identify good readers and, for struggling readers, identify those who overly rely on either a Paraphrasing process or an Elaborating process when their comprehension is incorrect. Here a propensity to over-rely on…
Descriptors: Reading Tests, Computer Assisted Testing, Reading Comprehension, Elementary School Students
Holmes Finch – Applied Psychological Measurement, 2005
This study compares the ability of the multiple indicators, multiple causes (MIMIC) confirmatory factor analysis model to correctly identify cases of differential item functioning (DIF) with more established methods. Although the MIMIC model might have application in identifying DIF for multiple grouping variables, there has been little…
Descriptors: Identification, Factor Analysis, Test Bias, Models
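A full MIMIC model requires a structural equation modeling package, so the sketch below illustrates the general idea of flagging differential item functioning with a simpler, widely used alternative: a logistic-regression screen that tests whether group membership predicts an item response after conditioning on an ability proxy. This is not the MIMIC approach compared in the article; the data are synthetic and the variable names are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic data: an ability proxy, a grouping variable, and one studied item.
n = 2000
ability = rng.normal(size=n)
group = rng.integers(0, 2, size=n)                  # reference = 0, focal = 1
# Item with uniform DIF: the focal group is disadvantaged at equal ability.
p_correct = 1 / (1 + np.exp(-(1.2 * ability - 0.5 * group)))
item = rng.binomial(1, p_correct)

# Compare a model with ability only to a model adding group membership;
# a large likelihood-ratio statistic flags potential DIF on this item.
X0 = sm.add_constant(np.column_stack([ability]))
X1 = sm.add_constant(np.column_stack([ability, group]))
m0 = sm.Logit(item, X0).fit(disp=False)
m1 = sm.Logit(item, X1).fit(disp=False)
lr_stat = 2 * (m1.llf - m0.llf)                     # ~ chi-square with 1 df
print(f"Likelihood-ratio statistic for the group term: {lr_stat:.2f}")
```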
