| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 4 |
| Since 2007 (last 20 years) | 18 |
| Descriptor | Results |
| --- | --- |
| Accuracy | 18 |
| Maximum Likelihood Statistics | 18 |
| Monte Carlo Methods | 18 |
| Computation | 9 |
| Models | 9 |
| Comparative Analysis | 8 |
| Item Response Theory | 8 |
| Correlation | 6 |
| Error of Measurement | 5 |
| Sample Size | 5 |
| Simulation | 4 |
| Author | Results |
| --- | --- |
| Cai, Li | 2 |
| Monroe, Scott | 2 |
| Pfaffel, Andreas | 2 |
| Spiel, Christiane | 2 |
| Becker, Betsy Jane | 1 |
| Cohen, Allan S. | 1 |
| Dillenbourg, Pierre | 1 |
| Faucon, Louis | 1 |
| Finch, Holmes | 1 |
| French, Brian F. | 1 |
| Green, Samuel B. | 1 |
| Publication Type | Results |
| --- | --- |
| Journal Articles | 15 |
| Reports - Research | 14 |
| Reports - Evaluative | 2 |
| Dissertations/Theses -… | 1 |
| Reports - Descriptive | 1 |
| Speeches/Meeting Papers | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| Law School Admission Test | 1 |
| Program for International… | 1 |
Sen, Sedat; Cohen, Allan S. – Educational and Psychological Measurement, 2023
The purpose of this study was to examine the effects of different data conditions on item parameter recovery and classification accuracy of three dichotomous mixture item response theory (IRT) models: the Mix1PL, Mix2PL, and Mix3PL. Manipulated factors in the simulation included the sample size (11 different sample sizes from 100 to 5000), test…
Descriptors: Sample Size, Item Response Theory, Accuracy, Classification
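As an illustration of the kind of data-generating setup such a recovery study manipulates, here is a minimal NumPy sketch that simulates dichotomous responses from a two-class mixture of 2PL models. The sample size, class proportions, parameter ranges, and the shared N(0, 1) ability distribution are illustrative assumptions, not the conditions used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative settings: 1,000 examinees, 20 items, 60/40 class mixture
n_persons, n_items = 1000, 20
class_probs = [0.6, 0.4]

# Class-specific 2PL item parameters (discrimination a, difficulty b)
a = rng.uniform(0.8, 2.0, size=(2, n_items))
b = rng.normal(0.0, 1.0, size=(2, n_items))

# Latent class membership and a common N(0, 1) ability distribution
cls = rng.choice(2, size=n_persons, p=class_probs)
theta = rng.normal(0.0, 1.0, size=n_persons)

# Dichotomous responses generated from each person's class-specific 2PL
logit = a[cls] * (theta[:, None] - b[cls])
u = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logit))).astype(int)

print(u.shape, u.mean().round(3))
```

A recovery study would then refit the mixture model to many such data sets and compare the estimates and class assignments with the generating values.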
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
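The distinction the abstract draws can be made concrete for a single 2PL item with known abilities: the expected (Fisher) information is the expectation of the negative Hessian of the log-likelihood, while the observed information keeps a data-dependent term that only vanishes in expectation. The sketch below assumes NumPy and evaluates both matrices at assumed parameter values rather than at MLEs; it illustrates the difference in general, not the specific procedures studied in the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# One 2PL item, known abilities; parameters evaluated at assumed values
# (in practice these would be the MLEs)
a, b = 1.2, 0.3
theta = rng.normal(size=1000)

p = 1 / (1 + np.exp(-a * (theta - b)))          # P(u = 1 | theta)
u = (rng.random(theta.size) < p).astype(float)  # simulated responses

# Gradient of the logit eta = a * (theta - b) with respect to (a, b)
grad_eta = np.column_stack([theta - b, np.full_like(theta, -a)])
# Second derivative of eta with respect to (a, b) is a constant matrix
hess_eta = np.array([[0.0, -1.0], [-1.0, 0.0]])

# Expected (Fisher) information: sum of P(1-P) * grad_eta grad_eta'
expected_info = ((p * (1 - p))[:, None, None]
                 * grad_eta[:, :, None] * grad_eta[:, None, :]).sum(axis=0)

# Observed information adds a data-dependent term that vanishes in expectation
observed_info = expected_info - np.sum(u - p) * hess_eta

print("expected:\n", expected_info.round(2))
print("observed:\n", observed_info.round(2))
```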
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3-parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
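A stripped-down version of such a recovery study, for the person parameter only, can be sketched with NumPy: simulate 3PL responses from known item parameters, score examinees with EAP on a quadrature grid, and summarize bias and RMSE. The parameter distributions, grid, and N(0, 1) prior below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_persons, n_items = 500, 30
a = rng.lognormal(0.0, 0.3, n_items)     # discrimination
b = rng.normal(0.0, 1.0, n_items)        # difficulty
c = rng.uniform(0.1, 0.25, n_items)      # pseudo-chance

def p3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# Simulate dichotomous responses from known item and person parameters
theta = rng.normal(0.0, 1.0, n_persons)
u = (rng.random((n_persons, n_items)) < p3pl(theta[:, None], a, b, c)).astype(int)

# EAP ability estimates on a quadrature grid, items treated as known
grid = np.linspace(-4, 4, 81)
pg = p3pl(grid[:, None], a, b, c)                          # (81, n_items)
loglik = u @ np.log(pg).T + (1 - u) @ np.log(1 - pg).T     # (n_persons, 81)
post = np.exp(loglik - loglik.max(axis=1, keepdims=True)) * np.exp(-0.5 * grid**2)
post /= post.sum(axis=1, keepdims=True)
theta_hat = post @ grid

print("bias:", round(float(np.mean(theta_hat - theta)), 3))
print("RMSE:", round(float(np.sqrt(np.mean((theta_hat - theta) ** 2))), 3))
```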
Potgieter, Cornelis; Kamata, Akihito; Kara, Yusuf – Grantee Submission, 2017
This study proposes a two-part model that includes components for reading accuracy and reading speed. The speed component is a log-normal factor model, for which speed data are measured by reading time for each sentence being assessed. The accuracy component is a binomial-count factor model, where the accuracy data are measured by the number of…
Descriptors: Reading Rate, Oral Reading, Accuracy, Models
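The two components can be mimicked in a small simulation: binomial counts of correctly read words driven by a latent accuracy factor, and log-normal sentence reading times driven by a correlated latent speed factor. The sketch below (NumPy; all distributions and parameter values are assumptions) generates data of that general shape and is not the authors' estimation model.

```python
import numpy as np

rng = np.random.default_rng(4)

n_students, n_sentences = 300, 10
words = rng.integers(8, 15, n_sentences)      # words per sentence (illustrative)

# Correlated latent accuracy and speed factors
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
ability, speed = rng.multivariate_normal([0, 0], cov, n_students).T

# Accuracy part: binomial counts of correctly read words per sentence
easiness = rng.normal(1.0, 0.5, n_sentences)
p_correct = 1 / (1 + np.exp(-(easiness + ability[:, None])))
correct_words = rng.binomial(words, p_correct)

# Speed part: log-normal sentence reading times (seconds)
time_intercept = rng.normal(1.5, 0.2, n_sentences)   # in log-seconds
log_time = time_intercept - speed[:, None] + rng.normal(0, 0.3, (n_students, n_sentences))
read_time = np.exp(log_time)

print(correct_words[:3])
print(read_time[:3].round(2))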
Faucon, Louis; Kidzinski, Lukasz; Dillenbourg, Pierre – International Educational Data Mining Society, 2016
Large-scale experiments are often expensive and time consuming. Although Massive Open Online Courses (MOOCs) provide a solid and consistent framework for learning analytics, MOOC practitioners are still reluctant to risk resources in experiments. In this study, we suggest a methodology for simulating MOOC students, which allows estimation of…
Descriptors: Markov Processes, Monte Carlo Methods, Bayesian Statistics, Online Courses
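A minimal version of a Markov-chain student simulator might look like the following: activity states, a row-stochastic transition matrix, and repeated sampling of weekly trajectories. The states, transition probabilities, and horizon are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative weekly activity states for a simulated MOOC learner
states = ["video", "quiz", "forum", "inactive", "dropout"]

# Row-stochastic transition matrix (rows sum to 1); values are assumptions
P = np.array([
    [0.55, 0.25, 0.05, 0.10, 0.05],   # from video
    [0.40, 0.30, 0.10, 0.10, 0.10],   # from quiz
    [0.35, 0.15, 0.30, 0.15, 0.05],   # from forum
    [0.20, 0.05, 0.05, 0.50, 0.20],   # from inactive
    [0.00, 0.00, 0.00, 0.00, 1.00],   # dropout is absorbing
])

def simulate_student(n_weeks=10, start=0):
    path = [start]
    for _ in range(n_weeks - 1):
        path.append(rng.choice(len(states), p=P[path[-1]]))
    return [states[s] for s in path]

trajectories = [simulate_student() for _ in range(1000)]
dropout_rate = np.mean([t[-1] == "dropout" for t in trajectories])
print("simulated dropout rate after 10 weeks:", round(float(dropout_rate), 3))
```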
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo – Educational and Psychological Measurement, 2015
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
Descriptors: Factor Analysis, Error of Measurement, Accuracy, Hypothesis Testing
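Traditional parallel analysis is straightforward to sketch: compare the ordered eigenvalues of the sample correlation matrix with a percentile of eigenvalues from random normal data of the same dimensions, retaining factors until the first failure. The NumPy sketch below implements only this T-PA baseline on toy two-factor data; the revised (R-PA) variant conditioned on earlier factors is not shown.

```python
import numpy as np

rng = np.random.default_rng(6)

def parallel_analysis(data, n_sims=200, percentile=95):
    """Traditional parallel analysis: retain factors as long as each sample
    eigenvalue exceeds the chosen percentile of eigenvalues obtained from
    random normal data of the same dimensions."""
    n, p = data.shape
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.normal(size=(n, p))
        random_eigs[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)
    k = 0
    while k < p and sample_eigs[k] > threshold[k]:
        k += 1
    return k

# Toy data: two factors, four indicators each (all values illustrative)
n = 400
factors = rng.normal(size=(n, 2))
loadings = np.zeros((2, 8))
loadings[0, :4] = 0.7
loadings[1, 4:] = 0.7
data = factors @ loadings + rng.normal(scale=0.7, size=(n, 8))

print("retained factors:", parallel_analysis(data))
```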
Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane – Practical Assessment, Research & Evaluation, 2016
A common methodological problem in the evaluation of the predictive validity of selection methods, e.g., in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
Descriptors: Comparative Analysis, Correlation, Statistical Bias, Maximum Likelihood Statistics
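For the direct range-restriction case, Thorndike's Case 2 correction rescales the restricted correlation by the ratio of unrestricted to restricted predictor standard deviations. A small sketch, assuming NumPy and an invented selection scenario (top 30% admitted on the predictor), shows the downward bias and its correction.

```python
import numpy as np

def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike's (1949) Case 2 correction for direct range
    restriction on the predictor."""
    ratio = sd_unrestricted / sd_restricted
    return (r_restricted * ratio
            / np.sqrt(1 - r_restricted**2 + r_restricted**2 * ratio**2))

# Demonstration: explicit selection on the predictor biases r downward
rng = np.random.default_rng(7)
rho = 0.50
x, y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], 100_000).T
selected = x > np.quantile(x, 0.7)          # top 30% admitted (illustrative)

r_restricted = np.corrcoef(x[selected], y[selected])[0, 1]
r_corrected = thorndike_case2(r_restricted, x.std(), x[selected].std())
print(round(r_restricted, 3), round(r_corrected, 3), rho)
```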
Pfaffel, Andreas; Spiel, Christiane – Practical Assessment, Research & Evaluation, 2016
Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
Descriptors: Correlation, Sample Size, Error of Measurement, Accuracy
Sahin, Alper; Weiss, David J. – Educational Sciences: Theory and Practice, 2015
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Sample Size, Item Banks
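A toy version of such a CAT simulation can be written in a few lines: a pre-calibrated item bank treated as known, maximum-information item selection, and EAP ability updates on a quadrature grid. The bank size, test length, and parameter distributions below are illustrative assumptions rather than the study's conditions, and the sketch uses a 2PL bank for brevity instead of the 3PL calibration described above.

```python
import numpy as np

rng = np.random.default_rng(8)

# Pre-calibrated 2PL item bank, parameters treated as known and fixed
n_items = 500
a = rng.lognormal(0.0, 0.3, n_items)
b = rng.normal(0.0, 1.0, n_items)

grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)            # N(0, 1) prior (unnormalized)

def p2pl(theta, a, b):
    return 1 / (1 + np.exp(-a * (theta - b)))

def run_cat(theta_true, test_length=20):
    available = np.ones(n_items, dtype=bool)
    loglik = np.zeros_like(grid)
    theta_hat = 0.0
    for _ in range(test_length):
        # Select the unused item with maximum Fisher information at theta_hat
        info = a**2 * p2pl(theta_hat, a, b) * (1 - p2pl(theta_hat, a, b))
        j = int(np.argmax(np.where(available, info, -np.inf)))
        available[j] = False
        # Simulate the examinee's response and update the EAP estimate
        u = rng.random() < p2pl(theta_true, a[j], b[j])
        pj = p2pl(grid, a[j], b[j])
        loglik += np.log(pj) if u else np.log(1 - pj)
        post = np.exp(loglik - loglik.max()) * prior
        theta_hat = float(np.sum(grid * post) / post.sum())
    return theta_hat

true_thetas = rng.normal(size=200)
estimates = np.array([run_cat(t) for t in true_thetas])
print("RMSE:", round(float(np.sqrt(np.mean((estimates - true_thetas) ** 2))), 3))
```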
Monroe, Scott; Cai, Li – Educational and Psychological Measurement, 2014
In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…
Descriptors: Item Response Theory, Models, Computation, Mathematics
Koziol, Natalie A. – Applied Measurement in Education, 2016
Testlets, or groups of related items, are commonly included in educational assessments due to their many logistical and conceptual advantages. Despite their advantages, testlets introduce complications into the theory and practice of educational measurement. Responses to items within a testlet tend to be correlated even after controlling for…
Descriptors: Classification, Accuracy, Comparative Analysis, Models
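The local dependence that motivates testlet models can be demonstrated directly: add a person-by-testlet random effect to a 2PL data-generating model, then compare residual correlations within and between testlets under a model that ignores the effect. The sketch below (NumPy; all sizes and variances are assumptions) does exactly that.

```python
import numpy as np

rng = np.random.default_rng(9)

n_persons, n_testlets, items_per = 2000, 5, 4
n_items = n_testlets * items_per
testlet_of = np.repeat(np.arange(n_testlets), items_per)

a = rng.lognormal(0.0, 0.3, n_items)
b = rng.normal(0.0, 1.0, n_items)
theta = rng.normal(0.0, 1.0, n_persons)

# Person-by-testlet random effects induce local dependence within testlets
gamma = rng.normal(0.0, 0.8, (n_persons, n_testlets))
logit = a * (theta[:, None] - b + gamma[:, testlet_of])
u = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logit))).astype(float)

# Residuals from a model that ignores the testlet effect
p_ignore = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
resid = u - p_ignore

corr = np.corrcoef(resid, rowvar=False)
same = testlet_of[:, None] == testlet_of[None, :]
off_diag = ~np.eye(n_items, dtype=bool)
print("mean residual corr, same testlet:", corr[same & off_diag].mean().round(3))
print("mean residual corr, different testlets:", corr[~same].mean().round(3))
```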
Wu, Meng-Jia; Becker, Betsy Jane – Research Synthesis Methods, 2013
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
Descriptors: Regression (Statistics), Correlation, Research Methodology, Accuracy
Lang, Kyle M.; Little, Todd D. – International Journal of Behavioral Development, 2014
We present a new paradigm that allows simplified testing of multiparameter hypotheses in the presence of incomplete data. The proposed technique is a straightforward procedure that combines the benefits of two powerful data analytic tools: multiple imputation and nested-model χ² difference testing. A Monte Carlo simulation study was conducted to…
Descriptors: Hypothesis Testing, Data Analysis, Error of Measurement, Computation
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao – Educational and Psychological Measurement, 2013
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of the Supplemented EM algorithm for…
Descriptors: Item Response Theory, Computation, Matrices, Statistical Inference
Sun, Shuyan; Pan, Wei – Journal of Experimental Education, 2013
Regression discontinuity design is an alternative to randomized experiments to make causal inference when random assignment is not possible. This article first presents the formal identification and estimation of regression discontinuity treatment effects in the framework of Rubin's causal model, followed by a thorough literature review of…
Descriptors: Regression (Statistics), Computation, Accuracy, Causal Models
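The sharp regression discontinuity estimator described above is commonly implemented as a local linear regression around the cutoff, with the treatment effect read off the coefficient on the treatment indicator. The sketch below, with an invented data-generating process and an arbitrary bandwidth, illustrates that estimation step.

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulated sharp RDD: treatment assigned to units scoring below a cutoff
# on the running variable (all values are illustrative assumptions)
n, cutoff, effect = 5000, 0.0, 0.4
running = rng.normal(0.0, 1.0, n)
treated = (running < cutoff).astype(float)
outcome = 0.5 * running + effect * treated + rng.normal(0.0, 1.0, n)

# Local linear regression within a bandwidth around the cutoff:
# outcome ~ 1 + treated + centered running variable + interaction
bandwidth = 0.5
keep = np.abs(running - cutoff) < bandwidth
rc = running[keep] - cutoff
X = np.column_stack([np.ones(keep.sum()), treated[keep], rc, rc * treated[keep]])
beta, *_ = np.linalg.lstsq(X, outcome[keep], rcond=None)
print("estimated treatment effect at the cutoff:", round(float(beta[1]), 3))
```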
