Publication Date
| In 2026 | 0 |
| Since 2025 | 12 |
| Since 2022 (last 5 years) | 56 |
| Since 2017 (last 10 years) | 108 |
| Since 2007 (last 20 years) | 255 |
Descriptor
| Sample Size | 404 |
| Simulation | 404 |
| Item Response Theory | 113 |
| Statistical Analysis | 92 |
| Error of Measurement | 86 |
| Models | 84 |
| Test Items | 80 |
| Comparative Analysis | 77 |
| Monte Carlo Methods | 75 |
| Correlation | 67 |
| Evaluation Methods | 64 |
Author
| Fan, Xitao | 7 |
| Beretvas, S. Natasha | 5 |
| Algina, James | 4 |
| Chan, Wai | 4 |
| Cohen, Allan S. | 4 |
| De Champlain, Andre | 4 |
| Finch, W. Holmes | 4 |
| French, Brian F. | 4 |
| Kim, Seock-Ho | 4 |
| Kromrey, Jeffrey D. | 4 |
| Paek, Insu | 4 |
Audience
| Teachers | 4 |
| Researchers | 3 |
Location
| North Carolina | 2 |
| Armenia | 1 |
| Austria | 1 |
| Canada | 1 |
| Florida (Miami) | 1 |
| Hong Kong | 1 |
| Indiana | 1 |
| Iran | 1 |
| Montana | 1 |
| New York (New York) | 1 |
| Norway | 1 |
Laws, Policies, & Programs
| No Child Left Behind Act 2001 | 2 |
van Dorresteijn, Chevy; Kan, Kees-Jan; Smits, Niels – Assessment & Evaluation in Higher Education, 2023
When higher education students are assessed multiple times, teachers need to consider how these assessments can be combined into a single pass or fail decision. A common question that arises is whether students should be allowed to take a resit. Previous research has found little to no clear learning benefits of resits and therefore suggested they…
Descriptors: College Students, Student Evaluation, Pretests Posttests, Regression (Statistics)
Novak, Josip; Rebernjak, Blaž – Measurement: Interdisciplinary Research and Perspectives, 2023
A Monte Carlo simulation study was conducted to examine the performance of the α, λ₂, λ₄, λ₂, ω_T, GLB_MRFA, and GLB_Algebraic coefficients. Population reliability, distribution shape, sample size, test length, and number of response categories were varied…
Descriptors: Monte Carlo Methods, Evaluation Methods, Reliability, Simulation
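As a minimal sketch of what one cell of such a reliability simulation looks like (our own NumPy illustration, not the authors' code; the loading of 0.7 and the tau-equivalent one-factor model are assumptions), the snippet below generates item scores and computes coefficient α:

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an (n_persons, n_items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
n, k, loading = 200, 10, 0.7
factor = rng.normal(size=(n, 1))
errors = np.sqrt(1 - loading**2) * rng.normal(size=(n, k))
scores = loading * factor + errors  # tau-equivalent normal items

alpha = cronbach_alpha(scores)  # population value here is about 0.91
```

A full study would loop this over the varied conditions (sample size, test length, distribution shape, response categories) and summarize bias and variability of each coefficient.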
Jehanzeb Rashid Cheema – Journal of Education in Muslim Societies, 2024
This study explores the relationship between the Spiral Dynamics and the 3H (head, heart, hands) models of human growth and development, using constructs such as empathy, moral reasoning, forgiveness, and community mindedness that have been shown to have implications for education. The specific research question is, "Can a combination of…
Descriptors: Correlation, Factor Analysis, Computer Software, Moral Values
Emma Somer; Carl Falk; Milica Miocevic – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Factor Score Regression (FSR) is increasingly employed as an alternative to structural equation modeling (SEM) in small samples. Despite its popularity in psychology, the performance of FSR in multigroup models with small samples remains relatively unknown. The goal of this study was to examine the performance of FSR, namely Croon's correction and…
Descriptors: Scores, Structural Equation Models, Comparative Analysis, Sample Size
Xiao, Leifeng; Hau, Kit-Tai – Applied Measurement in Education, 2023
We compared coefficient alpha with five alternatives (omega total, omega RT, omega h, GLB, and coefficient H) in two simulation studies. Results showed for unidimensional scales, (a) all indices except omega h performed similarly well for most conditions; (b) alpha is still good; (c) GLB and coefficient H overestimated reliability with small…
Descriptors: Test Theory, Test Reliability, Factor Analysis, Test Length
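The gap between α and ω total can be seen directly at the population level: for a congeneric model with unequal loadings, α computed from the model-implied covariance matrix falls below ω. A small deterministic sketch (our own illustration with assumed loadings, not the authors' simulation):

```python
import numpy as np

# Congeneric one-factor model with unequal loadings and unit item variances
lam = np.array([0.9, 0.7, 0.5, 0.3])       # factor loadings (assumed values)
psi = 1 - lam**2                            # unique variances
Sigma = np.outer(lam, lam) + np.diag(psi)   # model-implied covariance matrix

k = len(lam)
alpha = k / (k - 1) * (1 - np.trace(Sigma) / Sigma.sum())
omega = lam.sum()**2 / (lam.sum()**2 + psi.sum())  # omega total
# alpha ≈ 0.677 underestimates omega ≈ 0.709 when loadings differ
```

With equal loadings the two coincide, which is why α "is still good" for (essentially) tau-equivalent unidimensional scales.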
Henninger, Mirka; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
To detect differential item functioning (DIF), Rasch trees search for optimal split-points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF…
Descriptors: Item Response Theory, Test Items, Effect Size, Statistical Significance
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
Paek, Insu; Lin, Zhongtian; Chalmers, Robert Philip – Educational and Psychological Measurement, 2023
To reduce the chance of Heywood cases or nonconvergence in estimating the 2PL or the 3PL model in the marginal maximum likelihood with the expectation-maximization (MML-EM) estimation method, priors for the item slope parameter in the 2PL model or for the pseudo-guessing parameter in the 3PL model can be used and the marginal maximum a posteriori…
Descriptors: Models, Item Response Theory, Test Items, Intervals
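A heavily simplified sketch of why a slope prior stabilizes estimation (our own illustration, not the paper's MML-EM procedure): here abilities are treated as known, the difficulty is fixed at its true value, the prior is an assumed lognormal(0, 0.5), and the maximizers are found by grid search.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = rng.normal(size=40)   # known abilities (a simplification; the paper
a_true, b = 1.2, 0.0          # integrates them out via MML-EM)
y = rng.binomial(1, 1 / (1 + np.exp(-a_true * (theta - b))))

a_grid = np.linspace(0.05, 6.0, 600)

def loglik(a):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return (y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

ll = np.array([loglik(a) for a in a_grid])
log_prior = -0.5 * (np.log(a_grid) / 0.5) ** 2   # lognormal(0, 0.5) on the slope

a_ml = a_grid[ll.argmax()]                  # can wander with only 40 examinees
a_map = a_grid[(ll + log_prior).argmax()]   # pulled toward the prior mode at 1
```

The MAP estimate always lands between the ML estimate and the prior mode, which is exactly the shrinkage that keeps slope (or pseudo-guessing) estimates away from degenerate values.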
Kalkan, Ömür Kaya; Toprak, Emre – International Journal of Psychology and Educational Studies, 2022
All cognitive diagnostic models that evaluate educational test data require a Q-matrix that combines every item in a test with the required cognitive skills for each item to be answered correctly. Generally, the Q-matrix is constructed by education experts' judgment, leading to some uncertainty in its elements. Various statistical methods are…
Descriptors: Q Methodology, Matrices, Input Output Analysis, Models
Aidoo, Eric Nimako; Appiah, Simon K.; Boateng, Alexander – Journal of Experimental Education, 2021
This study investigated the small-sample bias of the ordered logit model parameters under multicollinearity using Monte Carlo simulation. The results showed that the bias associated with the ordered logit model parameters consistently decreases as sample size increases, while the distribution of the parameters becomes less…
Descriptors: Statistical Bias, Monte Carlo Methods, Simulation, Sample Size
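The same small-sample bias mechanism can be demonstrated with a binary logit model as a simpler stand-in for the ordered logit studied here (our own sketch; the sample sizes, true coefficients, and replication count are assumptions):

```python
import numpy as np

def logit_mle(X, y, iters=25):
    """Newton-Raphson MLE for a binary logit model."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-np.clip(X @ b, -30, 30)))
        W = p * (1 - p)
        step = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
        b += step
        if np.abs(step).max() < 1e-8:
            break
    return b

rng = np.random.default_rng(1)
true_b = np.array([0.0, 1.0])  # intercept, slope

def mean_slope(n, reps=1500):
    est = np.empty(reps)
    for r in range(reps):
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_b)))
        est[r] = logit_mle(X, y)[1]
    return est.mean()

small, large = mean_slope(80), mean_slope(800)
# the small-n mean overshoots the true slope of 1.0; the large-n mean is close
```

The overshoot at small n (bias away from zero) shrinks as n grows, mirroring the pattern the abstract reports for ordered logit parameters.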
Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2022
Composite reliability, or coefficient omega, can be estimated using structural equation modeling. Composite reliability is usually estimated under the basic independent clusters model of confirmatory factor analysis (ICM-CFA). However, due to the existence of cross-loadings, the model fit of the exploratory structural equation model (ESEM) is…
Descriptors: Comparative Analysis, Structural Equation Models, Factor Analysis, Reliability
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
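A minimal sketch of plausible values in a one-factor model with known parameters (an illustrative simplification of our own; the paper works with Bayesian factor analysis, where the parameters themselves are estimated):

```python
import numpy as np

# One-factor model x = lam * f + e with assumed known parameters
lam = np.array([0.8, 0.7, 0.6, 0.5])   # loadings (assumed values)
psi = 1 - lam**2                        # unique variances

rng = np.random.default_rng(2)
f_true = rng.normal()
x = lam * f_true + np.sqrt(psi) * rng.normal(size=lam.size)

# Posterior of f given x is normal; its mean is the regression factor score
post_var = 1 / (1 + (lam**2 / psi).sum())
post_mean = post_var * (lam * x / psi).sum()

# Plausible values are draws from this posterior; the mean plausible value
# converges to the regression score as the number of draws grows
pv = rng.normal(post_mean, np.sqrt(post_var), size=5000)
```

This is why mean plausible values can serve as individual scores: they average out the posterior uncertainty that each single draw deliberately retains.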
Peer reviewed: Dongho Shin – Grantee Submission, 2024
We consider Bayesian estimation of a hierarchical linear model (HLM) from small sample sizes. The continuous response Y and covariates C are partially observed and assumed missing at random. With C having linear effects, the HLM may be efficiently estimated by available methods. When C includes cluster-level covariates having interactive or other…
Descriptors: Bayesian Statistics, Computation, Hierarchical Linear Modeling, Data Analysis
Cai, Liuhan; Albano, Anthony D.; Roussos, Louis A. – Measurement: Interdisciplinary Research and Perspectives, 2021
Multistage testing (MST), an adaptive test delivery mode that involves algorithmic selection of predefined item modules rather than individual items, offers a practical alternative to linear and fully computerized adaptive testing. However, interactions across stages between item modules and examinee groups can lead to challenges in item…
Descriptors: Adaptive Testing, Test Items, Item Response Theory, Test Construction
Sahin Kursad, Merve; Cokluk Bokeoglu, Omay; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
Item parameter drift (IPD) is the systematic differentiation of parameter values of items over time due to various reasons. If it occurs in computer adaptive tests (CAT), it causes errors in the estimation of item and ability parameters. Identification of the underlying conditions of this situation in CAT is important for estimating item and…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Error of Measurement
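To see why IPD propagates into ability estimation, consider a Rasch-model sketch (our own illustration with assumed drift size and item bank, not the study's CAT design): items drift uniformly harder after calibration, but examinees are still scored with the stale calibrated difficulties.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 500, 30
b_cal = rng.normal(size=k)   # difficulties at calibration
b_now = b_cal + 0.4          # uniform upward drift (assumed size)

# 500 examinees with true ability 0 respond under the drifted difficulties
y = rng.binomial(1, 1 / (1 + np.exp(-(0.0 - b_now))), size=(n, k))

# Grid MLE of ability, scored with the stale calibrated difficulties
grid = np.linspace(-4, 4, 801)
P = 1 / (1 + np.exp(-(grid[:, None] - b_cal[None, :])))   # (grid, items)
ll = y @ np.log(P).T + (1 - y) @ np.log(1 - P).T          # (examinees, grid)
theta_hat = grid[ll.argmax(axis=1)]
# mean estimate sits near -0.4 rather than 0: drift masquerades as lower ability
```

Undetected drift thus shifts every ability estimate by roughly the drift amount, which is why identifying IPD conditions in CAT matters for both item and ability parameters.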
