Showing 421 to 435 of 3,310 results
Peer reviewed
Direct link
De Raadt, Alexandra; Warrens, Matthijs J.; Bosker, Roel J.; Kiers, Henk A. L. – Educational and Psychological Measurement, 2019
Cohen's kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen's kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data…
Descriptors: Interrater Reliability, Data, Statistical Analysis, Statistical Bias
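The coefficient this entry studies can be sketched for the standard complete-data case (this is the classic two-rater kappa, not the authors' missing-data variants, and the function name is illustrative):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters classifying the same units on a nominal scale."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: proportion of units both raters labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: sum over categories of the product of marginal proportions.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    # Kappa rescales observed agreement so that chance-level agreement maps to 0.
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement yields 1, and agreement no better than chance yields 0; the missing-data variants in the article extend this idea to units with one or both ratings absent.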
Peer reviewed
PDF on ERIC: Download full text
Taylor, John M. – Practical Assessment, Research & Evaluation, 2019
Although frequentist estimators can effectively fit ordinal confirmatory factor analysis (CFA) models, their assumptions are difficult to establish and estimation problems may prohibit their use at times. Consequently, researchers may want to also look to Bayesian analysis to fit their ordinal models. Bayesian methods offer researchers an…
Descriptors: Bayesian Statistics, Factor Analysis, Least Squares Statistics, Error of Measurement
Peer reviewed
Direct link
Bais, Frank; Schouten, Barry; Lugtig, Peter; Toepoel, Vera; Arends-Tòth, Judit; Douhou, Salima; Kieruj, Natalia; Morren, Mattijn; Vis, Corrie – Sociological Methods & Research, 2019
Item characteristics can have a significant effect on survey data quality and may be associated with measurement error. Literature on data quality and measurement error is often inconclusive. This could be because item characteristics used for detecting measurement error are not coded unambiguously. In our study, we use a systematic coding…
Descriptors: Foreign Countries, National Surveys, Error of Measurement, Test Items
Peer reviewed
Direct link
DeMars, Christine E. – Educational and Psychological Measurement, 2019
Previous work showing that revised parallel analysis can be effective with dichotomous items has used a two-parameter model and normally distributed abilities. In this study, both two- and three-parameter models were used with normally distributed and skewed ability distributions. Relatively minor skew and kurtosis in the underlying ability…
Descriptors: Item Analysis, Models, Error of Measurement, Item Response Theory
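For orientation, classic (Horn's) parallel analysis, which the revised procedure studied above builds on, can be sketched as follows; this is a rough continuous-data version, not the revised variant for dichotomous items, and the function name is illustrative:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: retain components whose observed eigenvalues
    exceed the mean eigenvalues obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues from simulated uncorrelated normal data of the same size.
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        random_data = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False))[::-1]
    threshold = sim_eigs.mean(axis=0)
    return int(np.sum(obs_eig > threshold))
```

The revised procedure in the article adapts this comparison to item response data, where skewed ability distributions and guessing (the three-parameter model) can distort the observed eigenvalues.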
Peer reviewed
Direct link
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
Peer reviewed
Direct link
Jobst, Lisa J.; Auerswald, Max; Moshagen, Morten – Educational and Psychological Measurement, 2022
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor analytic structure of the data, which is defined as the sum of latent variables and errors, so it is unclear whether previous results hold if the source of…
Descriptors: Goodness of Fit, Structural Equation Models, Error of Measurement, Factor Analysis
Peer reviewed
PDF on ERIC: Download full text
Kane, Michael T.; Mroch, Andrew A. – ETS Research Report Series, 2020
Ordinary least squares (OLS) regression and orthogonal regression (OR) address different questions and make different assumptions about errors. The OLS regression of Y on X yields predictions of a dependent variable (Y) contingent on an independent variable (X) and minimizes the sum of squared errors of prediction. It assumes that the independent…
Descriptors: Regression (Statistics), Least Squares Statistics, Test Bias, Error of Measurement
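The contrast this report draws can be illustrated with a minimal sketch: OLS minimizes vertical (prediction) errors and treats X as error-free, while orthogonal regression minimizes perpendicular distances, here computed via the first principal component of the centered data. Function names are illustrative, not from the report:

```python
import numpy as np

def ols_slope(x, y):
    """OLS of Y on X: minimizes the sum of squared vertical residuals,
    assuming the independent variable X is measured without error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

def orthogonal_slope(x, y):
    """Orthogonal regression: minimizes the sum of squared perpendicular
    distances, treating both variables as error-prone. The fitted line is
    the first principal component of the centered (x, y) cloud."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    data = np.column_stack([x - x.mean(), y - y.mean()])
    # The right singular vector with the largest singular value spans the line.
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    direction = vt[0]
    return direction[1] / direction[0]
```

On noise-free collinear data the two slopes coincide; when X contains measurement error, the OLS slope is attenuated toward zero while the orthogonal slope is not, which is why the two methods answer different questions.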
Peer reviewed
Direct link
Nguyen, Trang Quynh; Stuart, Elizabeth A. – Journal of Educational and Behavioral Statistics, 2020
We address measurement error bias in propensity score (PS) analysis due to covariates that are latent variables. In the setting where latent covariate X is measured via multiple error-prone items W, PS analysis using several proxies for X--the W items themselves, a summary score (mean/sum of the items), or the conventional factor score (i.e.,…
Descriptors: Error of Measurement, Statistical Bias, Error Correction, Probability
Peer reviewed
Direct link
Jones, Andrew T.; Kopp, Jason P.; Ong, Thai Q. – Educational Measurement: Issues and Practice, 2020
Studies investigating invariance have often been limited to measurement or prediction invariance. Selection invariance, wherein the use of test scores for classification results in equivalent classification accuracy between groups, has received comparatively little attention in the psychometric literature. Previous research suggests that some form…
Descriptors: Test Construction, Test Bias, Classification, Accuracy
Peer reviewed
Direct link
Phillippo, David M.; Dias, Sofia; Ades, A. E.; Welton, Nicky J. – Research Synthesis Methods, 2020
Indirect comparisons are used to obtain estimates of relative effectiveness between two treatments that have not been compared in the same randomized controlled trial, but have instead been compared against a common comparator in separate trials. Standard indirect comparisons use only aggregate data, under the assumption that there are no…
Descriptors: Comparative Analysis, Outcomes of Treatment, Patients, Randomized Controlled Trials
Peer reviewed
Direct link
Eames, Cheryl L.; Barrett, Jeffrey E.; Cullen, Craig J.; Rutherford, George; Klanderman, David; Clements, Douglas H.; Sarama, Julie; Van Dine, Douglas W. – School Science and Mathematics, 2020
This study explored children's area estimation performance. Two groups of fourth grade children completed area estimation tasks with rectangles ranging from 5 to 200 square units. A randomly assigned treatment group completed instructional sessions that involved a conceptual area measurement strategy along with numerical feedback. Children tended…
Descriptors: Elementary School Mathematics, Elementary School Students, Grade 4, Computation
Peer reviewed
PDF on ERIC: Download full text
Koçak, Duygu – Pedagogical Research, 2020
The number of iterations used in the Monte Carlo simulation method, which is common in educational research, affects Item Response Theory test and item parameters. Related studies show that the number of iterations is left to the discretion of the researcher, and no specific number of iterations is suggested in the literature.…
Descriptors: Monte Carlo Methods, Item Response Theory, Educational Research, Test Items
Peer reviewed
Direct link
Goodman, Joshua T.; Dallas, Andrew D.; Fan, Fen – Applied Measurement in Education, 2020
Recent research has suggested that re-setting the standard for each administration of a small-sample examination is costly and does not adequately maintain similar performance expectations year after year. Small-sample equating methods have shown promise with samples between 20 and 30. For groups that have fewer than 20 students,…
Descriptors: Equated Scores, Sample Size, Sampling, Weighted Scores
Nese, Joseph F. T.; Kamata, Akihito – Grantee Submission, 2020
Curriculum-based measurement of oral reading fluency (CBM-R) is widely used across the country as a quick measure of reading proficiency that also serves as a good predictor of comprehension and overall reading achievement, but has several practical and technical inadequacies, including a large standard error of measurement (SEM). Reducing the SEM…
Descriptors: Curriculum Based Assessment, Oral Reading, Reading Fluency, Error of Measurement
Peer reviewed
Direct link
Jia, Yuane; Konold, Timothy – Journal of Experimental Education, 2021
Traditional observed variable multilevel models for evaluating indirect effects are limited by their inability to quantify measurement and sampling error. They are further restricted by being unable to fully separate within- and between-level effects without bias. Doubly latent models reduce these biases by decomposing the observed within-level…
Descriptors: Hierarchical Linear Modeling, Educational Environment, Aggression, Bullying