Showing 1,096 to 1,110 of 3,311 results
Peer reviewed
de la Torre, Jimmy; Lee, Young-Sun – Journal of Educational Measurement, 2013
This article used the Wald test to evaluate the item-level fit of a saturated cognitive diagnosis model (CDM) relative to the fits of the reduced models it subsumes. A simulation study was carried out to examine the Type I error and power of the Wald test in the context of the G-DINA model. Results show that when the sample size is small and a…
Descriptors: Statistical Analysis, Test Items, Goodness of Fit, Error of Measurement
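The Wald test described in the abstract above compares a saturated cognitive diagnosis model with a reduced model it subsumes; in general form, such a test evaluates a linear restriction on estimated parameters. A minimal sketch, assuming NumPy/SciPy — the `wald_test` helper, restriction matrix `R`, and example values are illustrative, not taken from the article:

```python
import numpy as np
from scipy.stats import chi2

def wald_test(theta_hat, cov, R):
    """Wald statistic for H0: R @ theta = 0.

    theta_hat : estimated parameter vector
    cov       : estimated covariance matrix of theta_hat
    R         : restriction matrix (each row one linear constraint)
    Returns (W, df, p) where W ~ chi-square(df) under H0.
    """
    r = R @ theta_hat                              # restriction evaluated at the estimate
    W = float(r @ np.linalg.solve(R @ cov @ R.T, r))
    df = np.linalg.matrix_rank(R)
    return W, df, chi2.sf(W, df)
```

For example, testing whether two item parameters are equal uses `R = [[1, -1]]`; a nonsignificant result would suggest the reduced (constrained) model fits the item about as well as the saturated one.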
Peer reviewed
Kim, Jihye; Oshima, T. C. – Educational and Psychological Measurement, 2013
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. As a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control a Type I error rate and power using adjustment…
Descriptors: Test Bias, Test Items, Statistical Analysis, Error of Measurement
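The multiple-testing concern in the abstract above — one significance test per item inflating the chance of at least one Type I error — is commonly handled with familywise or false-discovery-rate adjustments. A minimal sketch in plain Python; the `adjusted_decisions` helper and its inputs are illustrative and not necessarily the specific procedures compared in the study:

```python
def adjusted_decisions(p_values, alpha=0.05, method="bonferroni"):
    """Flag items as DIF while adjusting for testing many items at once.

    "bonferroni" controls the familywise error rate; "bh" is the
    Benjamini-Hochberg step-up procedure controlling the FDR.
    """
    m = len(p_values)
    if method == "bonferroni":
        return [p <= alpha / m for p in p_values]
    # Benjamini-Hochberg: find the largest rank k with p_(k) <= k/m * alpha,
    # then reject the k smallest p-values.
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k = rank
    flagged = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            flagged[i] = True
    return flagged
```

With `p_values = [0.01, 0.02, 0.2]`, Bonferroni flags only the first item, while Benjamini-Hochberg flags the first two — the familiar trade-off between strict Type I error control and power that such studies investigate.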
Peer reviewed
Chen, Chia-ling; Shen, I-hsuan; Chen, Chung-yao; Wu, Ching-yi; Liu, Wen-Yu; Chung, Chia-ying – Research in Developmental Disabilities: A Multidisciplinary Journal, 2013
This study examined criterion-related validity and clinimetric properties of the pediatric balance scale ("PBS") in children with cerebral palsy (CP). Forty-five children with CP (age range: 19-77 months) and their parents participated in this study. At baseline and at follow up, Pearson correlation coefficients were used to determine…
Descriptors: Measurement, Measures (Individuals), Correlation, Cerebral Palsy
Peer reviewed
Li, Xin; Beretvas, S. Natasha – Structural Equation Modeling: A Multidisciplinary Journal, 2013
This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…
Descriptors: Sample Size, Structural Equation Models, Simulation, Multivariate Analysis
Peer reviewed
Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas – Psychometrika, 2013
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…
Descriptors: Item Response Theory, Statistical Inference, Probability, Psychometrics
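Manifest monotonicity, mentioned in the abstract above, can be checked empirically by verifying that the proportion of positive responses to an item is nondecreasing across groups ordered by an observed score such as the rest score (total score minus the item). A minimal sketch — grouping by the raw rest score is a simplification for illustration, and the article's actual testing procedure may differ:

```python
def manifest_monotonicity_holds(item_responses, rest_scores):
    """Check whether the proportion of positive responses to one item
    is nondecreasing across groups formed by the rest score.

    item_responses : 0/1 responses to the item of interest
    rest_scores    : total score on the remaining items, per respondent
    """
    groups = {}
    for x, r in zip(item_responses, rest_scores):
        groups.setdefault(r, []).append(x)
    # Proportion positive within each rest-score group, in increasing order.
    props = [sum(v) / len(v) for _, v in sorted(groups.items())]
    return all(a <= b for a, b in zip(props, props[1:]))
```

In practice such checks are done with sampling error in mind (a small observed dip need not contradict latent monotonicity), which is precisely where formal inference methods like those in the article come in.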
Peer reviewed
Freeman, Ruth; Gibson, Barry; Humphris, Gerry; Leonard, Helen; Yuan, Siyang; Whelton, Helen – Health Education Journal, 2016
Objective: To use a model of health learning to examine the role of health-learning capacity and the effect of a school-based oral health education intervention (Winning Smiles) on the health outcome, child oral health-related quality of life (COHRQoL). Setting: Primary schools, high social deprivation, Ireland/Northern Ireland. Design: Cluster…
Descriptors: Health Education, Role, Intervention, Dental Health
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J. – ACT, Inc., 2016
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
Descriptors: Scores, Classification, College Entrance Examinations, Error of Measurement
Peer reviewed
Yarnell, Jordy B.; Pfeiffer, Steven I. – Journal of Psychoeducational Assessment, 2015
The present study examined the psychometric equivalence of administering a computer-based version of the Gifted Rating Scale (GRS) compared with the traditional paper-and-pencil GRS-School Form (GRS-S). The GRS-S is a teacher-completed rating scale used in gifted assessment. The GRS-Electronic Form provides an alternative method of administering…
Descriptors: Gifted, Psychometrics, Rating Scales, Computer Assisted Testing
Peer reviewed
Reardon, Sean F.; Ho, Andrew D. – Journal of Educational and Behavioral Statistics, 2015
In an earlier paper, we presented methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. We demonstrated that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
Peer reviewed
PDF available on ERIC
Lockwood, J. R.; McCaffrey, Daniel F. – Grantee Submission, 2015
Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…
Descriptors: Simulation, Computation, Statistical Analysis, Statistical Bias
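A standard illustration of why covariate measurement error breaks the consistency argument in the abstract above is the attenuation of an OLS slope by the covariate's reliability: under classical measurement-error assumptions, the expected slope shrinks toward zero. A small sketch — the function name and inputs are illustrative, not the estimators studied in the paper:

```python
def attenuated_slope(beta_true, var_x, var_error):
    """Expected OLS slope when the true covariate x is observed as
    x + e, with e independent classical measurement error.

    The slope is multiplied by the reliability ratio
    var_x / (var_x + var_error), so it is biased toward zero.
    """
    return beta_true * var_x / (var_x + var_error)
```

For example, a true slope of 2.0 with equal signal and error variance is attenuated to 1.0 in expectation, so a regression-based adjustment that conditions on the error-laden covariate inherits this bias rather than removing it.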
Reardon, Sean F.; Ho, Andrew D. – Grantee Submission, 2015
Ho and Reardon (2012) present methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. They demonstrate that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
Grochowalski, Joseph H. – ProQuest LLC, 2015
Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…
Descriptors: Computation, Psychometrics, Profiles, Scores
Peer reviewed
PDF available on ERIC
Atalay Kabasakal, Kübra; Arsan, Nihan; Gök, Bilge; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2014
This simulation study compared the performances (Type I error and power) of Mantel-Haenszel (MH), SIBTEST, and item response theory-likelihood ratio (IRT-LR) methods under certain conditions. Manipulated factors were sample size, ability differences between groups, test length, the percentage of differential item functioning (DIF), and underlying…
Descriptors: Comparative Analysis, Item Response Theory, Statistical Analysis, Test Bias
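One of the methods compared above, Mantel-Haenszel, aggregates reference/focal 2x2 tables across matched score levels into a common odds ratio; values far from 1 suggest DIF. A minimal sketch — the table layout and helper name are assumptions for illustration, not the study's code:

```python
def mh_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across score-level 2x2 tables.

    tables : iterable of (a, b, c, d) per matched score level, with
             a = reference correct, b = reference incorrect,
             c = focal correct,     d = focal incorrect.
    A value near 1 indicates no DIF on the item.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

A single balanced table like `(10, 10, 10, 10)` yields exactly 1.0, while `(20, 10, 10, 20)` (reference outperforming focal at the same score level) yields 4.0.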
Peer reviewed
Jacob, Robin T.; Goddard, Roger D.; Kim, Eun Sook – Educational Evaluation and Policy Analysis, 2014
It is often difficult and costly to obtain individual-level student achievement data, yet, researchers are frequently reluctant to use school-level achievement data that are widely available from state websites. We argue that public-use aggregate school-level achievement data are, in fact, sufficient to address a wide range of evaluation questions…
Descriptors: Academic Achievement, Data, Information Utilization, Educational Assessment
Peer reviewed
Fan, Xitao; Sun, Shaojing – Journal of Early Adolescence, 2014
In adolescence research, the treatment of measurement reliability is often fragmented, and it is not always clear how different reliability coefficients are related. We show that generalizability theory (G-theory) is a comprehensive framework of measurement reliability, encompassing all other reliability methods (e.g., Pearson "r,"…
Descriptors: Generalizability Theory, Measurement, Reliability, Correlation
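The G-theory framing in the abstract above can be made concrete with the generalizability coefficient for a persons-by-items random design, which reduces to familiar reliability coefficients as special cases. A small sketch — the variance components are taken as given, and the names are illustrative:

```python
def generalizability_coefficient(var_person, var_residual, n_items):
    """E-rho^2 for a persons-by-items random measurement design.

    var_person   : universe-score (true between-person) variance
    var_residual : person-by-item interaction + error variance
    n_items      : number of items (conditions) averaged over

    Relative error variance shrinks as n_items grows, so the
    coefficient rises with test length (Spearman-Brown-like behavior).
    """
    return var_person / (var_person + var_residual / n_items)
```

With equal person and residual variance, a single item yields 0.5 and four items yield 0.8 — one way to see how coefficients such as Cronbach's alpha fall out of the same variance-component framework the article describes.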