Showing 166 to 180 of 318 results
Peer reviewed
Ruscio, John; Walters, Glenn D. – Psychological Assessment, 2009
Factor-analytic research is common in the study of constructs and measures in psychological assessment. Latent factors can represent traits as continuous underlying dimensions or as discrete categories. When examining the distributions of estimated scores on latent factors, one would expect unimodal distributions for dimensional data and bimodal…
Descriptors: Factor Analysis, Comparative Analysis, Data Analysis, Monte Carlo Methods
Peer reviewed
Belov, Dmitry I. – Applied Psychological Measurement, 2011
This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…
Descriptors: Cheating, Journal Articles, Computation, Comparative Analysis
Peer reviewed
Zhou, P.; Ang, B. W. – Social Indicators Research, 2009
Composite indicators have been increasingly recognized as a useful tool for performance monitoring, benchmarking comparisons and public communication in a wide range of fields. The usefulness of a composite indicator depends heavily on the underlying data aggregation scheme where multiple criteria decision analysis (MCDA) is commonly used. A…
Descriptors: Evaluation Methods, Comparative Analysis, Benchmarking, Evaluation Criteria
Peer reviewed
Granberg-Rademacker, J. Scott – Educational and Psychological Measurement, 2010
The extensive use of survey instruments in the social sciences has long created debate and concern about the validity of outcomes, especially among instruments that gather ordinal-level data. Ordinal-level survey measurement of concepts that could be measured at the interval or ratio level produces errors because respondents are forced to truncate or…
Descriptors: Intervals, Rating Scales, Surveys, Markov Processes
Peer reviewed
Walters, Glenn D.; Ruscio, John – Psychological Assessment, 2009
Meehl's taxometric method has been shown to differentiate between categorical and dimensional data, but there are many ways to implement taxometric procedures. When analyzing the ordered categorical data typically provided by assessment instruments, summing items to form input indicators has been a popular practice for more than 20 years. A Monte…
Descriptors: Personality Problems, Monte Carlo Methods, Program Effectiveness, Evaluation Methods
Peer reviewed
Shin, Seon-Hi – Practical Assessment, Research & Evaluation, 2009
This study investigated the impact of the coding scheme on IRT-based true score equating under a common-item nonequivalent groups design. Two different coding schemes under investigation were carried out by assigning either a zero or a blank to a missing item response in the equating data. The investigation involved a comparison study using actual…
Descriptors: True Scores, Equated Scores, Item Response Theory, Coding
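The two coding schemes Shin compares can be illustrated with a minimal sketch (the response vector and scoring below are invented for illustration, not taken from the study): assigning a zero treats a missing response as incorrect, while a blank excludes the item from scoring altogether, changing the effective test length that enters the equating.

```python
# Hypothetical response vector: 1 = correct, 0 = incorrect, None = missing.
responses = [1, 0, None, 1, None]

# Scheme A: score missing responses as zero (incorrect).
score_zero = sum(r if r is not None else 0 for r in responses)

# Scheme B: leave missing responses blank (item treated as not presented).
answered = [r for r in responses if r is not None]
score_blank = sum(answered)

print(score_zero, len(responses))   # raw score over all 5 items
print(score_blank, len(answered))   # raw score over the 3 answered items
```

Both schemes yield the same raw score here, but the number of items entering the calculation differs, which is why the choice can shift IRT-based true score equating results.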
Peer reviewed
Steyn, H. S., Jr.; Ellis, S. M. – Multivariate Behavioral Research, 2009
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on multivariate analysis of variance (MANOVA) statistics, to establish whether…
Descriptors: Effect Size, Multivariate Analysis, Computation, Monte Carlo Methods
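The univariate effect size that Steyn and Ellis generalize can be shown in a minimal sketch (the `eta_squared` helper and sample data are illustrative, not from the article): eta-squared is the between-group sum of squares divided by the total sum of squares.

```python
import numpy as np

def eta_squared(groups):
    """Proportion of variance in the dependent variable explained by group membership."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_total = ((all_vals - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

groups = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]
print(eta_squared(groups))  # well-separated group means -> large effect
```

With several dependent variables, analogous multivariate measures of association are built from MANOVA statistics instead of these univariate sums of squares.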
Peer reviewed
Fang, Hua; Brooks, Gordon P.; Rizzo, Maria L.; Espy, Kimberly Andrews; Barcikowski, Robert S. – Journal of Experimental Education, 2009
Because the power properties of traditional repeated measures and hierarchical multivariate linear models have not been clearly determined in the balanced design for longitudinal studies in the literature, the authors present a power comparison study of traditional repeated measures and hierarchical multivariate linear models under 3…
Descriptors: Longitudinal Studies, Models, Measurement, Multivariate Analysis
Peer reviewed
Lix, Lisa M.; Deering, Kathleen N.; Fouladi, Rachel T.; Manivong, Phongsack – Educational and Psychological Measurement, 2009
This study considers the problem of testing the difference between treatment and control groups on m ≥ 2 measures when it is assumed a priori that the treatment group will perform better than the control group on all measures. Two procedures are investigated that do not rest on the assumptions of covariance homogeneity or…
Descriptors: Control Groups, Experimental Groups, Outcomes of Treatment, Comparative Analysis
Peer reviewed
Forero, Carlos G.; Maydeu-Olivares, Alberto; Gallardo-Pujol, David – Structural Equation Modeling: A Multidisciplinary Journal, 2009
Factor analysis models with ordinal indicators are often estimated using a 3-stage procedure where the last stage involves obtaining parameter estimates by least squares from the sample polychoric correlations. A simulation study involving 324 conditions (1,000 replications per condition) was performed to compare the performance of diagonally…
Descriptors: Factor Analysis, Models, Least Squares Statistics, Computation
Peer reviewed
Young, Michael E.; Clark, M. H.; Goffus, Andrea; Hoane, Michael R. – Learning and Motivation, 2009
Morris water maze data are most commonly analyzed using repeated measures analysis of variance in which daily test sessions are analyzed as an unordered categorical variable. This approach, however, may lack power, relies heavily on post hoc tests of daily performance that can complicate interpretation, and does not target the nonlinear trends…
Descriptors: Monte Carlo Methods, Regression (Statistics), Research Methodology, Simulation
Peer reviewed
Glocker, Daniela – Economics of Education Review, 2011
In this paper I evaluate the effect of student aid on the success of academic studies. I focus on two dimensions, the duration of study and the probability of actually graduating with a degree. To determine the impact of financial student aid, I estimate a discrete-time duration model allowing for competing risks to account for different exit…
Descriptors: Student Financial Aid, Dropout Rate, Graduation Rate, Graduation
Peer reviewed
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane – Applied Psychological Measurement, 2009
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
Descriptors: Sample Size, Monte Carlo Methods, Nonparametric Statistics, Item Response Theory
Peer reviewed
Stadnytska, Tetiana; Braun, Simone; Werner, Joachim – Multivariate Behavioral Research, 2008
This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
Descriptors: Models, Identification, Multivariate Analysis, Correlation
Thurman, Carol – ProQuest LLC, 2009
The increased use of polytomous item formats has led assessment developers to pay greater attention to the detection of differential item functioning (DIF) in these items. DIF occurs when an item performs differently for two contrasting groups of respondents (e.g., males versus females) after controlling for differences in the abilities of the…
Descriptors: Test Items, Monte Carlo Methods, Test Bias, Educational Testing