Showing 1 to 15 of 100 results
Peer reviewed
Direct link
Daoxuan Fu; Chunying Qin; Zhaosheng Luo; Yujun Li; Xiaofeng Yu; Ziyu Ye – Journal of Educational and Behavioral Statistics, 2025
One of the central components of cognitive diagnostic assessment is the Q-matrix, an essential loading indicator matrix that is typically constructed by subject matter experts. Nonetheless, to a large extent, the construction of the Q-matrix remains a subjective process and might lead to misspecifications. Many researchers have recognized the…
Descriptors: Q Methodology, Matrices, Diagnostic Tests, Cognitive Measurement
Peer reviewed
PDF on ERIC Download full text
Cobern, William W.; Adams, Betty A. J. – International Journal of Assessment Tools in Education, 2020
Researchers need to know what is an appropriate sample size for interview work, but how does one decide upon an acceptable number of people to interview? This question is not relevant to case study work where one would typically interview every member of a case, or in situations where it is both desirable and feasible to interview all target…
Descriptors: Interviews, Sample Size, Generalization, Qualitative Research
Peer reviewed
Direct link
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
Peer reviewed
PDF on ERIC Download full text
Howard, Jeffrey N. – Practical Assessment, Research & Evaluation, 2022
The Student Evaluation of Teaching (SET) instrument provides insight for instructors and administrators alike, often touting high response rates to endorse its validity and reliability. However, response rate alone omits consideration for "adequate quantity of 'observational sampling opportunity' (OSO) data points" (e.g., high student…
Descriptors: Student Evaluation of Teacher Performance, Validity, Reliability, Longitudinal Studies
Kayla Kleinman – ProQuest LLC, 2021
Continuous performance tests (CPTs) are frequently used to measure attention and impulsivity in children and adults during psychological and neuropsychological evaluations, often to inform differential diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD). Despite the widespread clinical use of CPTs, the majority of research on their…
Descriptors: Attention Deficit Hyperactivity Disorder, Clinical Diagnosis, Diagnostic Tests, Performance Tests
Peer reviewed
Direct link
Loewen, Shawn; Hui, Bronson – Modern Language Journal, 2021
This commentary discusses the issue of small samples in instructed second language acquisition research. We discuss the current state of affairs, and consider the disadvantages of small samples. We also explore other considerations regarding sample size, such as research ethics and ecological validity. We present a range of recommendations for…
Descriptors: Second Language Learning, Second Language Instruction, Sample Size, Language Research
Peer reviewed
Direct link
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
Peer reviewed
Direct link
Luo, Wen; Li, Haoran; Baek, Eunkyeng; Chen, Siqi; Lam, Kwok Hap; Semma, Brandie – Review of Educational Research, 2021
Multilevel modeling (MLM) is a statistical technique for analyzing clustered data. Despite its long history, the technique and accompanying computer programs are rapidly evolving. Given the complexity of multilevel models, it is crucial for researchers to provide complete and transparent descriptions of the data, statistical analyses, and results.…
Descriptors: Hierarchical Linear Modeling, Multivariate Analysis, Prediction, Research Problems
Peer reviewed
Direct link
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation, as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Peer reviewed
Direct link
Minchen, Nathan; de la Torre, Jimmy – Measurement: Interdisciplinary Research and Perspectives, 2018
Cognitive diagnosis models (CDMs) allow for the extraction of fine-grained, multidimensional diagnostic information from appropriately designed tests. In recent years, interest in such models has grown as formative assessment grows in popularity. Many dichotomous as well as several polytomous CDMs have been proposed in the last two decades, but…
Descriptors: Cognitive Measurement, Item Response Theory, Formative Evaluation, Models
Reardon, Sean F.; Kalogrides, Demetra; Ho, Andrew D. – Journal of Educational and Behavioral Statistics, 2021
Linking score scales across different tests is considered speculative and fraught, even at the aggregate level. We introduce and illustrate validation methods for aggregate linkages, using the challenge of linking U.S. school district average test scores across states as a motivating example. We show that aggregate linkages can be validated both…
Descriptors: Equated Scores, Validity, Methods, School Districts
Cai, Zhiqiang; Siebert-Evenstone, Amanda; Eagan, Brendan; Shaffer, David Williamson; Hu, Xiangen; Graesser, Arthur C. – Grantee Submission, 2019
Coding is a process of assigning meaning to a given piece of evidence. Evidence may be found in a variety of data types, including documents, research interviews, posts from social media, conversations from learning platforms, or any source of data that may provide insights for the questions under qualitative study. In this study, we focus on text…
Descriptors: Semantics, Computational Linguistics, Evidence, Coding
Peer reviewed
Direct link
Morgan, Grant B.; Moore, Courtney A.; Floyd, Harlee S. – Journal of Psychoeducational Assessment, 2018
Although content validity--how well each item of an instrument represents the construct being measured--is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can be used to assist…
Descriptors: Simulation, Decision Making, Test Construction, Validity
Peer reviewed
Direct link
Peter, Johannes; Rosman, Tom; Mayer, Anne-Kathrin; Leichner, Nikolas; Krampen, Günter – British Journal of Educational Psychology, 2016
Background: Particularly in higher education, both a view of science as a means of finding absolute truths (absolutism) and a view of science as generally tentative (multiplicism) can be unsophisticated and obstructive for learning. Most quantitative epistemic belief inventories neglect this and understand epistemic sophistication as…
Descriptors: Beliefs, Epistemology, Psychology, Factor Analysis
Peer reviewed
Direct link
Hudson, Thom; Llosa, Lorena – Language Learning, 2015
Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…
Descriptors: Second Language Learning, Language Research, Research Methodology, Correlation