Showing 1 to 15 of 659 results
Peer reviewed
Alireza Akbari; Mohammadtaghi Shahnazari – Journal of Applied Research in Higher Education, 2025
Purpose: The primary objective of this research paper was to examine the objectivity of the preselected items evaluation (PIE) method, a prevalent translation scoring method deployed by international institutions such as UAntwerpen, UGent and the University of Granada. Design/methodology/approach: This research critically analyzed the scientific…
Descriptors: Evaluation Methods, Translation, Difficulty Level, Validity
Peer reviewed
Megan Lee; Danielle Augustine; Melinda Moore – Field Methods, 2025
Narratives that dominate the discourse on the experiences of people of color, specifically in education settings, are incomplete. Therefore, identifying methodological approaches that emphasize the perspectives of minoritized groups is essential. Counternarratives have been applied to help (re)tell stories of the oppressed to challenge dominant…
Descriptors: Minority Groups, Critical Race Theory, Perspective Taking, Ethnicity
Peer reviewed
Yinying Wang; Joonkil Ahn – Educational Management Administration & Leadership, 2025
School leadership research literature has a large number of widely used constructs. Could fewer constructs bring more clarity? This study evaluates construct content validity, defined as the extent to which a measure's items reflect a theoretical content domain, in school leadership literature. To do so, we reviewed 29 articles that used Teaching…
Descriptors: Network Analysis, Construct Validity, Content Validity, Instructional Leadership
Peer reviewed
Weibel, Stephanie; Popp, Maria; Reis, Stefanie; Skoetz, Nicole; Garner, Paul; Sydenham, Emma – Research Synthesis Methods, 2023
Evidence synthesis findings depend on the assumption that the included studies follow good clinical practice and results are not fabricated or false. Studies which are problematic due to scientific misconduct, poor research practice, or honest error may distort evidence synthesis findings. Authors of evidence synthesis need transparent mechanisms…
Descriptors: Identification, Randomized Controlled Trials, Integrity, Evaluation Methods
Xiangyi Liao – ProQuest LLC, 2024
Educational research outcomes frequently rely on an assumption that measurement metrics have interval-level properties. While most investigators know enough to be suspicious of interval-level claims, and in some cases even question their findings given such doubts, there is a lack of understanding regarding the measurement conditions that create…
Descriptors: Item Response Theory, Educational Research, Measurement, Evaluation Methods
Peer reviewed
Carpentras, Dino; Quayle, Michael – International Journal of Social Research Methodology, 2023
Agent-based models (ABMs) often rely on psychometric constructs such as 'opinions', 'stubbornness', 'happiness', etc. The measurement process for these constructs is quite different from the one used in physics as there is no standardized unit of measurement for opinion or happiness. Consequently, measurements are usually affected by 'psychometric…
Descriptors: Psychometrics, Error of Measurement, Models, Prediction
Peer reviewed
Zachary K. Collier; Minji Kong; Olushola Soyoye; Kamal Chawla; Ann M. Aviles; Yasser Payne – Journal of Educational and Behavioral Statistics, 2024
Asymmetric Likert-type items in research studies can present several challenges in data analysis, particularly concerning missing data. These items are often characterized by a skewed scaling, where either there is no neutral response option or an unequal number of possible positive and negative responses. The use of conventional techniques, such…
Descriptors: Likert Scales, Test Items, Item Analysis, Evaluation Methods
Peer reviewed
Mthuli, Syanda Alpheous; Ruffin, Fayth; Singh, Nikita – International Journal of Social Research Methodology, 2022
Qualitative research sample size determination has always been a contentious and confusing issue. Studies are often vague when explaining the processes and justifications that have been used to determine sample size and strategy. Some provide no mention of sampling at all, whilst others rely too heavily on the concept of saturation for determining…
Descriptors: Qualitative Research, Sample Size, Sampling, Research Problems
Peer reviewed
Yuan Tian; Xi Yang; Suhail A. Doi; Luis Furuya-Kanamori; Lifeng Lin; Joey S. W. Kwong; Chang Xu – Research Synthesis Methods, 2024
RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding the risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two…
Descriptors: Risk, Randomized Controlled Trials, Classification, Robotics
Paul J. Dizona – ProQuest LLC, 2022
Missing data is a common challenge for researchers in almost any field. In particular, human participants in research do not always respond or return for assessments, leaving the researcher to rely on missing data methods. The most common methods (i.e., Multiple Imputation and Full Information Maximum Likelihood) assume that the…
Descriptors: Pretests Posttests, Research Design, Research Problems, Dropouts
Peer reviewed
Lu, Jie; Schmidt, Matthew; Lee, Minyoung; Huang, Rui – Educational Technology Research and Development, 2022
This paper presents a systematic literature review characterizing the methodological properties of usability studies conducted on educational and learning technologies in the past 20 years. PRISMA guidelines were followed to identify, select, and review relevant research and report results. Our rigorous review focused on (1) categories of…
Descriptors: Usability, Research Methodology, Educational Technology, Evaluation Methods
Peer reviewed
Wendy Chan; Jimin Oh; Katherine Wilson – Society for Research on Educational Effectiveness, 2022
Background: Over the past decade, research on the development and assessment of tools to improve the generalizability of experimental findings has grown extensively (Tipton & Olsen, 2018). However, many experimental studies in education are based on small samples, which may include 30-70 schools while inference populations to which…
Descriptors: Educational Research, Research Problems, Sample Size, Research Methodology
Peer reviewed
Dongho Shin – Grantee Submission, 2024
We consider Bayesian estimation of a hierarchical linear model (HLM) from small sample sizes. The continuous response Y and covariates C are partially observed and assumed missing at random. With C having linear effects, the HLM may be efficiently estimated by available methods. When C includes cluster-level covariates having interactive or other…
Descriptors: Bayesian Statistics, Computation, Hierarchical Linear Modeling, Data Analysis
Du, Han; Enders, Craig; Keller, Brian; Bradbury, Thomas N.; Karney, Benjamin R. – Grantee Submission, 2022
Missing data are exceedingly common across a variety of disciplines, such as the educational, social, and behavioral sciences. The missing not at random (MNAR) mechanism, in which missingness is related to unobserved data, is widespread in real data and has detrimental consequences. However, the existing MNAR-based methods have potential problems such as…
Descriptors: Bayesian Statistics, Data Analysis, Computer Simulation, Sample Size
Peer reviewed
Piotr Jabkowski – International Journal of Social Research Methodology, 2023
Social research methodologists have postulated that the transparency of survey procedures and data processing is mandatory for assessing the Total Survey Error. Recent analyses of data from cross-national surveys have demonstrated an increase in the quality of documentation reports over time and significant differences in documentation quality…
Descriptors: Social Science Research, Cross Cultural Studies, Documentation, Error Patterns