Showing 1 to 15 of 321 results
Kylie L. Anglin – Annenberg Institute for School Reform at Brown University, 2025
Since 2018, institutions of higher education have been aware of the "enrollment cliff," which refers to expected declines in future enrollment. This paper describes how prepared institutions in Ohio are for this future by examining trends leading up to the anticipated decline. Using IPEDS data from 2012-2022, we analyze trends…
Peer reviewed
Direct link
Seungbak Lee; Minsoo Kang; Jae-Hyeon Park; Hyo-Jun Yun – Measurement in Physical Education and Exercise Science, 2025
The PageRank model has been applied in sport ranking systems; however, prior implementations exhibited limitations and failed to produce valid rankings. This study analyzed 1,466 National Collegiate Athletic Association (NCAA) Division I football games and developed a novel, modified PageRank model. We also proposed an artificial…
Descriptors: Algorithms, Evaluation Methods, Team Sports, College Athletics
Edgar C. Merkle; Oludare Ariyo; Sonja D. Winter; Mauricio Garnier-Villarreal – Grantee Submission, 2023
We review common situations in Bayesian latent variable models where the prior distribution that a researcher specifies differs from the prior distribution used during estimation. These situations can arise from the positive definite requirement on correlation matrices, from sign indeterminacy of factor loadings, and from order constraints on…
Descriptors: Models, Bayesian Statistics, Correlation, Evaluation Methods
Peer reviewed
Direct link
Christina Glasauer; Martin K. Yeh; Lois Anne DeLong; Yu Yan; Yanyan Zhuang – Computer Science Education, 2025
Background and Context: Feedback on one's progress is essential to new programming language learners, particularly in out-of-classroom settings. Though many study materials offer assessment mechanisms, most do not examine the accuracy of the feedback they deliver, nor give evidence on its validity. Objective: We investigate the potential use of a…
Descriptors: Novices, Computer Science Education, Programming, Accuracy
Peer reviewed
PDF on ERIC: Download full text
Kylie Anglin – AERA Open, 2024
Given the rapid adoption of machine learning methods by education researchers, and the growing acknowledgment of their inherent risks, there is an urgent need for tailored methodological guidance on how to improve and evaluate the validity of inferences drawn from these methods. Drawing on an integrative literature review and extending a…
Descriptors: Validity, Artificial Intelligence, Models, Best Practices
Peer reviewed
Direct link
Tong Wu; Stella Y. Kim; Carl Westine; Michelle Boyer – Journal of Educational Measurement, 2025
While significant attention has been given to test equating to ensure score comparability, limited research has explored equating methods for rater-mediated assessments, where human raters inherently introduce error. If not properly addressed, these errors can undermine score interchangeability and test validity. This study proposes an equating…
Descriptors: Item Response Theory, Evaluators, Error of Measurement, Test Validity
Peer reviewed
PDF on ERIC: Download full text
Deborah Oluwadele; Yashik Singh; Timothy Adeliyi – Electronic Journal of e-Learning, 2024
Any newly developed model or framework requires validation across multiple real-life applications. The investment made in e-learning in medical education is daunting, as is the expectation of a positive return on investment. The medical education domain requires data-wise implementation of e-learning as the debate continues…
Descriptors: Electronic Learning, Evaluation Methods, Medical Education, Sustainability
Peer reviewed
Direct link
Hyemin Yoon; HyunJin Kim; Sangjin Kim – Measurement: Interdisciplinary Research and Perspectives, 2024
For years, financial institutions have maintained customer grade systems that reward high-performing customers through customer segmentation. Institutions operating such systems currently provide similar services based on score calculation criteria, but these criteria vary from one financial…
Descriptors: Classification, Artificial Intelligence, Prediction, Decision Making
Peer reviewed
Direct link
Ji, Xuejun Ryan; Wu, Amery D. – Educational Measurement: Issues and Practice, 2023
The Cross-Classified Mixed Effects Model (CCMEM) has been demonstrated by measurement specialists to be a flexible framework for evaluating reliability. Reliability can be estimated from the variance components of the test scores. Building on this work, the present study extends the CCMEM to the evaluation of validity evidence.…
Descriptors: Measurement, Validity, Reliability, Models
Peer reviewed
Direct link
Manapat, Patrick D.; Edwards, Michael C. – Educational and Psychological Measurement, 2022
When fitting unidimensional item response theory (IRT) models, the population distribution of the latent trait (θ) is often assumed to be normally distributed. However, some psychological theories would suggest a nonnormal θ. For example, some clinical traits (e.g., alcoholism, depression) are believed to follow a positively skewed…
Descriptors: Robustness (Statistics), Computational Linguistics, Item Response Theory, Psychological Patterns
Peer reviewed
Direct link
Price, Heather E.; Smith, Christian – Field Methods, 2021
To identify the dominant cultural models among parents transmitting faith to their children, we found few methodological guidelines for coding and analyzing semi-structured interviews. We thus developed a three-phase procedure for our research team. Phase one follows Campbell et al. by unitizing on meanings rather than words/pages, including…
Descriptors: Semi Structured Interviews, Parents, Religion, Reliability
Heritage, Margaret; Wylie, Caroline – National Research and Development Center to Improve Education for Secondary English Learners at WestEd, 2021
The Comprehensive Assessment System (CAS) Framework presents a vision for a system of assessments for English Learners in secondary grades that brings assessment closer to the classroom and fully involves teachers in assessment development and validation. The CAS Framework is intended to signal a new and equitable direction and to provoke…
Descriptors: Secondary School Students, English Language Learners, Student Evaluation, Models
Peer reviewed
PDF on ERIC: Download full text
Sujiyani Kassiavera; A. Suparmi; C. Cari; Sukarmin Sukarmin – Journal of Baltic Science Education, 2024
The challenge of accurately assessing critical thinking in physics education, particularly on topics like work and energy, remains a key issue for educators. The current study aims to address this challenge by exploring students' critical thinking abilities using two-tier test data analyzed through the Rasch model. Data were collected from…
Descriptors: Critical Thinking, Physics, Science Instruction, Foreign Countries
Elizabeth Talbott; Andres De Los Reyes; Devin M. Kearns; Jeannette Mancilla-Martinez; Mo Wang – Exceptional Children, 2023
Evidence-based assessment (EBA) requires that investigators employ scientific theories and research findings to guide decisions about what domains to measure, how and when to measure them, and how to make decisions and interpret results. To implement EBA, investigators need high-quality assessment tools along with evidence-based processes. We…
Descriptors: Evidence Based Practice, Evaluation Methods, Special Education, Educational Research
Peer reviewed
PDF on ERIC: Download full text
Aimee Howley; Craig B. Howley; Marged Dudek – Journal of Educational Leadership and Policy Studies, 2025
This article explores the development and evaluation of the Building Leadership Team Assessment Tool (BLT-AT), designed to measure Professional Learning Communities' (PLCs') use of effective school improvement practices. The BLT-AT is grounded in Ohio's inclusive instructional leadership model, which emphasizes the improvement of teaching and…
Descriptors: Test Construction, Communities of Practice, Instructional Leadership, Evaluation Methods