Showing 1 to 15 of 56 results
Peer reviewed
Nicolas Szilas – International Journal of Game-Based Learning, 2025
The question of integrating learning content into a serious game is a recurring one, although no clear theoretical framework has yet been provided. It is often argued that integration should occur at the core mechanic level, but this simple statement conceals the complexity of serious game design. The authors therefore propose a theoretical…
Descriptors: Game Based Learning, Educational Games, Models, Design
Peer reviewed (full-text PDF available on ERIC)
Teresa M. Ober; Darin G. Johnson; Lei Liu; Devon Kinsey; Karyssa A. Courey – ETS Research Report Series, 2025
Effective communication skills are essential for success in both academic and professional contexts. This concept paper presents a novel framework designed to operationally define and support the development of assessments of communication skills with a specific emphasis on K-12 settings. Through discussions on important considerations for the…
Descriptors: Communication Skills, Elementary Secondary Education, Student Evaluation, Evaluation Methods
Peer reviewed
Jason A. Schoeneberger; Christopher Rhoads – American Journal of Evaluation, 2025
Regression discontinuity (RD) designs are increasingly used for causal evaluations. However, the literature contains little guidance for conducting a moderation analysis within an RD context. The current article focuses on moderation with a single binary variable. A simulation study compares: (1) different bandwidth selectors and (2) local…
Descriptors: Regression (Statistics), Causal Models, Evaluation Methods, Multivariate Analysis
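A minimal sketch, not taken from the article, of the kind of moderation analysis the Schoeneberger and Rhoads abstract describes: a sharp RD design with one binary moderator, estimated by local linear regression within a fixed bandwidth, where the treatment-by-moderator coefficient is the moderation effect. Every data-generating value, the bandwidth, and all variable names below are hypothetical, and the article's comparison of bandwidth selectors is not reproduced.

# Sharp RD with a binary moderator; all numbers are hypothetical (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, bandwidth = 5000, 0.0, 0.5

x = rng.uniform(-1, 1, n)             # running variable
m = rng.integers(0, 2, n)             # binary moderator (e.g., a subgroup flag)
t = (x >= cutoff).astype(float)       # sharp treatment assignment at the cutoff
# True effects: 0.4 when m == 0 and 0.7 when m == 1, i.e., moderation of 0.3
y = 1.0 + 0.8 * x + (0.4 + 0.3 * m) * t + 0.2 * m + rng.normal(0, 0.5, n)

# Keep only observations within the bandwidth around the cutoff
keep = np.abs(x - cutoff) <= bandwidth
xc, tk, mk, yk = x[keep] - cutoff, t[keep], m[keep], y[keep]

# Local linear regression with separate slopes on each side of the cutoff and a
# treatment-by-moderator interaction; beta[5] estimates the moderation effect.
X = np.column_stack([np.ones(xc.size), tk, xc, tk * xc, mk, tk * mk])
beta, *_ = np.linalg.lstsq(X, yk, rcond=None)
print(f"effect for m=0: {beta[1]:.2f}, moderation (difference): {beta[5]:.2f}")

In practice the bandwidth would come from a data-driven selector and the slope terms could also interact with the moderator; the sketch fixes both to stay short.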
Peer reviewed
Paul A. Jewsbury; J. R. Lockwood; Matthew S. Johnson – Large-scale Assessments in Education, 2025
Many large-scale assessments model proficiency with a latent regression on contextual variables. Item-response data are used to estimate the parameters of the latent variable model and are used in conjunction with the contextual data to generate plausible values of individuals' proficiency attributes. These models typically incorporate numerous…
Descriptors: Item Response Theory, Data Use, Models, Evaluation Methods
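A deliberately simplified sketch of the plausible-value step the Jewsbury, Lockwood, and Johnson abstract refers to, under assumptions the article does not make: a unidimensional 2PL model with item parameters treated as known, a normal latent regression on contextual variables as the prior for proficiency, and a grid approximation to the posterior. All item parameters, covariates, and regression coefficients below are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

# Item parameters treated as known (hypothetical values)
a = np.array([1.0, 1.2, 0.8, 1.5])        # discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])       # difficulties

# One examinee: contextual covariates, latent-regression coefficients, responses
x = np.array([1.0, 0.0, 1.0])             # intercept plus two contextual variables
gamma = np.array([0.1, 0.3, -0.2])        # latent-regression coefficients
sigma = 0.9                               # residual SD of the latent regression
responses = np.array([1, 1, 0, 0])        # scored item responses

# Posterior of proficiency on a grid: N(x'gamma, sigma^2) prior times 2PL likelihood
grid = np.linspace(-4, 4, 401)
prior = np.exp(-0.5 * ((grid - x @ gamma) / sigma) ** 2)
p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
lik = np.prod(np.where(responses[:, None] == 1, p, 1 - p), axis=0)
post = prior * lik
post /= post.sum()

# Plausible values: independent draws from the (discretized) posterior
plausible_values = rng.choice(grid, size=5, p=post)
print(plausible_values.round(2))

Operational programs estimate item and latent-regression parameters jointly and work with multidimensional proficiencies; the sketch skips both to show only where the contextual data enter.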
Peer reviewed (full-text PDF available on ERIC)
Safa Ridha Albo Abdullah; Ahmed Al-Azawei – International Review of Research in Open and Distributed Learning, 2025
This systematic review sheds light on the role of ontologies in predicting achievement among online learners, in order to promote their academic success. In particular, it looks at the available literature on predicting online learners' performance through ontological machine-learning techniques and, using a systematic approach, identifies the…
Descriptors: Electronic Learning, Academic Achievement, Grade Prediction, Data Analysis
Peer reviewed (full-text PDF available on ERIC)
Tugay Kaçak; Abdullah Faruk Kiliç – International Journal of Assessment Tools in Education, 2025
Researchers continue to choose PCA in scale development and adaptation studies because it is the default setting in common software and because it overestimates measurement quality. When PCA is utilized, the explained variance and factor loadings can be exaggerated. PCA, in contrast to the models given in the literature, should be investigated in…
Descriptors: Factor Analysis, Monte Carlo Methods, Mathematical Models, Sample Size
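A small simulation, not taken from the article, of the inflation the Kaçak and Kiliç abstract describes: six items with true loadings of 0.6 on a single factor. PCA folds unique variance into its first component and reports loadings near 0.68, while a common factor model recovers values near 0.60. The sample size and loading values are hypothetical; scikit-learn's PCA and FactorAnalysis are used for convenience.

import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(42)
n_obs, n_items, true_loading = 5000, 6, 0.6

# One-factor data: items have unit variance by construction
factor = rng.normal(size=(n_obs, 1))
unique = rng.normal(scale=np.sqrt(1 - true_loading**2), size=(n_obs, n_items))
X = true_loading * factor + unique

# PCA "loadings": unit eigenvector scaled by the square root of the eigenvalue
pca = PCA(n_components=1).fit(X)
pca_loadings = np.abs(pca.components_[0]) * np.sqrt(pca.explained_variance_[0])

# Common factor model loadings
fa = FactorAnalysis(n_components=1).fit(X)
fa_loadings = np.abs(fa.components_[0])

print("true loading:            ", true_loading)
print("PCA loadings (inflated): ", pca_loadings.round(2))   # around 0.68
print("factor analysis loadings:", fa_loadings.round(2))    # around 0.60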
Peer reviewed
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable represents the incidental parameter problem since the number of parameters increases when including data of new persons. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
Peer reviewed
Kangkang Li; Chengyang Qian; Xianmin Yang – Education and Information Technologies, 2025
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant as it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning outcomes. However, the methods of aggregating students' evaluations of SGC face the…
Descriptors: Student Developed Materials, Educational Quality, Automation, Artificial Intelligence
Peer reviewed
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
Peer reviewed
Zhengjun Li; Huayang Kang – International Journal of Web-Based Learning and Teaching Technologies, 2025
The rapid development of higher education in China has significantly advanced physical education within universities, contributing to students' comprehensive development and national health improvement. However, the expansion of university enrollment has introduced challenges such as a decrease in per capita sports resources and declines in…
Descriptors: Physical Education Teachers, Teacher Effectiveness, Physical Education, Evaluation Methods
Kylie L. Anglin – Annenberg Institute for School Reform at Brown University, 2025
Since 2018, institutions of higher education have been aware of the "enrollment cliff" which refers to expected declines in future enrollment. This paper attempts to describe how prepared institutions in Ohio are for this future by looking at trends leading up to the anticipated decline. Using IPEDS data from 2012-2022, we analyze trends…
Descriptors: Validity, Artificial Intelligence, Models, Best Practices
Peer reviewed (full-text PDF available on ERIC)
Serena Pontenila; Emily Stephens; Nathan C. Anderson – Intersection: A Journal at the Intersection of Assessment and Learning, 2025
This paper begins by establishing the A+ Inquiry model as a theoretical lens for assessing needs related to program assessment workload by demonstrating its alignment with elements of five published frameworks associated with higher education assessment. Then, it uses the model as a frame of reference to explore faculty needs related to program…
Descriptors: College Faculty, Teacher Attitudes, Faculty Workload, Program Evaluation
Peer reviewed
Ran Bao; Jianyong Chen – Technology, Knowledge and Learning, 2025
Multimodal learning analytics emphasizes using diverse data from various sources and forms for precise examination of learning patterns. Despite recent rapid advancements in this field, conventional learning analytics remains predominantly cross-sectional and group-focused, which is insufficient for understanding continuous and personalized learning…
Descriptors: Learning Analytics, Data Use, Evaluation Methods, Learning Processes
Peer reviewed
Lauren A. Mason; Abigail Miller; Gregory Hughes; Holly A. Taylor – Cognitive Research: Principles and Implications, 2025
False alarming, or detecting an error when there is not one, is a pervasive problem across numerous industries. The present study investigated the role of elaboration, or additional information about non-error differences in complex visual displays, for mitigating false error responding. In Experiment 1, learners studied errors and non-error…
Descriptors: Error Correction, Error Patterns, Evaluation Methods, Visual Aids
Peer reviewed
Reese Butterfuss; Harold Doran – Educational Measurement: Issues and Practice, 2025
Large language models are increasingly used in educational and psychological measurement activities. Their rapidly evolving sophistication and ability to detect language semantics make them viable tools to supplement subject matter experts and their reviews of large amounts of text statements, such as educational content standards. This paper…
Descriptors: Alignment (Education), Academic Standards, Content Analysis, Concept Mapping