Showing 1 to 15 of 121 results
Peer reviewed
Robert C. Lorenz; Mirjam Jenny; Anja Jacobs; Katja Matthias – Research Synthesis Methods, 2024
Conducting high-quality overviews of reviews (OoR) is time-consuming. Because the quality of systematic reviews (SRs) varies, it is necessary to critically appraise SRs when conducting an OoR. A well-established appraisal tool is A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, which takes about 15-32 min per application. To save time,…
Descriptors: Decision Making, Time Management, Evaluation Methods, Quality Assurance
Peer reviewed
Liang Zhang; Jionghao Lin; John Sabatini; Conrad Borchers; Daniel Weitekamp; Meng Cao; John Hollander; Xiangen Hu; Arthur C. Graesser – IEEE Transactions on Learning Technologies, 2025
Learning performance data, such as correct or incorrect answers and problem-solving attempts in intelligent tutoring systems (ITSs), facilitate the assessment of knowledge mastery and the delivery of effective instruction. However, these data tend to be highly sparse (80%–90% missing observations) in most real-world applications. This data…
Descriptors: Artificial Intelligence, Academic Achievement, Data, Evaluation Methods
Peer reviewed
Abigail Goben; Megan Sapp Nelson; Shaurya Gaur – College & Research Libraries, 2025
The "Building Your Research Data Management Toolkit" was developed to provide introductory research data management skills training to liaisons in academic libraries. This paper assesses the participants' perceived change in knowledge, behaviors, and attitudes as a result of participation in the RoadShow program. Long-term changes in…
Descriptors: Academic Libraries, Data, Information Management, Data Analysis
Peer reviewed
Bin Tan; Hao-Yue Jin; Maria Cutumisu – Computer Science Education, 2024
Background and Context: Computational thinking (CT) has been increasingly added to K-12 curricula, prompting teachers to grade more and more CT artifacts. This has led to a rise in automated CT assessment tools. Objective: This study examines the scope and characteristics of publications that use machine learning (ML) approaches to assess…
Descriptors: Computation, Thinking Skills, Artificial Intelligence, Student Evaluation
Peer reviewed
Carpentras, Dino; Quayle, Michael – International Journal of Social Research Methodology, 2023
Agent-based models (ABMs) often rely on psychometric constructs such as 'opinions', 'stubbornness', 'happiness', etc. The measurement process for these constructs is quite different from the one used in physics as there is no standardized unit of measurement for opinion or happiness. Consequently, measurements are usually affected by 'psychometric…
Descriptors: Psychometrics, Error of Measurement, Models, Prediction
Peer reviewed
Lee, Yi-Hsuan; Haberman, Shelby J. – Journal of Educational Measurement, 2021
For assessments that use different forms in different administrations, equating methods are applied to ensure comparability of scores over time. Ideally, a score scale is well maintained throughout the life of a testing program. In reality, instability of a score scale can result from a variety of causes, some of which are expected while others may be…
Descriptors: Scores, Regression (Statistics), Demography, Data
Peer reviewed
Ting Ding; Mengqi Zhang – International Journal of Web-Based Learning and Teaching Technologies, 2024
Information technology continues to advance, and university English teaching has changed under its influence. Unlike traditional teaching, more and more students adopt the "Internet + Smartphone" mode to learn English. This paper proposes a teaching mode evaluation method in…
Descriptors: English for Special Purposes, Educational Change, Business Administration Education, Data
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Peer reviewed
Zhongqiang Feng; Yi Zhang – International Journal of Web-Based Learning and Teaching Technologies, 2024
The outcome-based education (OBE) concept is a teaching mode that emphasizes improving students' initiative and professional practice ability. Animation courses are grounded in both drawing and computing, which requires teachers to understand the OBE mode for animation courses, carry out targeted teaching innovation, and adjust…
Descriptors: Animation, MOOCs, Art Education, Teaching Methods
Peer reviewed
Guher Gorgun; Okan Bulut – Educational Measurement: Issues and Practice, 2025
Automatic item generation can supply many items instantly and efficiently to assessment and learning environments. Yet the evaluation of item quality remains a bottleneck for deploying generated items in learning and assessment settings. In this study, we investigated the utility of using large language models, specifically Llama 3-8B, for…
Descriptors: Artificial Intelligence, Quality Control, Technology Uses in Education, Automation
Peer reviewed
Collier-Meek, Melissa A.; Fallon, Lindsay M.; Gould, Kaitlin – School Psychology Quarterly, 2018
Collecting treatment integrity data is critical for (a) strengthening internal validity within a research study, (b) determining the impact of an intervention on student outcomes, and (c) assessing the need for implementation supports. Although researchers have noted the increased inclusion of treatment integrity data in published articles, there…
Descriptors: Integrity, Data, Feedback (Response), Evaluation Methods
Peer reviewed
Cusker, Jeremy – Issues in Science and Technology Librarianship, 2018
In 2012, this author published a paper describing a method for using the raw data from Web of Science to examine the journals cited by any given group of researchers and then compare that list to lists of 'top journals' of similar disciplines. It was not a straightforward method to use and required a great deal of effort and spreadsheet work by a…
Descriptors: Citation Analysis, Citations (References), Bibliometrics, Evaluation Methods
Peer reviewed
Forbes, Claire – Review of Education, 2022
Despite increasing pressure for policy and practice to adopt a more evidence-based approach, transferring evidence into use remains a stubborn challenge. This is largely due to a number of researcher-derived and user-derived barriers at play within institutions, organisations and systems that constrain active engagement with evidence. This paper…
Descriptors: Evidence Based Practice, Theory Practice Relationship, Barriers, Evaluation Methods
Peer reviewed
Cui, Ying; Chen, Fu; Lutsyk, Alina; Leighton, Jacqueline P.; Cutumisu, Maria – Assessment in Education: Principles, Policy & Practice, 2023
With the exponential increase in the volume of data available in the 21st century, data literacy skills have become vitally important in work places and everyday life. This paper provides a systematic review of available data literacy assessments targeted at different audiences and educational levels. The results can help researchers and…
Descriptors: Data, Information Literacy, 21st Century Skills, Competence
Lotfi Simon Kerzabi – ProQuest LLC, 2021
Monte Carlo methods are an accepted methodology for generating critical values for a maximum test. The same methods are also applicable to evaluating the robustness of the newly created test. A table of critical values was created, and the robustness of the new maximum test was evaluated for five different distributions. Robustness…
Descriptors: Data, Monte Carlo Methods, Testing, Evaluation Research
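The entry above describes using Monte Carlo simulation to generate critical values for a maximum test. As a minimal sketch of that general technique only (the dissertation's actual statistics, sample sizes, and distributions are not given here; the `max_stat` definition and the N(0, 1) null data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_stat(x):
    # Hypothetical "maximum test" statistic: the larger of two
    # standardized location statistics (mean-based and median-based).
    n = len(x)
    t_mean = abs(x.mean()) * np.sqrt(n)
    t_med = abs(np.median(x)) * np.sqrt(n)
    return max(t_mean, t_med)

def mc_critical_value(stat_fn, n, n_sim=20_000, alpha=0.05, draw=None):
    """Approximate the (1 - alpha) critical value of stat_fn by
    simulating its distribution under the null (default: N(0, 1) data)."""
    if draw is None:
        draw = lambda size: rng.standard_normal(size)
    stats = [stat_fn(draw(n)) for _ in range(n_sim)]
    return float(np.quantile(stats, 1 - alpha))

# Reject the null when the observed statistic exceeds this simulated value.
cv = mc_critical_value(max_stat, n=30)
```

Robustness can then be checked by replacing `draw` with samples from other distributions and observing how the rejection rate changes.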