Showing all 9 results
Peer reviewed
Hassan Saleh Mahdi; Ahmed Alkhateeb – International Journal of Computer-Assisted Language Learning and Teaching, 2025
This study aims to develop a robust rubric for evaluating artificial intelligence (AI)-assisted essay writing in English as a Foreign Language (EFL) contexts. Employing a modified Delphi technique, we conducted a comprehensive literature review and administered Likert scale questionnaires. This process yielded nine key evaluation criteria,…
Descriptors: Scoring Rubrics, Essays, Writing Evaluation, Artificial Intelligence
Peer reviewed
Swapna Haresh Teckwani; Amanda Huee-Ping Wong; Nathasha Vihangi Luke; Ivan Cherh Chiet Low – Advances in Physiology Education, 2024
The advent of artificial intelligence (AI), particularly large language models (LLMs) like ChatGPT and Gemini, has significantly impacted the educational landscape, offering unique opportunities for learning and assessment. In the realm of written assessment grading, traditionally viewed as a laborious and subjective process, this study sought to…
Descriptors: Accuracy, Reliability, Computational Linguistics, Standards
Peer reviewed
Lee, Alwyn Vwen Yen; Luco, Andrés Carlos; Tan, Seng Chee – Educational Technology & Society, 2023
Although artificial intelligence (AI) is prevalent and impacts facets of daily life, there is limited research on responsible and humanistic design, implementation, and evaluation of AI, especially in the field of education. After all, learning is inherently a social endeavor involving human interactions, rendering the need for AI designs to be…
Descriptors: Essays, Scoring, Writing Evaluation, Computer Software
Peer reviewed
Al-Harthi, Aisha Salim Ali; Campbell, Chris; Karimi, Arafeh – Computers in the Schools, 2018
This study aimed to develop, validate, and trial a rubric for evaluating the cloud-based learning designs (CBLD) that were developed by teachers using virtual learning environments. The rubric was developed using the technological pedagogical content knowledge (TPACK) framework, with rubric development including content and expert validation of…
Descriptors: Computer Assisted Instruction, Scoring Rubrics, Interrater Reliability, Content Validity
Peer reviewed
Lavi, Rea; Dori, Yehudit Judy – International Journal of Science Education, 2019
Systems thinking is an important skill in science and engineering education. Our study objectives were (1) to create the basis for a systems thinking language common to both science education and engineering education, and (2) to apply this language to assess science and engineering teachers' systems thinking. We administered two assignments to…
Descriptors: Engineering Education, Systems Approach, Science Education, Language Usage
Peer reviewed
Razi, Salim – SAGE Open, 2015
Similarity reports of plagiarism detectors should be approached with caution, as they may not be sufficient to support allegations of plagiarism. This study developed a 50-item rubric to simplify and standardize the evaluation of academic papers. In the spring semester of the 2011-2012 academic year, 161 freshmen's papers at the English Language Teaching…
Descriptors: Foreign Countries, Scoring Rubrics, Writing Evaluation, Writing (Composition)
Peer reviewed
Unal, Zafer; Bodur, Yasar; Unal, Aslihan – Contemporary Issues in Technology and Teacher Education (CITE Journal), 2012
The researchers in this study undertook development of a webquest evaluation rubric and investigated its reliability. The rubric was created using the strengths of the currently available webquest rubrics with improvements based on the comments provided in the literature and feedback received from educators. After the rubric was created, 23…
Descriptors: Test Construction, Test Reliability, Instructional Material Evaluation, Scoring Rubrics
Peer reviewed
Unal, Zafer; Bodur, Yasar; Unal, Aslihan – Journal of Information Technology Education: Research, 2012
Current literature provides many examples of rubrics that are used to evaluate the quality of webquest designs. However, the reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to utilize the strengths of the…
Descriptors: Scoring Rubrics, Test Reliability, Test Construction, Evaluation Criteria
Stahl, John; And Others – 1996
On-line performance assessment was developed to maximize the usefulness of performance assessment and to minimize the time and labor costs incurred. This paper reports on the development of an on-line performance assessment instrument, focusing on the establishment and validation of the scoring rubric and its implementation in the Rasch model, the…
Descriptors: Computer Software, Computer Software Development, Cost Effectiveness, Interrater Reliability