Showing 1 to 15 of 16 results
Peer reviewed
Direct link
William Orwig; Emma R. Edenbaum; Joshua D. Greene; Daniel L. Schacter – Journal of Creative Behavior, 2024
Recent developments in computerized scoring via semantic distance have provided automated assessments of verbal creativity. Here, we extend past work, applying computational linguistic approaches to characterize salient features of creative text. We hypothesize that, in addition to semantic diversity, the degree to which a story includes…
Descriptors: Computer Assisted Testing, Scoring, Creativity, Computational Linguistics
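The semantic-distance scoring the entry above refers to is commonly computed as one minus the cosine similarity between word embeddings. A minimal sketch follows; the three-dimensional toy vectors and word list are invented for illustration (real systems use pretrained embedding spaces such as GloVe or LSA):

```python
import math

# Toy embedding table; values are invented for illustration only.
toy_vectors = {
    "dog":   [0.9, 0.1, 0.0],
    "cat":   [0.8, 0.2, 0.1],
    "piano": [0.0, 0.2, 0.9],
}

def semantic_distance(word_a, word_b, vectors=toy_vectors):
    """Return 1 - cosine similarity between two word vectors."""
    a, b = vectors[word_a], vectors[word_b]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Related words should be semantically closer than unrelated ones.
print(semantic_distance("dog", "cat") < semantic_distance("dog", "piano"))
```

Averaging such pairwise distances over the words of a response yields the kind of semantic-diversity feature the abstract describes.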
Peer reviewed
Direct link
Chunsong Jiang; Xuan Chen; Aiping Yu; Guiqin Liang – Education and Information Technologies, 2025
Assignments and tests are the main forms of evaluation in the educational process, yet students often lose interest in dull exercises during course learning. Inspired by elements of human-computer battle games, a course test system is designed to encourage students to take tests more frequently and actively and thereby achieve a better learning effect,…
Descriptors: Computer Games, Educational Games, Game Based Learning, Competition
Peer reviewed
PDF on ERIC Download full text
Laura Kuusemets; Kristin Parve; Kati Ain; Tiina Kraav – International Journal of Education in Mathematics, Science and Technology, 2024
Using multiple-choice questions as learning and assessment tools is standard at all levels of education. However, when discussing the positive and negative aspects of their use, the time and complexity involved in producing plausible distractor options emerge as a disadvantage that offsets the time savings in relation to feedback. The article…
Descriptors: Program Evaluation, Artificial Intelligence, Computer Assisted Testing, Man Machine Systems
Peer reviewed
PDF on ERIC Download full text
Doewes, Afrizal; Pechenizkiy, Mykola – International Educational Data Mining Society, 2021
Scoring essays is generally an exhausting and time-consuming task for teachers. Automated Essay Scoring (AES) facilitates the scoring process to be faster and more consistent. The most logical way to assess the performance of an automated scorer is by measuring the score agreement with the human raters. However, we provide empirical evidence that…
Descriptors: Man Machine Systems, Automation, Computer Assisted Testing, Scoring
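The human-machine score agreement that the entry above evaluates is most commonly reported as quadratic weighted kappa (QWK), which penalizes disagreements by the squared distance between scores. A minimal pure-Python sketch; the sample ratings are illustrative, not taken from the cited study:

```python
def quadratic_weighted_kappa(human, machine, min_rating=None, max_rating=None):
    """Agreement between two raters on an ordinal scale, 1.0 = perfect."""
    if min_rating is None:
        min_rating = min(min(human), min(machine))
    if max_rating is None:
        max_rating = max(max(human), max(machine))
    n = max_rating - min_rating + 1
    # Observed confusion matrix of raw counts.
    observed = [[0.0] * n for _ in range(n)]
    for h, m in zip(human, machine):
        observed[h - min_rating][m - min_rating] += 1
    num_items = len(human)
    hist_h = [sum(row) for row in observed]
    hist_m = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    numerator = denominator = 0.0
    for i in range(n):
        for j in range(n):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)
            expected = hist_h[i] * hist_m[j] / num_items  # chance agreement
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator

human_scores = [1, 2, 3, 4, 4, 2]
machine_scores = [1, 2, 3, 4, 3, 2]
print(round(quadratic_weighted_kappa(human_scores, machine_scores), 3))  # → 0.923
```

The same metric is available as `sklearn.metrics.cohen_kappa_score(..., weights="quadratic")`; the point the abstract makes is that a high kappa alone may not tell the whole story of scorer quality.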
Peer reviewed
Direct link
Zhai, Xiaoming; Shi, Lehong; Nehm, Ross H. – Journal of Science Education and Technology, 2021
Machine learning (ML) has been increasingly employed in science assessment to facilitate automatic scoring efforts, although with varying degrees of success (i.e., magnitudes of machine-human score agreements [MHAs]). Little work has empirically examined the factors that impact MHA disparities in this growing field, thus constraining the…
Descriptors: Meta Analysis, Man Machine Systems, Artificial Intelligence, Computer Assisted Testing
Peer reviewed
Direct link
Yildirim-Erbasli, Seyma N.; Bulut, Okan; Demmans Epp, Carrie; Cui, Ying – Journal of Educational Technology Systems, 2023
Conversational agents have been widely used in education to support student learning. There have been recent attempts to design and use conversational agents to conduct assessments (i.e., conversation-based assessments: CBA). In this study, we developed CBA with constructed and selected-response tests using Rasa--an artificial intelligence-based…
Descriptors: Artificial Intelligence, Intelligent Tutoring Systems, Computer Mediated Communication, Formative Evaluation
Peer reviewed
Direct link
Maya Usher – Assessment & Evaluation in Higher Education, 2025
The integration of Generative Artificial Intelligence (GenAI) in education has introduced innovative approaches to assessment. One such approach is AI chatbot-based assessment, which utilizes large language models to provide students with timely and consistent feedback. However, the effectiveness of AI chatbots in generating assessments comparable…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Peer Evaluation
Peer reviewed
Direct link
Han, Areum; Krieger, Florian; Borgonovi, Francesca; Greiff, Samuel – Large-scale Assessments in Education, 2023
Process data are becoming increasingly popular in education research. In the field of computer-based assessments of collaborative problem solving (ColPS), process data have been used to identify students' test-taking strategies while working on the assessment, and such data can be used to complement data collected on accuracy and overall…
Descriptors: Behavior Patterns, Cooperative Learning, Problem Solving, Reaction Time
Peer reviewed
Direct link
Moon, Jewoong; Ke, Fengfeng; Sokolikj, Zlatko – British Journal of Educational Technology, 2020
Tracking students' learning states to provide tailored learner support is a critical element of an adaptive learning system. This study explores how an automatic assessment is capable of tracking learners' cognitive and emotional states during virtual reality (VR)-based representational-flexibility training. This VR-based training program aims to…
Descriptors: Adolescents, Autism, Pervasive Developmental Disorders, Learning Processes
Peer reviewed
Direct link
Maddox, Bryan – Measurement: Interdisciplinary Research and Perspectives, 2017
This article discusses talk and gesture as neglected sources of process data (Maddox, 2015, Maddox and Zumbo, 2017). The significance of the article is the growing use of various sources of process data in computer-based testing (Ercikan and Pellegrino, (Eds.) 2017; Zumbo and Hubley, (Eds.) 2017). The use of process data on talk and gesture…
Descriptors: Nonverbal Communication, Verbal Communication, Data, Computer Assisted Testing
Peer reviewed
Direct link
Rosen, Yigal – International Journal of Artificial Intelligence in Education, 2015
How can activities in which collaborative skills of an individual are measured be standardized? In order to understand how students perform on collaborative problem solving (CPS) computer-based assessment, it is necessary to examine empirically the multi-faceted performance that may be distributed across collaboration methods. The aim of this…
Descriptors: Computer Assisted Testing, Problem Solving, Cooperation, Man Machine Systems
Peer reviewed
Direct link
Malik, Kaleem Razzaq; Mir, Rizwan Riaz; Farhan, Muhammad; Rafiq, Tariq; Aslam, Muhammad – EURASIA Journal of Mathematics, Science & Technology Education, 2017
This research uses data representation to support and improve policies for assessing learning, training, and English language competency. Students are required to communicate in English effectively, using language to persuade and influence. The electronic technology works to assess students' questions, positively enabling…
Descriptors: Knowledge Management, Computer Assisted Testing, Student Evaluation, Search Strategies
Peer reviewed
PDF on ERIC Download full text
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning
Baker, Eva L.; And Others – 1988
Evaluation models are being developed for assessing artificial intelligence (AI) systems in terms of similar performance by groups of people. Natural language understanding and vision systems are the areas of concentration. In simplest terms, the goal is to norm a given natural language system's performance on a sample of people. The specific…
Descriptors: Artificial Intelligence, Comparative Analysis, Computer Assisted Testing, Computer Science
Pritchard, William H., Jr.; And Others – Educational Technology, 1989
Discussion of courseware evaluation for computer-based training (CBT) highlights the CITAR Computer Courseware Evaluation Model (CCEM), which was developed at the Center for Interactive Technology, Applications, and Research (CITAR) at the University of South Florida. Instructional and interactive characteristics of commercially available CBT…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Courseware, Flow Charts