Publication Date
| Period | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 5 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 9 |
Publication Type
| Type | Records |
| --- | --- |
| Reports - Descriptive | 21 |
| Journal Articles | 15 |
| Opinion Papers | 2 |
| Collected Works - Proceedings | 1 |
| Guides - Non-Classroom | 1 |
| Non-Print Media | 1 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Level | Records |
| --- | --- |
| Higher Education | 2 |
| Postsecondary Education | 1 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 2 |
| Researchers | 1 |
Ricardo Conejo Muñoz; Beatriz Barros Blanco; José del Campo-Ávila; José L. Triviño Rodriguez – IEEE Transactions on Learning Technologies, 2024
Automatic question generation and the assessment of procedural knowledge are still challenging research topics. This article focuses on a particular case: the techniques of parsing grammars for compiler construction. There are two well-known parsing techniques: top-down parsing with LL(1) and bottom-up parsing with LR(1). Learning these techniques and…
Descriptors: Automation, Questioning Techniques, Knowledge Level, Language
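As an illustration of the top-down technique the Conejo et al. abstract mentions (not the authors' system), the sketch below is a minimal recursive-descent, LL(1)-style parser for a toy arithmetic grammar; the grammar and names are invented for the example.

```python
# Minimal LL(1)-style recursive-descent parser for the toy grammar:
#   E -> T (('+'|'-') T)*
#   T -> F (('*'|'/') F)*
#   F -> NUMBER | '(' E ')'
# Each decision uses a single token of lookahead, which is what makes it LL(1).

import re

def tokenize(text):
    return re.findall(r"\d+|[+\-*/()]", text)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def parse_expr(self):            # E -> T (('+'|'-') T)*
        value = self.parse_term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.parse_term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def parse_term(self):            # T -> F (('*'|'/') F)*
        value = self.parse_factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.parse_factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def parse_factor(self):          # F -> NUMBER | '(' E ')'
        if self.peek() == "(":
            self.eat("(")
            value = self.parse_expr()
            self.eat(")")
            return value
        return int(self.eat())

print(Parser(tokenize("2*(3+4)")).parse_expr())  # 14
```

A bottom-up LR(1) parser would build the same structure through table-driven shift/reduce actions rather than recursive procedures; that side is omitted here.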
Han, Chao – Language Testing, 2022
Over the past decade, testing and assessing spoken-language interpreting has garnered an increasing amount of attention from stakeholders in interpreter education, professional certification, and interpreting research. This is because in these fields assessment results provide a critical evidential basis for high-stakes decisions, such as the…
Descriptors: Translation, Language Tests, Testing, Evaluation Methods
Anita Pásztor-Kovács; Attila Pásztor; Gyöngyvér Molnár – Interactive Learning Environments, 2023
In this paper, we present an agenda of recommended research directions for addressing the issues of realizing and evaluating communication in collaborative problem solving (CPS) instruments. We outline our ideas on potential ways to improve: (1) generalizability in Human-Human assessment tools and ecological validity in Human-Agent ones; (2) flexible and convenient use of…
Descriptors: Cooperation, Problem Solving, Evaluation Methods, Teamwork
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
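As a hedged illustration of what automated textual analysis can look like in practice (not the authors' actual pipeline), the sketch below computes a few surface-level text features in plain Python; the feature names are assumptions for the example.

```python
# Illustrative automated text features: word count, type-token ratio,
# and mean sentence length. These stand in for the kinds of indices an
# NLP pipeline might score automatically instead of relying on human raters.

import re

def text_features(text):
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "word_count": len(words),
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        "mean_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }

sample = "Natural language reveals how people think. Automated scoring scales that insight."
print(text_features(sample))
```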
Tong Li; Sarah D. Creer; Tracy Arner; Rod D. Roscoe; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
Automated writing evaluation (AWE) tools can facilitate teachers' analysis of and feedback on students' writing. However, increasing evidence indicates that writing instructors experience challenges in implementing AWE tools successfully. For this reason, our development of the Writing Analytics Tool (WAT) has employed a participatory approach…
Descriptors: Automation, Writing Evaluation, Learning Analytics, Participatory Research
O'Leary, Michael; Scully, Darina; Karakolidis, Anastasios; Pitsia, Vasiliki – European Journal of Education, 2018
The role of digital technology in assessment has received a great deal of attention in recent years. Naturally, technology offers many practical benefits, such as increased efficiency with regard to the design, implementation and scoring of existing assessments. More importantly, it also has the potential to have profound, transformative effects…
Descriptors: Computer Assisted Testing, Educational Technology, Technology Uses in Education, Evaluation Methods
Ivaniushin, Dmitrii A.; Shtennikov, Dmitrii G.; Efimchick, Eugene A.; Lyamin, Andrey V. – International Association for Development of the Information Society, 2016
This paper describes an approach to using automated assessments in online courses. The Open edX platform is used as the online course platform. The new assessment type uses Scilab as the learning and solution-validation tool. This approach allows automated individual variant generation and automated solution checks without involving the course…
Descriptors: Online Courses, Evaluation Methods, Computer Assisted Testing, Large Group Instruction
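The Open edX/Scilab integration itself is not reproduced here; the sketch below only illustrates the general idea of automated individual variant generation plus automated solution checking, using a hypothetical seeded parameter set and a numeric tolerance.

```python
# Illustration of per-student variant generation plus automated checking:
# each student ID deterministically seeds that student's parameters, and a
# submitted numeric answer is accepted if it is within a small tolerance.

import random

def generate_variant(student_id: str):
    rng = random.Random(student_id)          # deterministic per student
    a, b = rng.randint(2, 9), rng.randint(10, 99)
    return {"prompt": f"Compute {a} * {b}", "expected": a * b}

def check_solution(variant, submitted: float, tol: float = 1e-6) -> bool:
    return abs(submitted - variant["expected"]) <= tol

variant = generate_variant("student-42")
print(variant["prompt"])
print(check_solution(variant, variant["expected"]))       # True
print(check_solution(variant, variant["expected"] + 1))   # False
```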
Munoz, Carlos; Garcia-Penalvo, Francisco J.; Morales, Erla Mariela; Conde, Miguel Angel; Seoane, Antonio M. – International Journal of Distance Education Technologies, 2012
Automation toward efficiency is the aim of most intelligent systems in educational contexts, where automating the calculation of results allows experts to spend most of their time on important tasks rather than on retrieving, ordering, and interpreting information. In this paper, the authors provide a tool that easily evaluates Learning Objects quality…
Descriptors: Evaluation Methods, Teaching Methods, Automation, Educational Resources
Georgouli, Katerina; Guerreiro, Pedro – International Journal on E-Learning, 2011
This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…
Descriptors: Foreign Countries, Electronic Learning, Programming, Internet
Joy, Mike; Griffiths, Nathan; Boyatt, Russell – Journal on Educational Resources in Computing, 2005
Computer programming lends itself to automated assessment. With appropriate software tools, program correctness can be measured, along with an indication of quality according to a set of metrics. Furthermore, the regularity of program code allows plagiarism detection to be an integral part of the tools that support assessment. In this paper, we…
Descriptors: Plagiarism, Evaluation Methods, Programming, Feedback (Response)
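As a hedged sketch of the two ideas in the Joy et al. abstract, test-based correctness checking and similarity-based plagiarism screening, the example below runs a submitted function against fixed test cases and compares two submissions with difflib; none of this reflects the authors' actual tools.

```python
# Illustration only: check a submitted function against test cases, and
# compute a rough textual similarity between two submissions as a very
# crude plagiarism signal (real tools use far more robust comparisons).

import difflib

def run_tests(func, cases):
    results = [(args, func(*args) == expected) for args, expected in cases]
    passed = sum(1 for _, ok in results if ok)
    return passed, len(results)

# A hypothetical student submission for "return the maximum of two numbers".
def submission(a, b):
    return a if a > b else b

cases = [((1, 2), 2), ((5, 3), 5), ((-1, -1), -1)]
print(run_tests(submission, cases))          # (3, 3)

source_a = "def mx(a,b):\n    return a if a > b else b\n"
source_b = "def maximum(x,y):\n    return x if x > y else y\n"
similarity = difflib.SequenceMatcher(None, source_a, source_b).ratio()
print(round(similarity, 2))                  # similarity in [0, 1]
```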
Salton, Gerard; And Others – Information Processing & Management, 1997 (peer reviewed)
Discussion of the use of information retrieval techniques for automatic generation of semantic hypertext links focuses on automatic text summarization. Topics include World Wide Web links, text segmentation, and evaluation of text summarization by comparing automatically generated abstracts with manually prepared abstracts. (Author/LRW)
Descriptors: Abstracts, Automation, Comparative Analysis, Evaluation Methods
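To illustrate the kind of comparison the Salton et al. abstract describes, matching an automatically generated abstract against a manually prepared one, the sketch below computes simple word-overlap precision, recall, and F1; it is not the evaluation protocol used in the article.

```python
# Crude word-overlap comparison between an automatic summary and a
# manual reference abstract: precision, recall, and F1 over word sets.

import re

def word_set(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def overlap_scores(automatic, reference):
    auto, ref = word_set(automatic), word_set(reference)
    common = auto & ref
    precision = len(common) / len(auto) if auto else 0.0
    recall = len(common) / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

auto_abstract = "Hypertext links are generated automatically from text passages."
manual_abstract = "The system automatically generates hypertext links between related text passages."
print(overlap_scores(auto_abstract, manual_abstract))
```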
French, James C.; And Others – Information Processing & Management, 1997 (peer reviewed)
Describes a prototype system for software documentation management called SLEUTH (Software Literacy Enhancing Usefulness to Humans) being developed at the University of Virginia. Highlights include information retrieval techniques, hypertext links that are installed automatically, a WAIS (Wide Area Information Server) search engine, user…
Descriptors: Automation, Computer Interfaces, Computer Software, Documentation
Blustein, James; And Others – Information Processing & Management, 1997 (peer reviewed)
Presents two methods for evaluating automatically generated hypertext links: one is based on correlations between shortest paths in the hypertext structure and a semantic similarity measure, and the other is based on measuring users' performances using hypertext. Advantages and disadvantages of computer versus human evaluation are discussed.…
Descriptors: Automation, Comparative Analysis, Correlation, Evaluation Methods
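As a sketch of the first evaluation idea in the Blustein et al. abstract, correlating shortest-path distances in the hypertext with a semantic similarity measure, the code below builds a tiny link graph, computes shortest paths by BFS, and reports a Pearson correlation against hypothetical similarity scores; the graph and scores are invented for illustration.

```python
# Toy evaluation of generated links: Pearson correlation between
# shortest-path distance in the link graph and a (made-up) similarity score.
# The intuition: highly similar documents should be few hops apart.

from collections import deque
from statistics import correlation  # Python 3.10+

links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def shortest_path(graph, start, goal):
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")

# Hypothetical semantic similarity for each document pair.
pairs = [("A", "B", 0.9), ("A", "C", 0.5), ("A", "D", 0.2), ("B", "D", 0.3)]
distances = [shortest_path(links, u, v) for u, v, _ in pairs]
similarities = [s for _, _, s in pairs]
print(correlation(distances, similarities))  # expect a negative correlation
```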
Agosti, Maristella; And Others – Information Processing & Management, 1996 (peer reviewed)
Describes the design and implementation of TACHIR, a tool for the automatic construction of hypertexts for information retrieval. Topics include a three-level conceptual model; navigating among documents, index terms, and concepts; the use of HTML (Hypertext Markup Language) and the World Wide Web; evaluation of TACHIR; and future possibilities.…
Descriptors: Automation, Computer Software Development, Evaluation Methods, Futures (of Society)
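The three-level model named in the Agosti et al. abstract (documents, index terms, concepts) might be represented, in the most naive way, as a pair of mappings navigable in either direction; the data here is invented and the structure is only a reading aid, not TACHIR's design.

```python
# Naive three-level structure: documents link to index terms, and index
# terms link to broader concepts. Navigation works by following mappings.

doc_to_terms = {
    "doc1": {"hypertext", "links"},
    "doc2": {"retrieval", "links"},
}
term_to_concepts = {
    "hypertext": {"information access"},
    "links": {"information access", "navigation"},
    "retrieval": {"information access"},
}

def concepts_for(doc):
    return set().union(*(term_to_concepts[t] for t in doc_to_terms[doc]))

def docs_for_concept(concept):
    return {d for d, terms in doc_to_terms.items()
            if any(concept in term_to_concepts[t] for t in terms)}

print(concepts_for("doc1"))             # concepts reachable from doc1
print(docs_for_concept("navigation"))   # documents indexed under 'navigation'
```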
Schuster, James M. – Educational Technology, 1995
Describes an improvement program, based on total quality management concepts, for training employees that was developed at an IBM facility. Topics include student evaluation; follow-up telephone surveys; creating an educational brochure; and automating registration and recordkeeping activities. (LRW)
Descriptors: Automation, Case Studies, Evaluation Methods, Improvement Programs
