Showing 1 to 15 of 1,060 results
Hongwen Guo; Matthew S. Johnson; Luis Saldivia; Michelle Worthington; Kadriye Ercikan – ETS Research Institute, 2025
ETS scientists developed a human-centered AI (HAI) framework that combines data on how students interact with assessments--such as task navigation and time spent--with their performance, providing deeper insights into student performance in large-scale assessments.
Descriptors: Artificial Intelligence, Student Evaluation, Evaluation Methods, Measurement
Peer reviewed | Full text PDF on ERIC
Hon Keung Yau; Choi Ho Man – Turkish Online Journal of Educational Technology - TOJET, 2025
This study explores Hong Kong higher education students' perceptions of E-assessment systems, focusing on factors shaping acceptance of E-examinations over traditional formats. Quantitative analysis of 107 respondents reveals significant positive correlations between diverse pre-exam guidance (e.g., tutorials) and key system features (e.g.,…
Descriptors: Foreign Countries, College Students, Student Attitudes, Computer Assisted Testing
Peer reviewed | Full text PDF on ERIC
Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2024
Assessing students' answers, and in particular natural language answers, is a crucial challenge in the field of education. Advances in transformer-based models, such as Large Language Models (LLMs), have led to significant progress in various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks,…
Descriptors: Student Evaluation, Computer Assisted Testing, Artificial Intelligence, Comprehension
Peer reviewed | Direct link
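As a hedged illustration of the general idea in the Oli et al. entry (not the authors' actual setup), one could prompt an LLM to score a student's free-text answer against a reference answer. The model name, rubric, and prompt wording below are assumptions for the sketch.

```python
# Illustrative sketch only: scoring a natural-language student answer with an LLM.
# The model name, 0-2 rubric, and prompt are assumptions, not the paper's protocol.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def grade_answer(question: str, reference: str, student_answer: str) -> str:
    """Ask the model for a 0-2 score (0 = wrong, 1 = partial, 2 = correct)."""
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Student answer: {student_answer}\n"
        "Score the student answer 0, 1, or 2 and give a one-sentence rationale."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

print(grade_answer("Why does a loop terminate?",
                   "Because its condition eventually becomes false.",
                   "It stops when the condition is no longer true."))
```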
Danielle Lottridge; Davis Dimalen; Gerald Weber – ACM Transactions on Computing Education, 2025
Automated assessment is well-established within computer science courses but largely absent from human--computer interaction courses. Automating the assessment of human--computer interaction (HCI) is challenging because the coursework tends not to be computational but rather highly creative, such as designing and implementing interactive…
Descriptors: Computer Science Education, Computer Assisted Testing, Automation, Man Machine Systems
Ildiko Porter-Szucs; Cynthia J. Macknish; Suzanne Toohey – John Wiley & Sons, Inc, 2025
"A Practical Guide to Language Assessment" helps educators at every level redefine their approach to language assessment. Grounded in extensive research and aligned with the latest advances in language education, this comprehensive guide introduces foundational concepts and explores key principles in test development and item writing.…
Descriptors: Student Evaluation, Language Tests, Test Construction, Test Items
Peer reviewed | Direct link
Renáta Kiss; Benő Csapó – International Journal of Early Childhood, 2025
Previous research has shown that phonological awareness is one of the most important prerequisites for early reading. Monitoring its development requires reliable, easy-to-use instruments, especially in the last years of kindergarten. The present study aims to explore the potential for assessing phonological awareness and some of its subskills…
Descriptors: Phonological Awareness, Kindergarten, Reading Skills, Student Evaluation
Peer reviewed | Direct link
Alhadi, Moosa; Zhang, Dake; Wang, Ting; Maher, Carolyn A. – Computers in the Schools, 2023
This research synthesizes studies that used a Digitalized Interactive Component (DIC) to assess K-12 student performance during Computer-based-Assessments (CBAs) in mathematics. A systematic search identified ten studies, including four that provided language assistance and six that provided response-construction support. We reported on the one…
Descriptors: Computer Assisted Testing, Mathematics Tests, Student Evaluation, Elementary Secondary Education
Peer reviewed | Full text PDF on ERIC
Mounia Machkour; Latifa Lamalif; Sophia Faris; Khalifa Mansouri – Educational Process: International Journal, 2025
Background/purpose: This study addresses the problem of demotivation generated by traditional assessment methods, which are often standardized, unengaging, and ill-suited to individual differences. In an increasingly digitized educational context, the primary objective is to assess the ability of an adaptive assessment system, developed on the…
Descriptors: Foreign Countries, High School Seniors, Student Evaluation, Student Motivation
Peer reviewed | Direct link
George Kinnear; Paola Iannone; Ben Davies – Educational Studies in Mathematics, 2025
Example-generation tasks have been suggested as an effective way to both promote students' learning of mathematics and assess students' understanding of concepts. E-assessment offers the potential to use example-generation tasks with large groups of students, but there has been little research on this approach so far. Across two studies, we…
Descriptors: Mathematics Skills, Learning Strategies, Skill Development, Student Evaluation
Peer reviewed | Direct link
Yang Zhen; Xiaoyan Zhu – Educational and Psychological Measurement, 2024
The pervasive issue of cheating in educational tests has emerged as a paramount concern within the realm of education, prompting scholars to explore diverse methodologies for identifying potential transgressors. While machine learning models have been extensively investigated for this purpose, the untapped potential of TabNet, an intricate deep…
Descriptors: Artificial Intelligence, Models, Cheating, Identification
Peer reviewed | Direct link
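A minimal sketch of how a tabular deep-learning classifier such as TabNet might be applied to examinee response data to flag potential cheating, in the spirit of the Zhen and Zhu entry. The feature matrix, labels, and hyperparameters are hypothetical, and the pytorch-tabnet API is assumed; this is not the authors' implementation.

```python
# Hypothetical sketch: flagging potentially aberrant test-takers with TabNet.
# Assumes the pytorch-tabnet package; features (per-item times, score patterns)
# and labels are illustrative placeholders, not the study's actual data.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examinees, n_features = 2000, 40           # e.g., item response times and scores
X = rng.normal(size=(n_examinees, n_features)).astype(np.float32)
y = rng.integers(0, 2, size=n_examinees)     # 1 = flagged in a prior review

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

clf = TabNetClassifier(seed=0)
clf.fit(X_tr, y_tr, eval_set=[(X_va, y_va)], eval_metric=["auc"], max_epochs=50)

flag_prob = clf.predict_proba(X_va)[:, 1]    # probability an examinee is flagged
```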
Jian Zhao; Elaine Chapman; Peyman G. P. Sabet – Education Research and Perspectives, 2024
The launch of ChatGPT and the rapid proliferation of generative AI (GenAI) have brought transformative changes to education, particularly in the field of assessment. This has prompted a fundamental rethinking of traditional assessment practices, presenting both opportunities and challenges in evaluating student learning. While numerous studies…
Descriptors: Literature Reviews, Artificial Intelligence, Evaluation Methods, Student Evaluation
Peer reviewed | Direct link
Arif Cem Topuz; Kinshuk – Educational Technology Research and Development, 2024
Online assessments of learning, or online exams, have become increasingly widespread with the rise of distance learning. Online exams are preferred by many students and are perceived as a quick and easy tool to measure knowledge. However, some students are concerned about the possibility of cheating and technological difficulties in online…
Descriptors: Computer Assisted Testing, Student Evaluation, Evaluation Methods, Student Attitudes
Peer reviewed | Direct link
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges for their broad-scale adoption: A technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
Peer reviewed | Full text PDF on ERIC
Burhan Ogut; Ruhan Circi; Huade Huo; Juanita Hicks; Michelle Yin – International Electronic Journal of Elementary Education, 2025
This study explored the effectiveness of extended time (ET) accommodations in the 2017 NAEP Grade 8 Mathematics assessment to enhance educational equity. Analyzing NAEP process data through an XGBoost model, we examined whether early interactions with assessment items could predict students' likelihood of requiring ET by identifying those who received…
Descriptors: Identification, Testing Accommodations, National Competency Tests, Equal Education
Peer reviewed | Direct link
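A hedged sketch of the kind of pipeline the Ogut et al. abstract describes: gradient-boosted classification of whether a student will need extended time, trained on early item-interaction features. The feature set and synthetic data are illustrative assumptions, not NAEP process data.

```python
# Illustrative sketch only: predicting need for extended time (ET) from early
# item-interaction features with XGBoost. Features and data are placeholders.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_students = 5000
# Hypothetical early-interaction features: time on first items, revisit counts, etc.
X = rng.normal(size=(n_students, 12))
y = rng.integers(0, 2, size=n_students)      # 1 = student received/needed ET

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                          eval_metric="auc")
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Hold-out AUC on synthetic data: {auc:.3f}")
```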
Petrilli, Michael J. – Education Next, 2022
In the late 1960s, when federal officials and eminent psychologists were first designing the National Assessment of Educational Progress (NAEP), they probably never contemplated testing students younger than nine. The technology for mass testing at the time--bubble sheets and No. 2 pencils--only worked if students could read the instructions and…
Descriptors: Kindergarten, Student Evaluation, National Competency Tests, Computer Assisted Testing