Showing 1 to 15 of 214 results
Peer reviewed
Guozhu Ding; Mailin Li; Shan Li; Hao Wu – Asia Pacific Education Review, 2025
This study investigated the optimal feedback intervals for tasks of varying difficulty levels in online testing and whether task difficulty moderates the effect of feedback intervals on student performance. A pre-experimental study with 36 students was conducted to determine the delayed time for providing feedback based on student behavioral data.…
Descriptors: Feedback (Response), Academic Achievement, Computer Assisted Testing, Intervals
Peer reviewed
Ebru Balta; Celal Deha Dogan – SAGE Open, 2024
As computer-based testing becomes more prevalent, the attention paid to response time (RT) in assessment practice and psychometric research correspondingly increases. This study explores the rate of Type I error in detecting preknowledge cheating behaviors, the power of the Kullback-Leibler (KL) divergence measure, and the L person fit statistic…
Descriptors: Cheating, Accuracy, Reaction Time, Computer Assisted Testing
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Peer reviewed
Jyoti Prakash Meher; Rajib Mall – IEEE Transactions on Education, 2025
Contribution: This article suggests a novel method for diagnosing a learner's cognitive proficiency using deep neural networks (DNNs) based on her answers to a series of questions. The outcome of the forecast can be used for adaptive assistance. Background: Often a learner spends considerable amounts of time in attempting questions on the concepts…
Descriptors: Cognitive Ability, Assistive Technology, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Julian Marvin Jörs; Ernesto William De Luca – Technology, Knowledge and Learning, 2025
The real-time availability of information and the intelligence of information systems have changed the way we deal with information. Current research is primarily concerned with the interplay between internal and external memory, i.e., how much and which forms of cognitively demanding processes we handle internally and when we use external storage…
Descriptors: Ethics, Learning Processes, Technology Uses in Education, Influence of Technology
Peer reviewed
Nathaniel Owen; Ananda Senel – Review of Education, 2025
Transparency in high-stakes English language assessment has become crucial for ensuring fairness and maintaining assessment validity in language testing. However, our understanding of how transparency is conceptualised and implemented remains fragmented, particularly in relation to stakeholder experiences and technological innovations. This study…
Descriptors: Accountability, High Stakes Tests, Language Tests, Computer Assisted Testing
Peer reviewed
Lahza, Hatim; Smith, Tammy G.; Khosravi, Hassan – British Journal of Educational Technology, 2023
Traditional item analyses such as classical test theory (CTT) use exam-taker responses to assessment items to approximate their difficulty and discrimination. The increased adoption by educational institutions of electronic assessment platforms (EAPs) provides new avenues for assessment analytics by capturing detailed logs of an exam-taker's…
Descriptors: Medical Students, Evaluation, Computer Assisted Testing, Time Factors (Learning)
Peer reviewed
Beifang Ma; Maximilian Krötz; Viola Deutscher; Esther Winther – International Journal of Training and Development, 2025
The rapid digital transformation of vocational education and training (VET) has underscored the need to adapt traditional assessment methods to digital formats. However, when transitioning to digital modes, it is crucial to consider factors beyond mere technical implementation, particularly the potential impact of altered presentation formats on…
Descriptors: Job Skills, Competence, Test Format, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Nese Öztürk Gübes – International Journal of Assessment Tools in Education, 2025
The Trends in International Mathematics and Science Study (TIMSS) was administered via computer, eTIMSS, for the first time in 2019. The purpose of this study was to investigate item block position and item format effect on eighth grade mathematics item easiness in low- and high-achieving countries of eTIMSS 2019. Item responses from Chile, Qatar,…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Mathematics Achievement
Peer reviewed
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Moon, Jung Aa; Lindner, Marlit Annalena; Arslan, Burcu; Keehner, Madeleine – Educational Measurement: Issues and Practice, 2022
Many test items use both an image and text, but present them in a spatially separate manner. This format could potentially cause a split-attention effect in which the test taker's cognitive load is increased by having to split attention between the image and text, while mentally integrating the two sources of information. We investigated the…
Descriptors: Computer Assisted Testing, Cognitive Processes, Difficulty Level, Attention
Peer reviewed
Shen, Jing; Wu, Jingwei – Journal of Speech, Language, and Hearing Research, 2022
Purpose: This study examined the performance difference between remote and in-laboratory test modalities with a speech recognition in noise task in older and younger adults. Method: Four groups of participants (younger remote, younger in-laboratory, older remote, and older in-laboratory) were tested on a speech recognition in noise protocol with…
Descriptors: Age Differences, Test Format, Computer Assisted Testing, Auditory Perception
Peer reviewed
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors an array of options for varying testing parameters, but the pedagogical impacts of some of these variations has yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Andrea Révész; Hyeonjeong Jeong; Shungo Suzuki; Haining Cui; Shunsui Matsuura; Kazuya Saito; Motoaki Sugiura – Studies in Second Language Acquisition, 2024
The last three decades have seen significant development in understanding and describing the effects of task complexity on learner internal processes. However, researchers have primarily employed behavioral methods to investigate task-generated cognitive load. Being the first to adopt neuroimaging to study second language (L2) task effects, we…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Decision Making Skills
Peer reviewed
Spino, LeAnne L.; Echevarría, Megan M.; Wu, Yu – Foreign Language Annals, 2022
The ACTFL Oral Proficiency Interview--computer (OPIc) employs a self-assessment instrument to determine the nature of the speaking prompts to which the test taker will respond and, thus, the difficulty of the test. Grounded in research demonstrating varying levels of accuracy in self-assessment among language learners, this study examines the…
Descriptors: Computer Assisted Testing, Oral Language, Language Proficiency, Self Evaluation (Individuals)