Showing 16 to 30 of 832 results
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Kathleen Bohack – ProQuest LLC, 2021
In mathematics education, assessment is increasingly conducted using computer- or tablet-based technologies as alternatives to the traditional paper-and-pencil format. Several studies have examined the impact of test mode (computer vs. paper) on test performance in mathematics, but these studies have produced mixed results and few studies…
Descriptors: High School Students, Mathematics Tests, Computer Assisted Testing, Scores
Peer reviewed
Daniel M. Settlage; Jim R. Wollscheid – Journal of the Scholarship of Teaching and Learning, 2024
The examination of the testing mode effect has received increased attention as higher education has shifted to remote testing during the COVID-19 pandemic. We believe the testing mode effect consists of four components: the ability to physically write on the test, the method of answer recording, the proctoring/testing environment, and the effect…
Descriptors: College Students, Macroeconomics, Tests, Answer Sheets
Santi Lestari – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain in a paper-based mode. Insufficient and unequal digital provision across schools is often identified as a major barrier to a full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Peer reviewed
Tadd Farmer; Michael C. Johnson; Jorin D. Larsen; Lance E. Davidson – Advances in Physiology Education, 2025
Team-based learning (TBL) is an active learning instructional strategy shown to improve student learning in large-enrollment courses. Although early implementations of TBL proved generally effective in an undergraduate exercise physiology course that delivered an online individual readiness assurance test (iRAT) before class, the instructor…
Descriptors: Cooperative Learning, Active Learning, Undergraduate Students, Exercise Physiology
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. The report describes the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Peer reviewed
Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item-response theory to model rater effects provides an alternative solution for rater monitoring and diagnosis, compared to using standard performance metrics. To fit such models, the ratings data must be sufficiently connected to estimate rater effects. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
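The connectivity requirement mentioned in the abstract can be illustrated with a small sketch: treat raters and rated responses as nodes of a bipartite graph and count its connected components. Rater effects can only be placed on a common scale within one component. This is illustrative only; the function name and toy design below are not from the study.

```python
from collections import defaultdict, deque

def connected_components(ratings):
    """Group raters and responses into linked components.

    `ratings` is a list of (rater, response) pairs; rater effects are only
    comparable on a common scale within one connected component.
    """
    graph = defaultdict(set)
    for rater, resp in ratings:
        graph[("r", rater)].add(("x", resp))
        graph[("x", resp)].add(("r", rater))
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        queue, comp = deque([node]), set()
        seen.add(node)
        while queue:  # breadth-first search over one component
            cur = queue.popleft()
            comp.add(cur)
            for nxt in graph[cur]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        components.append(comp)
    return components

# A disconnected design: raters A and B never share a response with rater C,
# so C's severity cannot be compared to theirs.
design = [("A", 1), ("B", 1), ("A", 2), ("C", 3)]
print(len(connected_components(design)))
```

With the toy design above, the check reports two components, signaling that the rating design is too sparse to estimate all rater effects jointly.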
Peer reviewed
Dhini, Bachriah Fatwa; Girsang, Abba Suganda; Sufandi, Unggul Utan; Kurniawati, Heny – Asian Association of Open Universities Journal, 2023
Purpose: The authors constructed an automatic essay scoring (AES) model in a discussion forum where the result was compared with scores given by human evaluators. This research proposes essay scoring, which is conducted through two parameters, semantic and keyword similarities, using a SentenceTransformers pre-trained model that can construct the…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Essays
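The two-parameter scoring idea (semantic similarity plus keyword similarity) can be sketched minimally. Plain bag-of-words cosine similarity stands in for the SentenceTransformers embeddings used in the study, and `score_essay` and its weights are hypothetical, not the authors' model.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def score_essay(answer, reference, keywords, w_sem=0.7, w_kw=0.3):
    """Weighted blend of a similarity score and keyword coverage.

    Bag-of-words vectors stand in here for sentence embeddings; the
    weights are illustrative, not taken from the study.
    """
    sem = cosine(Counter(answer.lower().split()),
                 Counter(reference.lower().split()))
    kw = sum(k.lower() in answer.lower() for k in keywords) / len(keywords)
    return w_sem * sem + w_kw * kw

score = score_essay(
    "validity means the test measures what it claims",
    "a valid test measures what it claims to measure",
    ["valid", "measure"],
)
print(round(score, 2))
```

Swapping the bag-of-words vectors for real sentence embeddings changes only the `sem` term; the blending logic is unchanged.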
Peer reviewed
Thuy Ho Hoang Nguyen; Bao Trang Thi Nguyen; Giang Thi Linh Hoang; Nhung Thi Hong Pham; Tu Thi Cam Dang – Language Testing in Asia, 2024
The present study explored the comparability in performance scores between the computer-delivered and face-to-face modes for the two speaking tests in the Vietnamese Standardized Test of English Proficiency (VSTEP) (the VSTEP.2 and VSTEP.3-5 Speaking tests) according to Vietnam's Six-Level Foreign Language Proficiency Framework (VNFLPF) and test…
Descriptors: Test Format, Computer Assisted Testing, Student Attitudes, Language Tests
Collette Marie Lere' London – ProQuest LLC, 2024
The purpose of this quantitative, quasi-experimental study was to determine if, and to what extent, there is a statistically significant difference between pre- and posttest critical thinking scores of U.S. Navy Operations Specialist A-school participants in an adaptive technology training environment and those in the traditional learning environment.…
Descriptors: Military Training, Armed Forces, Critical Thinking, Skill Development
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Various prior studies have explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Jones, Paul; Tong, Ye; Liu, Jinghua; Borglum, Joshua; Primoli, Vince – Journal of Educational Measurement, 2022
This article studied two methods to detect mode effects in two credentialing exams. In Study 1, we used a "modal scale comparison approach," where the same pool of items was calibrated separately, without transformation, within two TC cohorts (TC1 and TC2) and one OP cohort (OP1) matched on their pool-based scale score distributions. The…
Descriptors: Scores, Credentials, Licensing Examinations (Professions), Computer Assisted Testing
Peer reviewed
Sinharay, Sandip – Educational Measurement: Issues and Practice, 2021
Technical difficulties occasionally lead to missing item scores and hence to incomplete data on computerized tests. It is not straightforward to report scores to the examinees whose data are incomplete due to technical difficulties. Such reporting essentially involves imputation of missing scores. In this paper, a simulation study based on data…
Descriptors: Data Analysis, Scores, Educational Assessment, Educational Testing
Peer reviewed
Themistocleous, Charalambos; Neophytou, Kyriaki; Rapp, Brenda; Tsapkini, Kyrana – Journal of Speech, Language, and Hearing Research, 2020
Purpose: The evaluation of spelling performance in aphasia reveals deficits in written language and can facilitate the design of targeted writing treatments. Nevertheless, manual scoring of spelling performance is time-consuming, laborious, and error-prone. We propose a novel method based on the use of distance metrics to automatically score…
Descriptors: Computer Assisted Testing, Scoring, Spelling, Scores
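Distance-based spelling scoring can be sketched as follows, assuming Levenshtein edit distance as the metric (the abstract does not specify which distance metrics the authors use); `spelling_score` and its normalization are illustrative.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete from a
                           cur[j - 1] + 1,       # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def spelling_score(response, target):
    """Normalized similarity in [0, 1]; 1.0 means a perfect spelling."""
    if not target and not response:
        return 1.0
    return 1.0 - levenshtein(response, target) / max(len(response), len(target))

# One substitution in a nine-letter word.
print(round(spelling_score("nesessary", "necessary"), 2))  # 0.89
```

Normalizing by the longer string keeps scores comparable across words of different lengths, which matters when aggregating over a spelling battery.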
Peer reviewed
VanDerHeyden, Amanda M.; Codding, Robin; Solomon, Benjamin G. – Remedial and Special Education, 2023
Computer-based curriculum-based measurement (CBM) is a relatively common practice, but surprisingly few studies have examined the reliability of computer-based CBM. This study sought to examine the reliability of CBM administered via paper/pencil versus the computer. Twenty-one of 25 students in two third-grade classes (N = 21) participated in two…
Descriptors: Curriculum Based Assessment, Computer Assisted Testing, Test Format, Grade 3