Showing 1 to 15 of 34 results
Peer reviewed
Direct link
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about the reliability and academic honesty of multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Joanna Williamson – Research Matters, 2025
Teachers, examiners and assessment experts know from experience that some candidates annotate exam questions. "Annotation" includes anything the candidate writes or draws outside of the designated response space, such as underlining, jotting, circling, sketching and calculating. Annotations are of interest because they may evidence…
Descriptors: Mathematics, Tests, Documentation, Secondary Education
Peer reviewed
Direct link
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Peer reviewed
Direct link
Douglas Yeboah – Cogent Education, 2023
Computer-based testing has been administered in e-learning environments as part of ICT integration in education. Recently, online testing has been gaining attention in both regular and distance education institutions, and students' preference for or perception of an online test versus a paper-based test is crucial to the successful adoption or implementation of either…
Descriptors: Foreign Countries, Undergraduate Students, Test Format, Computer Assisted Testing
Peer reviewed
Direct link
Lemmo, Alice – International Journal of Science and Mathematics Education, 2021
Comparative studies on paper-and-pencil and computer-based tests principally focus on statistical analysis of students' performances. In educational assessment, comparing students' performance (in terms of right or wrong results) does not imply a comparison of the problem-solving processes followed by students. In this paper, we present a theoretical…
Descriptors: Computer Assisted Testing, Comparative Analysis, Evaluation Methods, Student Evaluation
Peer reviewed
Direct link
Janse van Rensburg, Cecile; Coetzee, Stephen A.; Schmulian, Astrid – Assessment & Evaluation in Higher Education, 2022
This study reports on the incorporation of mobile instant messaging (MIM) in assessments, as a collaborative learning tool, to enable students to socially construct knowledge and develop their collaborative problem solving competence, while being assessed individually. In particular, this study explores: what is the extent and timing of students'…
Descriptors: Computer Mediated Communication, Student Evaluation, Peer Relationship, Cooperative Learning
Peer reviewed
PDF on ERIC (download full text)
Necati Taskin; Kerem Erzurumlu – Asian Journal of Distance Education, 2023
In this study, the online test scores and paper-and-pencil test scores of students studying through online learning were examined. A causal-comparative design was used to determine the distribution of students' test scores and to examine the relationship between them. The participants were freshman students studying in 12 faculties and 8…
Descriptors: Computer Assisted Testing, Scores, Test Format, Paper (Material)
Peer reviewed
Direct link
Jung Youn, Soo – Language Testing, 2023
As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and…
Descriptors: Test Construction, Test Validity, Second Language Learning, Telecommunications
Peer reviewed
PDF on ERIC (download full text)
Senadheera, Prasad; Kulasekara, Geetha Udayangani – Open Praxis, 2021
The COVID-19 outbreak brought many challenges, including the shift of university assessments to online mode. This study explores the impact of newly designed online formative assessments on students' learning in a Plant Physiology course. The assessments were designed with a focus on constructive…
Descriptors: Formative Evaluation, Evaluation Methods, Electronic Learning, Educational Environment
Christine G. Casey, Editor – Centers for Disease Control and Prevention, 2024
The "Morbidity and Mortality Weekly Report" ("MMWR") series of publications is published by the Office of Science, Centers for Disease Control and Prevention (CDC), U.S. Department of Health and Human Services. Articles included in this supplement are: (1) Overview and Methods for the Youth Risk Behavior Surveillance System --…
Descriptors: High School Students, At Risk Students, Health Behavior, National Surveys
Peer reviewed
Direct link
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Grantee Submission, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-In-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this paper, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed
Direct link
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Educational Measurement: Issues and Practice, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-in-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this article, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed
PDF on ERIC (download full text)
Al Roomy, Muhammad A. – Arab World English Journal, 2022
The emergency transition to online learning due to COVID-19 forced many sectors to respond quickly. Educational institutions' readiness to attend to the abrupt crisis and shift to remote teaching has varied in degree, and online assessment is one such area. Rapid advances in technology and software applications are changing the…
Descriptors: Language Teachers, Second Language Instruction, English (Second Language), Teacher Attitudes
Peer reviewed
Direct link
Abdullah Al Fraidan – International Journal of Distance Education Technologies, 2025
This study explores vocabulary assessment practices in Saudi Arabia's hybrid EFL ecosystem, leveraging platforms like Blackboard and Google Forms. The focus is on identifying prevalent test formats and evaluating their alignment with modern pedagogical goals. To classify vocabulary assessment formats in hybridized EFL contexts and recommend the…
Descriptors: Vocabulary Development, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC (download full text)
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring