Showing 151 to 165 of 3,123 results
Peer reviewed
PDF on ERIC
Samsa, Gregory – Journal of Curriculum and Teaching, 2021
Objective: Our master's program in biostatistics requires a qualifying examination (QE). A curriculum review led us to question whether to replace a closed-book format with an open-book one. Our goal was to improve the QE. Methods: This is a case study and commentary, where we describe the evolution of the QE, both in its goals and its content.…
Descriptors: Testing, Cooperative Learning, Evaluation Methods, Test Format
Peer reviewed
PDF on ERIC
Yilmaz, Erdi Okan; Toker, Türker – International Journal of Psychology and Educational Studies, 2022
This study examines online assessment and evaluation activities in distance education. The effects of different online exam application styles on assessment and evaluation, across all programs of a higher education institution, were documented. The population for online…
Descriptors: Foreign Countries, Computer Assisted Testing, Test Format, Distance Education
Peer reviewed
Direct link
Shen, Jing; Wu, Jingwei – Journal of Speech, Language, and Hearing Research, 2022
Purpose: This study examined the performance difference between remote and in-laboratory test modalities with a speech recognition in noise task in older and younger adults. Method: Four groups of participants (younger remote, younger in-laboratory, older remote, and older in-laboratory) were tested on a speech recognition in noise protocol with…
Descriptors: Age Differences, Test Format, Computer Assisted Testing, Auditory Perception
Green, Clare; Hughes, Sarah – Cambridge University Press & Assessment, 2022
The Digital High Stakes Assessment Programme in Cambridge University Press & Assessment is developing digital assessments for UK and global teachers and learners. In one development, the team are making decisions about the assessment models to use to assess computing systems knowledge and understanding. This research took place as part of the…
Descriptors: Test Items, Computer Science, Achievement Tests, Objective Tests
Peer reviewed
Direct link
Haug, Tobias; Mann, Wolfgang; Holzknecht, Franz – Sign Language Studies, 2023
This study is a follow-up to previous research conducted in 2012 on computer-assisted language testing (CALT) that applied a survey approach to investigate the use of technology in sign language testing worldwide. The goal of the current study was to replicate the 2012 study and to obtain updated information on the use of technology in sign…
Descriptors: Computer Assisted Testing, Sign Language, Natural Language Processing, Language Tests
Peer reviewed
Direct link
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors an array of options for varying testing parameters, but the pedagogical impacts of some of these variations have yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Direct link
Gustafsson, Martin; Barakat, Bilal Fouad – Comparative Education Review, 2023
International assessments inform education policy debates, yet little is known about their floor effects: To what extent do they fail to differentiate between the lowest performers, and what are the implications of this? TIMSS, SACMEQ, and LLECE data are analyzed to answer this question. In TIMSS, floor effects have been reduced through the…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
Peer reviewed
Direct link
Botelho, Anthony; Baral, Sami; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – Journal of Computer Assisted Learning, 2023
Background: Teachers often rely on the use of open-ended questions to assess students' conceptual understanding of assigned content. Particularly in the context of mathematics, teachers use these types of questions to gain insight into the processes and strategies adopted by students in solving mathematical problems beyond what is possible through…
Descriptors: Natural Language Processing, Artificial Intelligence, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Direct link
Julia Jochim; Vera Kristina Lenz-Kesekamp – Information and Learning Sciences, 2025
Purpose: Large language models such as ChatGPT are a challenge to academic principles, calling into question well-established practices, teaching and exam formats. This study aims to explore the adaptation process regarding text-generative artificial intelligence (AI) of students and teachers in higher education and to identify needs for change.…
Descriptors: Artificial Intelligence, Student Needs, Higher Education, Technology Uses in Education
Peer reviewed
Direct link
Jude Edeigba; Solomon Opare; Fawzi Laswad – Accounting Education, 2025
Given the debate in tertiary institutions on the use of technology for assessments due to the coronavirus (COVID-19) pandemic, we compare the performance of accounting students in proctored paper-based and online exams. We examine exam scores across different demographic variables to provide early insights into the impact of these demographics on…
Descriptors: Accounting, Business Education, Academic Achievement, Computer Assisted Testing
Peer reviewed
Direct link
Pasquale Anselmi; Jürgen Heller; Luca Stefanutti; Egidio Robusto; Giulia Barillari – Education and Information Technologies, 2025
Competence-based test development (CbTD) is a novel method for constructing tests that are as informative as possible about the competence state (the set of skills an individual masters) underlying the item responses. If desired, the tests can also be minimal, meaning that no item can be eliminated without reducing their informativeness. To…
Descriptors: Competency Based Education, Test Construction, Test Length, Usability
Peer reviewed
Direct link
Mi-Hyun Bang; Young-Min Lee – Education and Information Technologies, 2024
The Human Resources Development Service of Korea developed a digital exam for five representative engineering categories and conducted a pilot study comparing the findings with the paper-and-pencil exam results from the last three years. This study aimed to compare the test efficiency between digital and paper-and-pencil examinations. A digital…
Descriptors: Engineering Education, Computer Assisted Testing, Foreign Countries, Human Resources
Peer reviewed
Direct link
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
Emma Walland – Research Matters, 2024
GCSE examinations (taken by students aged 16 years in England) are not intended to be speeded (i.e. to be partly a test of how quickly students can answer questions). However, there has been little research exploring this. The aim of this research was to explore the speededness of past GCSE written examinations, using only the data from scored…
Descriptors: Educational Change, Test Items, Item Analysis, Scoring
Peer reviewed
PDF on ERIC
Courey, Karyssa A.; Lee, Michael D. – AERA Open, 2021
Student evaluations of teaching are widely used to assess instructors and courses. Using a model-based approach and Bayesian methods, we examine how the direction of the scale, labels on scales, and the number of options affect the ratings. We conduct a within-participants experiment in which respondents evaluate instructors and lectures using…
Descriptors: Student Evaluation of Teacher Performance, Rating Scales, Response Style (Tests), College Students