Publication Date
In 2025 | 1 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 5 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 7 |
Source
Cambridge University Press & Assessment | 1 |
Educational Studies in Mathematics | 1 |
English Language Teaching | 1 |
European Journal of Open, Distance and E-Learning | 1 |
International Journal of Listening | 1 |
Research Matters | 1 |
Research in Learning Technology | 1 |
Author
Amrane-Cooper, Linda | 1 |
Benton, Tom | 1 |
Cameron, Harriet | 1 |
Cheilari, Angeliki | 1 |
Coniam, David | 1 |
Elgueta, Herman | 1 |
Green, Clare | 1 |
Hatzipanagos, Stylianos | 1 |
Hughes, Sarah | 1 |
Jones, Ian | 1 |
Lampropoulou, Leda | 1 |
Publication Type
Journal Articles | 6 |
Reports - Research | 6 |
Reports - Descriptive | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 4 |
Postsecondary Education | 4 |
Secondary Education | 1 |
Location
United Kingdom | 7 |
Assessments and Surveys
International English… | 1 |
Green, Clare; Hughes, Sarah – Cambridge University Press & Assessment, 2022
The Digital High Stakes Assessment Programme at Cambridge University Press & Assessment is developing digital assessments for UK and global teachers and learners. In one development, the team are deciding which assessment models to use to assess computing systems knowledge and understanding. This research took place as part of the…
Descriptors: Test Items, Computer Science, Achievement Tests, Objective Tests
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities with new task formats and, equally, a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Benton, Tom – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student has to answer to their level of ability. Most commonly, this benefit is used to justify shortening tests whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level
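Benton's abstract turns on the mechanism behind adaptive testing: each next question is chosen at the difficulty where it is most informative about the candidate's current ability estimate. As a rough illustration only, not Benton's procedure or any particular operational test, the sketch below shows one such selection step under a Rasch (one-parameter IRT) model, where an item's information peaks when its difficulty matches the provisional ability; the item difficulties and ability value are invented.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def pick_next_item(theta_hat, item_bank, administered):
    """Choose the unadministered item with maximum information at theta_hat."""
    candidates = [(i, b) for i, b in enumerate(item_bank) if i not in administered]
    return max(candidates, key=lambda ib: item_information(theta_hat, ib[1]))

# Invented item difficulties (in logits) and a provisional ability estimate.
item_bank = [-2.0, -1.0, -0.3, 0.2, 0.9, 1.8]
theta_hat = 0.4
administered = {3}  # suppose item 3 has already been asked

idx, difficulty = pick_next_item(theta_hat, item_bank, administered)
print(f"Next item: {idx} (difficulty {difficulty}), "
      f"P(correct) = {rasch_prob(theta_hat, difficulty):.2f}")
```

Shortening a test then amounts to stopping this loop once the standard error of the ability estimate falls below a target, which is the trade-off the abstract alludes to.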
Amrane-Cooper, Linda; Hatzipanagos, Stylianos; Tait, Alan – European Journal of Open, Distance and E-Learning, 2023
In 2020, because of the COVID-19 pandemic, the higher education sector in the United Kingdom and internationally transitioned to online assessment at a speed and scale that might have been unimaginable under normal circumstances. The priority in the sector was to ensure that fundamental principles of assessment, including integrity, were…
Descriptors: Pandemics, COVID-19, Educational Change, Integrity
Coniam, David; Lampropoulou, Leda; Cheilari, Angeliki – English Language Teaching, 2021
This paper reports reactions by candidates to the use of online proctoring (OLP), or 'invigilation', in the delivery of high-stakes English language examinations. The paper first sets the scene in terms of the move from face-to-face to online modes of delivery. It explores the challenges and benefits that both modes offer, in terms of accessibility,…
Descriptors: High Stakes Tests, Language Tests, Second Language Learning, English (Second Language)
Sangwin, Christopher J.; Jones, Ian – Educational Studies in Mathematics, 2017
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Descriptors: Mathematics Achievement, Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing
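To make the hypothesised strategy concrete, here is an invented illustration (not an item from Sangwin and Jones's experiment): when a multiple-choice question asks for the result of an inverse process such as antidifferentiation, each option can be checked by running the direct process, differentiation, and comparing with the target.

```latex
% Invented item, inverse direction: which F(x) is an antiderivative of f(x) = 2x cos(x^2)?
%   (a) F(x) = sin(x^2)      (b) F(x) = cos(x^2)
% Verification by the direct method: differentiate each option.
\frac{d}{dx}\,\sin\!\left(x^{2}\right) = 2x\cos\!\left(x^{2}\right) = f(x),
\qquad
\frac{d}{dx}\,\cos\!\left(x^{2}\right) = -2x\sin\!\left(x^{2}\right) \neq f(x).
```

A constructed-response version of the same item offers no options to differentiate, which is why the two formats may probe different processes.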
Stafford, Tom; Elgueta, Herman; Cameron, Harriet – Research in Learning Technology, 2014
We introduced voluntary wiki-based exercises to a long-running cognitive psychology course, part of the core curriculum for an undergraduate degree in psychology. Over 2 yearly cohorts, students who used the wiki more also scored higher on the final written exam. Using regression analysis, it is possible to account for students' tendency to score…
Descriptors: Web 2.0 Technologies, Predictor Variables, Student Participation, Test Format
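The analysis described in the Stafford, Elgueta and Cameron abstract, relating exam performance to wiki use while adjusting for students' tendency to score well on other assessments, is a standard multiple regression. The sketch below is only a hedged illustration with invented data and variable names, not the authors' dataset or model specification.

```python
import numpy as np

# Invented illustrative data: one row per student.
wiki_edits  = np.array([0, 2, 5, 1, 8, 3, 0, 6, 4, 7], dtype=float)
other_marks = np.array([55, 60, 72, 58, 80, 65, 50, 75, 68, 78], dtype=float)  # proxy for general attainment
exam_scores = np.array([52, 61, 74, 57, 83, 66, 49, 77, 70, 81], dtype=float)

# Design matrix: intercept, wiki use, and a control for the tendency to score well elsewhere.
X = np.column_stack([np.ones_like(wiki_edits), wiki_edits, other_marks])
coef, *_ = np.linalg.lstsq(X, exam_scores, rcond=None)
intercept, b_wiki, b_other = coef

print(f"wiki coefficient (adjusted for other marks): {b_wiki:.2f}")
print(f"other-marks coefficient: {b_other:.2f}")
```

The coefficient on wiki_edits then estimates the association between wiki use and exam score once general attainment is taken into account.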