Gu, Lin; Ling, Guangming; Liu, Ou Lydia; Yang, Zhitong; Li, Guirong; Kardanova, Elena; Loyalka, Prashant – Assessment & Evaluation in Higher Education, 2021
We examine the effects of computer-based versus paper-based assessment of critical thinking skills, adapted from English (in the U.S.) to Chinese. Using data collected under random assignment between the two modes across multiple Chinese colleges, we investigate mode effects from multiple perspectives: mean scores, measurement precision, item…
Descriptors: Critical Thinking, Tests, Test Format, Computer Assisted Testing
Bush, Martin – Assessment & Evaluation in Higher Education, 2015
The humble multiple-choice test is very widely used within education at all levels, but its susceptibility to guesswork makes it a suboptimal assessment tool. The reliability of a multiple-choice test is partly governed by the number of items it contains; however, longer tests are more time consuming to take, and for some subject areas, it can be…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Format, Test Reliability
Craig, Pippa; Gordon, Jill; Clarke, Rufus; Oldmeadow, Wendy – Assessment & Evaluation in Higher Education, 2009
This study aimed to provide evidence to guide decisions on the type and timing of assessments in a graduate medical programme, by identifying whether students from particular degree backgrounds face greater difficulty in satisfying the current assessment requirements. We examined the performance rank of students in three types of assessments and…
Descriptors: Student Evaluation, Medical Education, Student Characteristics, Correlation
Lloyd, D.; And Others – Assessment & Evaluation in Higher Education, 1996 (peer reviewed)
In an engineering technology course at Coventry University (England), the utility of computer-assisted tests was compared with that of traditional paper-based tests. It was found that the computer-based technique was acceptable to students, produced valid results, and demonstrated potential for saving staff time. (Author/MSE)
Descriptors: Comparative Analysis, Computer Assisted Testing, Efficiency, Engineering Education
Logan, Peter; Hazel, Elizabeth – Assessment & Evaluation in Higher Education, 1999 (peer reviewed)
An Australian study examined the role of language background and gender in the assessment of native and non-native English-speaking university students enrolled in physics courses. Differential performance was analyzed in relation to test-item type and level of student communication skills. Implications for academic…
Descriptors: College Students, Comparative Analysis, English (Second Language), Foreign Countries
