Publication Date

| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 3 |
| Regression (Statistics) | 3 |
| Sample Size | 3 |
| Scores | 2 |
| Adaptive Testing | 1 |
| Cheating | 1 |
| Comparative Analysis | 1 |
| Computer Software | 1 |
| Correlation | 1 |
| Distance Education | 1 |
| English (Second Language) | 1 |
Author

| Author | Count |
| --- | --- |
| Breyer, F. Jay | 1 |
| Englander, Fred | 1 |
| Fask, Alan | 1 |
| Lorenz, Florian | 1 |
| Nandakumar, Ratna | 1 |
| Roussos, Louis | 1 |
| Wang, Zhaobo | 1 |
| Zhang, Mo | 1 |
Publication Type

| Publication type | Count |
| --- | --- |
| Journal Articles | 2 |
| Reports - Research | 2 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
Education Level

| Education level | Count |
| --- | --- |
| Higher Education | 1 |
| Postsecondary Education | 1 |
Fask, Alan; Englander, Fred; Wang, Zhaobo – Practical Assessment, Research & Evaluation, 2015
There has been remarkable growth in distance learning courses in higher education. Despite indications that distance learning courses are more vulnerable to cheating behavior than traditional courses, there has been little research studying whether online exams facilitate a relatively greater level of cheating. This article examines this issue…
Descriptors: Distance Education, Introductory Courses, Statistics, Cheating
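The truncated abstract does not spell out the authors' statistical model. As a minimal, hedged sketch of one common design for this question, the snippet below regresses final exam scores on an unproctored-online indicator while controlling for a prior measure of ability; the variables, simulated data, and effect size are all hypothetical, not taken from the paper.

```python
# Hypothetical illustration: the paper's actual model is not shown in the
# abstract. One common design regresses exam score on an "unproctored"
# indicator while controlling for prior ability (e.g., midterm score).
import numpy as np

rng = np.random.default_rng(0)

n = 120
midterm = rng.normal(75, 10, n)          # prior-ability control (hypothetical)
online = rng.integers(0, 2, n)           # 1 = unproctored online exam
# Simulated scores with a 4-point bump for unproctored exams (made-up effect)
final = 10 + 0.8 * midterm + 4.0 * online + rng.normal(0, 6, n)

# OLS via least squares: final ~ intercept + midterm + online
X = np.column_stack([np.ones(n), midterm, online])
beta, *_ = np.linalg.lstsq(X, final, rcond=None)

# beta[2] estimates the score difference attributable to the online,
# unproctored setting after controlling for prior ability.
print(f"estimated unproctored-exam effect: {beta[2]:.2f} points")
```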
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
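The abstract names agreement as one evaluation dimension but is cut off before the details. Quadratic weighted kappa (QWK) is a standard human-machine agreement statistic in automated essay scoring, so a self-contained sketch of it follows; whether this particular report uses QWK is an assumption, and the toy scores are fabricated.

```python
# Hypothetical illustration: QWK between human and machine essay scores.
import numpy as np

def quadratic_weighted_kappa(a, b, min_score, max_score):
    """QWK between two integer score vectors on [min_score, max_score]."""
    a = np.asarray(a) - min_score
    b = np.asarray(b) - min_score
    k = max_score - min_score + 1

    # Observed agreement matrix and its chance expectation from the marginals
    observed = np.zeros((k, k))
    for i, j in zip(a, b):
        observed[i, j] += 1
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / len(a)

    # Quadratic disagreement weights: 0 on the diagonal, growing with distance
    idx = np.arange(k)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2

    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Toy scores on a 1-6 essay scale (fabricated for illustration)
human   = [4, 3, 5, 2, 4, 6, 3, 4]
machine = [4, 3, 4, 2, 5, 6, 3, 3]
print(f"QWK = {quadratic_weighted_kappa(human, machine, 1, 6):.3f}")
```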
Nandakumar, Ratna; Roussos, Louis – 1997
This paper investigates the performance of CATSIB (a modified version of the SIBTEST computer program) to assess differential item functioning (DIF) in the context of computerized adaptive testing (CAT). One of the distinguishing features of CATSIB is its theoretically built-in regression correction to control for the Type I error rates when the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Bias, Power (Statistics)
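For readers unfamiliar with SIBTEST, the sketch below computes the core quantity it builds on: a weighted difference in studied-item performance between reference and focal examinees matched on a subtest score. CATSIB's distinguishing regression correction is deliberately omitted here, so this is only the uncorrected skeleton under fabricated data, not the paper's procedure.

```python
# Hypothetical illustration: an uncorrected SIBTEST-style DIF estimate.
# CATSIB additionally applies a regression correction to the matching
# scores; that correction is NOT implemented in this sketch.
import numpy as np

def beta_uni(item, subtest, group):
    """Weighted mean difference on the studied item across matched strata.

    item    : 0/1 responses to the studied item
    subtest : matching scores (valid-subtest totals)
    group   : 0 = reference, 1 = focal
    """
    item, subtest, group = map(np.asarray, (item, subtest, group))
    beta, n_matched = 0.0, 0
    for k in np.unique(subtest):
        ref = item[(subtest == k) & (group == 0)]
        foc = item[(subtest == k) & (group == 1)]
        if len(ref) and len(foc):        # keep strata with both groups present
            n_k = len(ref) + len(foc)
            beta += n_k * (ref.mean() - foc.mean())
            n_matched += n_k
    return beta / n_matched

# Fabricated data with no true DIF, so the estimate should be near zero
rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)
ability = rng.normal(0, 1, n)
subtest = np.clip(np.round(10 + 3 * ability + rng.normal(0, 1, n)), 0, 20)
item = (rng.random(n) < 1 / (1 + np.exp(-(ability - 0.2)))).astype(int)
print(f"beta_uni = {beta_uni(item, subtest, group):+.3f}")
```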

