St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D. – Journal of Educational and Behavioral Statistics, 2016
We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…
Descriptors: Research Methodology, Randomized Controlled Trials, Comparative Analysis, Time

Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue writing tasks. Prompt-specific, generic, and generic-with-prompt-specific-intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
