Showing all 3 results
Peer reviewed
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
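The evaluation statistics this abstract names (weighted kappa and Pearson correlation between human and automated scores) can be sketched in plain Python. The quadratic weighting scheme and the integer score scale below are illustrative assumptions, not a description of ETS's actual evaluation code:

```python
from itertools import product

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    """Agreement between two raters on an integer score scale,
    with disagreements penalized by squared distance."""
    n = len(human)
    k = max_score - min_score + 1
    # Observed joint score distribution
    obs = [[0.0] * k for _ in range(k)]
    for h, m in zip(human, machine):
        obs[h - min_score][m - min_score] += 1.0 / n
    # Marginals give the chance-agreement (expected) distribution
    ph = [sum(row) for row in obs]
    pm = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i, j in product(range(k), repeat=2):
        w = (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement weight
        num += w * obs[i][j]
        den += w * ph[i] * pm[j]
    return 1.0 - num / den

def pearson(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```

Perfect human–machine agreement yields kappa and correlation of 1.0; systematic disagreement drives both toward negative values.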
Peer reviewed Peer reviewed
Reilly, Richard R. – Educational and Psychological Measurement, 1975
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by a capitalization of the keying procedures on omitting tendencies, a procedure was devised to key options empirically with a "correction-for-guessing" constraint. (Author)
Descriptors: Achievement Tests, Graduate Study, Guessing (Tests), Scoring Formulas
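The "correction-for-guessing" constraint mentioned in this abstract refers to the standard formula-scoring idea: subtract a penalty for wrong answers so that blind guessing has an expected score of zero. A minimal sketch of that classical formula (the specific function name is illustrative):

```python
def formula_score(num_right, num_wrong, num_options):
    """Classical formula score R - W/(k-1): with k options per item,
    a pure guesser gets 1 right per (k-1) wrong in expectation,
    so the penalty drives the expected score of guessing to zero."""
    return num_right - num_wrong / (num_options - 1)
```

For example, guessing blindly on 100 five-option items yields about 20 right and 80 wrong, and 20 - 80/4 = 0.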
Kingston, Neal M. – 1985
Birnbaum's three-parameter logistic item response model was used to study guessing behavior of low ability examinees on the Graduate Record Examinations (GRE) General Test, Verbal Measure. GRE scoring procedures had recently changed, from a scoring formula which corrected for guessing, to number-right scoring. The three-parameter theory was used…
Descriptors: Academic Aptitude, Analysis of Variance, College Entrance Examinations, Difficulty Level
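Birnbaum's three-parameter logistic (3PL) model used in this last study gives the probability that an examinee of ability theta answers an item correctly; the lower asymptote c captures exactly the guessing behavior the study examines. A minimal sketch (the scaling here omits the conventional D = 1.7 constant, an assumption for simplicity):

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Birnbaum 3PL item response function:
    a = discrimination, b = difficulty, c = pseudo-guessing asymptote.
    As theta -> -inf the probability approaches c rather than zero,
    reflecting low-ability examinees who can still guess correctly."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is midway between c and 1, and for very low theta it approaches the guessing floor c.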