Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 13 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Interrater Reliability | 13 |
| Scoring Formulas | 13 |
| Evaluation Methods | 5 |
| Test Reliability | 5 |
| Test Scoring Machines | 4 |
| Accuracy | 3 |
| Comparative Analysis | 3 |
| Computer Assisted Testing | 3 |
| Correlation | 3 |
| Essays | 3 |
| Grading | 3 |
Author
| Author | Count |
| --- | --- |
| Bardhoshi, Gerta | 1 |
| Barkaoui, Khaled | 1 |
| Beltrán, Jorge | 1 |
| Ben-Simon, Anat | 1 |
| Bennett, Randy Elliott | 1 |
| Bridgeman, Brent | 1 |
| Cohen, Allan | 1 |
| Davey, Tim | 1 |
| Erdogan, Semra | 1 |
| Erford, Bradley T. | 1 |
| Guskey, Thomas R. | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 12 |
| Reports - Research | 11 |
| Tests/Questionnaires | 2 |
| Numerical/Quantitative Data | 1 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 4 |
| Postsecondary Education | 3 |
| Adult Education | 2 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 7 | 1 |
| Grade 8 | 1 |
Location
| Location | Count |
| --- | --- |
| Malaysia | 1 |
| New York (New York) | 1 |
| Turkey | 1 |
| West Virginia | 1 |
Laws, Policies, & Programs
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Graduate Record Examinations | 1 |
Wyse, Adam E. – Practical Assessment, Research & Evaluation, 2018
One common modification to the Angoff standard-setting method is to have panelists round their ratings to the nearest 0.05 or 0.10 instead of 0.01, and several reasons have been offered for why this practice may make sense. In this article, we examine one reason that has been suggested, which is…
Descriptors: Interrater Reliability, Evaluation Criteria, Scoring Formulas, Achievement Rating
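For illustration, a minimal Python sketch of the rounding modification described above; the function name and ratings are hypothetical, not from the article.

```python
# Round Angoff panelist ratings to a coarser grid (e.g., nearest 0.05)
# instead of the default 0.01. Data are made up for illustration.
def round_to_increment(rating: float, increment: float) -> float:
    """Round a 0-1 probability rating to the nearest multiple of `increment`."""
    return round(round(rating / increment) * increment, 2)

ratings = [0.52, 0.68, 0.73]  # hypothetical panelist ratings
print([round_to_increment(r, 0.05) for r in ratings])  # -> [0.5, 0.7, 0.75]
```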
Raczynski, Kevin; Cohen, Allan – Applied Measurement in Education, 2018
The literature on Automated Essay Scoring (AES) systems has provided useful validation frameworks for any assessment that includes AES scoring, and evidence for the scoring fidelity of these systems is accumulating. Yet questions remain when appraising their scoring performance. These questions include: (a) which essays are…
Descriptors: Essay Tests, Test Scoring Machines, Test Validity, Evaluators
Bardhoshi, Gerta; Erford, Bradley T. – Measurement and Evaluation in Counseling and Development, 2017
Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, α), test-retest, alternate forms, interscorer, and…
Descriptors: Scores, Test Reliability, Accuracy, Pretests Posttests
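As a worked example of one internal-consistency estimate the authors discuss, here is a minimal coefficient-alpha computation; the data are hypothetical and this is not the article's code.

```python
# Coefficient (Cronbach's) alpha from an items-by-persons score matrix.
from statistics import variance

def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """item_scores[i][p] = score of person p on item i (rows are items)."""
    k = len(item_scores)
    person_totals = [sum(scores) for scores in zip(*item_scores)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(person_totals))

# Three items, five examinees (made-up data).
items = [[2, 4, 3, 5, 4], [3, 5, 3, 4, 4], [2, 5, 4, 5, 3]]
print(round(cronbach_alpha(items), 3))  # -> 0.862
```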
Guskey, Thomas R.; Jung, Lee Ann – Educational Leadership, 2016
Many educators consider grades calculated from statistical algorithms more accurate, objective, and reliable than grades they calculate themselves. But in this research, the authors first asked teachers to use their professional judgment to choose a summary grade for hypothetical students. When the researchers compared the teachers' grade with the…
Descriptors: Grading, Computer Assisted Testing, Interrater Reliability, Grades (Scholastic)
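A hedged sketch of the arithmetic at issue (hypothetical scores, not the study's data): a single zero pulls an averaged grade far below a judgment-based summary of typical performance.

```python
# Mean versus median as the algorithmic summary grade for one student.
from statistics import mean, median

scores = [0, 88, 90, 92, 85]  # hypothetical unit scores; one zero for missing work
print(mean(scores))           # 71.0 -> the "objective" average
print(median(scores))         # 88   -> closer to the student's typical work
```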
Tarricone, Pina; Newhouse, C. Paul – Australian Educational Researcher, 2016
Traditional moderation of student assessments is often carried out with groups of teachers working face-to-face in a specified location making judgements concerning the quality of representations of achievement. This traditional model has relied little on modern information communications technologies and has been logistically challenging. We…
Descriptors: Visual Arts, Art Education, Art Materials, Alternative Assessment
Temel, Gülhan Orekici; Erdogan, Semra; Selvi, Hüseyin; Kaya, Irem Ersöz – Educational Sciences: Theory and Practice, 2016
Studies based on longitudinal data focus on the change and development of the situation being investigated and allow for examining cases regarding education, individual development, cultural change, and socioeconomic improvement over time. However, as these studies require taking repeated measures in different time periods, they may include various…
Descriptors: Investigations, Sample Size, Longitudinal Studies, Interrater Reliability
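For context, interrater agreement at a single measurement occasion is often summarized with Cohen's kappa; a minimal sketch with hypothetical ratings, not code from the article.

```python
# Unweighted Cohen's kappa for two raters assigning categorical scores.
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = [1, 2, 2, 3, 1, 2, 3, 3]  # made-up ratings from rater A
b = [1, 2, 3, 3, 1, 2, 2, 3]  # made-up ratings from rater B
print(round(cohens_kappa(a, b), 3))  # -> 0.619
```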
Herbert, Ian P.; Joyce, John; Hassall, Trevor – Accounting Education, 2014
The design, delivery and assessment of a complete educational scheme, such as a degree programme or a professional qualification course, is a complex matter. Maintaining alignment between the stated aims of the curriculum and the scoring of student achievement is an overarching concern. The potential for drift across individual aspects of an…
Descriptors: Higher Education, Student Evaluation, Communities of Practice, Interrater Reliability
Beltrán, Jorge – Working Papers in TESOL & Applied Linguistics, 2016
In the assessment of the aural skills of second language learners, research on the inclusion of visual stimuli has been conducted almost exclusively in the context of listening assessment. While the inclusion of contextual information in test input has been advocated by numerous researchers (Ockey, 2010), little has been said regarding the…
Descriptors: Achievement Tests, Speech Skills, Speech Tests, Second Language Learning
Hixson, Nate; Rhudy, Vaughn – West Virginia Department of Education, 2013
Student responses to the West Virginia Educational Standards Test (WESTEST) 2 Online Writing Assessment are scored by a computer-scoring engine. The scoring method is not widely understood among educators, and there exists a misperception that it is not comparable to hand scoring. To address these issues, the West Virginia Department of Education…
Descriptors: Scoring Formulas, Scoring Rubrics, Interrater Reliability, Test Scoring Machines
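Comparability between engine and hand scoring is commonly summarized with exact and adjacent agreement rates; a minimal sketch under that assumption (the report's actual analysis may differ).

```python
# Exact agreement (same score) and adjacent agreement (within one point)
# between hand scores and engine scores; all data are hypothetical.
def agreement_rates(human: list[int], engine: list[int]) -> tuple[float, float]:
    n = len(human)
    exact = sum(h == e for h, e in zip(human, engine)) / n
    adjacent = sum(abs(h - e) <= 1 for h, e in zip(human, engine)) / n
    return exact, adjacent

human  = [4, 3, 5, 2, 4, 3]   # made-up scores on a 1-6 rubric
engine = [4, 4, 5, 2, 3, 3]
print(agreement_rates(human, engine))  # -> (0.666..., 1.0)
```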
Samad, Arshad Abd; bt Ahmad, Zamzam – Advances in Language and Literary Studies, 2012
Raimes (1983) identified nine components necessary to produce a piece of writing that is clear, fluent, and effective. These are also the aspects considered when assessing writing. The common practice is to have raters score the essays using a rating scale provided for this purpose. A training and practice session is also…
Descriptors: Writing Evaluation, Writing Achievement, Interrater Reliability, Scoring Formulas
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
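A sketch of the three evaluation statistics named in the abstract, computed on made-up human/machine score pairs; this is illustrative, not the report's implementation.

```python
# Quadratic weighted kappa, Pearson correlation, and standardized mean
# difference between human and machine scores (all data hypothetical).
from statistics import mean, pstdev

def pearson_r(x: list[int], y: list[int]) -> float:
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def standardized_difference(x: list[int], y: list[int]) -> float:
    pooled = ((pstdev(x) ** 2 + pstdev(y) ** 2) / 2) ** 0.5
    return (mean(x) - mean(y)) / pooled

def quadratic_weighted_kappa(h: list[int], m: list[int], k: int) -> float:
    """h, m hold integer scores in 1..k; weights are squared score distances."""
    n = len(h)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(h, m):
        obs[a - 1][b - 1] += 1 / n
    ph = [sum(row) for row in obs]                             # human marginals
    pm = [sum(obs[r][c] for r in range(k)) for c in range(k)]  # machine marginals
    num = sum((i - j) ** 2 * obs[i][j] for i in range(k) for j in range(k))
    den = sum((i - j) ** 2 * ph[i] * pm[j] for i in range(k) for j in range(k))
    return 1 - num / den

human   = [3, 4, 2, 5, 4, 3, 2, 4]
machine = [3, 4, 3, 4, 4, 3, 2, 5]
print(quadratic_weighted_kappa(human, machine, 6))
print(pearson_r(human, machine))
print(standardized_difference(human, machine))
```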
Barkaoui, Khaled – Assessment in Education: Principles, Policy & Practice, 2011
This study examined the effects of marking method and rater experience on ESL (English as a Second Language) essay test scores and rater performance. Each of 31 novice and 29 experienced raters rated a sample of ESL essays both holistically and analytically. Essay scores were analysed using a multi-faceted Rasch model to compare test-takers'…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Interrater Reliability
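For reference, a standard formulation of the multi-faceted (many-facet) Rasch model used in such rater studies; the exact parameterization in the article may differ.

```latex
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)
  = \theta_n - \beta_i - \gamma_j - \tau_k
```

Here \(\theta_n\) is the ability of test-taker n, \(\beta_i\) the difficulty of task i, \(\gamma_j\) the severity of rater j, and \(\tau_k\) the threshold between score categories k-1 and k.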
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays

