ERIC Number: ED668872
Record Type: Non-Journal
Publication Date: 2021
Pages: 140
Abstractor: As Provided
ISBN: 979-8-5355-1591-5
ISSN: N/A
EISSN: N/A
Available Date: N/A
Developing a Validity Argument Case for Locally Developed University English Preparedness Testing from an Ethical Perspective
Lynsey Joohyun Lee
ProQuest LLC, Ph.D. Dissertation, Indiana University of Pennsylvania
Reliability and validity have been studied in the educational measurement field, including within Writing Studies' subfield of writing assessment, since the establishment of the College Entrance Examination Board [CEEB] in 1899 (Huot et al., 2010). In recent years, scholarly conversations about the fairness and ethics of writing assessment have refocused on factors beyond the tests themselves, such as their impact on students and contextual concerns (AERA et al., 2014; Elliot, 2016; Kelly-Riley & Whithaus, 2016; Slomp, 2016a; Slomp, 2016b). The discussion of fairness and ethics extends to college placement testing. Through this dissertation study, I explored the meaning of validity in the context of writing assessment and examined how it applies to fair and ethical college writing placement testing across multiple situations: large-scale assessments that function as placement tests (Broad, 2016), the Common Core State Standards' impact on four-year institutions (Barnett & Cormier, 2014; Bracco, Austin et al., 2015; Bracco, Dadgar et al., 2014; Bracco, Klarin et al., 2014; Miller et al., 2017), and two-year college placement (Toth et al., 2019). From a social constructivist worldview, I conducted a case study with an exploratory, sequential, mixed-methods design, in which the qualitative phase builds toward the quantitative phase, to evaluate a four-year university's locally developed English preparedness test. The study had two purposes: (1) to explore how the raters of a locally developed placement test describe their assessment, focusing on their scoring procedures, scoring criteria, and how they address unintended advantages and disadvantages, and (2) to compare the claims made about the assessment with actual testing practice on these variables. Data collection and analysis were performed in two phases. Phase 1 consisted of a qualitative analysis of the scoring rubric and the survey responses. Phase 2 quantitatively compared the Phase 1 findings with the assessment results and grading information. The study demonstrated a model for locally evaluating writing assessment that focuses on the specific needs of the assessment from an ethical perspective. The approach and design of this case study are practical and replicable, and can be adopted by other institutions when localized evaluation of assessment is considered necessary. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone (1-800-521-0600). Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
Descriptors: Writing Evaluation, Test Validity, Test Reliability, Case Studies, Ethics, Test Bias, Placement Tests, Scoring, Instructional Material Evaluation, Student Placement, Interrater Reliability
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://www.proquest.com/en-US/products/dissertations/individuals.shtml
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A