ERIC Number: ED640204
Record Type: Non-Journal
Publication Date: 2023
Pages: 239
Abstractor: As Provided
ISBN: 979-8-3808-5008-7
ISSN: N/A
EISSN: N/A
Available Date: N/A
A Comparison of Student and Research-Based Evaluations of Explanation Quality in an Introductory Physics Course for Engineers
Joe Olsen
ProQuest LLC, Ph.D. Dissertation, Rutgers The State University of New Jersey, School of Graduate Studies
Instructional explanations are a ubiquitous component of classroom instruction, but they are relatively neglected in science education compared to other facets of teaching and learning. The ubiquity of instructional explanations and their potential to stimulate student learning suggest that they should garner more attention from science education scholars. Given the sheer number of explanations the typical student has encountered in their education, students are likely to have developed opinions about what makes an explanation effective, yet only a few efforts have documented the criteria students use to evaluate instructional explanations. The current thesis has four goals: i) to develop an initial model of the dimensions along which students evaluate explanations generated by highly regarded physics instructors, ii) to determine the level of agreement among students about which explanations are of good versus poor quality, iii) to document the research-supported strategies used in those same explanations, and iv) to compare research-based evaluations of the explanations with students' opinions about which explanations are best. To accomplish these goals, nine highly rated instructors wrote 45 explanations in response to five physics questions. The 45 explanations were scored with a research-derived rubric that identifies research-supported strategies linked to learning outcomes. Next, students in an introductory calculus-based physics course for engineers were invited to make pairwise comparisons of the explanations and to rank order them. Students also described how they made their judgements. The comparative judgement data were analyzed with the Bradley-Terry model, and agreement was estimated with a split-half correlation method.
The rank orders were analyzed using a relative placement algorithm, which, to my knowledge, has not yet been used in research of this kind, and with a sequential rank agreement metric measuring agreement between rank orders. The top-rated explanations from the relative placement algorithm were compared with the research-driven scoring to highlight areas of agreement and disagreement between the research team and the student samples. From the analysis, I discuss patterns in the features that appear in the explanations used in this study. I also present a model of effective explanations from the student perspective, together with three general hypotheses that seem necessary to model good explaining practice from that perspective: i) students generally do not share the same criteria for the essence of good explaining, ii) students at the introductory level are sensitive to context when judging explanations, and iii) students may not be consciously aware of all of the criteria that drive their judgements. Based on my findings, I conclude that students use a wide range of criteria to evaluate explanations and that these criteria often align with the literature on explaining. As a consequence, I suggest that students are likely to provide valuable feedback to instructors on explanation quality. I show that the item quality estimates produced by the Bradley-Terry model, and the level of agreement between students estimated through split-half correlations, disagree with the item quality estimates produced by the relative placement algorithm and the agreement estimate from the sequential rank agreement metric. These results are consistent with the possibility that the Bradley-Terry model and the split-half correlation method do not produce item quality and agreement estimates that generalize beyond the context of comparative judgement.
Alternatively, the results may suggest that the Bradley-Terry model and split-half correlation method are inapplicable in contexts where judgement criteria are more subjective and/or flexible. Finally, I raise questions for future studies about interesting patterns in student responses, and about rank-order methodologies in general, that warrant further investigation. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 800-521-0600. Web page: http://www.proquest.com.bibliotheek.ehb.be/en-US/products/dissertations/individuals.shtml.]
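The Bradley-Terry analysis of comparative-judgement data described in the abstract can be illustrated with a short sketch. The win counts below are invented, and the minorization-maximization (Zermelo) iteration shown is one standard way to fit Bradley-Terry strength parameters; it is not necessarily the exact fitting procedure used in the dissertation.

```python
# Minimal Bradley-Terry fit via the classic MM (Zermelo) iteration.
# wins[i][j] = number of times explanation i was preferred over
# explanation j in pairwise comparisons (hypothetical data).

def fit_bradley_terry(wins, n_iter=1000, tol=1e-10):
    n = len(wins)
    p = [1.0] * n  # strength parameter for each explanation
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(total_wins / denom if denom > 0 else p[i])
        scale = n / sum(new_p)          # fix the overall scale
        new_p = [x * scale for x in new_p]
        if max(abs(a - b) for a, b in zip(p, new_p)) < tol:
            p = new_p
            break
        p = new_p
    return p

# Toy data: explanation 0 usually beats 1 and 2; 1 usually beats 2.
wins = [
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
]
strengths = fit_bradley_terry(wins)
ranking = sorted(range(len(strengths)),
                 key=lambda i: strengths[i], reverse=True)
```

Sorting explanations by their fitted strengths gives the model's quality ordering, which is the kind of item quality estimate the abstract compares against the relative placement algorithm's output.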
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://www.proquest.com.bibliotheek.ehb.be/en-US/products/dissertations/individuals.shtml
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A