Showing all 6 results
Peer reviewed
Peter F. Halpin – Society for Research on Educational Effectiveness, 2024
Background: Meta-analyses of educational interventions have consistently documented the importance of methodological factors related to the choice of outcome measures. In particular, when interventions are evaluated using measures developed by researchers involved with the intervention or its evaluation, the effect sizes tend to be larger than…
Descriptors: College Students, College Faculty, STEM Education, Item Response Theory
Peer reviewed
Wida Wemmer-Rogh; Urs Grob; Charalambos Y. Charalambous; Anna-Katharina Praetorius – ZDM: Mathematics Education, 2024
Recent publications emphasize the need to take greater account of differences in teaching quality between subjects. The empirical analysis of this topic requires a comparison of teaching quality in different subjects to distinguish generic aspects of teaching quality from subject-specific ones. In this paper, we compare teaching quality in…
Descriptors: Foreign Countries, Elementary School Mathematics, Elementary School Students, Elementary School Teachers
Peer reviewed
Williams, Rihana Shiri; Ari, Omer; Santamaria, Carmen Nicole – Journal of Research in Reading, 2011
Recent investigations challenge the construct validity of sustained silent reading tests. Performance of two groups of post-secondary students (e.g. struggling and non-struggling) on a sustained silent reading test and two types of cloze test (i.e. maze and open-ended) was compared in order to identify the test format that contributes greater…
Descriptors: Evidence, Cloze Procedure, Reading Comprehension, Investigations
Peer reviewed
Sparfeldt, Jörn R.; Kimmel, Rumena; Löwenkamp, Lena; Steingräber, Antje; Rost, Detlef H. – Educational Assessment, 2012
Multiple-choice (MC) reading comprehension test items comprise three components: text passage, questions about the text, and MC answers. The construct validity of this format has been repeatedly criticized. In three between-subjects experiments, fourth graders (N₁ = 230, N₂ = 340, N₃ = 194) worked on three…
Descriptors: Test Items, Reading Comprehension, Construct Validity, Grade 4
Peer reviewed
Stone, Gregory Ethan; Koskey, Kristin L. K.; Sondergeld, Toni A. – Educational and Psychological Measurement, 2011
Typical validation studies on standard setting models, most notably the Angoff and modified Angoff models, have ignored construct development, a critical aspect associated with all conceptualizations of measurement processes. Stone compared the Angoff and objective standard setting (OSS) models and found that Angoff failed to define a legitimate…
Descriptors: Cutting Scores, Standard Setting (Scoring), Models, Construct Validity
Peer reviewed
Kim, Do-Hong; Huynh, Huynh – Educational and Psychological Measurement, 2008
The current study compared student performance between paper-and-pencil testing (PPT) and computer-based testing (CBT) on a large-scale statewide end-of-course English examination. Analyses were conducted at both the item and test levels. The overall results suggest that scores obtained from PPT and CBT were comparable. However, at the content…
Descriptors: Reading Comprehension, Computer Assisted Testing, Factor Analysis, Comparative Testing