Showing all 3 results
Peer reviewed
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine the comparability of an online version of the Defining Issues Test 2 (DIT2) to the original paper-and-pencil version. This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Chang, Lei – 1993
Equivalence in reliability and validity across 4-point and 6-point scales was assessed by fitting different measurement models through confirmatory factor analysis of a multitrait-multimethod covariance matrix. Responses to nine Likert-type items designed to measure perceived quantitative ability and self-perceived usefulness of quantitative…
Descriptors: Ability, Comparative Testing, Education Majors, Graduate Students