Publication Date
In 2025 | 1 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 2 |
Descriptor
Difficulty Level | 8 |
Graduate Students | 8 |
Test Format | 8 |
Test Items | 7 |
Higher Education | 5 |
Multiple Choice Tests | 5 |
Computer Assisted Testing | 3 |
Item Analysis | 3 |
Analysis of Variance | 2 |
Comparative Testing | 2 |
Item Response Theory | 2 |
Source
Educational Process: International Journal | 1 |
International Journal of Instructional Media | 1 |
Journal of Educational Measurement | 1 |
Journal of Interactive Online Learning | 1 |
Publication Type
Reports - Research | 7 |
Journal Articles | 4 |
Speeches/Meeting Papers | 3 |
Reports - Evaluative | 1 |
Education Level
Higher Education | 3 |
Postsecondary Education | 2 |
Audience
Researchers | 1 |
Location
Saudi Arabia | 1 |
Assessments and Surveys
Defining Issues Test | 1 |
Abdullah Al Fraidan; Meznah Saud Abdulaziz Alsubaie – Educational Process: International Journal, 2025
Background: This study examines the effect of test anxiety on the academic performance of postgraduate female students, focusing on their perceptions and experiences in open-book exams (OBE) and closed-book exams (CBE). Method: A qualitative case study design was employed using the Thinking Aloud Protocol (TAP) to collect data from five Saudi…
Descriptors: Test Anxiety, Vocabulary, Females, Books
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine the comparability of an online version to the original paper-and-pencil version of the Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
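The CTT reliability analyses mentioned above typically report an internal-consistency coefficient such as Cronbach's alpha. As an illustrative sketch only (the response matrix below is simulated, not data from this study), a minimal Python version of that computation:

```python
# Minimal sketch of Cronbach's alpha, the usual CTT internal-consistency
# reliability coefficient. The scores here are hypothetical, not study data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: examinees x items matrix of item scores (0/1 or graded)."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = (rng.random((200, 20)) > 0.4).astype(int)  # 200 examinees, 20 items
    print(f"alpha = {cronbach_alpha(fake):.3f}")       # near 0 for random data
```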
Tollefson, Nona; Chen, Ju Shan – 1986
This study compared item difficulty and item discrimination indices for parallel multiple-choice items in three content areas: measurement concepts, statistical terminology, and synonyms. The statistics and measurement items were administered in classes where graduate students taking the test were studying the content. Vocabulary items represented…
Descriptors: Difficulty Level, Graduate Students, Higher Education, Item Analysis
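The item difficulty and item discrimination indices compared in the Tollefson studies are standard classical test theory statistics: the proportion of examinees answering an item correctly, and the (corrected) point-biserial correlation between an item and the rest of the test. A minimal sketch using simulated responses rather than the studies' data:

```python
# Hedged illustration of the two classical item statistics named in these
# abstracts: item difficulty (p-value) and item discrimination (corrected
# point-biserial). The response matrix is made up for demonstration.
import numpy as np

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    """responses: examinees x items matrix of 0/1 scores."""
    return responses.mean(axis=0)                 # proportion correct per item

def item_discrimination(responses: np.ndarray) -> np.ndarray:
    """Point-biserial of each item against the total score excluding that item."""
    n_items = responses.shape[1]
    totals = responses.sum(axis=1)
    disc = np.empty(n_items)
    for j in range(n_items):
        rest = totals - responses[:, j]           # leave the item out of the total
        disc[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return disc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ability = rng.normal(size=300)
    difficulty = np.linspace(-1.5, 1.5, 10)
    prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
    data = (rng.random((300, 10)) < prob).astype(int)
    print("difficulty:", np.round(item_difficulty(data), 2))
    print("discrimination:", np.round(item_discrimination(data), 2))
```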
Tollefson, Nona; Tripp, Alice – 1986
The item difficulty and item discrimination of three multiple-choice item formats were compared in experimental and non-experimental settings. In the experimental study, 104 graduate students were randomly assigned to complete one of three forms of a multiple-choice test: (1) a complex alternative ("none of the above") as the correct answer; (2) a…
Descriptors: Achievement Tests, Difficulty Level, Discriminant Analysis, Graduate Students
Tollefson, Nona; Tripp, Alice – 1983
This study compared the item difficulty and item discrimination of three multiple-choice item formats. The formats studied were a complex alternative ("none of the above") as the correct answer, a complex alternative as a foil, and the one-correct-answer format. One hundred four graduate students were randomly assigned to complete…
Descriptors: Analysis of Variance, Difficulty Level, Graduate Students, Higher Education

Clariana, Roy B. – International Journal of Instructional Media, 2004
This investigation considers the instructional effects of color as an over-arching context variable when learning from computer displays. The purpose of this investigation is to examine the posttest retrieval effects of color as a local, extra-item non-verbal lesson context variable for constructed-response versus multiple-choice posttest…
Descriptors: Instructional Effectiveness, Graduate Students, Color, Computer System Design
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students

Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT), in which students choose the difficulty level of their test items, was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
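For context on the adaptive side of this comparison: a computer-adaptive test typically administers, at each step, the unused item with the greatest Fisher information at the current ability estimate, whereas a self-adapted test lets the examinee choose the next difficulty level. A hedged sketch under a 2-parameter logistic IRT model, with made-up item parameters (not the study's item pool):

```python
# Illustrative maximum-information item selection for a CAT under the 2PL model.
# Item parameters and the ability estimate below are hypothetical.
import numpy as np

def p_correct(theta: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """2PL probability of a correct response for each item."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def item_information(theta: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fisher information I(theta) = a^2 * P * (1 - P) for the 2PL model."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1 - p)

def next_item(theta: float, a: np.ndarray, b: np.ndarray, used: set) -> int:
    """Index of the most informative item not yet administered."""
    info = item_information(theta, a, b)
    info[list(used)] = -np.inf                # exclude items already given
    return int(np.argmax(info))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.uniform(0.8, 2.0, size=30)        # discrimination parameters
    b = rng.normal(size=30)                   # difficulty parameters
    print("first item for theta = 0.0:", next_item(0.0, a, b, used=set()))
```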