Publication Date
In 2025 | 1 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 4 |
Descriptor
Graduate Students | 11 |
Multiple Choice Tests | 11 |
Test Format | 11 |
Higher Education | 8 |
Test Items | 7 |
Difficulty Level | 5 |
Undergraduate Students | 4 |
Comparative Analysis | 3 |
Correlation | 3 |
Foreign Countries | 3 |
Item Analysis | 3 |
Source
Accounting Education | 1 |
International Journal of… | 1 |
Journal of Education for… | 1 |
Journal of Educational… | 1 |
Journal of Experimental… | 1 |
Language Assessment Quarterly | 1 |
Author
Tollefson, Nona | 3 |
Tripp, Alice | 2 |
Arthur, Neal | 1 |
Chen, Ju Shan | 1 |
Chissom, Brad | 1 |
Chukabarah, Prince C. O. | 1 |
Everaert, Patricia | 1 |
Hancock, Gregory R. | 1 |
Kremmel, Benjamin | 1 |
Schmitt, Norbert | 1 |
Sinkavich, Frank J. | 1 |
Publication Type
Reports - Research | 10 |
Journal Articles | 6 |
Speeches/Meeting Papers | 3 |
Reports - Evaluative | 1 |
Education Level
Higher Education | 4 |
Postsecondary Education | 3 |
Audience
Researchers | 1 |
Location
United Kingdom | 2 |
Belgium | 1 |
Assessments and Surveys
International English… | 1 |
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities with new task formats and equally a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Kremmel, Benjamin; Schmitt, Norbert – Language Assessment Quarterly, 2016
The scores from vocabulary size tests have typically been interpreted as demonstrating that the target words are "known" or "learned." But "knowing" a word should entail the ability to use it in real language communication in one or more of the four skills. It should also entail deeper knowledge, such as knowing the…
Descriptors: Vocabulary Development, Language Tests, Scores, Test Items
Arthur, Neal; Everaert, Patricia – Accounting Education, 2012
This paper addresses the question of whether the increasing use of multiple-choice questions will favour particular student groups, i.e. male or female students. Using data from Belgium, this paper empirically examines the existence of a gender effect by comparing the relative performance of male and female students in both multiple-choice and…
Descriptors: Accounting, Business Administration Education, Gender Differences, Multiple Choice Tests
Swartz, Stephen M. – Journal of Education for Business, 2006
The confidence level (information-referenced testing; IRT) design is an attempt to improve upon the multiple choice format by allowing students to express a level of confidence in the answers they choose. In this study, the author evaluated student perceptions of the ease of use, accuracy, and general preference for traditional multiple…
Descriptors: Multiple Choice Tests, Essay Tests, Graduate Students, Student Attitudes
Tollefson, Nona; Chen, Ju Shan – 1986
This study compared item difficulty and item discrimination indices for parallel multiple-choice items in three content areas: measurement concepts, statistical terminology, and synonyms. The statistics and measurement items were administered in classes where graduate students taking the test were studying the content. Vocabulary items represented…
Descriptors: Difficulty Level, Graduate Students, Higher Education, Item Analysis
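(Several of the entries above and below compare item difficulty and item discrimination indices. As a rough illustration only, the following Python sketch computes the two classical statistics usually meant by those terms: the proportion of examinees answering an item correctly, and the point-biserial correlation between an item score and the rest-of-test total. The data and variable names are assumptions for the example, not taken from any cited study.)

import numpy as np

def item_statistics(responses: np.ndarray):
    """responses: examinees x items matrix of 0/1 scored answers."""
    n_items = responses.shape[1]
    difficulty = responses.mean(axis=0)          # p-value per item
    discrimination = np.empty(n_items)
    for j in range(n_items):
        # Total score excluding the item itself, to avoid inflating the correlation
        rest_total = responses.sum(axis=1) - responses[:, j]
        # Point-biserial index = Pearson correlation of the 0/1 item with the rest score
        discrimination[j] = np.corrcoef(responses[:, j], rest_total)[0, 1]
    return difficulty, discrimination

# Example: 6 examinees, 3 items
scores = np.array([[1, 0, 1],
                   [1, 1, 1],
                   [0, 0, 1],
                   [1, 0, 0],
                   [1, 1, 1],
                   [0, 0, 1]])
p, d = item_statistics(scores)
print(p, d)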
Sinkavich, Frank J. – 1988
The relationship between metamemorial accuracy and student test performance was assessed in a graduate educational psychology class. Metamemory is defined as information about the content of one's own memory. Adult students in two courses (N=67) were asked to select multiple choice answers in two midterms and a final examination and to rate their…
Descriptors: Cognitive Processes, Confidence Testing, Educational Psychology, Graduate Students
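(The Sinkavich entry relates confidence ratings to answer correctness. A minimal sketch of two common ways of summarizing that relationship is given below, assuming confidence ratings on a 0-1 scale and 0/1 item outcomes; this is not the scoring procedure used in the study itself.)

import numpy as np

def calibration_summary(confidence: np.ndarray, correct: np.ndarray):
    """confidence: ratings in [0, 1]; correct: 0/1 item outcomes."""
    bias = confidence.mean() - correct.mean()      # > 0 indicates overconfidence
    r = np.corrcoef(confidence, correct)[0, 1]     # confidence-accuracy correlation
    return bias, r

conf = np.array([0.9, 0.6, 0.8, 0.5, 0.95, 0.4])
acc  = np.array([1,   0,   1,   1,   1,    0  ])
print(calibration_summary(conf, acc))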
Tollefson, Nona; Tripp, Alice – 1986
The item difficulty and item discrimination of three multiple-choice item formats were compared in experimental and non-experimental settings. In the experimental study, 104 graduate students were randomly assigned to complete one of three forms of a multiple-choice test: (1) a complex alternative ("none of the above") as the correct answer; (2) a…
Descriptors: Achievement Tests, Difficulty Level, Discriminant Analysis, Graduate Students
Tollefson, Nona; Tripp, Alice – 1983
This study compared the item difficulty and item discrimination of three multiple choice item formats. The multiple choice formats studied were: a complex alternative ("none of the above") as the correct answer; a complex alternative as a foil; and the one-correct-answer format. One hundred four graduate students were randomly assigned to complete…
Descriptors: Analysis of Variance, Difficulty Level, Graduate Students, Higher Education
Cognitive Complexity and the Comparability of Multiple-Choice and Constructed-Response Test Formats.

Hancock, Gregory R. – Journal of Experimental Education, 1994
To investigate the ability of multiple-choice tests to assess higher order thinking skills, examinations were constructed as half multiple choice and half constructed response. Results with 90 undergraduate and graduate students indicate that the 2 formats measure similar constructs at different levels of complexity. (SLD)
Descriptors: Cognitive Processes, Comparative Analysis, Constructed Response, Educational Assessment
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students

Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT)--students choose the difficulty level of their test items--was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
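(The Wise et al. entry contrasts self-adapted and computer-adapted item selection. The toy Python sketch below illustrates that contrast only: a computer-adapted test picks the unused item whose difficulty is closest to the current ability estimate, while a self-adapted test administers an item at whatever difficulty level the examinee requests. The difficulty scale and selection rules are assumptions for illustration, not the procedure from the study.)

import numpy as np

item_difficulty = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

def next_item_cat(ability_estimate: float, administered: set) -> int:
    # CAT-style rule: closest remaining item difficulty to the ability estimate
    candidates = [j for j in range(len(item_difficulty)) if j not in administered]
    return min(candidates, key=lambda j: abs(item_difficulty[j] - ability_estimate))

def next_item_sfat(requested_level: int, administered: set) -> int:
    # SFAT-style rule: examinee names a difficulty rank (0 = easiest);
    # fall back to the nearest unused level if that one is exhausted
    order = np.argsort(np.abs(np.arange(len(item_difficulty)) - requested_level))
    return next(int(j) for j in order if j not in administered)

print(next_item_cat(0.4, {2}))    # CAT: closest remaining item to ability 0.4
print(next_item_sfat(0, set()))   # SFAT: examinee asks for the easiest item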