Showing all 11 results
Peer reviewed
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities for new task formats and, equally, a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Peer reviewed
Kremmel, Benjamin; Schmitt, Norbert – Language Assessment Quarterly, 2016
The scores from vocabulary size tests have typically been interpreted as demonstrating that the target words are "known" or "learned." But "knowing" a word should entail the ability to use it in real language communication in one or more of the four skills. It should also entail deeper knowledge, such as knowing the…
Descriptors: Vocabulary Development, Language Tests, Scores, Test Items
Peer reviewed
Arthur, Neal; Everaert, Patricia – Accounting Education, 2012
This paper addresses the question of whether the increasing use of multiple-choice questions will favour particular student groups, i.e. male or female students. Using data from Belgium, this paper empirically examines the existence of a gender effect by comparing the relative performance of male and female students in both multiple-choice and…
Descriptors: Accounting, Business Administration Education, Gender Differences, Multiple Choice Tests
Peer reviewed
Swartz, Stephen M. – Journal of Education for Business, 2006
The confidence level (information-referenced testing; IRT) design is an attempt to improve upon the multiple choice format by allowing students to express a level of confidence in the answers they choose. In this study, the author evaluated student perceptions of the ease of use and accuracy of, and general preference for, traditional multiple…
Descriptors: Multiple Choice Tests, Essay Tests, Graduate Students, Student Attitudes
Tollefson, Nona; Chen, Ju Shan – 1986
This study compared item difficulty and item discrimination indices for parallel multiple-choice items in three content areas: measurement concepts, statistical terminology, and synonyms. The statistics and measurement items were administered in classes where graduate students taking the test were studying the content. Vocabulary items represented…
Descriptors: Difficulty Level, Graduate Students, Higher Education, Item Analysis
Sinkavich, Frank J. – 1988
The relationship between metamemorial accuracy and student test performance was assessed in a graduate educational psychology class. Metamemory is defined as information about the content of one's own memory. Adult students in two courses (N=67) were asked to select multiple-choice answers on two midterms and a final examination and to rate their…
Descriptors: Cognitive Processes, Confidence Testing, Educational Psychology, Graduate Students
Tollefson, Nona; Tripp, Alice – 1986
The item difficulty and item discrimination of three multiple-choice item formats were compared in experimental and non-experimental settings. In the experimental study, 104 graduate students were randomly assigned to complete one of three forms of a multiple-choice test: (1) a complex alternative ("none of the above") as the correct answer; (2) a…
Descriptors: Achievement Tests, Difficulty Level, Discriminant Analysis, Graduate Students
Tollefson, Nona; Tripp, Alice – 1983
This study compared the item difficulty and item discrimination of three multiple choice item formats. The multiple choice formats studied were: a complex alternative ("none of the above") as the correct answer; a complex alternative as a foil; and the one-correct-answer format. One hundred four graduate students were randomly assigned to complete…
Descriptors: Analysis of Variance, Difficulty Level, Graduate Students, Higher Education
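The three Tollefson entries above compare item difficulty and item discrimination indices across multiple-choice formats. As a hedged illustration only (these papers do not specify their formulas here), the classical test theory versions of those indices might be computed from a 0/1 scored response matrix roughly as follows, taking proportion correct as difficulty and a corrected item-rest correlation as discrimination:

import numpy as np

def item_statistics(responses):
    # responses: 2-D array of 0/1 item scores, shape (examinees, items).
    # Returns per-item difficulty (proportion correct) and discrimination
    # (correlation of each item with the total score on the remaining items).
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)              # classical p-value
    total = responses.sum(axis=1)
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]               # total score without item j
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination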
Peer reviewed
Hancock, Gregory R. – Journal of Experimental Education, 1994
To investigate the ability of multiple-choice tests to assess higher order thinking skills, examinations were constructed as half multiple choice and half constructed response. Results with 90 undergraduate and graduate students indicate that the 2 formats measure similar constructs at different levels of complexity. (SLD)
Descriptors: Cognitive Processes, Comparative Analysis, Constructed Response, Educational Assessment
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Peer reviewed
Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT)--students choose the difficulty level of their test items--was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
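The last entry contrasts a self-adapted test, in which examinees choose the difficulty level of their own items, with a computer-adaptive test, in which an algorithm selects items. As a generic sketch (not the procedure used by Wise et al.), a one-parameter-logistic adaptive selection step simply picks the unadministered item whose difficulty is closest to the current ability estimate, since that is where item information peaks:

import numpy as np

def next_item(theta, difficulties, administered):
    # theta: current ability estimate; difficulties: Rasch item difficulties.
    # Returns the index of the unadministered item closest to theta,
    # i.e. the item with maximum Fisher information under the 1PL model.
    candidates = [j for j in range(len(difficulties)) if j not in administered]
    gaps = [abs(theta - difficulties[j]) for j in candidates]
    return candidates[int(np.argmin(gaps))]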