Showing all 10 results
Peer reviewed
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
Peer reviewed
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items
Peer reviewed
Crehan, Kevin; Haladyna, Thomas M. – Journal of Experimental Education, 1991
Two item-writing rules were tested: phrasing stems as questions versus partial sentences; and using the "none-of-the-above" option instead of a specific content option. Results with 228 college students do not support the use of either stem type and provide limited evidence to caution against the "none-of-the-above" option.…
Descriptors: College Students, Higher Education, Multiple Choice Tests, Test Construction
Peer reviewed
Weiten, Wayne – Journal of Experimental Education, 1982
A comparison of double versus single multiple-choice questions yielded significant differences in item difficulty, item discrimination, and internal reliability, but not in concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests
Peer reviewed
Green, Kathy – Journal of Experimental Education, 1979
Reliabilities and concurrent validities of teacher-made multiple-choice and true-false tests were compared. No significant differences were found even when multiple-choice reliability was adjusted to equate testing time. (Author/MH)
Descriptors: Comparative Testing, Higher Education, Multiple Choice Tests, Test Format
Peer reviewed
Pajares, Frank; Miller, M. David – Journal of Experimental Education, 1997
The mathematics self-efficacy and problem-solving performance of 327 middle school students were assessed with multiple-choice and open-ended methods. No differences in self-efficacy resulted from the different forms of assessment, although those who took the multiple-choice test had higher scores and better calibration of ability. (SLD)
Descriptors: Ability, Educational Assessment, Mathematics, Middle School Students
Peer reviewed
Plake, Barbara S.; And Others – Journal of Experimental Education, 1981
Number-right and elimination scores were analyzed on a college-level mathematics exam assembled from pretest data. Anxiety measures were administered along with the experimental forms to undergraduates. Results suggest that neither test scores nor attitudes are influenced by item order, knowledge thereof, or anxiety level. (Author/GK)
Descriptors: College Mathematics, Difficulty Level, Higher Education, Multiple Choice Tests
Peer reviewed
Foos, Paul W. – Journal of Experimental Education, 1992
Effects of expected form and expected difficulty of a test were examined for 84 college students expecting an easy or difficult multiple-choice or essay examination but taking a combined test. Results support the hypothesis that individuals work harder, rather than reduce their effort, when difficult work is expected. (SLD)
Descriptors: College Students, Difficulty Level, Essay Tests, Expectation
Peer reviewed
Hancock, Gregory R. – Journal of Experimental Education, 1994
To investigate the ability of multiple-choice tests to assess higher order thinking skills, examinations were constructed as half multiple choice and half constructed response. Results with 90 undergraduate and graduate students indicate that the 2 formats measure similar constructs at different levels of complexity. (SLD)
Descriptors: Cognitive Processes, Comparative Analysis, Constructed Response, Educational Assessment
Peer reviewed
Crocker, Linda; Schmitt, Alicia – Journal of Experimental Education, 1987
The effectiveness of a strategy for improving examinee performance on multiple-choice items was assessed using an aptitude-treatment interaction model. Results showed that generating an answer before selecting a response led to higher performance for low-anxious examinees, but not for highly anxious examinees. (Author/JAZ)
Descriptors: Achievement Tests, Aptitude Treatment Interaction, Behavior Rating Scales, College Students