Showing 1 to 15 of 19 results
Peer reviewed
Okan Bulut; Guher Gorgun; Hacer Karamese – Journal of Educational Measurement, 2025
The use of multistage adaptive testing (MST) has gradually increased in large-scale testing programs as MST achieves a balanced compromise between linear test design and item-level adaptive testing. MST works on the premise that each examinee gives their best effort when attempting the items, and their responses truly reflect what they know or can…
Descriptors: Response Style (Tests), Testing Problems, Testing Accommodations, Measurement
Peer reviewed
Hill, Laura G. – International Journal of Behavioral Development, 2020
Retrospective pretests ask respondents to report after an intervention on their aptitudes, knowledge, or beliefs before the intervention. A primary reason to administer a retrospective pretest is that in some situations, program participants may over the course of an intervention revise or recalibrate their prior understanding of program content,…
Descriptors: Pretesting, Response Style (Tests), Bias, Testing Problems
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Peer reviewed
Bolter, John F.; And Others – Journal of Consulting and Clinical Psychology, 1984
Contends that the Speech Sounds Perception Test form (Adult and Midrange versions) is structured such that correct responses can be determined rationally. If a patient identifies and responds according to that structure, the validity of the test is compromised. Posttest interview is suggested as a simple solution. (Author/JAC)
Descriptors: Response Style (Tests), Test Format, Test Validity, Testing Problems
Peer reviewed
Dixon, Paul N.; And Others – Educational and Psychological Measurement, 1984
The influence of scale format on results was examined. Two Likert-type formats, one with all choice points defined and one with only the end-points defined, were administered. Each subject completed half the items in each format. Results indicated little difference between the forms, and subjects indicated no format preference. (Author/DWH)
Descriptors: Higher Education, Rating Scales, Response Style (Tests), Test Format
Peer reviewed
Schriesheim, Chester A. – Educational and Psychological Measurement, 1981
Effects of item presentation mode on degree of leniency bias in responses to field research questionnaires were studied. Two modes were examined: first with items measuring the same dimensions grouped together and second with such items distributed randomly. The random mode showed substantially less leniency response bias. (Author/BW)
Descriptors: Adults, Leadership Qualities, Questionnaires, Response Style (Tests)
Peer reviewed
Schriesheim, Chester A.; Hill, Kenneth D. – Educational and Psychological Measurement, 1981
The empirical evidence does not support the prevailing conventional wisdom that it is advisable to mix positively and negatively worded items in psychological measures to counteract acquiescence response bias. An experiment, evaluating subjects' ability to respond accurately to both positive and reversed items on a questionnaire, analyzed post-hoc…
Descriptors: Bias, Higher Education, Questionnaires, Response Style (Tests)
Edwards, John; McCombie, Randy – 1983
The major purpose of the three studies reported here was to investigate possible differences in agreement/disagreement with attitude statements as a function of their type (with regard to positivity/negativity) and personalism. In the first study, 90 students completed scales on energy conservation and on having good study habits. Agreement varied…
Descriptors: Attitude Measures, Higher Education, Response Style (Tests), Semantic Differential
Peer reviewed
Schaeffer, Nora Cate – Journal of Marriage and the Family, 1989
Examined, through analysis of questions about conflict between separated parents (N=327), three hypotheses about the relationships between frequency and intensity response questions. Found associations among related intensity items possibly stronger than those among related frequency items; associated intensity items and associated frequency items…
Descriptors: Divorce, Measurement Objectives, Measurement Techniques, Reliability
Slem, Charles M. – 1981
Over the years, many criticisms have been offered against the multiple-choice test format: such tests are often ambiguous, emphasize isolated information, and are the most difficult objective tests to construct. Over-interpretation is another danger of multiple-choice examinations, with students picking subtle answers that the test makers consider incorrect. Yet, the…
Descriptors: Constructed Response, Essay Tests, Higher Education, Multiple Choice Tests
Siskind, Theresa G.; Anderson, Lorin W. – 1982
The study was designed to examine the similarity of response options generated by different item writers using a systematic approach to item writing. The similarity of response options to student responses for the same item stems presented in an open-ended format was also examined. A non-systematic (subject matter expertise) approach and a…
Descriptors: Algorithms, Item Analysis, Multiple Choice Tests, Quality Control
Suhadolnik, Debra; Weiss, David J. – 1983
The present study was an attempt to alleviate some of the difficulties inherent in multiple-choice items by having examinees respond to multiple-choice items in a probabilistic manner. Using this format, examinees are able to respond to each alternative and to provide indications of any partial knowledge they may possess concerning the item. The…
Descriptors: Confidence Testing, Multiple Choice Tests, Probability, Response Style (Tests)
Peer reviewed
Kolstad, Rosemarie K.; Kolstad, Robert A. – Educational Research Quarterly, 1989
The effect on examinee performance of the rule that multiple-choice (MC) test items require the acceptance of one choice was examined for 106 dental students presented with choices in MC and multiple true-false formats. MC items force examinees to select one choice, which causes artificial acceptance of correct/incorrect choices. (SLD)
Descriptors: Comparative Testing, Dental Students, Higher Education, Multiple Choice Tests
Shaha, Steven H. – 1982
Traditionally, matching test formats have been avoided in favor of multiple-choice items for several reasons, including item analysis properties and chance performance characteristics. In the light of research on test format and anxiety, this study postulates that, if a matching test could assess knowledge for a given topic as effectively as an…
Descriptors: Comparative Analysis, Multiple Choice Tests, Objective Tests, Response Style (Tests)
Burton, Nancy W.; And Others – 1976
Assessment exercises (items) in three different formats--multiple-choice with an "I don't know" (IDK) option, multiple-choice without the IDK, and open-ended--were placed at the beginning, middle and end of 45-minute assessment packages (instruments). A balanced incomplete blocks analysis of variance was computed to determine the biasing…
Descriptors: Age Differences, Difficulty Level, Educational Assessment, Guessing (Tests)