Showing 3,556 to 3,570 of 4,794 results
Peer reviewed
Bosher, Susan – Nursing Education Perspectives, 2003
Nineteen multiple-choice nursing tests containing 673 items were analyzed for test-wiseness cues, irrelevant difficulty in the stem or options, linguistic/structural bias, and cultural bias. Twenty-eight types of flaws occurred at least 10 times each. (Contains 28 references.) (SK)
Descriptors: Culture Fair Tests, Higher Education, Item Analysis, Item Bias
Peer reviewed
Rosa, Elena; O'Neill, Michael D. – Studies in Second Language Acquisition, 1999
Investigates how second language intake was affected both by awareness and by the conditions under which a problem-solving task was performed. Spanish conditional sentences were presented to learners through five different degrees of explicitness. Intake was measured through a multiple-choice recognition test administered immediately after the…
Descriptors: College Students, Higher Education, Language Research, Multiple Choice Tests
Peer reviewed
Kolstad, Rosemarie K.; Kolstad, Robert A. – Educational Research Quarterly, 1989
The effect on examinee performance of the rule that multiple-choice (MC) test items require acceptance of one choice was examined for 106 dental students presented with choices in MC and multiple true-false formats. MC items force examinees to select a single choice, which causes artificial acceptance of correct/incorrect choices. (SLD)
Descriptors: Comparative Testing, Dental Students, Higher Education, Multiple Choice Tests
Peer reviewed
Wilhite, Stephen C. – Journal of Reading Behavior, 1986
Examines the effects of headings and adjunct questions embedded in expository text on the delayed multiple-choice test performance of college students. Finds that headings may promote the organization of passage information so as to increase its general availability, while the overall effect of adjunct questions was not significant. (MM)
Descriptors: College Students, Higher Education, Locus of Control, Multiple Choice Tests
Stokes, Michael T.; And Others – Journal of Computer-Based Instruction, 1988
Results of a study that assessed the effect of requiring students to wait for a short time interval before responding to computer-generated multiple choice test items support the notion that moderate delays enhance user performance on cognitive tasks. Three conditions of computer lockout were examined in a university psychology course. (12…
Descriptors: Academic Achievement, Analysis of Variance, Computer Assisted Testing, Higher Education
Peer reviewed
Lennon, Paul – ELT Journal, 1989
Analysis of advanced English-as-a-second-language students' responses to proficiency tests and conversational cloze tests after a six-month residency in England revealed that, while written multiple-choice tests clearly showed linguistic improvement, the oral cloze tests separated out subjects more effectively. (Author/CB)
Descriptors: Advanced Students, Cloze Procedure, English (Second Language), Foreign Countries
Peer reviewed
Foos, Paul W. – Journal of Experimental Education, 1995
Performances of 75 college students, matched for total study time, who wrote one, two, or no summaries while studying a text for recall were compared. Results support the hypothesis that less frequent summarizing (only one summary) produces better performance. The effect can be obtained for recognition as well as recall. (SLD)
Descriptors: College Students, Higher Education, Multiple Choice Tests, Recall (Psychology)
Peer reviewed
Harasym, P. H.; And Others – Evaluation and the Health Professions, 1992
Findings from a study with approximately 200 first-year University of Calgary (Canada) nursing students provide evidence that the use of negation (e.g., not, except) should be limited in stems of multiple-choice test items and that a single-response negatively worded item should be converted to a multiple-response positively worded item. (SLD)
Descriptors: College Students, Foreign Countries, Higher Education, Multiple Choice Tests
Peer reviewed
Schwarz, Shirley P.; And Others – Journal of Educational Measurement, 1991
Interviews were conducted with 104 students in masters'-level classes to determine their reasons for changing test answers. Subjects previously had been instructed in answer-changing strategies. Most changes were made for thought-out reasons; few were because of clerical errors. Reconsideration of test items is probably underestimated in…
Descriptors: Achievement Gains, Graduate Students, Guessing (Tests), Higher Education
Peer reviewed
Wainer, Howard; Thissen, David – Applied Measurement in Education, 1993
Because assessment instruments of the future may well combine several types of questions, a way to combine those scores effectively is discussed. Two new graphic tools are presented that show it may not be practical to equalize the reliability of different components. (SLD)
Descriptors: Constructed Response, Educational Assessment, Graphs, Item Response Theory
Peer reviewed
Beckwith, J. B. – Higher Education, 1991
Relationships between three approaches to learning (surface, deep, and achieving), prior knowledge of subject area, and performance on a multiple-choice test following a unit in basic psychology were investigated with 105 college freshmen. Approaches to learning were unrelated to test performance. Prior knowledge did not relate to a deep approach…
Descriptors: Cognitive Style, College Freshmen, Educational Attitudes, Goal Orientation
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 1991
The use of the "none-of-the-above" option (NOTA) in 20 college-level multiple-choice tests was evaluated for classes with 100 or more students. Eight academic disciplines were represented, and 295 NOTA and 724 regular test items were used. It appears that the NOTA can be compatible with good classroom measurement. (TJH)
Descriptors: College Students, Comparative Testing, Difficulty Level, Discriminant Analysis
Peer reviewed
Anbar, Michael – Academic Medicine, 1991
Interactive computerized tests accepting unrestricted natural-language input were used to assess knowledge of clinical biophysics at the State University of New York at Buffalo. Comparison of responses to open-ended sequential questions and multiple-choice questions on the same material found that the two formats test different aspects of competence…
Descriptors: Biology, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Laufer, Batia – Applied Linguistics, 1990
Native-speaking learners of English were compared with foreign learners with regard to confusion of "synforms" (similar lexical forms). Synform-induced errors were similar in native-speaking and foreign learners, indicating that all learners, native and foreign, follow coinciding developmental sequences. (24 references)…
Descriptors: Comparative Analysis, English (Second Language), Error Analysis (Language), Language Research
Peer reviewed
Lewis, Robert; Berghoff, Paul; Pheeney, Pierette – Innovative Higher Education, 1999
Three professors share techniques for helping students focus on assessments required in classes. Charts are used to show students the specific concepts, principles, and problems that will be included on multiple-choice tests; rubrics developed for assigned work are used to increase student expectations and direct their explorations; and negotiated…
Descriptors: Academic Standards, Assignments, Attention Control, Charts