Dulaney, Chuck; Burch, Glenda – 2001
This report presents the results of Wake County Public School System (WCPSS) students on the North Carolina End-of-Grade (EOG) multiple-choice tests from May and June 2001. The EOG tests have several components, but all are designed to measure student achievement of the knowledge and skills of the North Carolina Standard Course of Study for…
Descriptors: Achievement Tests, Elementary Education, Elementary School Students, Mathematics Achievement
Peer reviewed
Traub, Ross E.; Hambleton, Ronald K. – Educational and Psychological Measurement, 1972
Findings of this study suggest that it is preferable to attempt to control guessing through the use of the reward instruction rather than to attempt to control it using the penalty instruction or to encourage it using the instruction to guess. (Authors/MB)
Descriptors: Grade 8, Guessing (Tests), Multiple Choice Tests, Pacing
Peer reviewed
Bertou, Patrick D.; And Others – Journal of Educational Research, 1972
Descriptors: Educational Television, Grade 9, Instructional Materials, Learning
Carver, Ronald P. – J Educ Meas, 1970
Two studies analyze and evaluate the use of the "chunk," a small group of meaningfully related words within a sentence, as a test item in reading and listening tests. (PR)
Descriptors: Listening Comprehension, Listening Comprehension Tests, Measurement Techniques, Multiple Choice Tests
Toth, Erwin – Neueren Sprachen, 1971
Descriptors: Achievement Tests, Adult Education, English (Second Language), Higher Education
Peer reviewed
McCarthy, William H. – Journal of Medical Education, 1971
Descriptors: Attendance, Instructional Improvement, Instructional Innovation, Learning Processes
Peer reviewed
Townsend, Michael A. R.; And Others – Educational Research Quarterly, 1983
Undergraduate students completed a regular class test of 35 multiple-choice items, interspersed with five humorous verbal items written in multiple-choice format or selected syndicated cartoons. A questionnaire revealed that, although student perceptions of test humor were positive, they were less positive about verbal items. (Author/CM)
Descriptors: Cartoons, Higher Education, Humor, Multiple Choice Tests
Peer reviewed
Arkin, Robert M.; Walts, Elizabeth A. – Journal of Educational Psychology, 1983
The effects of corrective testing, and how such feedback might affect high- and low-test-anxious students differently, are examined. Subjects were 286 college students in three classes: one using mastery testing and two using multiple-choice tests. (Author/PN)
Descriptors: Attribution Theory, Feedback, Higher Education, Mastery Tests
Peer reviewed
Brooks, Larry W.; And Others – Journal of Educational Psychology, 1983
Two experiments examined the effects of embedded and intact (outline) headings on the processing of complex text material by college students. Results indicated that embedded headings reliably improved delayed test performance. It was further found that instructions in the use of headings as processing aids facilitated test performance. (Author/PN)
Descriptors: Advance Organizers, Comprehension, Cues, Higher Education
Peer reviewed
D'Ydewalle, Gery; And Others – Contemporary Educational Psychology, 1983
Study time and test performance change as a function of expecting either open or multiple-choice questions on a history test. Subjects tested in either format were led to expect the same test format on a second test. Subjects expecting open questions studied more and performed better on both test formats. (Author/CM)
Descriptors: Essay Tests, Expectation, Foreign Countries, Higher Education
Peer reviewed
Smith, Jeffrey K. – Journal of Educational Measurement, 1982
Two studies examined the extent to which test takers use plausibility as a method for locating correct responses when guessing and the extent to which scores can be improved by teaching test takers this approach. Results confirm that this aspect of multiple choice items merits further consideration by test constructors. (Author/BW)
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Scores
Peer reviewed
Hsu, Louis M. – Applied Psychological Measurement, 1979
A comparison of the relative ordering power of separate-item and grouped-item true-false tests indicated that neither type of test was uniformly superior to the other across all levels of examinee knowledge. Grouped-item tests were found superior for examinees with low levels of knowledge. (Author/CTM)
Descriptors: Academic Ability, Knowledge Level, Multiple Choice Tests, Scores
Peer reviewed
Hudson, H. T.; Hudson, Carolyn K. – American Journal of Physics, 1981
Presents data indicating that scores on a large number of multiple-choice problems correlate reasonably well with scores on hand-graded problems. (SK)
Descriptors: College Science, Guessing (Tests), Guidelines, Higher Education
Kielhoefer, Bernd – Neusprachliche Mitteilungen, 1979
Reports on a testing experiment at the university level dealing with the construction and validation of a university entrance test in Romance languages. Discusses the vocabulary subtest with respect to problems of validation, specifically a self-rating test and an association-speed test. (IFS/WGA)
Descriptors: Language Instruction, Language Tests, Multiple Choice Tests, Second Language Learning
Breland, Hunter M. – College Board Review, 1977
One reliable way to measure student writing ability is to gather and evaluate a series of writing samples or essays over a period of time. The use of multiple-choice tests in combination with essay assignments can be the most educationally sound solution to the administrative problems involved in college course placements. (Editor/LBH)
Descriptors: Comparative Analysis, Essay Tests, Essays, Expository Writing