| Descriptor | Count |
| --- | --- |
| Multiple Choice Tests | 11 |
| Testing Problems | 11 |
| Higher Education | 7 |
| Test Items | 6 |
| Item Analysis | 5 |
| Cheating | 4 |
| Latent Trait Theory | 4 |
| Scores | 4 |
| Test Format | 4 |
| Mathematical Models | 3 |
| Scoring Formulas | 3 |
| Author | Count |
| --- | --- |
| Bolus, Roger | 1 |
| Drasgow, Fritz | 1 |
| Ferguson, William F. | 1 |
| Gilmer, Jerry S. | 1 |
| Klein, Stephen P. | 1 |
| Lenel, Julia C. | 1 |
| Levine, Michael V. | 1 |
| Livingston, Samuel A. | 1 |
| Lutz, William | 1 |
| McBee, Janice K. | 1 |
| Melican, Gerald | 1 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 10 |
| Speeches/Meeting Papers | 9 |
| Information Analyses | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 11 |
| Assessments and Surveys | Count |
| --- | --- |
| Graduate Record Examinations | 1 |
| SAT (College Admission Test) | 1 |
Ferguson, William F. – 1983
College undergraduates (n=38) were administered identical multiple choice tests with randomly distributed answer sheets numbered either vertically or horizontally. Of the four tests originally scheduled during the semester, tests one and three were retested with entirely different, also multiple choice, test questions, resulting in scores from tests…
Descriptors: Answer Sheets, Cheating, Higher Education, Multiple Choice Tests
Weber, Larry J.; McBee, Janice K. – 1983
Using multiple choice tests and a statistical method designed to identify flagrant cheaters, the authors undertook to determine (1) the magnitude of cheating on take-home and open-book exams; (2) whether the amount of cheating varied according to three types of examinations (closed-book, open-book or take-home); and (3) if cheating was affected by…
Descriptors: Cheating, College Credits, Higher Education, Multiple Choice Tests
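The abstract does not specify the statistical method Weber and McBee used. A common family of copying indices asks whether a pair of examinees shares more identical wrong answers than independent responding would predict; a minimal binomial sketch under that assumption (function name and parameters are illustrative, not from the study):

```python
from math import comb

def match_p_value(matches, comparisons, p_chance):
    """Upper-tail binomial probability of seeing at least `matches`
    identical wrong answers across `comparisons` items, assuming the
    two examinees respond independently with chance-match probability
    p_chance.  A very small value flags the pair for review."""
    return sum(
        comb(comparisons, k) * p_chance**k * (1 - p_chance) ** (comparisons - k)
        for k in range(matches, comparisons + 1)
    )
```

In practice p_chance would itself be estimated from the answer distributions of the whole class, and only extreme pairs would be examined further.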
Lutz, William – 1983
After an extensive review of the available research on large-scale writing assessment, certain issues in writing assessment seem to be unresolved, and still other issues are not supported by adequate research. This paper reviews the basic issues in writing assessment, points out which topics are supported by strong research, and which topics are…
Descriptors: Educational Assessment, Essay Tests, Higher Education, Multiple Choice Tests
Klein, Stephen P.; Bolus, Roger – 1983
A solution to reduce the likelihood of one examinee copying another's answers on large-scale tests that require all examinees to answer the same set of questions is to use multiple test forms that differ in terms of item ordering. This study was conducted to determine whether varying the sequence in which blocks of items were presented to…
Descriptors: Adults, Cheating, Cost Effectiveness, Item Analysis
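The multiple-forms idea Klein and Bolus describe, reordering blocks of items while keeping each block intact, can be sketched as a block permutation (a hypothetical illustration, not the study's actual form generator):

```python
import random

def scrambled_form(blocks, seed=0):
    """Build an alternate test form by permuting blocks of items while
    preserving each block's internal item order.  Every form contains
    the same items, only the block sequence differs."""
    rng = random.Random(seed)
    order = list(range(len(blocks)))
    rng.shuffle(order)
    return [item for i in order for item in blocks[i]]
```

Because every form contains the same items, scores remain comparable across forms up to any ordering effects, which is precisely what the study set out to measure.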
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Melican, Gerald; Plake, Barbara S. – 1984
The validity of combining a correction for guessing with the Nedelsky-based cutscore was investigated. A five-option multiple choice Mathematics Achievement Test was used in the study. Items were selected to meet several criteria. These included: the capability of measuring mathematics concepts related to performance in introductory statistics;…
Descriptors: Cutting Scores, Guessing (Tests), Higher Education, Multiple Choice Tests
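The correction for guessing referred to above is conventionally the formula score R − W/(k − 1), where R is the number right, W the number wrong, and k the number of answer options; a minimal sketch (the function name is illustrative):

```python
def formula_score(num_right, num_wrong, options=5):
    """Classical correction for guessing: R - W/(k - 1), where k is
    the number of answer options (five in the test described above).
    Omitted items are neither rewarded nor penalized."""
    return num_right - num_wrong / (options - 1)
```

Under random guessing the expected penalty exactly cancels the expected credit from lucky guesses, which is what makes combining it with a Nedelsky cutscore a non-trivial validity question.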
Lenel, Julia C.; Gilmer, Jerry S. – 1986
In some testing programs an early item analysis is performed before final scoring in order to validate the intended keys. As a result, some items which are flawed and do not discriminate well may be keyed so as to give credit to examinees no matter which answer was chosen. This is referred to as all-keying. This research examined how varying the…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Licensing Examinations (Professions)
Levine, Michael V.; Drasgow, Fritz – 1984
Some examinees' test-taking behavior may be so idiosyncratic that their scores are not comparable to the scores of more typical examinees. Appropriateness indices, which provide quantitative measures of response-pattern atypicality, can be viewed as statistics for testing a null hypothesis of normal test-taking behavior against an alternative…
Descriptors: Cheating, College Entrance Examinations, Computer Simulation, Estimation (Mathematics)
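Appropriateness (person-fit) indices of the kind Levine and Drasgow study are typically built from the log-likelihood of an examinee's response pattern under an item response model; unusually low values signal atypical behavior. A minimal sketch of that core quantity (not the specific indices in the paper):

```python
import math

def response_log_likelihood(responses, probs):
    """Log-likelihood of a 0/1 response pattern given model-implied
    probabilities of a correct answer on each item.  Patterns that are
    improbable under the model (e.g., missing easy items while passing
    hard ones) yield unusually low values."""
    return sum(
        u * math.log(p) + (1 - u) * math.log(1 - p)
        for u, p in zip(responses, probs)
    )
```

Operational indices standardize this quantity so that it can serve as a test statistic against the null hypothesis of normal test-taking behavior.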
O'Neill, Kathleen A. – 1986
When test questions are not intended to measure language skills, it is important to know if language is an extraneous characteristic that affects item performance. This study investigates whether certain stylistic changes in the way items are presented affect item performance on examinations for a health profession. The subjects were medical…
Descriptors: Abbreviations, Analysis of Variance, Drug Education, Graduate Medical Students
Waller, Michael I. – 1986
This study compares the fit of the 3-parameter model to the Ability Removing Random Guessing (ARRG) model on data from a wide range of tests of cognitive ability in three representative samples. When the guessing parameters under the 3-parameter model are estimated individually for each item, the 3-parameter model yields the better fit to…
Descriptors: Cognitive Tests, Cohort Analysis, Elementary Secondary Education, Equations (Mathematics)
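For context, the 3-parameter model Waller compares against ARRG is the standard three-parameter logistic (3PL) item response function; the ARRG alternative is not reproduced here. A sketch:

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability of a correct
    response given ability theta, discrimination a, difficulty b, and
    pseudo-guessing (lower asymptote) c.  The common scaling constant
    D = 1.7 is omitted for simplicity."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

The c parameter is what absorbs random guessing in the 3PL; ARRG instead removes guessing from the ability estimate, which is the substance of the model comparison in the abstract.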
Scheuneman, Janice – 1985
A number of hypotheses were tested concerning elements of Graduate Record Examinations (GRE) items that might affect the performance of blacks and whites differently. These elements were characteristics common to several items that otherwise measured different concepts. Seven general hypotheses were tested in the form of sixteen specific…
Descriptors: Black Students, College Entrance Examinations, Graduate Study, Higher Education