| Descriptor | Count |
| --- | --- |
| Mathematical Models | 5 |
| Multiple Choice Tests | 5 |
| Test Format | 5 |
| Test Items | 3 |
| Latent Trait Theory | 2 |
| Test Construction | 2 |
| Test Validity | 2 |
| Achievement Tests | 1 |
| Adaptive Testing | 1 |
| Cognitive Tests | 1 |
| College Entrance Examinations | 1 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 4 |
| Journal Articles | 2 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 1 |

| Location | Count |
| --- | --- |
| Japan | 1 |
Peer reviewed: Wilcox, Rand R. – Educational and Psychological Measurement, 1982
Results in the engineering literature on "k out of n system reliability" can be used to characterize tests based on estimates of the probability of correctly determining whether the examinee knows the correct response. In particular, the minimum number of distractors required for multiple-choice tests can be empirically determined.…
Descriptors: Achievement Tests, Mathematical Models, Multiple Choice Tests, Test Format
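
The "k out of n system reliability" result mentioned above has a standard binomial form when components operate independently. A minimal sketch of that general formula is below; it illustrates the reliability calculation only, not Wilcox's specific characterization, and the numbers in the example (5 items, threshold of 3, per-item accuracy 0.8) are hypothetical.

```python
from math import comb

def k_out_of_n_reliability(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent components,
    each functioning with probability p, function correctly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Example: a 5-item set viewed as a "3 out of 5" system in which each
# item classifies the examinee's state correctly with probability 0.8.
print(round(k_out_of_n_reliability(5, 3, 0.8), 4))  # 0.9421
```
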
Peer reviewed: Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of the decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that gave examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
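
Models in this line of work typically posit that an examinee either knows an item or selects among its alternatives by some guessing strategy. The sketch below shows only the simplest such case, the classical correction in which non-knowers guess uniformly among t alternatives; it is not Wilcox's second-response model, and the example values are hypothetical.

```python
def knowledge_proportion(p_correct: float, t: int) -> float:
    """Estimate the proportion of examinees who know an item, assuming
    non-knowers guess uniformly among the t alternatives:
        P(correct) = zeta + (1 - zeta) / t
    solved for zeta."""
    return (t * p_correct - 1) / (t - 1)

# Example: 70 percent answer a 4-alternative item correctly.
print(round(knowledge_proportion(0.70, 4), 3))  # 0.6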
Samejima, Fumiko – 1980
Research related to the multiple-choice test item is reported, as conducted by educational technologists in Japan. Sato's number of hypothetical equivalent alternatives is introduced. The basic idea behind this index is that the expected uncertainty of the m events, or alternatives, be large and the number of hypothetical, equivalent…
Descriptors: Foreign Countries, Latent Trait Theory, Mathematical Models, Multiple Choice Tests
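
Indices of this kind are built on the uncertainty of the proportions of examinees choosing each alternative. A sketch is given below under the assumption that the "effective number" of equally attractive alternatives is the exponential of the Shannon entropy of those proportions; this matches the abstract's description of the idea but is not guaranteed to reproduce Sato's exact formula, and the example proportions are hypothetical.

```python
from math import exp, log

def effective_alternatives(proportions: list[float]) -> float:
    """Effective number of equally attractive alternatives: the exponential
    of the Shannon entropy of the alternative-choice proportions. Equals m
    when all m alternatives are chosen equally often, and approaches 1 when
    nearly everyone picks the same alternative."""
    return exp(-sum(p * log(p) for p in proportions if p > 0))

# Example: a 4-choice item where one distractor is rarely chosen.
print(round(effective_alternatives([0.55, 0.25, 0.15, 0.05]), 2))  # 3.03
```
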
Nicewander, W. Alan; And Others – 1980
Two interactive, computer-assisted testing methods for multiple-choice items were compared with each other and with conventional multiple-choice tests. The interactive testing methods compared were tailored testing and the respond-until-correct (RUC) item response method. In tailored testing, examinee ability is successively estimated…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Guessing (Tests)
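
Under the respond-until-correct method, an item is re-presented until the examinee selects the keyed answer. One common scoring rule for such data, sketched below, awards more credit the earlier the correct alternative is chosen; this is an illustration of that general rule, not necessarily the scoring used in the study above, and the example values are hypothetical.

```python
def ruc_item_score(attempts: int, n_alternatives: int) -> int:
    """Respond-until-correct credit for one item: full credit when the
    keyed answer is chosen on the first attempt, one point less for each
    additional attempt, and zero once every alternative has been tried."""
    return max(n_alternatives - attempts, 0)

# Example: a 4-alternative item answered correctly on the second attempt.
print(ruc_item_score(2, 4))  # 2
```
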
Huntley, Renee M.; Carlson, James E. – 1986
This study compared student performance on language-usage test items presented in two different formats: as discrete sentences and as items embedded in passages. Experimental units for the American College Testing (ACT) Program's Assessment were constructed that presented 40 items in the two formats. Results suggest item presentation may not…
Descriptors: College Entrance Examinations, Difficulty Level, Goodness of Fit, Item Analysis


