| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 8 |
| Since 2022 (last 5 years) | 57 |
| Since 2017 (last 10 years) | 148 |
| Since 2007 (last 20 years) | 246 |
| Descriptor | Records |
| --- | --- |
| Multiple Choice Tests | 526 |
| Test Format | 526 |
| Test Items | 260 |
| Foreign Countries | 145 |
| Test Construction | 139 |
| Higher Education | 115 |
| Difficulty Level | 96 |
| Comparative Analysis | 93 |
| Scores | 86 |
| Test Reliability | 68 |
| Computer Assisted Testing | 64 |
| Audience | Records |
| --- | --- |
| Practitioners | 25 |
| Teachers | 21 |
| Researchers | 17 |
| Students | 7 |
| Administrators | 1 |
| Parents | 1 |
| Location | Records |
| --- | --- |
| Canada | 13 |
| Turkey | 12 |
| Netherlands | 9 |
| Germany | 8 |
| Australia | 6 |
| Japan | 6 |
| California | 5 |
| Iran | 5 |
| South Korea | 5 |
| United Kingdom | 5 |
| China | 4 |
Bolton, David L.; And Others – 1989
A study was conducted to assess the validity of translations of two different forms of a licensing examination for cosmetologists in Florida to ensure that both Spanish and English candidates have equal chances of being licensed. The LISREL computer program was used to test the equivalence of factor structure, units of measurement, and standard…
Descriptors: Construct Validity, Cosmetology, English, Factor Analysis
Schoen, Harold L.; And Others – 1987
The estimation processes used by fifth through eighth grade students as they responded to computational estimation test items were examined. Interview-based process descriptions were cross-validated using large group test data from an open-ended test and a multiple choice test. Five question formats were used to test different estimation…
Descriptors: Age Differences, Cognitive Processes, Cognitive Structures, Cognitive Style
Harris, Robert B.; Kerby, William C. – Journal of Economic Education, 1997 (peer reviewed)
Recommends including essay questions on state economics examinations to prevent misclassification of students. Briefly reviews the literature arguing that certain groups of students do poorly on multiple choice tests. Discusses California's experience with adopting a combined-format type test. (MJP)
Descriptors: Academic Standards, Economics, Economics Education, Educational Assessment
Bolger, Niall; Kellaghan, Thomas – Journal of Educational Measurement, 1990 (peer reviewed)
Gender differences in scholastic achievement as a function of measurement method were examined by comparing performance of 739 15-year-old boys and 758 15-year-old girls in Irish high schools on multiple-choice and free-response tests of mathematics, Irish, and English achievement. Method-based gender differences are discussed. (SLD)
Descriptors: Academic Achievement, Adolescents, Comparative Testing, English
Schoen, Harold L.; And Others – Journal for Research in Mathematics Education, 1990 (peer reviewed)
Describes responses of fifth to eighth grade students to different types of test items requiring estimation. Reports that performance differed by item format, types of numbers and operations in the items, and grade level of students. (Author/YP)
Descriptors: Cognitive Processes, Computation, Elementary School Mathematics, Elementary Secondary Education
Barnett-Foster, Debora; Nagy, Philip – Alberta Journal of Educational Research, 1995 (peer reviewed)
Analysis of response strategies employed by 261 undergraduate chemistry students when answering multiple-choice and stem-equivalent constructed-response questions revealed no significant differences in types of solution strategies or types of errors across test format. However, analysis of student oral reports revealed a higher frequency of…
Descriptors: Chemistry, Constructed Response, Educational Research, Educational Testing
Bridgeman, Brent – Journal of Educational Measurement, 1992 (peer reviewed)
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
Sugrue, Brenda; Webb, Noreen; Schlackman, Jonah – 1998
This paper describes a study that investigated the interchangeability of four different assessment methods for measuring middle-school students' understanding of science concepts. The four methods compared were hands-on tasks with associated multiple-choice and written justification items, written analogues of the hands-on tasks, and two types of…
Descriptors: Correlation, Educational Assessment, Evaluation Methods, Grade 7
Tsagari, Constance – 1994
This study investigated and compared the effects of two test formats (free response and multiple choice) on English-as-a-Second-Language (ESL) learners' reading comprehension. The tests, together with a checklist of test-taking strategies and retrospective questionnaires concerning more general reading strategies, were administered to 57 ESL…
Descriptors: Comparative Analysis, English (Second Language), Foreign Countries, Language Processing
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
McDaniel, Mark A.; And Others – 1991
To explore the suggestion that subjects modulate their reading strategies in accordance with how they expect to be tested, several test expectancies (multiple-choice, true/false, essay, and cloze) were implemented in addition to a non-specific test expectancy as a control. Subjects were 124 students at Purdue University (Indiana). After reading…
Descriptors: Cloze Procedure, College Students, Essay Tests, Expectation
Ward, William C.; And Others – 1986
The keylist format (rather than the conventional multiple-choice format) for item presentation provides a machine-scorable surrogate for a truly free-response test. In this format, the examinee is required to think of an answer, look it up in a long ordered list, and enter its number on an answer sheet. The introduction of keylist items into…
Descriptors: Analogy, Aptitude Tests, Construct Validity, Correlation
Oaster, T. R. F.; And Others – 1986
This study hypothesized that items in the one-question-per-passage format would be less easily answered when administered without their associated contexts than conventional reading comprehension items. A total of 256 seventh and eighth grade students were administered both Forms 3A and 3B of the Sequential Tests of Educational Progress (STEP II).…
Descriptors: Context Effect, Difficulty Level, Grade 7, Grade 8
Plake, Barbara S.; Wise, Steven L. – 1986
One question regarding the utility of adaptive testing is the effect of individualized item arrangements on examinee test scores. The purpose of this study was to analyze the item difficulty choices by examinees as a function of previous item performance. The examination was a 25-item test of basic algebra skills given to 36 students in an…
Descriptors: Adaptive Testing, Algebra, College Students, Computer Assisted Testing
O'Neill, Kathleen A. – 1986
When test questions are not intended to measure language skills, it is important to know if language is an extraneous characteristic that affects item performance. This study investigates whether certain stylistic changes in the way items are presented affect item performance on examinations for a health profession. The subjects were medical…
Descriptors: Abbreviations, Analysis of Variance, Drug Education, Graduate Medical Students