Publication Date
| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 85 |
| Since 2022 (last 5 years) | 453 |
| Since 2017 (last 10 years) | 1241 |
| Since 2007 (last 20 years) | 2515 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 122 |
| Teachers | 105 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 14 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |
Location
| Location | Records |
| --- | --- |
| Canada | 134 |
| Turkey | 131 |
| Australia | 123 |
| Iran | 66 |
| Indonesia | 61 |
| United Kingdom | 51 |
| Germany | 50 |
| Taiwan | 46 |
| United States | 43 |
| China | 39 |
| California | 35 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |
Humphry, Betty – 1973
The two phases in the development and tryout of a Guidance Counselor Test to be added to the National Teacher Examinations Program are discussed. In Phase One, a 150-item written test and a 50-item written test based on taped stimulus material were produced. Each test consisted of five-choice multiple-choice questions. In Phase Two, the tests were…
Descriptors: Counselor Evaluation, Graduate Students, Guidance Personnel, Higher Education
Hanna, Gerald S. – 1974
Although the "Don't Know" (DK) option has received telling criticism in maximum performance summative tests, its potential use in formative evaluation was considered and judged to be more promising. The pretest of an instructional module was administered with DK options. Examinees were then required to answer each question to which they had…
Descriptors: Formative Evaluation, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
Hazlett, C. B. – 1970
Medsirch (Medical Search) is an information retrieval system designed to aid in preparing examinations for medical students. There are two versions of the system: a sequential access file suitable for shallow indexing with a broad choice of search terms and a random direct access file for deep indexing with a restricted range of choices for search…
Descriptors: Computer Oriented Programs, Computer Programs, Coordinate Indexes, Costs
Ryan, Joseph P.; Hamm, Debra W. – 1976
A procedure is described for increasing the reliability of tests after they have been given and for developing shorter but more reliable tests. Eight tests administered to 200 graduate students studying educational research are analyzed. The analysis considers the original tests, the items loading on the first factor of the test, and the items…
Descriptors: Career Development, Factor Analysis, Factor Structure, Item Analysis
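The tradeoff Ryan and Hamm examine — test length against reliability — is conventionally described by the Spearman-Brown prophecy formula. A minimal sketch of that standard formula (illustrative only; not taken from the paper itself):

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability after changing test length by `length_factor`
    (0.5 = halving the test, 2.0 = doubling it), per the classical
    Spearman-Brown prophecy formula."""
    return length_factor * reliability / (1 + (length_factor - 1) * reliability)

# Doubling a test with reliability 0.80 raises predicted reliability,
# while halving it lowers predicted reliability:
doubled = spearman_brown(0.80, 2.0)  # -> 0.888...
halved = spearman_brown(0.80, 0.5)   # -> 0.666...
```

The formula makes explicit why a shortened test is usually less reliable, and hence why the item-selection analysis the abstract describes is needed to offset that loss.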
Fitzgerald, Thomas P.; Fitzgerald, Ellen F. – Educational Research Quarterly, 1978 (peer reviewed)
This study investigated the differential performance of subjects across cultures (U.S. and Ireland); grade levels (grades 2, 3, and 4); and three test formats (multiple-choice-cloze, maze, and cloze). Recognition test formats produced higher scores than the cloze format. Cultural influences were also reported. (Author/GDC)
Descriptors: Cloze Procedure, Cross Cultural Studies, Cultural Influences, Elementary Education
Cross, Lawrence; Frary, Robert – Journal of Educational Measurement, 1977 (peer reviewed)
Corrected-for-guessing scores on multiple-choice tests depend upon the ability and willingness of examinees to guess when they have some basis for answering, and to avoid guessing when they have no basis. The present study determined the extent to which college students were able and willing to comply with formula-scoring directions. (Author/CTM)
Descriptors: Guessing (Tests), Higher Education, Individual Characteristics, Multiple Choice Tests
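The formula scoring that Cross and Frary study is typically the classical correction for guessing, S = R − W/(k − 1), which removes the expected gain from blind guessing on k-option items. A minimal sketch of that standard formula (the function name and example values are illustrative, not from the article):

```python
def formula_score(num_right, num_wrong, num_options):
    """Classical correction-for-guessing score: S = R - W/(k - 1).
    Omitted items are neither rewarded nor penalized, which is why
    the directions tell examinees to skip items they cannot
    partially eliminate."""
    return num_right - num_wrong / (num_options - 1)

# 40 right and 10 wrong on 5-option items: 40 - 10/4
score = formula_score(40, 10, 5)  # -> 37.5
```

Under this formula a pure random guesser scores zero in expectation, so compliance with the directions (guess when informed, omit when not) is exactly what determines whether the correction behaves as intended.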
Traub, Ross E.; Fisher, Charles W. – Applied Psychological Measurement, 1977 (peer reviewed)
Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with identical content but different formats measure the same attribute. (Author/CTM)
Descriptors: Comparative Testing, Confidence Testing, Constructed Response, Factor Analysis
Newsom, Robert S.; And Others – Evaluation Quarterly, 1978
For the training and placement of professional workers, multiple-choice instruments are the norm in wide-scale measurement and evaluation efforts, but these instruments have fundamental limitations. Computer-based management simulations may address these limitations: they appear to be scorable and reliable, offer increased validity, and are better…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Occupational Tests, Personnel Evaluation
Huynh, Huynh; Casteel, Jim – Journal of Experimental Education, 1987 (peer reviewed)
In the context of pass/fail decisions, applying the Bock multinomial latent trait model to moderate-length tests does not produce decisions that differ substantially from those based on raw scores. The Bock-based decisions also appear to relate less strongly to outside criteria than the raw-score decisions. (Author/JAZ)
Descriptors: Cutting Scores, Error Patterns, Grade 6, Intermediate Grades
Joycey, E. – System, 1987 (peer reviewed)
Techniques foreign language teachers can use to help learners exploit multiple-choice tests (given after reading a text) to become better readers include: having students attempt to answer the questions before reading the text; rearranging the order of the questions; and having students write their own multiple-choice questions. (CB)
Descriptors: Classroom Techniques, Language Teachers, Multiple Choice Tests, Reading Comprehension
Lederman, Marie Jean – Journal of Basic Writing, 1988 (peer reviewed)
Explores the history of testing, motivations for testing, testing procedures, and the inevitable limitations of testing. Argues that writing program faculty and administrators must clarify and profess their values, decide what they want students to know and what sort of thinkers they should be, and develop tests reflecting those needs. (SR)
Descriptors: Educational Objectives, Educational Testing, Essay Tests, Multiple Choice Tests
Norcini, John J.; And Others – Evaluation and the Health Professions, 1986 (peer reviewed)
This study compares physician performance on the Computer-Aided Simulation of the Clinical Encounter with peer ratings and performance on multiple choice questions and patient management problems. Results indicate that all formats are equally valid, although multiple choice is the most reliable method of assessment per unit of testing time.…
Descriptors: Certification, Competence, Computer Assisted Testing, Computer Simulation
Grosse, Martin E. – Evaluation and the Health Professions, 1986 (peer reviewed)
Scores based on the number of correct answers were compared with scores based on dangerous responses to items in the same multiple-choice test, developed by the American Board of Orthopaedic Surgery. Results showed construct validity for both sets of scores; however, the two scores were so highly correlated as to be redundant. (Author/JAZ)
Descriptors: Certification, Construct Validity, Correlation, Foreign Countries
Brightman, Harvey J.; And Others – Educational Technology, 1984
Describes the development and evaluation of interactive computer-based formative tests containing multiple choice questions based on Bloom's taxonomy and their use in a core-level higher education business statistics course prior to graded examinations to determine where students are experiencing difficulties. (MBR)
Descriptors: Cognitive Objectives, Computer Assisted Testing, Computer Software, Diagnostic Tests
Choppin, Bruce – Evaluation in Education: An International Review Series, 1985
During 1969 the International Association for the Evaluation of Educational Achievement began a series of cross-cultural studies to investigate the workings of multiple-choice achievement tests and student guessing behaviors. Empirical models to correct for guessing are discussed in terms of test item difficulty, number of response choices,…
Descriptors: Achievement Tests, Cross Cultural Studies, Educational Testing, Guessing (Tests)


