Peer reviewed: Budescu, David V. – Applied Psychological Measurement, 1988
A multiple matching test--a 24-item Hebrew vocabulary test--was examined, in which distractors from several items are pooled into one list at the test's end. Construction of such tests was feasible. Reliability, validity, and reduction of random guessing were satisfactory when applied to data from 717 applicants to Israeli universities. (SLD)
Descriptors: College Applicants, Feasibility Studies, Foreign Countries, Guessing (Tests)
Zhu, Daming; Thompson, Tony D. – 1995
This study attempted to control differences in achievement when examining omitting tendencies of examinees. Test data of randomly sampled examinees (7 samples of 2,000 examinees each) from one national administration of the ACT Assessment were used. The number of responses omitted per examinee was examined over all examinees and over only those…
Descriptors: Academic Achievement, Black Students, Ethnic Groups, High School Students
Perkhounkova, Yelena; Dunbar, Stephen B. – 1999
The DIMTEST statistical procedure was used in a confirmatory manner to explore the dimensionality structures of three kinds of achievement tests: multiple-choice tests, constructed-response tests, and tests combining both formats. The DIMTEST procedure is based on estimating conditional covariances of the responses to the item pairs. The analysis…
Descriptors: Achievement Tests, Constructed Response, Estimation (Mathematics), Grade 7
Katz, Irvin R.; Friedman, Debra E.; Bennett, Randy Elliot; Berger, Aliza E. – College Entrance Examination Board, 1996
This study investigated the strategies subjects adopted to solve stem-equivalent SAT-Mathematics (SAT-M) word problems in constructed-response (CR) and multiple-choice (MC) formats. Parallel test forms of CR and MC items were administered to subjects representing a range of mathematical abilities. Format-related differences in difficulty were more…
Descriptors: Multiple Choice Tests, College Entrance Examinations, Problem Solving, Cognitive Style
Kobrin, Jennifer L.; Kimmel, Ernest W. – College Board, 2006
Based on statistics from the first few administrations of the SAT writing section, the test is performing as expected. The reliability of the writing section is very similar to that of other writing assessments. Based on preliminary validity research, the writing section is expected to add modestly to the prediction of college performance when…
Descriptors: Test Construction, Writing Tests, Cognitive Tests, College Entrance Examinations
Taylor, John F. – Programmed Learning and Educational Technology, 1982
The development of a low cost input system is discussed in terms of the appropriateness of imposing program-controlled answering strategies on candidates attempting a conventional multiple choice objective question achievement test. Systems used for the management of learning and strategies students use to answer tests are highlighted. (Author/EJS)
Descriptors: Computer Assisted Testing, Computer Managed Instruction, Computer Programs, Data Collection
Peer reviewed: Journal of Reading, 1983
Offers suggestions for (1) using microcomputer programs for reading and spelling instruction, (2) helping students analyze multiple-choice tests, and (3) motivating reluctant readers through sports. (AEA)
Descriptors: Athletics, Computer Assisted Instruction, Computer Programs, Higher Education
Peer reviewed: Fisher, K.; And Others – Journal of Research in Science Teaching, 1981
Assessed effects of the Computer-Assisted Self-Evaluation (CASE) system of multiple-choice testing. Learning and retention were examined in two equivalent groups of undergraduates. Students (N=34) receiving 24 quizzes with immediate feedback outperformed students (N=30) receiving two midterms with delayed feedback. (Author/DS)
Descriptors: Academic Achievement, College Science, College Students, Computer Assisted Testing
Peer reviewed: Reynolds, William M. – Educational and Psychological Measurement, 1979
This study determined if mildly mentally retarded secondary school students could respond to a verbally presented multiple-choice test of social and personal knowledge. Teacher ratings were also obtained. Results supported the use of two- and three-alternative multiple choice tests. (Author/JKS)
Descriptors: Adolescents, Behavior Rating Scales, Educational Testing, Feasibility Studies
Peer reviewed: Amer, Aly A. – ELT Journal, 1997
Using a multiple-choice test and a story frame test, this study measured the effect on reading comprehension of reading a story aloud to English-as-a-Second-Language students. Findings indicate that, on both tests, these students outperformed students who read the story silently. (Six references) (Author/CK)
Descriptors: Control Groups, Elementary School Students, English (Second Language), Experimental Groups
Peer reviewed: Hay, Louise – Computers in the Schools, 1997
Describes a grant-funded project that used technology to adapt the same instructional materials for 240 fourth-grade students by using captioned video scripts at different reading levels to provide individualization. A multiple-choice quiz of comprehension and vocabulary-related questions was developed as an assessment tool. Student reactions…
Descriptors: Captions, Computer Assisted Instruction, Grade 4, Individualized Instruction
Peer reviewed: Anderson, Paul S. – International Journal of Educology, 1988
Seven formats of educational testing were compared according to student preferences/perceptions of how well each test method evaluates learning. Formats compared include true/false, multiple-choice, matching, multi-digit testing (MDT), fill-in-the-blank, short answer, and essay. Subjects were 1,440 university students. Results indicate that tests…
Descriptors: Achievement Tests, College Students, Comparative Analysis, Computer Assisted Testing
Peer reviewed: Ryan, Joseph M.; Miyasaka, Jeanne R. – NASSP Bulletin, 1995
Reviews current student-assessment practices, highlighting alternative-assessment definitions and terminology, portfolio-assessment methods, holistic and analytic scoring rubrics, and improved traditional approaches. Educators must reconsider the wisdom of traditional practice, while giving a fair hearing to new approaches. Fundamental changes in…
Descriptors: Alternative Assessment, Curriculum Development, Definitions, Educational Change
Bouton, Lawrence F. – IDEAL, 1989
Discusses the importance of conversational implicatures in cross-cultural communication and argues that the use of open-ended questions to study such implicatures is inherently flawed. It is asserted that multiple-choice tests are better investigational devices, and an ongoing investigation of the cross-cultural interpretation of implicature…
Descriptors: Adults, College Students, Discourse Analysis, English (Second Language)
Peer reviewed: Saayman, Rikus – Physics Education, 1991
Discusses the structure and results of a university physics entry examination that measures students' expertise with mathematical tools and formal logic operations required for the study of college physics. Indicates that the examination adequately serves incoming students: to evaluate academic potential; to emphasize specific deficiencies; and to…
Descriptors: Academic Ability, Aptitude Tests, College Science, Foreign Countries