Peer reviewed: Singer, Peter A.; And Others – Academic Medicine, 1996
Final-year Ontario medical students (n=88) took a 4-station objective structured clinical examination (OSCE) using standardized patients and involving decisions to forgo life-sustaining treatment. Performance was scored on a checklist of behaviors unique to each case. Results indicated that because of low reliability, the OSCE is not a feasible…
Descriptors: Clinical Experience, Competency Based Education, Ethics, Foreign Countries
Peer reviewed: Sukigara, Masune – Educational and Psychological Measurement, 1996
The new Japanese version of the Minnesota Multiphasic Personality Inventory (MMPI) was administered twice to 200 Japanese female college students to verify the equivalence of the computer- and booklet-administered formats. For four scales, scores from the computer version were statistically significantly higher than those from the booklet…
Descriptors: College Students, Computer Assisted Testing, Females, Foreign Countries
Learning about Students' Knowledge and Thinking in Science through Large-Scale Quantitative Studies.
Peer reviewed: Olsen, Rolf V.; Turmo, Are; Lie, Svein – European Journal of Psychology of Education, 2001
Discusses how responses to multiple-choice items could be interpreted, demonstrates how responses on constructed-response items can be analyzed, and examines interactions between item characteristics and student responses. Uses information, specifically items and student responses, from the Third International Mathematics and Science Study…
Descriptors: Educational Research, Higher Education, Mathematics Education, Science Education
Peer reviewed: O'Leary, Michael – Educational Measurement: Issues and Practice, 2002
Examined the performance of Irish students on multiple-choice, short-answer, and extended-response item sets from the Third International Mathematics and Science Study to determine whether Ireland's relative rank among the more than 40 countries involved remained stable. Findings provide additional evidence that comparing student achievement…
Descriptors: Comparative Analysis, Foreign Countries, International Education, Mathematics Achievement
Peer reviewed: Kobrin, Jennifer L.; Young, John W. – Applied Measurement in Education, 2003
Studied the cognitive equivalence of computerized and paper-and-pencil reading comprehension tests using verbal protocol analysis. Results for 48 college students indicate that the only significant difference between the computerized and paper-and-pencil tests was in the frequency of identifying important information in the passage. (SLD)
Descriptors: Cognitive Processes, College Students, Computer Assisted Testing, Difficulty Level
Peer reviewed: McCallum, R. Steve; Karnes, Frances A. – Journal of School Psychology, 1990
Compared area scores from the short-form version of the Stanford-Binet Intelligence Test (Fourth Edition) with those from the long form for 33 gifted children. Found that three of five mean difference contrasts were significantly different and that correlation coefficients between corresponding area scores and the Test Composite were statistically significant. Suggests that…
Descriptors: Academically Gifted, Comparative Testing, Elementary Education, Elementary School Students
Watanabe, Addison; Algozzine, Bob – Diagnostique, 1989
This article discusses ways teachers may vary and modify teacher-made tests to better facilitate prescriptive programming for special education students. Formats featuring statements, object identification, gestures, and writing are discussed, as are item alterations in terms of both item presentation and student responses. (PB)
Descriptors: Achievement Tests, Disabilities, Elementary Secondary Education, Evaluation Methods
Peer reviewed: Veloski, J. Jon; And Others – Evaluation and the Health Professions, 1990
Part III of the National Board Examination, a certifying examination of medical knowledge and patient management abilities, was assessed using 1,866 first-year residents. This 15-year study comparing Part III results with those of Parts I and II and with superiors' ratings indicates Part III's validity and provides a model for future research.…
Descriptors: Analysis of Covariance, Clinical Diagnosis, Computer Assisted Testing, Licensing Examinations (Professions)
Peer reviewed: Tamir, Pinchas – Journal of Biological Education, 1989
Students' justifications for their answers across the three item formats provided were compared. The study confirmed the usefulness of justifications as a diagnostic tool and offered recommendations regarding the use of justifications, including their use in the construction of two-tier items. (Author/CW)
Descriptors: Foreign Countries, Science Education, Science Instruction, Science Tests
Peer reviewed: Henly, Susan J.; And Others – Applied Psychological Measurement, 1989
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive versions of the Differential Aptitude Test (DAT). Results for 332 students indicate that the computerized version of the DAT is an adequate representation of the conventional test battery. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed: Nickerson, Raymond S. – Educational Researcher, 1989
Discusses issues involved in the construction, validity, and use of tests that evaluate educational progress, especially those that assess higher-order cognitive functioning. Reviews the four articles in this special issue. (FMW)
Descriptors: Cognitive Measurement, Educational Testing, Elementary Secondary Education, Evaluation
Peer reviewed: Schriesheim, Chester A.; And Others – Educational and Psychological Measurement, 1989
Three studies explored the effects of grouped versus randomized questionnaire items on internal consistency and test-retest reliability, with samples of 80, 80, and 100 university students and undergraduates, respectively. The two correlational studies and one experimental study were reasonably consistent in demonstrating that neither format was…
Descriptors: Classification, College Students, Evaluation Methods, Higher Education
Peer reviewed: Statman, Stella – SYSTEM, 1988
Multiple-choice items formatted as a question, with the correct answer appearing among four options, are a clearer and more valid way of testing the reading comprehension of foreign learners of English than the common format in which the testee must complete a sentence stem by choosing among four options. (Author/CB)
Descriptors: Cloze Procedure, English (Second Language), Language Tests, Multiple Choice Tests
Bracey, Gerald W. – High School Magazine, 1993
Describes four criteria that can be used to evaluate methods of assessment: (1) "What are the consequences of using the test?" (2) "Is this assessment fair?" (3) "Do the skills and knowledge of this assessment transfer or generalize?" and (4) "Does this assessment cover cognitively complex tasks?" (KDP)
Descriptors: Alternative Assessment, Evaluation Methods, High Schools, Performance Based Assessment
Peer reviewed: Page, Gordon; And Others – Academic Medicine, 1995
An approach to testing medical students' clinical decision-making skills identifies key features (critical steps in resolution of a clinical problem) and presents a clinical case scenario followed by questions focusing on those key features. Key-feature problems provide flexibility on issues of question format, multiple responses to questions, and…
Descriptors: Clinical Diagnosis, Decision Making, Evaluation Methods, Higher Education


