Bardo, John W.; Yeager, Samuel J. – Perceptual and Motor Skills, 1982 (peer reviewed)
Responses to various fixed test-response formats were examined for "reliability" due to systematic error; Cronbach's alphas up to .67 were obtained. Of formats tested, four-point Likert Scales were least affected while forms of lines and faces were most problematic. Possible modification in alpha to account for systematic bias is…
Descriptors: Higher Education, Measures (Individuals), Psychometrics, Response Style (Tests)
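Cronbach's alpha, the reliability coefficient the abstract above reports, is a standard statistic computable from a respondents-by-items score matrix. A minimal sketch (the function name and data layout are illustrative, not taken from the article):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item-score lists.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample variances (ddof=1, as statistics.variance computes).
    """
    k = len(scores[0])                        # number of items
    items = list(zip(*scores))                # transpose to per-item columns
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

When every item gives identical scores the formula yields exactly 1.0, which is a quick sanity check on an implementation.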
Politzer, Robert L.; McGroarty, Mary – International Review of Applied Linguistics in Language Teaching, 1983 (peer reviewed)
Discusses the difference between communicative competence and linguistic performance. Describes the development, administration, and results of a three-part discrete point test based on rather specific definitions of communicative competence. (EKN)
Descriptors: Communicative Competence (Languages), English (Second Language), Language Tests, Linguistic Performance
Butter, Eliot J.; Snyder, Frederick R. – Perceptual and Motor Skills, 1982 (peer reviewed)
Third grade children (n=24) who were administered the standard, simultaneous version of the Matching Familiar Figures test committed fewer errors when administered a sequential version of the same test than did subjects (n=24) who took the more difficult sequential version first. (PN)
Descriptors: Cognitive Style, Comparative Analysis, Individual Testing, Learning Experience
Albanese, Mark A. – Evaluation and the Health Professions, 1982 (peer reviewed)
Findings regarding formats and scoring formulas for multiple-choice test items with more than one correct response are presented. Strong cluing effects in the Type K format, which inflate the percentage of correct scores and reduce test reliability, argue for using the Type X format instead. Alternative scoring methods are discussed. (Author/CM)
Descriptors: Health Occupations, Multiple Choice Tests, Professional Education, Response Style (Tests)
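In the Type X format mentioned above, the examinee judges each option of an item true or false independently. One common per-option scoring rule can be sketched as follows (the function name and the fraction-correct convention are illustrative assumptions, not taken from the article):

```python
def score_type_x(responses, key):
    """Score one Type X item: the examinee marks each option true/false
    independently, and the item score is the fraction of options judged
    correctly. The scoring rule shown is one common convention; the
    article discusses alternatives."""
    if len(responses) != len(key):
        raise ValueError("responses and key must cover the same options")
    return sum(r == k for r, k in zip(responses, key)) / len(key)
```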
Shick, Jacqueline – Journal of Physical Education and Recreation, 1981
It is important that teachers test not only knowledge, but also comprehension, application, synthesis, and evaluation. Several suggestions for test items in physical education are presented. (CJ)
Descriptors: Cognitive Measurement, Cognitive Processes, Comprehension, Physical Education
Hardy, Helen – Georgia Social Science Journal, 1981
This report describes the design and field testing of a 50-item objective test measuring high school students' understanding of the state of Georgia's history, geography, government, economics, and culture. A copy of the test is included in the appendix. (AM)
Descriptors: Cultural Context, Objective Tests, Secondary Education, Social Studies
Benson, Jeri – Educational and Psychological Measurement, 1981 (peer reviewed)
A review of the research on item writing, item format, test instructions, and item readability indicated the importance of instrument structure in the interpretation of test data. The effect of failing to consider these areas on the content validity of achievement test scores is discussed. (Author/GK)
Descriptors: Achievement Tests, Elementary Secondary Education, Literature Reviews, Scores
Morgan, Anne; Wainer, Howard – Journal of Educational Statistics, 1980 (peer reviewed)
Two estimation procedures for the Rasch Model of test analysis are reviewed in detail, particularly with respect to new developments that make the more statistically rigorous conditional maximum likelihood estimation practical for use with longish tests. (Author/JKS)
Descriptors: Error of Measurement, Latent Trait Theory, Maximum Likelihood Statistics, Psychometrics
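The Rasch model discussed above gives the probability of a correct response as a logistic function of the difference between person ability and item difficulty. A minimal sketch of that response function (the estimation machinery the article reviews, conditional maximum likelihood, is considerably more involved and is not shown):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a person of ability theta answers an
    item of difficulty b correctly, P = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals difficulty the probability is exactly 0.5, and it rises monotonically as ability exceeds difficulty.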
Straton, Ralph G.; Catts, Ralph M. – Educational and Psychological Measurement, 1980 (peer reviewed)
Multiple-choice tests composed entirely of two-, three-, or four-choice items were investigated. Results indicated that the number of alternatives per item was inversely related to item difficulty, but directly related to item discrimination. Reliability and standard error of measurement of three-choice item tests was equivalent or superior…
Descriptors: Difficulty Level, Error of Measurement, Foreign Countries, Higher Education
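The standard error of measurement compared in the study above is conventionally derived from the score standard deviation and the reliability coefficient. A minimal sketch of that textbook relation (the function name is illustrative):

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability): the expected spread of observed
    scores around a true score, given the test's reliability."""
    return sd * math.sqrt(1.0 - reliability)
```

For example, a test with SD 10 and reliability .84 has an SEM of about 4 score points.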
Green, Kathy – Journal of Experimental Education, 1979 (peer reviewed)
Reliabilities and concurrent validities of teacher-made multiple-choice and true-false tests were compared. No significant differences were found even when multiple-choice reliability was adjusted to equate testing time. (Author/MH)
Descriptors: Comparative Testing, Higher Education, Multiple Choice Tests, Test Format
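Adjusting reliability to equate testing time, as in the study above, is commonly done with the Spearman-Brown prophecy formula, which predicts reliability when test length changes by a factor of n. A sketch of that standard formula (whether this exact adjustment was used in the article is an assumption):

```python
def spearman_brown(r, n):
    """Spearman-Brown prophecy: predicted reliability when test length
    (or testing time) is multiplied by a factor of n,
    r_new = n*r / (1 + (n - 1)*r)."""
    return n * r / (1.0 + (n - 1.0) * r)
```

Doubling a test with reliability .50, for instance, predicts a reliability of about .67.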
Duke, Nell K.; Ritchart, Ron – Instructor, 1997
There are many connections between good test taking practices and good general learning practices. This article offers strategies related to reading and math instruction and testing. It also describes how to teach students the fundamentals of standardized tests. Tips for reducing test stress are provided. (SM)
Descriptors: Elementary Education, Mathematics Skills, Reading Strategies, Standardized Tests
Hansen, Jo-Ida C.; Neuman, Jody L.; Haverkamp, Beth E.; Lubinski, Barbara R. – Measurement and Evaluation in Counseling and Development, 1997 (peer reviewed)
Examined user reaction to computer-administered and paper-and-pencil-administered forms of the Strong Interest Inventory. Results indicate that user reactions to the two administration modes were reasonably similar in most areas. However, the computer group indicated more often that their version was easier to use and follow. (RJM)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Interest Inventories
Zumwalt, Marcus – Reading Teacher, 2003 (peer reviewed)
Explains the game of "Words of Fortune" in which students act out vocabulary words. Notes that this activity provides students the opportunity to make strong visual, aural, and kinesthetic connections with vocabulary lists. Concludes that "Words of Fortune" helps students write better sentences for vocabulary assessment. (PM)
Descriptors: Educational Games, Kinesthetic Methods, Primary Education, Reading Comprehension
Schraw, Gregory – Journal of Experimental Education, 1997 (peer reviewed)
The basis of students' confidence in their answers to test items was studied with 95 undergraduates. Results support the domain-general hypothesis that predicts that confidence judgments will be related to performance on a particular test and also to confidence judgments and performance on unrelated tests. (SLD)
Descriptors: Higher Education, Metacognition, Performance Factors, Scores
Ioannidou, Mary Koutselini – Studies in Educational Evaluation, 1997 (peer reviewed)
Student achievement was compared for open-book and closed-book examinations taken by 72 college students in Cyprus. There were no significant differences in total examination score between the two types of tests, although those who took the closed-book examination had slightly higher scores. (SLD)
Descriptors: Achievement Tests, College Students, Educational Testing, Foreign Countries


