Publication Date
| Period | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 74 |
| Since 2022 (last 5 years) | 509 |
| Since 2017 (last 10 years) | 1084 |
| Since 2007 (last 20 years) | 2603 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 173 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Ratter, George S.; Tinkleman, Vera – Educational and Psychological Measurement, 1970
The placement or ordering of items on behavior rating scales that elicit extreme responses (anchor stimulus items) affects the responses given to neutral items. (DG)
Descriptors: Behavior Rating Scales, Content Analysis, Item Analysis, Measurement Techniques
Peer reviewed: Finkbeiner, Daniel T.; And Others – Mathematics Teacher, 1971
Descriptors: Advanced Placement Programs, Calculus, College Entrance Examinations, Item Analysis
Peer reviewed: Oller, John W., Jr.; Inal, Nevin – TESOL Quarterly, 1971
Descriptors: Cloze Procedure, Comparative Analysis, Educational Experiments, English (Second Language)
Peer reviewed: Schrock, Timothy J.; Mueller, Daniel J. – Journal of Educational Research, 1982
Three item-construction principles for multiple-choice tests were studied to determine how they affected test results for high school students: (1) use of an incomplete sentence stem; (2) location of the blank in the stem; and (3) presence of noncueing material. Differences in item construction had a slight effect on test results. (Authors/CJ)
Descriptors: Cues, High School Students, High Schools, Item Analysis
Stratton, N. J. – Teaching at a Distance, 1981
A study of recurrent faults in multiple-choice items in Britain's Open University's computer-marked tests has led to a procedure for avoiding these faults. A description of the study covers the incidence and sources of faults (obviousness, memorization, unclear instruction, ambiguity, distractors, inter-item effects, and structure) and…
Descriptors: Error Patterns, Foreign Countries, Higher Education, Item Analysis
Peer reviewed: Scheuneman, Janice – Journal of Educational Measurement, 1979
This paper presents a chi-square method for assessing bias in test items. In this procedure an unbiased item is defined as one for which the probability of a correct response is the same for any person of a given ability level, regardless of that person's ethnic group. (Author/CTM)
Descriptors: Cultural Differences, Culture Fair Tests, Item Analysis, Primary Education
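The chi-square procedure described in the abstract above can be sketched in a few lines: stratify examinees by total score, and within each stratum compare each group's observed count of correct responses against the count expected if correctness depended on ability alone. The equal-width score bands and group labels below are illustrative assumptions, not details taken from the paper:

```python
from collections import defaultdict

def scheuneman_chi_square(scores, groups, correct, n_strata=4):
    """Chi-square index of item bias in the spirit of Scheuneman (1979).

    scores  -- total test score per examinee (proxy for ability)
    groups  -- group label per examinee (e.g. ethnic group)
    correct -- 0/1 response to the studied item per examinee
    """
    # Stratify examinees into roughly equal-width score bands.
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_strata or 1
    strata = defaultdict(list)
    for i, s in enumerate(scores):
        band = min(int((s - lo) / width), n_strata - 1)
        strata[band].append(i)

    chi2 = 0.0
    for members in strata.values():
        # Pooled proportion correct in this ability band.
        p_all = sum(correct[i] for i in members) / len(members)
        if p_all in (0.0, 1.0):
            continue  # a band everyone (or no one) got right carries no information
        by_group = defaultdict(list)
        for i in members:
            by_group[groups[i]].append(correct[i])
        for responses in by_group.values():
            observed = sum(responses)
            expected = p_all * len(responses)  # expected if ability alone matters
            chi2 += (observed - expected) ** 2 / expected
    return chi2
```

For an unbiased item the per-group counts match the pooled expectation within every band and the statistic stays near zero; a group-dependent item inflates it.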
Peer reviewed: Ekstrom, Ruth B.; And Others – Educational Horizons, 1979
Content analyses for sex bias were conducted on items from three widely used achievement tests which together span grade levels 1-12. A significant but modest correlation was found between an item's content bias and performance on that item by male and female students. (SJL)
Descriptors: Academic Achievement, Achievement Tests, Content Analysis, Elementary Secondary Education
Findley, Warren G. – College Board Review, 1981
A psychometrician and acquaintance of Carl C. Brigham, the "father of the Scholastic Aptitude Test," comments on the man's personality traits and the culture that shaped his thinking. Areas of testing research that should be pursued are suggested. (MSE)
Descriptors: College Entrance Examinations, Cultural Influences, Educational History, Higher Education
Peer reviewed: Bohning, Gerry – Psychology in the Schools, 1980
An item analysis profile sheet to accompany the Slosson Intelligence Test (SIT) is helpful in providing a functional test interpretation. The lack of recorded technical and statistical information is a serious concern. Without such information, a practitioner could not use the Item Analysis of SIT with confidence. (Author)
Descriptors: Children, Educational Diagnosis, Elementary Secondary Education, Intelligence Tests
Peer reviewed: Page, Roger; Bode, James – Educational and Psychological Measurement, 1980
The Ethical Reasoning Inventory (ERI) is an objective test derived from Kohlberg's Moral Judgment Interview. It correlated more highly with Kohlberg's measure, and has higher internal consistency, than the Defining Issues Test and the Moral Judgment Scale. (CP)
Descriptors: Abstract Reasoning, Higher Education, Item Analysis, Moral Issues
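The internal-consistency comparison in the entry above can be illustrated with Cronbach's alpha; the abstract does not name the specific index the authors computed, so alpha is assumed here purely for illustration:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a matrix of item scores (rows = examinees,
    columns = items); a common index of internal consistency."""
    k = len(item_scores[0])  # number of items
    n = len(item_scores)

    def variance(xs):  # population variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    # Variance of each item column, and of the examinees' total scores.
    item_vars = [variance([row[j] for row in item_scores]) for j in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Perfectly parallel items drive alpha to 1; items whose covariances cancel drive it toward 0, which is what a head-to-head comparison of instruments like the ERI, DIT, and Moral Judgment Scale would quantify.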
Peer reviewed: Plake, Barbara S. – Educational and Psychological Measurement, 1980
Analysis of variance and subjective rating by curriculum specialists were used to identify biased items on the Iowa Tests of Basic Skills. Results show little agreement between statistical and subjective methods. Test developers should statistically support a reviewer's selection of biased items. (Author/CP)
Descriptors: Achievement Tests, Analysis of Variance, Elementary Education, Evaluation Methods
Peer reviewed: Oakland, Thomas; Feigenbaum, David – Journal of Consulting and Clinical Psychology, 1979
Assessed test bias on the Wechsler Intelligence Test for Children-Revised (WISC-R) and Bender-Gestalt. On the Bender, evidence of bias was infrequent and irregular. On the WISC-R, group differences were most discernible for age, sex, family structure, and race. Consistent patterns of bias were not apparent among comparison groups. (Author)
Descriptors: Age Differences, Children, Comparative Analysis, Intelligence Tests
Blachowicz, Camille L. Z. – Curriculum Review, 1979
Presented are 10 examples of ways in which teachers can use standardized test results (not just test scores) after the testing procedure is over: to determine how students take tests, and then to use this knowledge to structure learning experiences without biasing or influencing testing outcomes in advance. (KC)
Descriptors: Diagnostic Teaching, Elementary Secondary Education, Item Analysis, Reading Diagnosis
Peer reviewed: van der Linden, Wim J. – Journal of Educational Statistics, 1978
Macready and Dayton introduced two probabilistic models for mastery assessment based on an idealized all-or-none conception of mastery. As an alternative, an application of latent trait theory to mastery testing (a three-parameter logistic model) is proposed as a more plausible model. (Author/CTM)
Descriptors: Criterion Referenced Tests, Guessing (Tests), Item Analysis, Latent Trait Theory
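The three-parameter logistic model named in the abstract above can be written down directly. The parameter names below follow common IRT convention (discrimination a, difficulty b, pseudo-guessing c), which the abstract itself does not spell out:

```python
import math

def three_pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability that an examinee
    of ability theta answers an item correctly, where a is the item's
    discrimination, b its difficulty, and c its lower asymptote
    (the guessing floor relevant to the Guessing (Tests) descriptor)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is midway between the guessing floor and 1, i.e. (1 + c) / 2; setting a = 1 and c = 0 recovers a one-parameter (Rasch-type) curve like the model used in the Wilson (1990) entry below.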
Peer reviewed: Wilson, Mark – Journal for Research in Mathematics Education, 1990
Summarizes a reanalysis of the data from an investigation of a test designed to measure a learning sequence in geometry based on the work of van Hiele (1986). Discusses the test based on the Rasch model. (YP)
Descriptors: Geometric Concepts, Geometry, Item Analysis, Mathematical Concepts


