Peer reviewed: Bradshaw, Charles W., Jr. – Educational and Psychological Measurement, 1980
Two alternative procedures to Rogers' method of using control charts to display item statistics are discussed. The data themselves determine the limit and centerline values, permitting these values to be compared to any criterion difficulty level(s) deemed appropriate for a given set of test items. (Author/RL)
Descriptors: Flow Charts, Item Analysis, Mathematical Formulas, Quality Control
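The data-determined control-chart display described above can be sketched generically: the centerline is the mean item difficulty, and the limits come from the binomial standard error. This is an illustrative p-chart, not the procedure from the article itself; the function name, sample difficulties, and examinee count are invented.

```python
# Illustrative p-chart for item difficulties (proportions correct).
# Not the article's actual procedure; names and data are hypothetical.
import math

def item_difficulty_chart(p_values, n_examinees):
    """Centerline = mean item difficulty; 3-sigma limits from the
    binomial standard error, clipped to the [0, 1] range."""
    center = sum(p_values) / len(p_values)
    sigma = math.sqrt(center * (1 - center) / n_examinees)
    lcl = max(0.0, center - 3 * sigma)
    ucl = min(1.0, center + 3 * sigma)
    # Flag items whose difficulty falls outside the control limits.
    flagged = [i for i, p in enumerate(p_values) if not (lcl <= p <= ucl)]
    return center, lcl, ucl, flagged

center, lcl, ucl, flagged = item_difficulty_chart(
    [0.62, 0.55, 0.70, 0.15, 0.58], n_examinees=100)
```

Items falling outside the limits (here the unusually easy and unusually hard ones) would be the candidates for review against whatever criterion difficulty level is deemed appropriate.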
Peer reviewed: Cudeck, Robert – Journal of Educational Measurement, 1980
Methods for evaluating the consistency of responses to test items were compared. When a researcher is unwilling to make the assumptions of classical test theory, has only a small number of items, or is in a tailored testing context, Cliff's dominance indices may be useful. (Author/CTM)
Descriptors: Error Patterns, Item Analysis, Test Items, Test Reliability
Peer reviewed: Lueptow, Lloyd B.; And Others – Educational and Psychological Measurement, 1976
After taking tests in introductory college courses, students were asked to rate the quality of the items. Correlations between student ratings and item-test point biserial correlations revealed little or no relationship except for a subset of students who had performed well when taking the tests. (JKS)
Descriptors: College Students, Correlation, Course Evaluation, Item Analysis
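The item-test point-biserial correlation used as the item-quality index in the study above can be computed generically as the Pearson correlation between a dichotomous item score (0/1) and the continuous total test score. This is a standard textbook computation, not the authors' code; the scores below are invented for illustration.

```python
# Generic item-test point-biserial correlation; data are hypothetical.
import math
import statistics

def point_biserial(item_scores, total_scores):
    """Correlation between a dichotomous item (0/1) and total score:
    (M1 - M0) / SD * sqrt(p * (1 - p)), using the population SD."""
    mean_1 = statistics.mean(t for i, t in zip(item_scores, total_scores) if i == 1)
    mean_0 = statistics.mean(t for i, t in zip(item_scores, total_scores) if i == 0)
    p = sum(item_scores) / len(item_scores)
    sd = statistics.pstdev(total_scores)
    return (mean_1 - mean_0) / sd * math.sqrt(p * (1 - p))

r = point_biserial([1, 1, 0, 0], [8, 6, 4, 2])
```

A high value means examinees who answered the item correctly also scored well overall, which is the index the student ratings were correlated against.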
Peer reviewed: Garfinkel, Robin; Thorndike, Robert L. – Child Development, 1976
This study was conducted to determine how items of the Stanford-Binet Intelligence Scale, Form L-M, had performed in the 1930's standardization sample in comparison with the 1972 standardization sample. (SB)
Descriptors: Comparative Analysis, Comparative Testing, Group Testing, Intelligence Tests
Peer reviewed: Bart, William M.; Williams-Morris, Ruth – Applied Measurement in Education, 1990
Refined item digraph analysis (RIDA) is a way of studying diagnostic and prescriptive testing. It permits assessment of a test item's diagnostic value by examining the extent to which the item has properties of ideal items. RIDA is illustrated with the Orange Juice Test, which assesses the proportionality concept. (TJH)
Descriptors: Diagnostic Tests, Evaluation Methods, Item Analysis, Mathematical Models
Peer reviewed: Siegert, Richard J.; And Others – Multivariate Behavioral Research, 1988
A study concluding that the Wechsler Adult Intelligence Scale (Revised) (WAIS-R) has three clear factors in its structure is critiqued. An alternative factor comparison technique, FACTOREP, is used with identical data. It is demonstrated that the WAIS-R has only two strong factors--verbal comprehension and perceptual organization. (TJH)
Descriptors: Factor Analysis, Factor Structure, Intelligence Tests, Item Analysis
Peer reviewed: Hoepfl, Marie C. – Technology Teacher, 1994
Provides guidelines for writing multiple-choice tests and ways to evaluate the quality of test items. (SK)
Descriptors: Item Analysis, Multiple Choice Tests, Teacher Made Tests, Test Construction
Taylor, Steven; McKay, Dean; Abramowitz, Jonathan S. – Psychological Review, 2005
This paper comments on the response offered by Szechtman and Woody to Taylor et al.'s initial comments on Szechtman and Woody's original article. Taylor et al. highlight one problem with their model that Woody and Szechtman seem to think is unimportant: the treatment relevance of their model. The analogy of aspirin and colds was used, suggesting…
Descriptors: Motivation, Item Analysis, Reader Response, Criticism
Auewarakul, Chirayu; Downing, Steven M.; Praditsuwan, Rungnirand; Jaturatamrong, Uapong – Advances in Health Sciences Education, 2005
Utilization of objective structured clinical examinations (OSCEs) for final assessment of medical students in Internal Medicine requires a representative sample of OSCE stations. The reliability and generalizability of OSCE scores provide validity evidence for OSCE scores and support their contribution to the final clinical grade of medical…
Descriptors: Medical Students, Reliability, Internal Medicine, Item Analysis
Ringenberg, Matthew C.; Funk, Vanessa; Mullen, Kacy; Wilford, Amy; Kramer, Jessica – School Community Journal, 2005
The Parent And School Survey (PASS) is an instrument designed to quickly, easily, and accurately measure parental involvement in their children's education. It is based on Epstein's six-construct framework, with four items devoted to each construct. A test-retest reliability study of the PASS was conducted with 40 subjects to refine the 24 items…
Descriptors: Parent Participation, Reliability, School Surveys, Parent School Relationship
Bruinsma, M.; Jansen, E. P. W. A. – School Effectiveness and School Improvement, 2007
Several factors in the H. J. Walberg Educational Productivity Model, which assumes that 9 factors affect academic achievement, were examined with a limited sample of 1st-year students in the University of Groningen. Information concerning 8 of these factors--grades, motivation, age, prior achievement, home environment, support from peers,…
Descriptors: Academic Achievement, Program Validation, Item Analysis, Models
ChanLin, Lih-Juan – Innovations in Education and Teaching International, 2007
In this study, teachers' perceived importance and manageability of the factors in technology integration were assessed among 407 elementary and secondary schoolteachers. A questionnaire containing 28 items focusing on environmental, personal, social and curricular factors related to technology integration was used. Two sets of data resources…
Descriptors: Questionnaires, Computers, Technology Integration, Factor Analysis
Maguire, Phil; Devereux, Barry; Costello, Fintan; Cater, Arthur – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2007
The competition among relations in nominals (CARIN) theory of conceptual combination (C. L. Gagne & E. J. Shoben, 1997) proposes that people interpret nominal compounds by selecting a relation from a pool of competing alternatives and that relation availability is influenced by the frequency with which relations have been previously associated…
Descriptors: Competition, Program Validation, Item Analysis, Human Relations
Elliott, C. D. (2007). "Differential Ability Scales" (2nd Ed.). San Antonio, TX: Harcourt Assessment
Beran, Tanya N. – Canadian Journal of School Psychology, 2007
"Differential Ability Scales," Second Edition (DAS-II; Elliott, 2007), was developed to measure the cognitive abilities of children and adolescents from age 2 years, 6 months to 17 years, 11 months. The test is designed to be individually administered, measuring general conceptual and reasoning ability ("g") as well as specific…
Descriptors: Cognitive Ability, Psychometrics, Foreign Countries, Test Construction
Fisher, Douglas; Kopenski, Donna – Teaching Children Mathematics, 2007
This article describes the significant gains in student achievement in an urban elementary school when teachers worked together to develop, administer, and review assessment items. In grade-level teams, teachers completed item analyses and engaged in instructional conversations about students' needed instruction. (Contains 3 tables and 2 figures.)
Descriptors: Mathematics Achievement, Test Construction, Item Analysis, Elementary School Teachers