Publication Date
| In 2026 | 0 |
| Since 2025 | 75 |
| Since 2022 (last 5 years) | 510 |
| Since 2017 (last 10 years) | 1085 |
| Since 2007 (last 20 years) | 2604 |
Audience
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location
| Turkey | 174 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Peer reviewed: Clopton, James R. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Comparative Analysis, Computer Programs, Hypothesis Testing, Item Analysis
Peer reviewed: Bohrnstedt, George W.; Campbell, Richard T. – Educational and Psychological Measurement, 1972
Descriptors: Computer Programs, Data Analysis, Item Analysis, Rating Scales
Peer reviewed: Whitney, Douglas R.; Sabers, Darrell L. – Journal of Experimental Education, 1971
Descriptors: Discriminant Analysis, Essay Tests, Item Analysis, Statistical Analysis
Gunn, Robert L.; Pearman, H. Egar – Journal of Clinical Psychology, 1970
A schedule was developed for assessing the future outlook of hospitalized psychiatric patients and administered to samples of patients from two different hospitals. A factor analysis was done for each sample. (CK)
Descriptors: Attitudes, Factor Analysis, Item Analysis, Patients
Simon, George B. – Journal of Educational Measurement, 1969
Descriptors: Item Analysis, Measurement Instruments, Test Construction, Test Results
Hunt, Richard A. – Educational and Psychological Measurement, 1970
Descriptors: Computer Programs, Item Analysis, Psychological Evaluation, Rating Scales
Koppel, Mark A.; Sechrest, Lee – Educational and Psychological Measurement, 1970
Descriptors: Correlation, Experimental Groups, Humor, Intelligence
Peer reviewed: Frisbie, David A. – Educational and Psychological Measurement, 1981
The Relative Difficulty Ratio (RDR) was developed as an index of test or item difficulty for use when raw score means or item p-values are not directly comparable because of chance score differences. Computational procedures for the RDR are described. Applications of the RDR at both the test and item level are illustrated. (Author/BW)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Test Items
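Frisbie's exact formula is not given in the abstract; purely as an illustration of the general idea, the sketch below assumes a chance-corrected difficulty of the form (mean score - chance score) / (maximum score - chance score) and takes the ratio of two such values to compare forms with different chance scores. The definition, names, and numbers are assumptions, not necessarily Frisbie's formulation.

```python
# Illustrative (assumed) chance-corrected difficulty ratio.
# NOTE: this sketches the general idea only; it is not Frisbie's published RDR formula.

def corrected_difficulty(mean_score: float, max_score: float, n_options: int) -> float:
    """Proportion of the above-chance score range that examinees attain on average."""
    chance = max_score / n_options          # expected score from blind guessing
    return (mean_score - chance) / (max_score - chance)

def relative_difficulty_ratio(mean_a, max_a, options_a, mean_b, max_b, options_b) -> float:
    """Ratio of two chance-corrected difficulties (assumed form of a relative index)."""
    return corrected_difficulty(mean_a, max_a, options_a) / \
           corrected_difficulty(mean_b, max_b, options_b)

# Example: a 40-item four-option test compared with a 40-item true/false test.
print(relative_difficulty_ratio(28, 40, 4, 30, 40, 2))
```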
Peer reviewed: Jackson, Paul H. – Psychometrika, 1979
Use of the same term "split-half" for division of an n-item test into two subtests containing equal (Cronbach), and possibly unequal (Guttman), numbers of items sometimes leads to a misunderstanding about the relation between Guttman's maximum split-half bound and Cronbach's coefficient alpha. This distinction is clarified. (Author/JKS)
Descriptors: Item Analysis, Mathematical Formulas, Technical Reports, Test Reliability
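To make the distinction concrete, here is a minimal sketch using standard textbook formulas (not code from the article): coefficient alpha for the whole test, and Guttman's split-half coefficient for one particular split. Guttman's maximum split-half bound is the maximum of the latter over all splits, which need not contain equal numbers of items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def guttman_split_half(items: np.ndarray, half_a) -> float:
    """Guttman's lambda-4 for one split; the two halves may have unequal sizes."""
    a = items[:, half_a].sum(axis=1)
    b = np.delete(items, half_a, axis=1).sum(axis=1)
    total_var = (a + b).var(ddof=1)
    return 2 * (1 - (a.var(ddof=1) + b.var(ddof=1)) / total_var)

rng = np.random.default_rng(0)
x = (rng.random((200, 6)) < 0.6).astype(float)   # toy 0/1 item responses
print(cronbach_alpha(x), guttman_split_half(x, [0, 2, 4]))
```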
Peer reviewed: Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
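As a hedged illustration of the "area between item characteristic curves" approach (generic 2PL curves; the specific signed and unsigned area measures reviewed by Hills may differ), this sketch integrates the absolute difference between reference- and focal-group ICCs over a theta grid. Item parameters are hypothetical.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def area_between_iccs(a_ref, b_ref, a_foc, b_foc, lo=-4.0, hi=4.0, n=2001):
    """Unsigned area between the two groups' ICCs over [lo, hi],
    approximated by the trapezoid rule on an evenly spaced theta grid."""
    theta = np.linspace(lo, hi, n)
    diff = np.abs(icc_2pl(theta, a_ref, b_ref) - icc_2pl(theta, a_foc, b_foc))
    step = theta[1] - theta[0]
    return float(((diff[:-1] + diff[1:]) / 2.0 * step).sum())

# Example: equal discrimination, but the item is 0.5 logits harder for the focal group.
print(area_between_iccs(a_ref=1.2, b_ref=0.0, a_foc=1.2, b_foc=0.5))
```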
Peer reviewed: Burton, Richard F. – Assessment & Evaluation in Higher Education, 2001
Item-discrimination indices are numbers calculated from test data that are used in assessing the effectiveness of individual test questions. This article asserts that the indices are so unreliable as to suggest that countless good questions may have been discarded over the years. It considers how the indices, and hence overall test reliability,…
Descriptors: Guessing (Tests), Item Analysis, Test Reliability, Testing Problems
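For context on the indices Burton questions, the sketch below computes two common item-discrimination statistics using generic textbook definitions (not taken from the article): the upper-lower index D and the point-biserial correlation between an item and the total score. The data are hypothetical.

```python
import numpy as np

def upper_lower_d(item: np.ndarray, total: np.ndarray, frac: float = 0.27) -> float:
    """D = p(upper group) - p(lower group), with groups formed from total scores."""
    n = int(round(frac * len(total)))
    order = np.argsort(total)
    return item[order[-n:]].mean() - item[order[:n]].mean()

def point_biserial(item: np.ndarray, total: np.ndarray) -> float:
    """Correlation between a 0/1 item score and the total test score."""
    return float(np.corrcoef(item, total)[0, 1])

rng = np.random.default_rng(1)
responses = (rng.random((300, 20)) < 0.6).astype(float)  # toy 0/1 response matrix
total = responses.sum(axis=1)
print(upper_lower_d(responses[:, 0], total), point_biserial(responses[:, 0], total))
```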
van der Linden, Wim J. – Journal of Educational Measurement, 2005
In test assembly, a fundamental difference exists between algorithms that select a test sequentially or simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's ability estimate, such as the test information function in computerized adaptive testing. But it leads to the non-trivial problem of how to realize…
Descriptors: Law Schools, Item Analysis, Admission (School), Adaptive Testing
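A minimal sketch of the sequential case described above, assuming maximum-information selection with 2PL items (the function names and item parameters are hypothetical, not from the article): the next item administered is the unadministered item with the largest Fisher information at the current ability estimate.

```python
import numpy as np

def fisher_info_2pl(theta: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Item information I(theta) = a^2 * P * (1 - P) for 2PL items."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def select_next_item(theta_hat: float, a, b, administered: set) -> int:
    """Sequential rule: most informative remaining item at the ability estimate."""
    info = fisher_info_2pl(theta_hat, np.asarray(a, float), np.asarray(b, float))
    info[list(administered)] = -np.inf          # mask items already given
    return int(np.argmax(info))

a = [0.8, 1.2, 1.5, 0.9]
b = [-1.0, 0.0, 0.3, 1.2]
print(select_next_item(theta_hat=0.2, a=a, b=b, administered={1}))
```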
Evans, Sion Wyn – Educational Studies in Mathematics, 2007
This paper draws on data from the development of annual national mathematics assessment materials for 7-year-old pupils in Wales for use during the period 2000-2002. The materials were developed in both English and Welsh and were designed to be matched. The paper reports on item analyses which sought items that exhibited differential performance…
Descriptors: Foreign Countries, Welsh, Test Bias, Educational Testing
van der Linden, Wim J.; Veldkamp, Bernard P. – Journal of Educational and Behavioral Statistics, 2007
Two conditional versions of the exposure-control method with item-ineligibility constraints for adaptive testing in van der Linden and Veldkamp (2004) are presented. The first version is for unconstrained item selection, the second for item selection with content constraints imposed by the shadow-test approach. In both versions, the exposure rates…
Descriptors: Law Schools, Adaptive Testing, Item Analysis, Probability
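The conditional ineligibility method is more involved than the abstract can convey; as a rough stand-in only, the sketch below shows the general flavor of probabilistic exposure control with a target maximum exposure rate r_max. It is not van der Linden and Veldkamp's algorithm, and the eligibility rule is an assumption made for illustration.

```python
import numpy as np

def eligible_items(admin_counts, n_examinees: int, r_max: float, rng) -> np.ndarray:
    """Boolean mask of items that remain eligible for the next examinee.
    Assumed rule: item i stays eligible with probability min(1, r_max / r_i),
    where r_i is its observed exposure rate so far; unexposed items stay eligible."""
    rates = np.asarray(admin_counts, dtype=float) / max(n_examinees, 1)
    p_eligible = r_max / np.maximum(rates, r_max)   # equals min(1, r_max / r_i)
    return rng.random(len(rates)) < p_eligible

rng = np.random.default_rng(2)
counts = np.array([40, 10, 5, 0])   # item 0 is over-exposed after 100 examinees
print(eligible_items(counts, n_examinees=100, r_max=0.25, rng=rng))
```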
von Davier, Alina A.; Wilson, Christine – Educational and Psychological Measurement, 2007
This article discusses the assumptions required by the item response theory (IRT) true-score equating method (with the Stocking & Lord, 1983, scaling approach), which is commonly used in the nonequivalent groups with anchor test data-collection design. More precisely, this article investigates the assumptions made at each step by the IRT approach to…
Descriptors: Calculus, Item Response Theory, Scores, Data Collection
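As a hedged outline of IRT true-score equating in general (2PL version; the scaling step is assumed to have been completed already, so both forms' parameters are taken to be on a common scale, as the Stocking and Lord procedure would provide), the sketch maps a number-correct score on form X to the theta with that expected score and then evaluates form Y's test characteristic curve at that theta. All parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def tcc(theta: float, a: np.ndarray, b: np.ndarray) -> float:
    """Test characteristic curve: expected number-correct score at theta (2PL)."""
    return float(np.sum(1.0 / (1.0 + np.exp(-a * (theta - b)))))

def true_score_equate(score_x: float, ax, bx, ay, by) -> float:
    """Form X number-correct -> theta (TCC inversion) -> form Y true score.
    Assumes all parameters are already on a common scale and that
    0 < score_x < number of form X items."""
    ax, bx, ay, by = map(np.asarray, (ax, bx, ay, by))
    theta = brentq(lambda t: tcc(t, ax, bx) - score_x, -8.0, 8.0)
    return tcc(theta, ay, by)

ax, bx = [1.0, 1.2, 0.8], [-0.5, 0.0, 0.7]   # hypothetical form X item parameters
ay, by = [0.9, 1.1, 1.3], [-0.2, 0.3, 0.9]   # hypothetical form Y item parameters
print(true_score_equate(1.8, ax, bx, ay, by))
```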