Dailey, David P. – Educational and Psychological Measurement, 1978
Items constructed like those of Mednick's Remote Associates Test (RAT) fall into distinct categories, which are shown to interact significantly with subjects and with subject groups determined on the basis of performance. Some doubts are cast on the validity of the RAT. A similar 16-item test is included. (Author/JKS)
Descriptors: Association Measures, Creativity Tests, Higher Education, Item Analysis

Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests

Morris, John D. – Educational and Psychological Measurement, 1978
Three algorithms for selecting a subset of the originally available items so as to maximize coefficient alpha were compared, across nine data sets, on the size of the resulting alpha and the computation time required. The characteristics of a computer program that performs these item analyses are described. (Author/JKS)
Descriptors: Comparative Analysis, Computer Programs, Item Analysis, Measurement Techniques
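
As an illustration of the kind of item-selection routine the Morris abstract above describes, the sketch below applies one common heuristic: repeatedly delete the item whose removal most increases Cronbach's alpha. It is a minimal sketch under assumed data, not a reproduction of any of the paper's three algorithms; the response matrix, function names, and stopping rule are all hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def greedy_alpha_selection(items: np.ndarray) -> list[int]:
    """Drop, one at a time, the item whose deletion most increases alpha."""
    kept = list(range(items.shape[1]))
    best = cronbach_alpha(items)
    while len(kept) > 2:
        # Alpha after deleting each remaining item in turn.
        trial = {j: cronbach_alpha(items[:, [i for i in kept if i != j]])
                 for j in kept}
        j_star = max(trial, key=trial.get)
        if trial[j_star] <= best:
            break          # no single deletion improves alpha any further
        best = trial[j_star]
        kept.remove(j_star)
    return kept

# Hypothetical 0/1 responses: 200 examinees, 10 items loading on one factor.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
data = (ability + rng.normal(scale=1.5, size=(200, 10)) > 0).astype(float)
print("kept items:", greedy_alpha_selection(data))
```

A greedy search of this kind can capitalize on chance, so any retained subset would normally be checked on a fresh sample.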

Daniels, Bob; Hewitt, Jay – Journal of Clinical Psychology, 1978
Attempted to investigate the effects of different levels of test anxiety on actual rather than simulated classroom test performance. The intent was to learn whether the effect of anxiety would be dependent upon or independent of several variables, such as test scores, sex differences, intelligence, and type of test items. (Author/RK)
Descriptors: Anxiety, Clinical Psychology, Educational Testing, Item Analysis

Schwartz, Steven A. – Journal of Educational Measurement, 1978
A method for the construction of scales which combines the rational (or intuitive) approach with an empirical (item analysis) approach is presented. A step-by-step procedure is provided. (Author/JKS)
Descriptors: Factor Analysis, Item Analysis, Measurement, Psychological Testing

Taylor, James B. – Educational and Psychological Measurement, 1977
The reliability and item homogeneity of personality scales are in part dependent on the content domain being sampled, and this characteristic reliability cannot be explained by item ambiguity or scale length. It is suggested that clarity of self concept is also a determinant. (Author/JKS)
Descriptors: Item Analysis, Personality Assessment, Personality Measures, Personality Theories

Vidoni, Dennis O. – Educational and Psychological Measurement, 1977
In the original item factor analysis of the Adjective Check List, evidence was found for seven main factors. In this cross-validation study, five of the seven factors were replicated: social desirability, introversion/extroversion, internal discomfort, cognitive independence, and social attractiveness. Tables with factor loadings are presented.…
Descriptors: Factor Analysis, Higher Education, Item Analysis, Personality Measures

Knapp, Thomas R. – Journal of Educational Measurement, 1977
The test-retest reliability of one single dichotomous item is discussed. Various indices are derived for summarizing the stability of a dichotomy, based on the concept of Platonic true scores. Both open-ended and multiple choice items are considered. (Author/JKS)
Descriptors: Correlation, Elementary Education, Item Analysis, Response Style (Tests)
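
Knapp's indices are built on the concept of Platonic true scores and are not reproduced here. Purely as a hedged illustration of what summarizing the stability of a dichotomy involves, the sketch below computes two ordinary retest summaries for a single 0/1 item administered twice: raw agreement and the phi coefficient. The data are hypothetical.

```python
import numpy as np

def dichotomous_stability(time1: np.ndarray, time2: np.ndarray) -> dict:
    """Simple stability summaries for one 0/1 item given on two occasions."""
    a = np.sum((time1 == 1) & (time2 == 1))   # correct both times
    b = np.sum((time1 == 1) & (time2 == 0))
    c = np.sum((time1 == 0) & (time2 == 1))
    d = np.sum((time1 == 0) & (time2 == 0))   # incorrect both times
    n = a + b + c + d
    agreement = (a + d) / n
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    phi = (a * d - b * c) / denom if denom else float("nan")
    return {"agreement": agreement, "phi": phi}

# Hypothetical retest data for one multiple-choice item.
t1 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
t2 = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
print(dichotomous_stability(t1, t2))
```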

Wright, Benjamin D.; Douglas, Graham A. – Educational and Psychological Measurement, 1977
Two procedures for Rasch, sample-free item calibration are reviewed and compared for accuracy. The theoretically ideal "conditional" procedure is impractical for more than fifteen items. The more practical but biased "unconditional" procedure is discussed in detail. (Author/JKS)
Descriptors: Comparative Analysis, Item Analysis, Latent Trait Theory, Mathematical Models
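
The "unconditional" procedure the abstract refers to is joint maximum likelihood estimation of person and item parameters together; the "conditional" procedure conditions on each person's raw score, which removes the person parameters from the likelihood but requires elementary symmetric functions that were costly to compute accurately for longer tests at the time. The sketch below is a crude illustration of the unconditional idea under simulated data, not the authors' UCON algorithm: it alternates Newton steps for abilities and difficulties, applies no correction for the known bias of joint estimation, and simply clips extreme scorers.

```python
import numpy as np

def rasch_jmle(X: np.ndarray, n_iter: int = 50) -> tuple[np.ndarray, np.ndarray]:
    """Crude joint (unconditional) ML calibration of the Rasch model.
    X is an (n_persons x n_items) 0/1 matrix; returns (abilities, difficulties),
    with difficulties centered at zero and no bias correction applied."""
    theta = np.zeros(X.shape[0])       # person abilities
    b = np.zeros(X.shape[1])           # item difficulties
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        # Newton step for abilities, holding item difficulties fixed.
        theta += (X - p).sum(axis=1) / np.clip((p * (1 - p)).sum(axis=1), 1e-9, None)
        theta = np.clip(theta, -6, 6)  # perfect/zero scorers have infinite ML estimates
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        # Newton step for difficulties, holding abilities fixed.
        b -= (X - p).sum(axis=0) / np.clip((p * (1 - p)).sum(axis=0), 1e-9, None)
        b -= b.mean()                  # fix the origin of the difficulty scale
    return theta, b

# Hypothetical simulated responses: 500 persons, 12 items.
rng = np.random.default_rng(1)
true_theta = rng.normal(size=500)
true_b = np.linspace(-1.5, 1.5, 12)
p_true = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
X = (rng.random(p_true.shape) < p_true).astype(float)
print(np.round(rasch_jmle(X)[1], 2))   # recovered difficulties, roughly tracking true_b
```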

Kuncel, Ruth Boutin – Educational and Psychological Measurement, 1977
The interaction of subjects with test items is investigated. It is suggested that psychometricians rely too heavily on inferences about the nature of the interaction. An approach to data analysis is proposed which is more directly related to this interaction. (JKS)
Descriptors: Higher Education, Item Analysis, Latent Trait Theory, Psychometrics

Vaal, Joseph J.; McCullagh, James – Adolescence, 1977
This research was an attempt to determine the usefulness of the Rathus Assertiveness Schedule with preadolescent and early adolescent students. Previously it had been used with outpatients, institutionalized adults, or college students. The RAS is a thirty-item schedule developed for measuring assertiveness. (Author/RK)
Descriptors: Adolescents, Assertiveness, Item Analysis, Junior High School Students

Whitely, Susan E.; Dawis, Rene V. – Educational and Psychological Measurement, 1976
Systematically investigates the effects of test context on verbal analogy item difficulty, in terms of both simple percentage correct and easiness estimates from a parameter-invariant model (Rasch, 1960). (RC)
Descriptors: Analysis of Variance, High School Students, Item Analysis, Mathematical Models

Detterman, Douglas K. – American Journal of Psychology, 1977
The tasks with item and position probes seem similar: given an item probe, a subject must recall its position in the spatial array; given a position probe, the item occupying that position. Analysis of correct responses and latencies showed that item and position probes yielded different results. (Editor/RK)
Descriptors: Charts, Item Analysis, Memory, Psychological Studies

Massey, A. J. – British Journal of Educational Psychology, 1977
It has long been recognized that students' performance on test questions may be affected by structural aspects of the test itself. Here, the extent to which items placed at the end of General Certificate of Education tests influence test taking is examined, and the effect on performance of placing items early or late is considered. (Author/RK)
Descriptors: Educational Psychology, Educational Testing, Item Analysis, Objective Tests

Thompson, Bruce; Melancon, Janet G. – Educational and Psychological Measurement, 1987
Generalizability theory was used in this study of the Group Embedded Figures Test, which was administered to undergraduates in a mathematics course. Results indicated that the test has desirable psychometric characteristics, including test and item difficulty and item discrimination coefficients. (Author/GDC)
Descriptors: Cognitive Style, Cognitive Tests, Field Dependence Independence, Generalizability Theory
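
The generalizability analysis itself is not reproduced here, but the classical indices named in the abstract, item difficulty and item discrimination, are easy to illustrate. The sketch below computes proportion-correct difficulty and a corrected item-total (point-biserial) discrimination on a hypothetical 0/1 score matrix; the sample size and item count are assumptions, not the study's data.

```python
import numpy as np

def classical_item_stats(X: np.ndarray) -> list[dict]:
    """Proportion-correct difficulty and corrected item-total (point-biserial)
    discrimination for each column of a 0/1 score matrix."""
    total = X.sum(axis=1)
    stats = []
    for j in range(X.shape[1]):
        rest = total - X[:, j]                    # total score excluding item j
        stats.append({
            "item": j,
            "difficulty": X[:, j].mean(),         # proportion answering correctly
            "discrimination": np.corrcoef(X[:, j], rest)[0, 1],
        })
    return stats

# Hypothetical responses: 150 examinees, 18 dichotomously scored items.
rng = np.random.default_rng(2)
skill = rng.normal(size=(150, 1))
cutoffs = rng.normal(size=(1, 18))
X = (skill + rng.normal(scale=1.2, size=(150, 18)) > cutoffs).astype(float)
for row in classical_item_stats(X)[:3]:
    print(row)
```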