Publication Date

| Period | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 74 |
| Since 2022 (last 5 years) | 509 |
| Since 2017 (last 10 years) | 1084 |
| Since 2007 (last 20 years) | 2603 |
Audience

| Audience | Records |
| --- | --- |
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location

| Location | Records |
| --- | --- |
| Turkey | 173 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating

| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does Not Meet WWC Standards | 1 |
Edmonston, Leon P.; Randall, Robert S. – 1972
A decision model designed to determine the reliability and validity of criterion-referenced measures (CRMs) is presented. General procedures pertaining to the model are discussed under four headings: measures of relationship, reliability, validity (content, criterion-oriented, and construct validation), and item analysis. The decision model is presented in…
Descriptors: Criterion Referenced Tests, Decision Making, Evaluation Methods, Item Analysis
Scheuneman, Janice – 1976
Test bias has traditionally been defined in terms of an outside criterion measure of the performance being predicted by the test. In test construction, where criterion-related validity data are usually not collected until after the test is completed, assessment of bias in the absence of outside criteria has become a vital issue. Here, an unbiased…
Descriptors: Evaluation Criteria, Item Analysis, Response Style (Tests), Statistical Analysis
Denson, Teri A. – 1976
Cognitive style research is hindered by a variety of poorly described measures and style classifications and the use of small samples. This study describes and compares three measures of three postulated dimensions of cognitive style: impulsivity/reflectivity; internal/external locus of control; and field dependence/field independence to determine…
Descriptors: Cognitive Style, Comparative Analysis, Correlation, Grade 7
Frederiksen, Norman – 1976
A number of different ways of ascertaining whether or not a test measures the same thing in different cultures are examined. Methods range from some that are obvious and simple to those requiring statistical and psychological sophistication. Simpler methods include such things as having candidates "think aloud" and interviewing them about how they…
Descriptors: Analysis of Covariance, Culture Fair Tests, Factor Analysis, Item Analysis
Peer reviewed: Dwyer, Carol A. – Reading Teacher, 1976
Descriptors: Educational Research, Elementary Education, Item Analysis, Reading Ability
Peer reviewed: Sheehan, Daniel S.; Davis, Robbie G. – School Science and Mathematics, 1979
The steps discussed are (a) stating the purpose of the battery, (b) specifying performance objectives, (c) generating an item pool, (d) item analysis, (e) item selection, (f) determining cut-off scores, and (g) validating the battery. (MP)
Descriptors: Criterion Referenced Tests, Elementary Secondary Education, Item Analysis, Mastery Tests
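The item-analysis step described above typically computes per-item difficulty and discrimination statistics. A minimal sketch, using made-up 0/1 response data (not from the article), of the two most common statistics — difficulty as the proportion correct, and discrimination as the uncorrected item-total point-biserial correlation:

```python
# Illustrative sketch: classical item analysis on a small 0/1 response matrix.
from statistics import mean, pstdev

# rows = examinees, columns = items; 1 = correct, 0 = incorrect (made-up data)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

n_items = len(responses[0])
totals = [sum(row) for row in responses]

def difficulty(item):
    """Proportion of examinees answering the item correctly (the p-value)."""
    return mean(row[item] for row in responses)

def discrimination(item):
    """Point-biserial correlation between the item and the total score
    (uncorrected: the item is included in the total)."""
    scores = [row[item] for row in responses]
    sx, sy = pstdev(scores), pstdev(totals)
    if sx == 0 or sy == 0:
        return 0.0
    cov = mean(x * y for x, y in zip(scores, totals)) - mean(scores) * mean(totals)
    return cov / (sx * sy)

for i in range(n_items):
    print(f"item {i}: p = {difficulty(i):.2f}, r_pb = {discrimination(i):.2f}")
```

Items with very high or very low p, or near-zero r_pb, are the usual candidates for revision at the item-selection step.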
Peer reviewed: Browning, Robert; And Others – Psychology in the Schools, 1979
Effects that item order and basal and ceiling rules have on test means, variances, and internal consistency estimates for the Peabody Individual Achievement Test mathematics and reading recognition subtests were examined. Items on the math and reading recognition subtests were significantly easier or harder than test placements indicated. (Author)
Descriptors: Achievement Tests, Elementary Education, Individual Testing, Item Analysis
Peer reviewed: Strang, Harold R. – Journal of Educational Measurement, 1977
The effects of option familiarity, length, and technicality on guessing on multiple-choice items were investigated in two experiments. College undergraduates generally tended to favor familiar, non-technical, and longer options when guessing on multiple-choice tests. (JKS)
Descriptors: Cues, Females, Guessing (Tests), Higher Education
Peer reviewed: Weber, Margaret B. – Educational and Psychological Measurement, 1977
Bilevel dimensionality of probability was examined via factor analysis, Rasch latent trait analysis, and classical item analysis. Results suggest that when nonstandardized measures are the criteria for achievement, relying solely on estimates of content validity may lead to erroneous interpretation of test score data. (JKS)
Descriptors: Achievement, Achievement Tests, Factor Analysis, Item Analysis
Peer reviewed: Dengerink, Joan E.; Bean, Roxanne E. – Language, Speech, and Hearing Services in Schools, 1988
Author-supplied item labels for two common speech discrimination tests were compared with those given spontaneously by 40 children (median age 5:5). Agreement between subjects' and authors' labels was 76.3 percent on the Word Intelligibility by Picture Identification test and 75 percent on the Northwestern University Children's Perception of…
Descriptors: Auditory Discrimination, Expressive Language, Item Analysis, Language Handicaps
Peer reviewed: Luecht, Richard M. – Educational and Psychological Measurement, 1987
Test Pac, a test scoring and analysis computer program for moderate-sized sample designs using dichotomous response items, performs comprehensive item analyses and multiple reliability estimates. It also performs single-facet generalizability analysis of variance, single-parameter item response theory analyses, test score reporting, and computer…
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Reviews, Item Analysis
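For dichotomous items like those Test Pac handles, the standard internal-consistency reliability estimate is Kuder-Richardson formula 20 (KR-20). A minimal sketch on made-up data (not the Test Pac program itself):

```python
# Illustrative sketch: KR-20 internal-consistency reliability for 0/1 items.
from statistics import mean, pvariance

def kr20(responses):
    """KR-20 = (k/(k-1)) * (1 - sum(p_j * q_j) / var(total scores)),
    where p_j is the proportion correct on item j and q_j = 1 - p_j."""
    k = len(responses[0])
    totals = [sum(row) for row in responses]
    var_total = pvariance(totals)
    pq = 0.0
    for j in range(k):
        p = mean(row[j] for row in responses)
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# rows = examinees, columns = items (made-up data)
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(f"KR-20 = {kr20(data):.3f}")  # → 0.800 for this toy matrix
```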
Peer reviewed: Shannon, Gregory A.; Cliver, Barbara A. – Journal of Educational Measurement, 1987
Spearman correlations were computed between item response theory-derived information functions (IIFs) and four conventional item discrimination indices: phi-coefficient; B-index; phi/phi max; and agreement statistic. Correlations between the phi-coefficient and the IIFs were very high. Data were taken from a real estate licensing test. (Author/GDC)
Descriptors: Adults, Comparative Analysis, Criterion Referenced Tests, Item Analysis
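The comparison above amounts to rank-correlating two per-item indices. A minimal sketch of a Spearman rank correlation between two hypothetical discrimination indices (the values below are invented, not the study's licensing-test data):

```python
# Illustrative sketch: Spearman rank correlation between two per-item indices.
def ranks(values):
    """Assign ranks 1..n, averaging tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

phi = [0.21, 0.35, 0.18, 0.42, 0.30]   # hypothetical phi-coefficients per item
info = [0.50, 0.90, 0.40, 1.20, 0.70]  # hypothetical item information values
print(f"Spearman rho = {spearman(phi, info):.3f}")
```

A rho near 1 here corresponds to the study's finding that the phi-coefficient ordered items much like the IRT information functions did.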
Peer reviewed: Schmitt, Alicia P. – Journal of Educational Measurement, 1988
Standardized methodology was used to help identify item characteristics explaining differential item functioning among Hispanics on Scholastic Aptitude Test (SAT), in two studies with 284,359 and 292,725 Whites, Mexican-Americans, and Puerto Ricans. Results indicate true cognates, or words with a common root in English and Spanish; and content of…
Descriptors: College Entrance Examinations, Cultural Influences, Hispanic Americans, Item Analysis
Peer reviewed: Sipps, Gary J.; And Others – Educational and Psychological Measurement, 1985
The current study explores the item structure of the Myers-Briggs Type Indicator (MBTI). A factor analysis of responses of a large sample yielded six salient factors, four resembling the four scales of the MBTI. (Author/LMO)
Descriptors: Factor Analysis, Factor Structure, Goodness of Fit, Higher Education
Dauzat, Sam V.; Dauzat, Jo Ann – Lifelong Learning, 1987
The authors discuss changes that will appear in the 1988 edition of the General Educational Development (GED) test. Changes include (1) a focus on comprehensive, integrated skills; (2) a need for highly developed thinking skills; (3) use of a computer technology theme; (4) writing assessments; and (5) changes in content areas. (CH)
Descriptors: Adult Education, Change Strategies, High School Equivalency Programs, Item Analysis