Publication Date
| In 2026 | 0 |
| Since 2025 | 74 |
| Since 2022 (last 5 years) | 509 |
| Since 2017 (last 10 years) | 1084 |
| Since 2007 (last 20 years) | 2603 |
Audience
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location
| Turkey | 173 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Peer reviewed
Voyce, Colleen D.; Jackson, Douglas N. – Educational and Psychological Measurement, 1977
A model designed to account for major factors on personality questionnaires is proposed and evaluated using the Differential Personality Inventory. Two respondent processes are postulated: sensitivity to the underlying desirability of items, and threshold for responding desirably. (Author/JKS)
Descriptors: Comparative Analysis, Factor Analysis, Higher Education, Item Analysis
Peer reviewed
Spada, Hans – Studies in Educational Evaluation, 1976
A properly chosen test model specifies, in the form of an empirically adequate theory, how the observable behavior depends on the underlying abilities of the persons tested. The limitations and potential of several test models are detailed, and the models are critically compared as an aspect of empirical evaluation. (Author/MV)
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Curriculum Evaluation, Educational Objectives
Peer reviewed
Wright, Benjamin D. – Journal of Educational Measurement, 1977
Statements made in a previous article in this journal concerning the Rasch latent trait test model are questioned. Methods of estimation, necessary sample sizes, several formulae, and the general usefulness of the Rasch model are discussed. (JKS)
Descriptors: Computers, Error of Measurement, Item Analysis, Mathematical Models
Peer reviewed
Whitely, Susan E. – Educational and Psychological Measurement, 1977
The verbal analogy item as a measure of intelligence is investigated. Using latent partition analysis, this study attempts to identify a semantic structure of relationships that individuals use to comprehend completed analogies. The implications for test construction and test validity are discussed. (Author/JKS)
Descriptors: Cognitive Processes, Higher Education, Intelligence, Intelligence Tests
Peer reviewed
Scherich, Henry H.; Hanna, Gerald S. – Educational and Psychological Measurement, 1977
The reading comprehension items for a revision of the Nelson Reading Skills Test were administered to several hundred fourth- and sixth-grade pupils in order to determine the passage dependency of each item. The passage dependency index was used to locate weak items. (Author/JKS)
Descriptors: Context Clues, Elementary School Students, Intermediate Grades, Item Analysis
Peer reviewed
Wiedl, Karl Heinz; Carlson, Jerry S. – Educational and Psychological Measurement, 1976
Factor analysis of an administration of the Raven Progressive Matrices Test to children in grades 1, 2, and 3 revealed three orthogonal factors interpreted as (1) concrete and abstract reasoning, (2) continuous and discrete pattern completion, and (3) pattern completion through disclosure. Results are discussed in several contexts. (RC)
Descriptors: Age, Elementary School Students, Factor Analysis, Factor Structure
Peer reviewed
Green, Samuel B.; Halpin, Gerald – Research in Higher Education, 1977
Students rated the quality of the items on a classroom test taken previously, and psychometric indices were calculated for each item. Results showed that student ratings were related to item difficulty and that better students rated items as less ambiguous. Ambiguity ratings were more highly related to the item-test correlations for better students. (Author/LBH)
Descriptors: Course Evaluation, High Achievement, Higher Education, Item Analysis
Peer reviewed
Valencia, Richard R.; Rankin, Richard J. – Journal of Educational Psychology, 1985
Content bias of the McCarthy Scales of Children's Abilities (MSCA) was investigated with dominant English- and dominant Spanish-speaking Mexican American preschoolers. The identified item bias (mostly against the Spanish language group) is discussed in terms of information overload on memory as influenced by language differences in word length and…
Descriptors: Intelligence Tests, Item Analysis, Language Dominance, Language Processing
Peer reviewed
Simmons, Johnny O. – Journal of Speech and Hearing Disorders, 1988
The Fluharty Preschool Speech and Language Screening Test was examined in terms of construct validity. Analysis of test results for 260 children (ages three to six) found that the internal consistency, discriminant analysis, and item difficulty results raised questions about the usefulness and appropriateness of many test items.…
Descriptors: Discriminant Analysis, Item Analysis, Language Acquisition, Language Handicaps
Sarvela, Paul D.; Noonan, John V. – Educational Technology, 1988
Describes measurement problems associated with computer based testing (CBT) programs when they are part of a computer assisted instruction curriculum. Topics discussed include CBT standards; the selection of item types; the contamination of items that arises from test design strategies; and the non-equivalence of comparison groups in item analyses. (8…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Item Analysis, Psychometrics
Peer reviewed
Kok, Frank G.; And Others – Journal of Educational Measurement, 1985
A mental multiplication test, containing items written in Dutch, Spanish, and Roman numerals, was administered to 286 Dutch students. Further instruction was given in either Spanish or Roman numerals, and a subtest combining languages was given. The iterative logit method was found to be useful in detecting biased test items. (GDC)
Descriptors: Dutch, Foreign Countries, Higher Education, Item Analysis
Peer reviewed
Harris, Deborah J.; Subkoviak, Michael J. – Educational and Psychological Measurement, 1986
This study examined three statistical methods for selecting items for mastery tests: (1) pretest-posttest; (2) latent trait; and (3) agreement statistics. The correlation between the latent trait method and agreement statistics, proposed here as an alternative, was substantial. Results for the pretest-posttest method confirmed its reputed…
Descriptors: Computer Simulation, Correlation, Item Analysis, Latent Trait Theory
Davis, Jack – Performance and Instruction, 1984
Presents the methodology and results of a performance appraisal research project in a financial organization. Findings indicate that a major part of the success of performance appraisal in increasing productivity may lie in training managers in its use. An example of the specifics of what this training should cover is included. (MBR)
Descriptors: Administrators, Banking, Data Analysis, Improvement
Peer reviewed
Bachman, Lyle F. – TESOL Quarterly, 1985
Describes a study that developed criteria for rationally selecting the words to delete in developing a cloze test. The study also tried to determine the extent to which performance on a cloze test with rational deletions differs from that on a cloze test developed by the fixed-ratio deletion procedure. (SED)
Descriptors: Cloze Procedure, English (Second Language), Higher Education, Item Analysis
Peer reviewed
Dunlap, William C. – Educational and Psychological Measurement, 1984
This paper presents an assessment instrument developed for use with both deaf-blind children and adults. Designed to assist in individual program planning and to measure progress, the 199-item inventory is organized in six major categories with 19 subsections. It has been field tested using 271 deaf-blind persons. (Author/BS)
Descriptors: Adults, Children, Deaf Blind, Diagnostic Tests


