Skaggs, Gary; Stevenson, Jose – 1986
This study assesses the accuracy of ASCAL, a microcomputer-based program for estimating item parameters for the three-parameter logistic model in item response theory. Item responses are generated from a three-parameter model, and item parameter estimates from ASCAL are compared to the generating item parameters and to estimates produced by…
Descriptors: Algorithms, Comparative Analysis, Computer Software, Estimation (Mathematics)
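As a rough illustration of the data-generation step described in the Skaggs and Stevenson abstract, the sketch below simulates binary responses from a standard three-parameter logistic (3PL) model in Python. The sample sizes and parameter ranges are illustrative assumptions, not the values used in the study, and ASCAL itself is not involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b, c):
    """3PL probability of a correct response (with the usual D = 1.7 scaling)."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

# Illustrative generating parameters (not the study's values).
n_persons, n_items = 1000, 40
theta = rng.normal(0.0, 1.0, n_persons)   # person abilities
a = rng.uniform(0.5, 2.0, n_items)        # item discriminations
b = rng.normal(0.0, 1.0, n_items)         # item difficulties
c = rng.uniform(0.1, 0.3, n_items)        # lower asymptotes ("guessing")

# Persons-by-items matrix of simulated 0/1 responses; parameter estimates
# recovered from a matrix like this would be compared back to a, b, and c.
prob = p_correct(theta[:, None], a, b, c)
responses = (rng.random((n_persons, n_items)) < prob).astype(int)
```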
Johnston, Peter; Afflerbach, Peter – 1982
A study examined the nature of the questions contained in two major standardized reading comprehension tests in terms of their centrality to the text. It was hypothesized that the use of a discrimination index for item selection would tend to favor relatively trivial questions. Half the reading selections from the Stanford Diagnostic Reading Test…
Descriptors: Comparative Analysis, Higher Education, Item Analysis, Reading Comprehension
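For context on the hypothesis above, the discrimination index is commonly computed as the difference in proportion correct between high- and low-scoring groups; items answered correctly by nearly everyone (or almost no one) discriminate poorly and tend to be dropped. The function below is a generic sketch, not the procedure used with the tests analyzed in the study.

```python
import numpy as np

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Upper-lower discrimination index: proportion correct in the top-scoring
    group minus proportion correct in the bottom-scoring group."""
    item_scores = np.asarray(item_scores)
    total_scores = np.asarray(total_scores)
    k = max(1, int(round(fraction * len(total_scores))))
    order = np.argsort(total_scores)
    low, high = order[:k], order[-k:]
    return item_scores[high].mean() - item_scores[low].mean()
```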
Stocking, Martha L.; Lord, Frederic M. – 1982
A common problem arises in scale transformation when independent estimates of item parameters from two separate data sets must be expressed in the same metric. These item parameter estimates will be different because the metric or scale defined by each independent calibration of the items is different. The problem is frequently confronted in…
Descriptors: Data Analysis, Equated Scores, Item Analysis, Item Banks
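The metric problem described in the Stocking and Lord abstract is conventionally resolved with a linear transformation of the latent scale. Assuming 3PL-type parameters and a slope A and intercept B linking the two calibrations (Stocking and Lord obtain them from test characteristic curves), the standard relations are:

```latex
\theta^{*} = A\,\theta + B, \qquad
a^{*} = \frac{a}{A}, \qquad
b^{*} = A\,b + B, \qquad
c^{*} = c
```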
Miller, M. David; Burstein, Leigh – 1981
The multilevel characteristics of test item data are considered as a means of examining standardized norm-referenced tests. A theoretical rationale for examining multilevel characteristics is presented; it can be used as an aid in understanding why program and instructional effects on measures constructed from…
Descriptors: Achievement Tests, Elementary Secondary Education, Item Analysis, Norm Referenced Tests
Powers, Stephen; And Others – 1985
In a study of the usefulness of the Rasch model for examining tests for possible bias, 102 native Spanish-speaking and 104 native English-speaking preschool four-year-olds in a remedial education program were administered Spanish and English versions of the Cooperative Preschool Inventory, a standardized measure of school readiness. The Rasch…
Descriptors: Achievement Rating, Comparative Analysis, Ethnic Groups, Item Analysis
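For reference, the Rasch model used in the Powers study gives the probability of a correct response from a person ability and an item difficulty alone, which is why comparing difficulty estimates across language or ethnic groups serves as a bias check; the formula below is the standard model, not anything specific to this study.

```latex
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
```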
Doolittle, Allen E. – 1985
Differential item performance (DIP) is discussed as a concept that does not necessarily imply item bias or unfairness to subgroups of examinees. With curriculum-based achievement tests, DIP is presented as a valid reflection of group differences in requisite skills and instruction. Using data from a national testing of the ACT Assessment, this…
Descriptors: Achievement Tests, High Schools, Item Analysis, Mathematics Achievement
Kalisch, Stanley J. – Journal of Computer-Based Instruction, 1974
A tailored testing model is proposed that employs conditional item difficulties and a beta distribution whose mean equals the difficulty of an item and whose variance is approximately equal to the sampling variance of that item difficulty. (Author)
Descriptors: Adaptive Testing, Computer Assisted Testing, Evaluation Methods, Item Analysis
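As an illustration of the beta-distribution device in the Kalisch abstract, matching the stated mean (the item difficulty) and variance by the method of moments yields the two shape parameters. The sketch below is a generic moment-matching computation, not the author's tailored testing procedure.

```python
def beta_params(mean, var):
    """Method-of-moments shape parameters for a Beta distribution with the
    given mean and variance (requires 0 < var < mean * (1 - mean))."""
    if not 0.0 < mean < 1.0 or var <= 0.0 or var >= mean * (1.0 - mean):
        raise ValueError("mean/variance pair is not attainable by a Beta distribution")
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu

# Example: difficulty (proportion correct) 0.6 with sampling variance 0.01
# corresponds to roughly Beta(13.8, 9.2).
alpha, beta = beta_params(0.6, 0.01)
```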
Bradshaw, Charles W., Jr. – 1968
A method for determining invariant item parameters is presented, along with a scheme for obtaining test scores which are interpretable in terms of a common metric. The method assumes a unidimensional latent trait and uses a three parameter normal ogive model. The assumptions of the model are explored, and the methods for calculating the proposed…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Mathematical Models
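The three-parameter normal ogive model assumed in the Bradshaw paper has the standard form below, where Φ is the standard normal cumulative distribution function and a, b, and c are the item discrimination, difficulty, and lower asymptote:

```latex
P(X = 1 \mid \theta) = c + (1 - c)\,\Phi\bigl(a(\theta - b)\bigr)
```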
Romaniuk, E. W.; Montgomerie, T. C. – 1976
This paper traces the evolution of a set of student performance analysis programs that has advanced from a crude manual system to an easy-to-use, on-line, interactive system. Recently, emphasis has focused upon the development of a system that requires little computer expertise on the part of the author to obtain a concise, easy-to-read…
Descriptors: Computer Assisted Instruction, Computer Oriented Programs, Course Evaluation, Higher Education
Bart, William M.; Lele, Kaustubh – 1977
One hundred eighty-one sets of black twins and 223 sets of white twins provided responses to four 12-item subtests of the Raven's Progressive Matrices Test, Standard Version. The children were in elementary school, and their item response patterns were analyzed with revised ordering-theoretic methods to search for best-fitting…
Descriptors: Black Students, Comparative Testing, Elementary Education, Elementary School Students
Reigeluth, Charles M. – 1978
The Time-shared Interactive Computer-Controlled Information Television (TICCIT) system represents a considerable technological advance over previous CAI systems, primarily because of its unprecedented foundation in instructional theory. This paper briefly describes the theory-base of the TICCIT system; it summarizes some recent advances in…
Descriptors: Computer Assisted Instruction, Decision Making, Instructional Design, Instructional Innovation
Lutkus, Anthony D.; Laskaris, George – 1981
Analyses of student responses to Introductory Psychology test questions are discussed. The publisher supplied a 2,000-item test bank on computer tape. Instructors selected questions for fifteen-item tests. The test questions were labeled by the publisher as factual or conceptual. The semester course used a mastery learning format in which…
Descriptors: Difficulty Level, Higher Education, Item Analysis, Item Banks
Smith, Douglas U. – 1978
This study examined the effects of certain item selection methods on the classification accuracy and classification consistency of criterion-referenced instruments. Three item response data sets, representing varying situations of instructional effectiveness, were simulated. Five methods of item selection were then applied to each data set for the…
Descriptors: Criterion Referenced Tests, Item Analysis, Item Sampling, Latent Trait Theory
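Classification consistency, one of the criteria named in the Smith abstract, is often summarized as the proportion of examinees assigned to the same mastery category by two parallel forms relative to a common cut score. The sketch below is an illustrative computation under that reading, not the study's exact index.

```python
import numpy as np

def classification_consistency(scores_form_a, scores_form_b, cut_score):
    """Proportion of examinees classified the same way (master / non-master)
    by two parallel test forms, given a common cut score."""
    a_pass = np.asarray(scores_form_a) >= cut_score
    b_pass = np.asarray(scores_form_b) >= cut_score
    return float(np.mean(a_pass == b_pass))
```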
Australian Council for Educational Research, Hawthorn. – 1978
This teacher's manual describes major ways in which the accompanying Item Bank can be used--that is, for formative evaluation, classroom discussion and investigation, summative tests, suggestions for new subject matter, and as models for multiple-choice questions. Rationale for the use of multiple-choice questions and the parts of a typical entry…
Descriptors: Foreign Countries, Item Analysis, Item Banks, Multiple Choice Tests
Haladyna, Tom – 1978
The lack of a suitable research instrument on attitudes of elementary school children toward school and subject matters has limited the quality and extent of research on school attitudes. The Affective Reporting System was conceived to fill this need. It consists of two instruments--ME and What I Like Best (WILB), each possessing two versions:…
Descriptors: Attitude Measures, Correlation, Elementary Education, Factor Analysis


