Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 1
Source
Review of Educational Research: 2
Applied Psychological Measurement: 1
Educational and Psychological Measurement: 1
Journal of Educational Statistics: 1
Psychometrika: 1
Author
Bashaw, W. L.: 4
Rentz, R. Robert: 3
Abrahamowicz, Michal: 1
Airasian, Peter W.: 1
Baker, Frank B.: 1
Bart, William M.: 1
Brigman, S. Leellen: 1
Choppin, Bruce: 1
Dawis, Rene V.: 1
Dinero, Thomas E.: 1
Durovic, Jerry J.: 1
Publication Type
Reports - Research: 16
Journal Articles: 5
Speeches/Meeting Papers: 4
Reports - Evaluative: 2
Information Analyses: 1
Numerical/Quantitative Data: 1
Ramon Barrada, Juan; Veldkamp, Bernard P.; Olea, Julio – Applied Psychological Measurement, 2009
Computerized adaptive testing is subject to security problems, as the item bank content remains operative over long periods and administration time is flexible for examinees. Spreading the content of a part of the item bank could lead to an overestimation of the examinees' trait level. The most common way of reducing this risk is to impose a…
Descriptors: Item Banks, Adaptive Testing, Item Analysis, Psychometrics
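
The excerpt breaks off before naming the exposure-control constraint, so the sketch below only illustrates the general idea of limiting how often any item can be administered in a CAT: track empirical exposure rates and skip over-exposed items when selecting the next item. The function name, the 0.2 ceiling, and the fallback rule are illustrative assumptions, not Barrada, Veldkamp, and Olea's procedure.

```python
# Illustrative sketch (not the authors' method): restrict CAT item selection so that
# no item's empirical exposure rate exceeds a chosen ceiling.
import numpy as np

def select_item(info, administered_counts, n_examinees_so_far, max_rate=0.2):
    """Pick the most informative item whose exposure rate is still below max_rate.

    info                -- item information values at the current trait estimate
    administered_counts -- how many examinees have already seen each item
    n_examinees_so_far  -- number of examinees tested so far
    max_rate            -- illustrative exposure ceiling
    """
    rates = administered_counts / max(n_examinees_so_far, 1)
    eligible = rates < max_rate
    if not eligible.any():            # fall back if everything is over-exposed
        eligible[:] = True
    masked = np.where(eligible, info, -np.inf)
    return int(np.argmax(masked))

# Example: 5 items, the second one already over-exposed
info = np.array([0.8, 1.5, 0.6, 1.1, 0.9])
counts = np.array([10, 45, 12, 20, 8])
print(select_item(info, counts, n_examinees_so_far=200))  # item 1 is skipped (rate >= 0.2)
```
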
Forster, Fred – 1976
Various factors which influence the relationship between the Rasch item characteristic curve and the actual performance of an item are identified. The Rasch item characteristic curve is a new concept in test design and analysis. The Rasch test model provides information concerning the percent of students with a specified achievement level who…
Descriptors: Goodness of Fit, Item Analysis, Mathematical Models, Probability
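
For readers unfamiliar with the curve the abstract refers to, the Rasch item characteristic curve gives the probability of a correct response as a logistic function of the difference between a person's ability and the item's difficulty. The sketch below, with made-up ability and difficulty values, compares the model curve with an observed proportion correct; it is an illustration, not Forster's analysis.

```python
# Illustrative sketch: Rasch ICC, P(correct) = 1 / (1 + exp(-(theta - b))),
# compared with an observed proportion correct at a given ability level.
import math

def rasch_icc(theta, b):
    """Model probability that a person of ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical values: item difficulty b = 0.5 logits, group ability theta = 1.0 logits
expected = rasch_icc(theta=1.0, b=0.5)
observed = 38 / 60          # e.g., 38 of 60 students at that ability level answered correctly
print(f"model: {expected:.3f}, observed: {observed:.3f}, residual: {observed - expected:.3f}")
```
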
Durovic, Jerry J. – 1975
A test bias definition applicable at the item level of a test is presented. The definition conceptually equates test bias with measuring different things in different groups, and operationally equates test bias with a between-group difference in item fit to the Rasch model greater than one. It is suggested that the proposed definition avoids…
Descriptors: Content Analysis, Definitions, Item Analysis, Mathematical Models
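
The operational rule in the abstract (flag an item when its between-group difference in Rasch fit exceeds one) can be sketched as a simple screen. The excerpt does not say how the item-fit statistic is computed, so the fit values below are placeholders; this is an illustration of the decision rule only, not Durovic's analysis.

```python
# Illustrative screen (assumed fit statistic): flag items whose Rasch fit differs
# between two groups by more than 1, per the operational definition in the abstract.
def flag_biased_items(fit_group_a, fit_group_b, threshold=1.0):
    """Return indices of items whose between-group fit difference exceeds the threshold."""
    return [i for i, (fa, fb) in enumerate(zip(fit_group_a, fit_group_b))
            if abs(fa - fb) > threshold]

# Hypothetical item-fit values for two examinee groups
fit_a = [1.0, 0.9, 2.3, 1.1]
fit_b = [1.1, 1.0, 0.8, 1.2]
print(flag_biased_items(fit_a, fit_b))   # -> [2]
```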

Baker, Frank B. – Review of Educational Research, 1977
Recent advances in item analysis have provided greater capabilities for the analysis of tests, but have also significantly increased the gap between the theory and practice of item analysis. This paper traces the lines of development in item analysis under latent trait theory. (MV)
Descriptors: Correlation, Item Analysis, Latent Trait Theory, Mathematical Models
Waller, Michael I. – 1976
A method of estimating the parameters of the Rasch model while removing the effect of random guessing is presented. The procedure is an application of the ARRG (Abilities Removing Random Guessing) model recently developed for two-parameter latent trait models. Under the Rasch model, ARRG provides for estimation of abilities, removing the effects of…
Descriptors: Ability, Guessing (Tests), Item Analysis, Mathematical Models
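
The excerpt does not show the ARRG estimation equations, so the sketch below only illustrates the general idea of accounting for random guessing on top of a Rasch response function: a response is correct either through knowledge (the Rasch probability) or through a lucky guess among m options. The 1/m guessing floor is an assumption for illustration, not Waller's formulation.

```python
# Illustrative only (not Waller's ARRG equations): a Rasch probability adjusted for
# random guessing among m response options.
import math

def rasch(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def prob_correct_with_guessing(theta, b, m):
    """P(correct) if an examinee either knows the answer (Rasch) or guesses at random among m options."""
    p_know = rasch(theta, b)
    return p_know + (1.0 - p_know) / m

print(prob_correct_with_guessing(theta=-1.0, b=0.0, m=4))  # guessing inflates the low-ability end
```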

Wilcox, Rand R. – Educational and Psychological Measurement, 1979
Wilcox has described three probability models which characterize a single test item in terms of a population of examinees (ED 156 718). This note indicates that similar models can be derived which characterize a single examinee in terms of an item domain. A numerical illustration is given. (Author/JKS)
Descriptors: Achievement Tests, Item Analysis, Mathematical Models, Probability
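
As a rough illustration of "characterizing a single examinee in terms of an item domain," a simple binomial model treats the examinee's domain proportion correct as fixed and the observed score on an n-item sample as binomial. This is a generic sketch, not necessarily one of the models Wilcox derives.

```python
# Generic sketch (not necessarily Wilcox's models): if an examinee would answer a
# proportion p of the item domain correctly, the score on n sampled items is Binomial(n, p).
from math import comb

def score_distribution(n, p):
    """P(X = x) for x = 0..n under a binomial model of a single examinee's test score."""
    return [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

dist = score_distribution(n=5, p=0.7)
print([round(v, 3) for v in dist])      # e.g., P(all 5 correct) = 0.7**5 ≈ 0.168
```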

Thissen, David M. – 1977
Few phenomena in psychology have been studied more than reaction time. However, while response latency now provides the basic data for research in many areas of cognitive psychology, little work has been done on the problem of incorporating response latency in the measurement of individual differences. A solution to this problem requires the…
Descriptors: Cognitive Ability, Cognitive Tests, Individual Differences, Item Analysis
Rentz, R. Robert; Bashaw, W. L. – 1976
To determine whether Rasch Model procedures have any utility for equating pre-existing tests, data from the equating phase of the Anchor Test Study (ATS) were reanalyzed. This management report summarizes the work completed in the project, describes the differences between this project and the ATS, and presents recommendations and conclusions to the…
Descriptors: Elementary Education, Equated Scores, Item Analysis, Mathematical Models
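
The excerpt is a management summary and does not spell out the equating computations, but Rasch equating of existing tests is commonly described as shifting one form's difficulty (and hence ability) scale by a constant estimated from material common to both forms. The sketch below shows that mean-shift idea under that assumption; it is not the ATS reanalysis itself.

```python
# Illustrative sketch (assumed mean-shift linking, not the ATS reanalysis):
# place Form B's Rasch difficulties on Form A's scale using common items.
def linking_constant(b_form_a, b_form_b):
    """Mean difference in Rasch difficulties of the common items (A minus B)."""
    return sum(a - b for a, b in zip(b_form_a, b_form_b)) / len(b_form_a)

# Hypothetical common-item difficulties (logits) as calibrated on each form
common_a = [-0.8, 0.1, 0.9, 1.4]
common_b = [-1.1, -0.2, 0.6, 1.1]
shift = linking_constant(common_a, common_b)
form_b_unique = [-0.5, 0.3, 1.7]
on_a_scale = [b + shift for b in form_b_unique]
print(round(shift, 3), [round(v, 3) for v in on_a_scale])   # shift = 0.3 in this toy example
```
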
Rudner, Lawrence M. – 1977
Because it is a true score model employing item parameters which are independent of the examined sample, item characteristic curve theory (ICC) offers several advantages over classical measurement theory. In this paper an approach to biased item identification using ICC theory is described and applied. The ICC theory approach is attractive in that…
Descriptors: Bias, Criteria, Culture Fair Tests, Item Analysis
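
The excerpt does not state the specific statistic, so the sketch below shows one generic ICC-based comparison: estimate each group's item characteristic curve for the same item and measure how far apart the curves are across the ability range. The area-style index and the two-parameter curve are illustrative assumptions, not necessarily Rudner's procedure.

```python
# Generic illustration (assumed index, not necessarily Rudner's): compare two groups'
# ICCs for the same item by numerically integrating the gap between the curves.
import math

def icc(theta, a, b):
    """Two-parameter logistic ICC with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def icc_gap(params_group1, params_group2, lo=-4.0, hi=4.0, step=0.05):
    """Approximate area between the two group ICCs over [lo, hi]."""
    thetas = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return sum(abs(icc(t, *params_group1) - icc(t, *params_group2)) for t in thetas) * step

print(round(icc_gap((1.0, 0.0), (1.0, 0.6)), 3))   # larger gaps suggest the item functions differently
```
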
Dinero, Thomas E.; Haertel, Edward – 1976
This paper will discuss the results of a series of computer simulations comparing the Rasch logistic model to a series of models departing to various degrees from its assumption of equal discrimination power for all items. The results have implications for test construction and test scoring, indicating how closely the conventional raw score…
Descriptors: Comparative Analysis, Computer Programs, Goodness of Fit, Individual Differences
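
A compressed sketch of the kind of simulation the abstract describes: generate responses from items whose discriminations depart from the Rasch assumption of equality, then check how well the conventional raw score still orders examinees. The 2PL generator, sample sizes, and correlation check below are assumptions for illustration, not the authors' design.

```python
# Illustrative simulation (assumed design): generate 2PL data with unequal discriminations
# and check how well raw scores recover the abilities a Rasch analysis would assume.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 20
theta = rng.normal(size=n_persons)                 # true abilities
b = rng.normal(size=n_items)                       # item difficulties
a = rng.uniform(0.5, 2.0, size=n_items)            # unequal discriminations (violates Rasch)

p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # 2PL response probabilities
responses = (rng.random((n_persons, n_items)) < p).astype(int)
raw_scores = responses.sum(axis=1)

print(round(np.corrcoef(raw_scores, theta)[0, 1], 3))  # how well the raw score tracks ability
```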

van der Linden, Wim J. – Journal of Educational Statistics, 1978
Macready and Dayton introduced two probabilistic models for mastery assessment based on an idealistic all-or-none conception of mastery. Alternatively, an application of latent trait theory to mastery testing is proposed (a three parameter logistic model) as a more plausible model for test theory. (Author/CTM)
Descriptors: Criterion Referenced Tests, Guessing (Tests), Item Analysis, Latent Trait Theory
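
The three-parameter logistic model named in the abstract is standard; a minimal sketch of its response function follows, with placeholder parameter values. The mastery-decision machinery of the paper is not reproduced here.

```python
# Three-parameter logistic (3PL) response function referred to in the abstract;
# parameter values below are placeholders.
import math

def p_3pl(theta, a, b, c):
    """P(correct) with discrimination a, difficulty b, and lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

for theta in (-2.0, 0.0, 2.0):
    print(theta, round(p_3pl(theta, a=1.2, b=0.0, c=0.2), 3))
```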
Brigman, S. Leellen; Bashaw, W. L. – 1976
Procedures are presented for equating simultaneously several tests which have been calibrated by the Rasch Model. Three multiple test equating designs are described. A Full Matrix Design equates each test to all others. A Chain Design links tests sequentially. A Vector Design equates one test to each of the other tests. For each design, the Rasch…
Descriptors: Ability, Achievement Tests, Computer Programs, Equated Scores
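
The three designs in the abstract differ in which pairwise links are computed. A toy sketch of the Chain Design, where each test is linked to the next and shifts accumulate along the chain, is given below under the simplifying assumption that each pairwise link is a single Rasch shift constant; the Full Matrix and Vector Designs would use all pairwise links or links to one reference test, respectively. This is not the authors' computation.

```python
# Toy sketch of a Chain Design (assumed single shift constant per adjacent pair):
# cumulate pairwise Rasch linking constants so every test is expressed on test 1's scale.
from itertools import accumulate

# Hypothetical shifts (logits) between adjacent tests along the chain
pairwise_shifts = [0.30, -0.10, 0.25]           # test1->2, test2->3, test3->4

# Shift needed to bring each test onto test 1's scale
to_test1 = [0.0] + list(accumulate(pairwise_shifts))
print([round(s, 2) for s in to_test1])          # [0.0, 0.3, 0.2, 0.45]
```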
Choppin, Bruce – 1982
On well-constructed multiple-choice tests, the most serious threat to measurement is not variation in item discrimination, but the guessing behavior that may be adopted by some students. Ways of ameliorating the effects of guessing are discussed, especially for problems in latent trait models. A new item response model, including an item parameter…
Descriptors: Ability, Algorithms, Guessing (Tests), Item Analysis
Bart, William M.; Airasian, Peter W. – 1976
The question of whether test factor structure is indicative of the test item hierarchy was examined. Data from 1,000 subjects on two sets of five bivalued Law School Admission Test items, which had been analyzed with the latent trait methods of Bock and Lieberman and of Christoffersson in Psychometrika, were analyzed with an ordering-theoretic method to…
Descriptors: Comparative Analysis, Correlation, Factor Analysis, Factor Structure
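
The excerpt does not detail the ordering-theoretic method, but a common ordering-theoretic check treats item i as prerequisite to item j when almost no one passes j while failing i. The sketch below applies that generic rule with an arbitrary tolerance; it is an illustration under that assumption, not the authors' exact procedure.

```python
# Generic ordering-theoretic check (assumed tolerance, not the authors' exact method):
# item i precedes item j if the proportion of examinees who pass j but fail i is near zero.
import numpy as np

def prerequisite_pairs(responses, tol=0.03):
    """responses: 0/1 array (persons x items). Returns (i, j) pairs where i looks prerequisite to j."""
    n_persons, n_items = responses.shape
    pairs = []
    for i in range(n_items):
        for j in range(n_items):
            if i != j:
                violation = np.mean((responses[:, j] == 1) & (responses[:, i] == 0))
                if violation <= tol:
                    pairs.append((i, j))
    return pairs

# Tiny hypothetical data: item 0 easy, item 2 hard
resp = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
print(prerequisite_pairs(resp))   # -> [(0, 1), (0, 2), (1, 2)]
```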

Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
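
The excerpt does not give the reliability expression, so the sketch below only shows a common answer-until-correct scoring rule that such a model could be built on: award more credit the fewer attempts an examinee needs. The linear credit scheme is an illustrative assumption, not Kane and Moloney's formula.

```python
# Illustrative AUC scoring rule (assumed linear credit, not Kane and Moloney's model):
# with k options, an item answered correctly on attempt t earns (k - t) / (k - 1) credit.
def auc_item_score(attempts, k):
    """Credit for a k-option item answered correctly on the given attempt (1 = first try)."""
    if not 1 <= attempts <= k:
        raise ValueError("attempts must be between 1 and the number of options")
    return (k - attempts) / (k - 1)

for t in range(1, 5):
    print(t, round(auc_item_score(t, k=4), 3))   # 1.0, 0.667, 0.333, 0.0
```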