Hutchinson, T. P. – 1984
One means of learning about the processes operating in a multiple choice test is to include some test items, called nonsense items, which have no correct answer. This paper compares two versions of a mathematical model of test performance to interpret test data that includes both genuine and nonsense items. One formula is based on the usual…
Descriptors: Foreign Countries, Guessing (Tests), Mathematical Models, Multiple Choice Tests
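A minimal sketch of the classical knowledge-or-random-guessing account that such models usually start from (this is the standard textbook assumption, not necessarily Hutchinson's exact formula): an examinee either knows the answer or guesses uniformly among the options, so on a nonsense item with no correct answer every option is equally likely.

```python
def p_correct(k, m):
    """Probability of a correct response on a genuine item when the
    examinee knows the answer with probability k and otherwise
    guesses uniformly among m options."""
    return k + (1 - k) / m

def p_option_nonsense(m):
    """On a nonsense item no option is correct, so under blind
    guessing each of the m options is chosen with probability 1/m."""
    return 1 / m

# A pure guesser on a four-option genuine item scores at chance level:
chance = p_correct(k=0.0, m=4)  # 0.25
```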
Divgi, D. R. – 1978
One aim of criterion-referenced testing is to classify an examinee without reference to a norm group; therefore, any statements about the dependability of such classification ought to be group-independent also. A population-independent index is proposed in terms of the probability of incorrect classification near the cutoff true score. The…
Descriptors: Criterion Referenced Tests, Cutting Scores, Difficulty Level, Error of Measurement
Wilcox, Rand R.; Yeh, Jennie P. – 1979
The purpose of this paper is to derive explicit estimates of the parameters of a latent structure model when the skills represented by the test items are hierarchically related. Two special cases are described, which may be used as an approximation to the Dayton and Macready model and also provide initial estimates in an iterative estimation…
Descriptors: Achievement Tests, Elementary Secondary Education, Factor Analysis, Guessing (Tests)
Wright, Benjamin D. – 1998
In three lectures, Benjamin D. Wright of the University of Chicago introduces the Rasch model and its basic concepts. The first lecture, delivered March 30, 1994, discusses the model created by Georg Rasch, a Danish mathematician, which Dr. Wright initially saw as merely a way to make raw scores into measures. Eventually, the model developed into a…
Descriptors: Educational Testing, Estimation (Mathematics), Item Response Theory, Mathematical Models
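The dichotomous Rasch model the lectures introduce can be sketched briefly (a standard formulation, with the usual symbols: theta for person ability, b for item difficulty):

```python
import math

def rasch_p(theta, b):
    """Probability that a person of ability theta answers an item of
    difficulty b correctly under the dichotomous Rasch model:
    P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5,
# which is what makes raw scores interpretable as measures.
p_matched = rasch_p(theta=0.0, b=0.0)
```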
Peer reviewed: Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
Peer reviewed: van der Linden, Wim J. – Applied Psychological Measurement, 1979
The restrictions on item difficulties that must be met when binomial models are applied to domain-referenced testing are examined. Both a deterministic and a stochastic conception of item responses are discussed with respect to difficulty and Guttman-type items. (Author/BH)
Descriptors: Difficulty Level, Item Sampling, Latent Trait Theory, Mathematical Models
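The binomial error model underlying this kind of domain-referenced application can be sketched as follows (a standard formulation, assuming items are a random sample of equal difficulty, which is precisely the restriction the article examines):

```python
from math import comb

def binomial_score_prob(n_items, x, zeta):
    """P(X = x): probability of x correct responses out of n_items
    when the examinee's domain score (true proportion correct) is
    zeta, under the binomial test model."""
    return comb(n_items, x) * zeta**x * (1 - zeta) ** (n_items - x)

# Distribution of observed scores on a 10-item test for an examinee
# whose true domain score is 0.7:
dist = [binomial_score_prob(10, x, 0.7) for x in range(11)]
```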
Peer reviewed: Park, Dong-Gun; Lautenschlager, Gary J. – Applied Psychological Measurement, 1990
The effectiveness of two iterative methods of item response theory (IRT) item bias detection was examined in a simulation study. A modified form of the iterative item parameter linking method of F. Drasgow and an adaptation of the test purification procedure of F. M. Lord were compared. (SLD)
Descriptors: Ability Identification, Computer Simulation, Item Bias, Item Response Theory
Peer reviewed: Prien, Borge – Studies in Educational Evaluation, 1989
Under certain conditions it may be possible to determine the difficulty of previously untested test items. Although no recipe can be provided, reflections on this topic are presented, drawing on concepts of item banking. A functional constructive method is suggested as having the most potential. (SLD)
Descriptors: Difficulty Level, Educational Assessment, Foreign Countries, Item Analysis
Peer reviewed: Smith, Richard M. – Educational and Psychological Measurement, 1991
This study reports the results of a simulation-based investigation of the distributional properties of the item fit statistics commonly used in Rasch model calibration programs as indices of the fit of responses to individual items to the measurement model. (SLD)
Descriptors: Computer Simulation, Equations (Mathematics), Goodness of Fit, Item Response Theory
Mislevy, Robert J. – 1992
A closed form approximation is given for the variance of examinee proficiency estimates in the Rasch model for dichotomous items, under the condition that only estimates, rather than true values, of item difficulty parameters are available. The term that must be added to the usual response-sampling variance is inversely proportional to both the…
Descriptors: Academic Achievement, Achievement Tests, Equations (Mathematics), Estimation (Mathematics)
Fisk, Yvette Hester – 1991
The reasons for recent endeavors to evaluate item bias are discussed, and item bias is defined. Some of the literature regarding the most promising methods of detecting item bias is reviewed. Three classes of methods for detecting item bias are discussed using concrete examples and illustrations. These methods are: (1) latent trait; (2)…
Descriptors: Chi Square, Comparative Analysis, Difficulty Level, Item Bias
Samejima, Fumiko – 1990
A method is proposed that increases the accuracies of estimation of the operating characteristics of discrete item responses, especially when the true operating characteristic is represented by a steep curve, and also at the lower and upper ends of the ability distribution where the estimation tends to be inaccurate because of the smaller number…
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
Jannarone, Robert J. – 1986
Conjunctive item response models are introduced such that (a) sufficient statistics for latent traits are not necessarily additive in item scores; (b) items are not necessarily locally independent; and (c) existing compensatory (additive) item response models including the binomial, Rasch, logistic, and general locally independent model are…
Descriptors: Cognitive Structures, Equations (Mathematics), Estimation (Mathematics), Hypothesis Testing
Hwang, Chi-en; Cleary, T. Anne – 1986
The results obtained from two basic types of pre-equatings of tests were compared: the item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…
Descriptors: Computer Simulation, Equated Scores, Latent Trait Theory, Mathematical Models
Cobern, William W. – 1986
This computer program, written in BASIC, performs three different calculations of test reliability: (1) the Kuder-Richardson method; (2) the "common split-half" method; and (3) the Rulon-Guttman split-half method. The program reads sequential access data files for microcomputers that have been set up by statistical packages such as…
Descriptors: Computer Software, Difficulty Level, Educational Research, Equations (Mathematics)
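The first of the three calculations the program performs, the Kuder-Richardson method, can be sketched in modern terms (a minimal KR-20 implementation for dichotomous 0/1 responses; the original BASIC source is not reproduced here):

```python
def kr20(responses):
    """Kuder-Richardson formula 20 reliability.

    responses: list of rows, one per examinee, each a list of
    0/1 item scores. Uses the population variance of total scores.
    """
    n_people = len(responses)
    k = len(responses[0])
    # Proportion correct p_j for each item; each contributes p_j * q_j.
    p = [sum(row[j] for row in responses) / n_people for j in range(k)]
    sum_pq = sum(pj * (1 - pj) for pj in p)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n_people
    var = sum((t - mean) ** 2 for t in totals) / n_people
    return (k / (k - 1)) * (1 - sum_pq / var)

# Four examinees on a three-item test:
data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
r = kr20(data)  # 0.75
```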


