Wilcox, Rand R. – Educational and Psychological Measurement, 1979
Wilcox has described three probability models which characterize a single test item in terms of a population of examinees (ED 156 718). This note indicates that similar models can be derived which characterize a single examinee in terms of an item domain. A numerical illustration is given. (Author/JKS)
Descriptors: Achievement Tests, Item Analysis, Mathematical Models, Probability
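
To make the flavor of such models concrete, here is a minimal sketch in illustrative notation (not taken from the paper): if $\zeta$ is an examinee's domain score, i.e., the proportion of items in the domain the examinee can answer correctly, then the number of correct responses $X$ on an $n$-item test sampled from the domain can be given a binomial characterization,

$$\Pr(X = x \mid \zeta) = \binom{n}{x}\,\zeta^{x}(1-\zeta)^{n-x}.$$

The item-centered models reverse the roles, with the parameter describing an item's difficulty over the population of examinees.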

Hattie, John – Multivariate Behavioral Research, 1984
This paper describes a simulation that determines the adequacy of various indices as decision criteria for assessing unidimensionality. Using the sum of absolute residuals from the two-parameter latent trait model, indices were obtained that could discriminate between one latent trait and more than one latent trait. (Author/BW)
Descriptors: Achievement Tests, Latent Trait Theory, Mathematical Models, Monte Carlo Methods
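
For reference, the two-parameter latent trait (logistic) model from which the residuals are computed is, in standard notation,

$$P_i(\theta) = \frac{1}{1 + \exp[-a_i(\theta - b_i)]},$$

where $a_i$ is the discrimination and $b_i$ the difficulty of item $i$; an index of the kind studied here sums $|x_{ij} - \hat{P}_i(\hat{\theta}_j)|$ over items and examinees (the exact index definitions are given in the paper).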

Harris, Chester W.; Pearlman, Andrea Pastorok – Journal of Educational Statistics, 1978
A theory and a procedure are presented for estimating a domain parameter and item parameters for test items in a homogeneous domain, such that the combined domain and item parameters account for the observed proportion correct on each item in a test. (CTM)
Descriptors: Achievement Tests, Difficulty Level, Item Analysis, Mathematical Models

Gressard, Risa P.; Loyd, Brenda H. – Journal of Educational Measurement, 1991
A Monte Carlo study, which simulated 10,000 examinees' responses to four tests, investigated the effect of item stratification on parameter estimation in multiple matrix sampling of achievement data. Practical multiple matrix sampling is based on item stratification by item discrimination and a sampling plan with a moderate number of subtests. (SLD)
Descriptors: Achievement Tests, Comparative Testing, Computer Simulation, Estimation (Mathematics)
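
A hypothetical sketch of this kind of simulation (not the authors' code; the two-parameter logistic generating model and all parameter distributions are assumptions made here for illustration):

```python
# Simulate 10,000 examinees' dichotomous responses under a 2PL model,
# then form matrix-sampling subtests by stratifying items on their
# discrimination parameters (round-robin assignment across strata).
import numpy as np

rng = np.random.default_rng(0)

n_examinees, n_items, n_subtests = 10_000, 60, 4
theta = rng.normal(0.0, 1.0, n_examinees)    # abilities
a = rng.lognormal(0.0, 0.3, n_items)         # discriminations
b = rng.normal(0.0, 1.0, n_items)            # difficulties

# 2PL response probabilities and simulated 0/1 responses
p = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
responses = (rng.random((n_examinees, n_items)) < p).astype(int)

# Stratify by discrimination: sort items on a, then deal them out so
# each subtest spans the full range of discriminations.
order = np.argsort(a)
subtests = [order[k::n_subtests] for k in range(n_subtests)]

for k, items in enumerate(subtests):
    print(f"subtest {k}: mean a = {a[items].mean():.2f}, "
          f"proportion correct = {responses[:, items].mean():.2f}")
```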

Feldt, Leonard S. – Educational and Psychological Measurement, 1984
The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 (KR-21) as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR-20 as the reliability estimate. (Author/BW)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas
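
For reference, with $k$ items, item proportions correct $p_j$, mean total score $\bar{X}$, and total-score variance $s_X^2$, the two estimates are

$$\mathrm{KR}\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} p_j(1-p_j)}{s_X^2}\right), \qquad \mathrm{KR}\text{-}21 = \frac{k}{k-1}\left(1 - \frac{\bar{X}(k-\bar{X})}{k\,s_X^2}\right).$$

KR-21 is what KR-20 reduces to when all items are assumed equally difficult; it is never larger than KR-20, consistent with the extra difficulty component being counted as error.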

Wilcox, Rand R.; Yeh, Jennie P. – 1979
The purpose of this paper is to derive explicit estimates of the parameters of a latent structure model when the skills represented by the test items are hierarchically related. Two special cases are described, which may be used as an approximation to the Dayton and Macready model and also provide initial estimates in an iterative estimation…
Descriptors: Achievement Tests, Elementary Secondary Education, Factor Analysis, Guessing (Tests)

Mislevy, Robert J. – 1992
A closed form approximation is given for the variance of examinee proficiency estimates in the Rasch model for dichotomous items, under the condition that only estimates, rather than true values, of item difficulty parameters are available. The term that must be added to the usual response-sampling variance is inversely proportional to both the…
Descriptors: Academic Achievement, Achievement Tests, Equations (Mathematics), Estimation (Mathematics)
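
For reference, the Rasch model for dichotomous items gives the probability that examinee $j$ answers item $i$ correctly as

$$\Pr(X_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)},$$

with proficiency $\theta_j$ and item difficulty $b_i$; the correction at issue here is the extra variance in $\hat{\theta}_j$ that arises when the $b_i$ are replaced by estimates $\hat{b}_i$.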

Muthen, Bengt O. – Psychometrika, 1989
A method for detecting instructional sensitivity (item bias) in test items is proposed. This method extends item response theory by allowing for item-specific variation in measurement relations across students' varying instructional backgrounds. Item bias detection is a by-product. Traditional and new methods are compared. (SLD)
Descriptors: Achievement Tests, Educational Background, Educational Opportunities, Elementary Secondary Education

Adema, Jos J. – 1989
Item banks, large sets of test items, can be used for the construction of achievement tests. Mathematical programming models have been proposed for the selection of items from an item bank for a test. These models make automated test construction possible. However, to find an optimal or even an approximate optimal solution to a test construction…
Descriptors: Achievement Tests, Computer Assisted Testing, Computer Software, Item Banks
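
A hypothetical sketch of such a model (not Adema's formulation; the item-information objective, the toy numbers, and the use of the PuLP package are all assumptions made here for illustration):

```python
# A 0-1 linear program that selects items from a bank to maximize test
# information at a target ability level, subject to a fixed test length.
import pulp

# Toy bank: per-item information values at the target ability level
# (in practice these come from calibrated item parameters).
info = [0.42, 0.31, 0.55, 0.12, 0.47, 0.29, 0.60, 0.18]
test_length = 3

prob = pulp.LpProblem("test_assembly", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(info))]

prob += pulp.lpSum(info[i] * x[i] for i in range(len(info)))  # objective
prob += pulp.lpSum(x) == test_length                          # length constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected items:", [i for i, xi in enumerate(x) if xi.value() == 1])
```

Practical models add many side constraints (content coverage, item format, timing), which is what makes exact solutions to larger test construction problems expensive.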

Wilcox, Rand R. – 1978
Two fundamental problems in mental test theory are to estimate true score and to estimate the amount of error when testing an examinee. In this report, three probability models which characterize a single test item in terms of a population of examinees are described. How these models may be modified to characterize a single examinee in terms of an…
Descriptors: Achievement Tests, Comparative Analysis, Error of Measurement, Mathematical Models

Albanese, Mark A.; Forsyth, Robert A. – Educational and Psychological Measurement, 1984
The purpose of this study was to compare the relative robustness of the one-, two-, and modified two-parameter latent trait logistic models for the Iowa Tests of Educational Development. Results suggest that the modified two-parameter model may provide the best representation of the data. (Author/BW)
Descriptors: Achievement Tests, Comparative Analysis, Goodness of Fit, Item Analysis

Forster, Fred; And Others – 1978
Research on the Rasch model of test and item analysis was applied to tests constructed from reading and mathematics item banks, with respect to five practical problems in scaling items and equating test forms. The questions were: (1) Does the Rasch model yield the same scale value regardless of the student sample? (2) How many students are…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores
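
A hypothetical sketch of how question (1) might be examined by simulation (not the authors' procedure; the log-odds difficulty approximation and all parameter values are assumptions made here for illustration):

```python
# Estimate Rasch item difficulties from two student samples of different
# ability and compare: sample-invariant scale values should agree closely.
import numpy as np

rng = np.random.default_rng(1)
n_items = 40
b_true = rng.normal(0.0, 1.0, n_items)        # true item difficulties

def estimated_difficulties(mean_theta, n_examinees=2000):
    theta = rng.normal(mean_theta, 1.0, n_examinees)
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_true[None, :])))
    resp = (rng.random((n_examinees, n_items)) < p).astype(int)
    p_item = resp.mean(axis=0)                 # proportion correct per item
    b_hat = np.log((1.0 - p_item) / p_item)    # log-odds difficulty estimate
    return b_hat - b_hat.mean()                # center the scale at zero

# Low-ability vs. high-ability samples: under the Rasch model the centered
# difficulty estimates should line up despite the ability difference.
b_low = estimated_difficulties(-0.5)
b_high = estimated_difficulties(+0.5)
print("between-sample correlation:", np.corrcoef(b_low, b_high)[0, 1])
```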

Wilcox, Rand R. – 1981
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
Descriptors: Achievement Tests, Criterion Referenced Tests, Guessing (Tests), Mathematical Models

Douglass, James B. – 1980
The three-, two-, and one-parameter (Rasch) logistic item characteristic curve models are compared for use in a large multi-section college course. Only the three-parameter model produced clearly unacceptable parameter estimates for 100-item tests with examinee samples ranging from 594 to 1082. The Rasch and two-parameter models were compared for…
Descriptors: Academic Ability, Achievement Tests, Course Content, Difficulty Level
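
For reference, the three-parameter logistic model is, in standard notation,

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp[-a_i(\theta - b_i)]};$$

the two-parameter model fixes the pseudo-guessing parameter $c_i = 0$, and the one-parameter (Rasch) model additionally constrains all $a_i$ to a common value. The pseudo-guessing parameter is typically the hardest to estimate, which is consistent with the unacceptable three-parameter estimates reported here.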

Engelhard, George, Jr. – 1980
The Rasch model is described as a latent trait model which meets the five criteria that characterize reasonable and objective measurements of an individual's ability independent of the test items used. The criteria are: (1) calibration of test items must be independent of particular norming groups; (2) measurement of individuals must be…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores