Showing 1 to 15 of 17 results
Ackerman, Terry A.; Spray, Judith A. – 1986
A model of test item dependency is presented and used to illustrate the effect that violations of local independence have on the behavior of item characteristic curves. The dependency model is flexible enough to simulate the interaction of a number of factors including item difficulty and item discrimination, varying degrees of item dependence,…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
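The abstract does not spell out the dependency model, but its premise can be illustrated with a hypothetical sketch: below, `dependence` is an invented mixing parameter, and at 0 the two items are conditionally independent given ability (local independence holds), while at 1 the second response simply copies the first.

```python
import math
import random

def icc(theta, b):
    """Rasch-type item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_pair(theta, b1, b2, dependence, rng):
    """Simulate two binary item responses for one examinee.
    With dependence = 0 the responses are locally independent;
    with probability `dependence` the second response copies the
    first, violating local independence."""
    u1 = rng.random() < icc(theta, b1)
    if rng.random() < dependence:
        u2 = u1
    else:
        u2 = rng.random() < icc(theta, b2)
    return int(u1), int(u2)

rng = random.Random(0)
pairs = [simulate_pair(0.0, -0.5, 0.5, 1.0, rng) for _ in range(1000)]
# Fully dependent items always agree:
print(all(u1 == u2 for u1, u2 in pairs))  # -> True
```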
Reckase, Mark D.; And Others – 1985
Factor analysis is the traditional method for studying the dimensionality of test data. However, under common conditions, the factor analysis of tetrachoric correlations does not recover the underlying structure of dichotomous data. The purpose of this paper is to demonstrate that the factor analysis of tetrachoric correlations is unlikely to…
Descriptors: Correlation, Difficulty Level, Factor Analysis, Item Analysis
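The issue motivating the paper can be shown numerically: even when two dichotomous items share the same latent correlation, their Pearson (phi) correlation shrinks as their difficulties diverge, which is why product-moment correlations of binary items can suggest spurious difficulty factors. A self-contained simulation sketch (all values illustrative):

```python
import math
import random

def simulate_phi(threshold1, threshold2, loading=0.6, n=20000, seed=0):
    """Phi (Pearson) correlation between two items produced by
    dichotomizing a shared normal factor at the given thresholds.
    A higher threshold means a harder item."""
    rng = random.Random(seed)
    a = math.sqrt(loading)          # common-factor loading
    e = math.sqrt(1.0 - loading)    # unique part
    xs, ys = [], []
    for _ in range(n):
        z = rng.gauss(0, 1)
        xs.append(1 if a * z + e * rng.gauss(0, 1) > threshold1 else 0)
        ys.append(1 if a * z + e * rng.gauss(0, 1) > threshold2 else 0)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return cov / math.sqrt(mx * (1 - mx) * my * (1 - my))

# Same latent structure, but matched vs. mismatched difficulties:
print(simulate_phi(0.0, 0.0) > simulate_phi(0.0, 2.0))  # -> True
```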
Peer reviewed
Jansen, Margo G. H. – Journal of Educational Statistics, 1986
In this paper a Bayesian procedure is developed for the simultaneous estimation of the reading ability and difficulty parameters that, under the multiplicative Poisson model, are assumed to govern reading errors. According to several criteria, the Bayesian estimates are better than comparable maximum likelihood estimates. (Author/JAZ)
Descriptors: Achievement Tests, Bayesian Statistics, Comparative Analysis, Difficulty Level
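The estimator itself is not given in the abstract, but a Poisson rate admits a conjugate gamma prior with a closed-form posterior mean; the following hypothetical sketch (prior values invented) shows the kind of shrinkage a Bayesian estimate applies to a raw error rate:

```python
def posterior_mean_rate(errors, exposure, prior_shape, prior_rate):
    """Posterior mean of a Poisson rate under a conjugate
    Gamma(prior_shape, prior_rate) prior:
    (prior_shape + errors) / (prior_rate + exposure)."""
    return (prior_shape + errors) / (prior_rate + exposure)

# 3 reading errors in 100 words, shrunk toward a prior rate of
# 5/100 = 0.05; the maximum likelihood estimate would be 0.03.
print(posterior_mean_rate(3, 100, 5.0, 100.0))  # -> 0.04
```

The shrinkage toward the prior is what tends to make such estimates outperform maximum likelihood when individual exposure counts are small.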
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Muraki, Eiji – 1984
The TESTFACT computer program and full-information factor analysis of test items were used in a computer simulation conducted to correct for the guessing effect. Full-information factor analysis also corrects for omitted items. The present version of TESTFACT handles up to five factors and 150 items. A preliminary smoothing of the tetrachoric…
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Correlation
Gustafsson, Jan-Eric – 1979
Problems and procedures in assessing and obtaining fit of data to the Rasch model are treated, and the assumptions embodied in the Rasch model are made explicit. It is concluded that statistical tests are needed that are sensitive to deviations under which more than one item parameter would be needed for each item, and more than one person parameter would…
Descriptors: Ability, Difficulty Level, Goodness of Fit, Item Analysis
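For reference, the Rasch model the abstract scrutinizes assigns exactly one parameter to each item and one to each person; a minimal sketch:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model:
    one person parameter (theta) and one item parameter (b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability is exactly 0.5:
print(rasch_prob(0.0, 0.0))  # -> 0.5
```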
Zwick, Rebecca – 1986
Although perfectly scalable items rarely occur in practice, Guttman's concept of a scale has proved to be valuable to the development of measurement theory. If the score distribution is uniform and there is an equal number of items at each difficulty level, both the elements and the eigenvalues of the Pearson correlation matrix of dichotomous…
Descriptors: Correlation, Difficulty Level, Item Analysis, Latent Trait Theory
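In Guttman's sense, a set of items is perfectly scalable when, with items ordered from easiest to hardest, every examinee who passes an item also passes all easier ones; a small sketch checking that property:

```python
def is_guttman_scalable(patterns):
    """True if every response pattern is a 'step' pattern once items
    are ordered from easiest to hardest: all 1s precede all 0s.
    patterns: list of lists of 0/1, columns ordered easy -> hard."""
    for p in patterns:
        seen_zero = False
        for x in p:
            if x == 1 and seen_zero:
                return False
            if x == 0:
                seen_zero = True
    return True

print(is_guttman_scalable([[1, 1, 0], [1, 0, 0], [1, 1, 1]]))  # -> True
print(is_guttman_scalable([[0, 1, 1]]))                        # -> False
```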
Wise, Lauress L. – 1986
A primary goal of this study was to determine the extent to which item difficulty was related to item position and, if a significant relationship was found, to suggest adjustments to predicted item difficulty that reflect differences in item position. Item response data from the Medical College Admission Test (MCAT) were analyzed. A data set was…
Descriptors: College Entrance Examinations, Difficulty Level, Educational Research, Error of Measurement
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
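The three-parameter logistic model being fitted adds a lower asymptote for guessing to the logistic curve; a minimal sketch, using the conventional D = 1.7 scaling that brings the logistic close to the normal ogive metric the data were generated under:

```python
import math

def three_pl(theta, a, b, c):
    """Three-parameter logistic ICC: discrimination a, difficulty b,
    pseudo-guessing lower asymptote c. The constant D = 1.7 makes
    the logistic approximate the normal ogive."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

# At theta = b the curve sits halfway between c and 1:
print(three_pl(0.0, 1.0, 0.0, 0.25))  # -> 0.625
```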
Reckase, Mark D. – 1985
Multidimensional item difficulty (MID) is proposed as a means of describing test items which measure more than one ability. With mathematical story problems, for instance, both mathematical and verbal skills are required to obtain a correct answer. The proposed measure of MID is based upon three general assumptions: (1) the probability of…
Descriptors: Ability Identification, College Entrance Examinations, College Mathematics, Difficulty Level
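The abstract truncates before the definition; in the formulation commonly attributed to Reckase for a compensatory item with logit a·θ + d, multidimensional difficulty is the signed distance from the origin to the P = 0.5 point, taken along the direction of steepest slope. A sketch under that assumption:

```python
import math

def multidimensional_difficulty(a, d):
    """Sketch of multidimensional item difficulty for a compensatory
    item with logit a . theta + d: returns the signed distance from
    the origin to the P = 0.5 point (-d / ||a||) and the direction
    cosines (a / ||a||) of the steepest-slope direction."""
    norm = math.sqrt(sum(ai * ai for ai in a))
    return -d / norm, [ai / norm for ai in a]

# An item requiring both mathematical and verbal skill, equally:
mdiff, direction = multidimensional_difficulty([1.0, 1.0], -1.0)
print(round(mdiff, 4))  # -> 0.7071
```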
Tucker, Ledyard R.; And Others – 1986
A Monte Carlo study of five indices of dimensionality of binary items used a computer model that allowed sampling of both items and people. Five parameters were systematically varied in a factorial design: (1) number of common factors from one to five; (2) number of items, including 20, 30, 40, and 60; (3) sample sizes of 125 and 500; (4) nearly…
Descriptors: Correlation, Difficulty Level, Educational Research, Expectancy Tables
PDF pending restoration
Reckase, Mark D. – 1986
The work presented in this paper conceptually defined multidimensional discrimination and information, derived mathematical expressions for these concepts for a particular multidimensional item response theory (IRT) model, and applied them to actual test data. Multidimensional discrimination was defined as a function of the…
Descriptors: College Entrance Examinations, Difficulty Level, Discriminant Analysis, Item Analysis
Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting which assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation
Farish, Stephen J. – 1984
The stability of Rasch test item difficulty parameters was investigated under varying conditions. Data were taken from a mathematics achievement test administered to over 2,000 Australian students. The experiments included: (1) relative stability of the Rasch, traditional, and z-item difficulty parameters using different sample sizes and designs;…
Descriptors: Achievement Tests, Difficulty Level, Estimation (Mathematics), Foreign Countries
Kingston, Neal M. – 1985
Birnbaum's three-parameter logistic item response model was used to study guessing behavior of low ability examinees on the Graduate Record Examinations (GRE) General Test, Verbal Measure. GRE scoring procedures had recently changed, from a scoring formula which corrected for guessing, to number-right scoring. The three-parameter theory was used…
Descriptors: Academic Aptitude, Analysis of Variance, College Entrance Examinations, Difficulty Level
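The scoring change the abstract refers to can be made concrete: the classical formula score penalizes each wrong answer by 1/(k - 1) on k-choice items, so random guessing has zero expected gain, whereas number-right scoring rewards any guess. A sketch (values illustrative):

```python
def formula_score(right, wrong, choices):
    """Classical corrected-for-guessing score: R - W / (k - 1).
    Omitted items are neither rewarded nor penalized."""
    return right - wrong / (choices - 1)

# 40 right, 10 wrong on 5-choice items:
print(formula_score(40, 10, 5))  # -> 37.5
# Pure random guessing on 50 such items (expect 10 right, 40 wrong)
# scores 0 under the formula, but 10 under number-right scoring,
# so low-ability examinees now gain by guessing instead of omitting.
print(formula_score(10, 40, 5))  # -> 0.0
```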