Descriptor
| Mathematical Models | 28 |
| Statistical Studies | 28 |
| Test Items | 28 |
| Latent Trait Theory | 24 |
| Item Analysis | 16 |
| Difficulty Level | 15 |
| Test Theory | 10 |
| Estimation (Mathematics) | 8 |
| Test Construction | 8 |
| College Entrance Examinations | 6 |
| Computer Simulation | 6 |
Publication Type
| Reports - Research | 25 |
| Speeches/Meeting Papers | 16 |
| Journal Articles | 5 |
| Reports - Evaluative | 2 |
| Collected Works - Proceedings | 1 |
| Numerical/Quantitative Data | 1 |
Audience
| Researchers | 17 |
Location
| Australia | 1 |
Assessments and Surveys
| ACT Assessment | 3 |
| Graduate Record Examinations | 1 |
| Medical College Admission Test | 1 |
| SAT (College Admission Test) | 1 |
Ackerman, Terry A.; Spray, Judith A. – 1986
A model of test item dependency is presented and used to illustrate the effect that violations of local independence have on the behavior of item characteristic curves. The dependency model is flexible enough to simulate the interaction of a number of factors including item difficulty and item discrimination, varying degrees of item dependence,…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
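The listing does not reproduce Ackerman and Spray's dependency model, but the violation of local independence it studies can be sketched hypothetically: below, a correct response to item 1 lowers item 2's effective difficulty by an assumed dependence parameter `dep` (all parameter values are illustrative, not from the paper).

```python
import math
import random

def p_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate_pair(theta, a1, b1, a2, b2, dep, rng):
    """Simulate two item responses where item 2 depends on item 1.

    `dep` shifts item 2's effective difficulty downward when item 1
    is answered correctly; dep = 0 recovers local independence.
    """
    u1 = rng.random() < p_2pl(theta, a1, b1)
    b2_eff = b2 - dep if u1 else b2
    u2 = rng.random() < p_2pl(theta, a2, b2_eff)
    return int(u1), int(u2)

rng = random.Random(0)
pairs = [simulate_pair(0.0, 1.2, 0.0, 1.2, 0.0, dep=1.5, rng=rng)
         for _ in range(20000)]

# Under local independence these two conditional probabilities would
# be equal at a fixed theta; dep > 0 drives them apart.
p_given_1 = sum(u2 for u1, u2 in pairs if u1 == 1) / sum(u1 for u1, _ in pairs)
p_given_0 = sum(u2 for u1, u2 in pairs if u1 == 0) / sum(1 - u1 for u1, _ in pairs)
print(p_given_1 > p_given_0)
```

At theta = 0 both items would succeed with probability 0.5 if independent; the dependence makes item 2 markedly easier after a success on item 1.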
Peer reviewed: Muthen, Bengt; Lehman, James – Journal of Educational Statistics, 1985
The applicability of a new multiple-group factor analysis of dichotomous variables is shown and contrasted with the item response theory approach to item bias analysis. Situations are considered where the same set of test items has been administered to more than one group of examinees. (Author/BS).
Descriptors: Factor Analysis, Item Analysis, Latent Trait Theory, Mathematical Models
Peer reviewed: Jannarone, Robert J. – Psychometrika, 1986
Conjunctive item response models are introduced such that: (1) sufficient statistics for latent traits are not necessarily additive in item scores; (2) items are not necessarily locally independent; and (3) existing compensatory (additive) item response models including the binomial, Rasch, logistic, and general locally independent model are…
Descriptors: Cognitive Processes, Hypothesis Testing, Latent Trait Theory, Mathematical Models
Peer reviewed: Muthen, Bengt – Journal of Educational Statistics, 1985
Drawing on recently developed methodology for structural equation modeling with categorical data, this article proposes a new approach for investigating the behavior of dichotomously scored test items in relation to other relevant (observed) variables. A linear structural model relates the latent ability variable to a set of observed scores.…
Descriptors: Biology, Item Analysis, Latent Trait Theory, Mathematical Models
Jannarone, Robert J. – 1986
Conjunctive item response models are introduced such that (a) sufficient statistics for latent traits are not necessarily additive in item scores; (b) items are not necessarily locally independent; and (c) existing compensatory (additive) item response models including the binomial, Rasch, logistic, and general locally independent model are…
Descriptors: Cognitive Structures, Equations (Mathematics), Estimation (Mathematics), Hypothesis Testing
Hwang, Chi-en; Cleary, T. Anne – 1986
The results obtained from two basic types of pre-equatings of tests were compared: the item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…
Descriptors: Computer Simulation, Equated Scores, Latent Trait Theory, Mathematical Models
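The generating model Hwang and Cleary describe — a modified three-parameter logistic model with a constant guessing parameter — can be sketched as follows. The 1.7 scaling constant is the conventional normal-ogive approximation; the specific item parameters here are illustrative assumptions, not values from the study.

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic ICC: c is the lower asymptote
    (guessing parameter), held constant across items as in the
    study's generating model."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

# Even very low-ability examinees succeed at roughly rate c.
for theta in (-3.0, 0.0, 3.0):
    print(round(p_3pl(theta, a=1.0, b=0.0, c=0.2), 3))
# prints 0.205, 0.6, 0.995
```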
Ackerman, Terry A. – 1987
One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other item. The assumption is often taken for granted, with little or no scrutiny of the response process required to answer…
Descriptors: Computer Software, Correlation, Estimation (Mathematics), Latent Trait Theory
McKinley, Robert L.; Reckase, Mark D. – 1984
To assess the effects of correlated abilities on test characteristics, and to explore the effects of correlated abilities on the use of a multidimensional item response theory model which does not explicitly account for such a correlation, two tests were constructed. One had two relatively unidimensional subsets of items, the other had all…
Descriptors: Ability, Correlation, Factor Structure, Item Analysis
Reckase, Mark D.; And Others – 1985
Factor analysis is the traditional method for studying the dimensionality of test data. However, under common conditions, the factor analysis of tetrachoric correlations does not recover the underlying structure of dichotomous data. The purpose of this paper is to demonstrate that the factor analysis of tetrachoric correlations is unlikely to…
Descriptors: Correlation, Difficulty Level, Factor Analysis, Item Analysis
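The difficulty artifact behind Reckase's argument can be illustrated with a small simulation (my own hedged sketch, not the paper's design): two dichotomous items driven by the same latent trait show an attenuated Pearson (phi) correlation when their difficulties differ sharply, even though the latent correlation is identical.

```python
import math
import random

def phi_corr(x, y):
    """Pearson correlation of two 0/1 item-score vectors (phi)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return cov / math.sqrt(mx * (1 - mx) * my * (1 - my))

rng = random.Random(1)

def item(thetas, threshold):
    """Dichotomous item: latent trait plus item-specific noise,
    cut at a difficulty threshold."""
    return [int(t + rng.gauss(0.0, 1.0) > threshold) for t in thetas]

thetas = [rng.gauss(0.0, 1.0) for _ in range(50000)]
easy, hard = item(thetas, -1.5), item(thetas, 1.5)
med1, med2 = item(thetas, 0.0), item(thetas, 0.0)

# Same latent correlation in both pairs, but the difficulty gap
# attenuates phi for the easy/hard pair.
print(phi_corr(med1, med2) > phi_corr(easy, hard))
```

This attenuation is one reason analyses of dichotomous data turn to tetrachoric correlations in the first place — and the paper's point is that even those have recovery problems.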
Peer reviewed: Harrison, David A. – Journal of Educational Statistics, 1986
Multidimensional item response data were created. The strength of a general factor, the number of common factors, the distribution of items loading on common factors, and the number of items in simulated tests were manipulated. LOGIST effectively recovered both item and trait parameters in nearly all of the experimental conditions. (Author/JAZ)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Correlation
Hambleton, Ronald K.; Rogers, H. Jane – 1986
The general goal of this paper is to help researchers conduct appropriately designed goodness of fit studies for item response model applications. The specific purposes are to describe: (1) an up-to-date set of promising and useful methods for addressing a variety of goodness of fit questions; and (2) current research studies to advance this set…
Descriptors: Data Analysis, Educational Research, Factor Analysis, Goodness of Fit
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Muraki, Eiji – 1984
The TESTFACT computer program and full-information factor analysis of test items were used in a computer simulation conducted to correct for the guessing effect. Full-information factor analysis also corrects for omitted items. The present version of TESTFACT handles up to five factors and 150 items. A preliminary smoothing of the tetrachoric…
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Correlation
Gustafsson, Jan-Eric – 1979
Problems and procedures in assessing and obtaining fit of data to the Rasch model are treated, and assumptions embodied in the Rasch model are made explicit. It is concluded that statistical tests are needed that are sensitive to deviations under which more than one item parameter would be needed for each item, and more than one person parameter would…
Descriptors: Ability, Difficulty Level, Goodness of Fit, Item Analysis
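Gustafsson's point that the Rasch model commits to exactly one parameter per item and one per person can be made concrete with a short sketch (illustrative, not from the paper): the model's "specific objectivity" property means the log-odds difference between two items is the same constant for every examinee.

```python
import math

def p_rasch(theta, b):
    """Rasch model: one person parameter (theta) and one item
    parameter (b, difficulty) determine the success probability."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def logit(p):
    return math.log(p / (1.0 - p))

# Specific objectivity: the log-odds difference between two items
# equals b2 - b1 for every examinee, regardless of theta.
for theta in (-2.0, 0.0, 2.0):
    print(round(logit(p_rasch(theta, -0.5)) - logit(p_rasch(theta, 1.0)), 6))
# prints 1.5 three times
```

Fit tests of the kind the paper calls for probe exactly whether real data honor this invariance; varying discriminations or guessing break it.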
Zwick, Rebecca – 1986
Although perfectly scalable items rarely occur in practice, Guttman's concept of a scale has proved to be valuable to the development of measurement theory. If the score distribution is uniform and there is an equal number of items at each difficulty level, both the elements and the eigenvalues of the Pearson correlation matrix of dichotomous…
Descriptors: Correlation, Difficulty Level, Item Analysis, Latent Trait Theory
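The Guttman scale Zwick builds on is easy to state in code (a minimal sketch of the textbook definition, not the paper's analysis): with items ordered from easiest to hardest, a perfect scale admits only one response pattern per raw score.

```python
def guttman_pattern(score, n_items):
    """Perfect Guttman scale: the raw score fully determines the
    response pattern (the `score` easiest items are correct)."""
    return [1] * score + [0] * (n_items - score)

# Only n_items + 1 patterns are admissible, one per raw score.
patterns = [guttman_pattern(s, 4) for s in range(5)]
print(patterns)
# prints [[0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1]]
```

Real data never fit this deterministic pattern exactly, which is why the abstract treats the Guttman scale as a limiting case for studying correlation matrices of dichotomous items.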