Showing 1 to 15 of 29 results
Peer reviewed
Effatpanah, Farshad; Baghaei, Purya – Practical Assessment, Research & Evaluation, 2023
Item response theory (IRT) refers to a family of mathematical models that describe the relationship between latent continuous variables (attributes or characteristics) and their manifestations (dichotomous/polytomous observed outcomes or responses) with regard to a set of item characteristics. Researchers typically use parametric IRT (PIRT)…
Descriptors: Item Response Theory, Feedback (Response), Mathematical Models, Item Analysis
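As background, one standard member of the parametric IRT (PIRT) family described above is the two-parameter logistic (2PL) model, in which the probability of a correct response to item i depends on the latent trait through a discrimination parameter a_i and a difficulty parameter b_i:

\[
P(X_i = 1 \mid \theta) = \frac{\exp\{a_i(\theta - b_i)\}}{1 + \exp\{a_i(\theta - b_i)\}}.
\]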
Peer reviewed
Peabody, Michael R. – Measurement: Interdisciplinary Research and Perspectives, 2023
Many organizations utilize some form of automation in the test assembly process, either fully algorithmic or heuristically constructed. However, one issue with heuristic models is that when the test assembly problem changes, the entire model may need to be re-conceptualized and recoded. In contrast, mixed-integer programming (MIP) is a mathematical…
Descriptors: Programming Languages, Algorithms, Heuristics, Mathematical Models
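As a hedged illustration of the MIP approach contrasted with heuristics above (a minimal sketch, not the author's formulation; the item pool, information values, and content areas are hypothetical), a test-assembly model can be stated declaratively and re-used by editing constraints rather than recoding:

    # Minimal mixed-integer-programming test assembly sketch (assumes the PuLP package).
    # The item pool, information values, and content areas below are hypothetical.
    from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

    pool = {  # item id -> (information at the target ability, content area)
        "i1": (0.62, "algebra"),  "i2": (0.55, "algebra"),
        "i3": (0.48, "geometry"), "i4": (0.71, "geometry"),
        "i5": (0.40, "number"),   "i6": (0.66, "number"),
    }
    TEST_LENGTH = 4
    MIN_PER_AREA = 1

    prob = LpProblem("test_assembly", LpMaximize)
    pick = {i: LpVariable(f"pick_{i}", cat=LpBinary) for i in pool}

    # Objective: maximize total information of the selected items.
    prob += lpSum(info * pick[i] for i, (info, _) in pool.items())

    # Constraint: fixed test length.
    prob += lpSum(pick.values()) == TEST_LENGTH

    # Constraints: minimum coverage of every content area.
    for area in {c for _, c in pool.values()}:
        prob += lpSum(pick[i] for i, (_, c) in pool.items() if c == area) >= MIN_PER_AREA

    prob.solve()
    print(sorted(i for i in pool if pick[i].value() == 1))

Changing the assembly problem (a new length, another content rule) means adding or editing constraint lines; the solver, not a hand-coded heuristic, handles the search.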
Peer reviewed
Ramon Barrada, Juan; Veldkamp, Bernard P.; Olea, Julio – Applied Psychological Measurement, 2009
Computerized adaptive testing is subject to security problems, as the item bank content remains operative over long periods and administration time is flexible for examinees. If the content of part of the item bank spreads among examinees, their trait levels could be overestimated. The most common way of reducing this risk is to impose a…
Descriptors: Item Banks, Adaptive Testing, Item Analysis, Psychometrics
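As a generic sketch of exposure control in computerized adaptive testing (an assumed illustration, not the specific procedure evaluated in the article), item selection can cap each item's observed exposure rate and randomize among the most informative remaining items:

    import random

    # Generic exposure-control sketch for CAT item selection (illustrative only).
    # Items whose observed exposure rate exceeds max_rate are excluded; one of the
    # top_k most informative eligible items is then chosen at random.
    def select_item(candidates, information, exposure_counts, tests_administered,
                    max_rate=0.25, top_k=3):
        eligible = [
            i for i in candidates
            if tests_administered == 0
            or exposure_counts.get(i, 0) / tests_administered <= max_rate
        ]
        if not eligible:  # fall back if the cap rules out every candidate
            eligible = list(candidates)
        best = sorted(eligible, key=lambda i: information[i], reverse=True)[:top_k]
        return random.choice(best)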
Peer reviewed
Bart, William M.; Williams-Morris, Ruth – Applied Measurement in Education, 1990
Refined item digraph analysis (RIDA) is a way of studying diagnostic and prescriptive testing. It permits assessment of a test item's diagnostic value by examining the extent to which the item has properties of ideal items. RIDA is illustrated with the Orange Juice Test, which assesses the proportionality concept. (TJH)
Descriptors: Diagnostic Tests, Evaluation Methods, Item Analysis, Mathematical Models
Peer reviewed
Holland, Paul W. – Psychometrika, 1981
Deciding whether sets of test data are consistent with any of a large class of item response models is considered. The assumption of local independence is weakened to a new condition, local nonnegative dependence (LND). Necessary and sufficient conditions are derived for an LND item response model. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Mathematical Models, Psychometrics
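For reference, the local independence assumption being weakened here states that, conditional on the latent trait, the item responses are mutually independent:

\[
P(X_1 = x_1, \ldots, X_n = x_n \mid \theta) = \prod_{i=1}^{n} P(X_i = x_i \mid \theta).
\]

Local nonnegative dependence, as the abstract indicates, replaces this equality with a weaker condition that rules out negative dependence among items conditional on the trait.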
Peer reviewed
Follmann, Dean – Psychometrika, 1988
The equivalence between non-parametric marginal logistic models (NMLMs) and a class of discrete marginal logistic models is examined. Parametric models offer some of the advantages of the NMLM approach, but there are more restrictions on the manifest probabilities. (SLD)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Item Analysis, Mathematical Models
Peer reviewed
Raju, Nambury S. – Psychometrika, 1988
Formulas for computing the exact signed and unsigned areas between two item characteristic curves (ICCs) are presented. It is further shown that when the "c" parameters are unequal, the area between two ICCs is infinite. The significance of the exact area measures for item bias research is discussed. (Author)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Item Analysis, Latent Trait Theory
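For background, the signed and unsigned areas between two item characteristic curves P_1(θ) and P_2(θ) are

\[
A_{\mathrm{signed}} = \int_{-\infty}^{\infty} \bigl[P_1(\theta) - P_2(\theta)\bigr]\,d\theta,
\qquad
A_{\mathrm{unsigned}} = \int_{-\infty}^{\infty} \bigl|P_1(\theta) - P_2(\theta)\bigr|\,d\theta,
\]

and under the three-parameter logistic model with a common lower asymptote c the signed area takes the simple closed form (1 - c)(b_2 - b_1), where b_1 and b_2 are the two difficulty parameters; when the c parameters differ, the integrals do not converge, which is the infinite-area result noted above.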
Peer reviewed
Wilson, Mark – Applied Psychological Measurement, 1988
A method for detecting and interpreting disturbances of the local-independence assumption among items that share common stimulus material or other features is presented. Dichotomous and polytomous Rasch models are used to analyze the structure of the learning outcome superitems. (SLD)
Descriptors: Item Analysis, Latent Trait Theory, Mathematical Models, Test Interpretation
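The dichotomous Rasch model used in such analyses gives the probability of a correct response to item i as a function of the person parameter θ and the item difficulty δ_i,

\[
P(X_i = 1 \mid \theta) = \frac{\exp(\theta - \delta_i)}{1 + \exp(\theta - \delta_i)},
\]

and the polytomous Rasch models mentioned above (for example, the partial credit model) extend this form to items scored in more than two ordered categories.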
Peer reviewed
Guttman, Louis – Psychometrika, 1988
Two discrimination coefficients, known as "disco" and "odisco," are proposed for measuring the extent of overlap in distributions as a direct function of the variance between the arithmetic means. These coefficients are related to K. Pearson's "eta" (1905) and R. A. Fisher's "F" (1950). (SLD)
Descriptors: Analysis of Variance, Computer Software, Discriminant Analysis, Equations (Mathematics)
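As background on the quantity these coefficients relate to, Pearson's correlation ratio expresses between-group separation as the share of total variance attributable to differences among the group means,

\[
\eta^2 = \frac{\sum_{g} n_g\,(\bar{x}_g - \bar{x})^2}{\sum_{g}\sum_{j} (x_{gj} - \bar{x})^2},
\]

with the precise definitions of "disco" and "odisco" given in the article itself.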
Peer reviewed
Reckase, Mark D.; And Others – Journal of Educational Measurement, 1988
It is demonstrated, theoretically and empirically, that item sets can be selected that meet the unidimensionality assumption of most item response theory models, even though they require more than one ability for a correct response. A method for identifying such item sets for test development purposes is presented. (SLD)
Descriptors: Computer Simulation, Item Analysis, Latent Trait Theory, Mathematical Models
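An item that requires more than one ability for a correct response can be described, for instance, by a compensatory multidimensional logistic model (shown as general background, not necessarily the exact model used in the study):

\[
P(X_i = 1 \mid \boldsymbol{\theta}) = \frac{\exp(\mathbf{a}_i^{\top}\boldsymbol{\theta} + d_i)}{1 + \exp(\mathbf{a}_i^{\top}\boldsymbol{\theta} + d_i)},
\]

where the vector a_i holds the item's discriminations on the separate ability dimensions and d_i is an intercept.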
Peer reviewed
Flexser, Arthur J. – Psychological Review, 1981
Contingency analyses have been employed to assess the degree to which outcomes of successive tests of corresponding items deviate from stochastic independence. A method of adjusting contingency tables to remove the effects of subject and item inhomogeneities is presented. The method represents a partial solution to the "Simpson's…
Descriptors: Correlation, Expectancy Tables, Goodness of Fit, Item Analysis
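Stochastic independence of the outcomes T_1 and T_2 of two successive tests of an item means

\[
P(T_1 \cap T_2) = P(T_1)\,P(T_2),
\]

and the contingency analyses mentioned above measure how far observed joint frequencies deviate from this product rule; aggregating counts over heterogeneous subjects or items can distort such tables, which is the aggregation problem associated with Simpson's paradox.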
Peer reviewed
Muthen, Bengt; Lehman, James – Journal of Educational Statistics, 1985
The applicability of a new multiple-group factor analysis of dichotomous variables is shown and contrasted with the item response theory approach to item bias analysis. Situations are considered where the same set of test items has been administered to more than one group of examinees. (Author/BS)
Descriptors: Factor Analysis, Item Analysis, Latent Trait Theory, Mathematical Models
Peer reviewed
Muthen, Bengt – Journal of Educational Statistics, 1985
Drawing on recently developed methodology for structural equation modeling with categorical data, this article proposes a new approach for investigating the behavior of dichotomously scored test items in relation to other relevant (observed) variables. A linear structural model relates the latent ability variable to a set of observed scores.…
Descriptors: Biology, Item Analysis, Latent Trait Theory, Mathematical Models
Peer reviewed
Kelderman, Henk – Journal of Educational Statistics, 1988
A method is proposed for equating different sets of items administered to different groups using the Rasch model. The Rasch equating model formulated describes one common Rasch scale in different groups with different but overlapping item sets. The model, based on the quasi-loglinear Rasch model, can be tested globally. (SLD)
Descriptors: Equated Scores, Estimation (Mathematics), Item Analysis, Latent Trait Theory
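One common quasi-loglinear representation of the Rasch model (an assumed sketch, not necessarily the exact parameterization used in the article) writes the expected frequency m_x of response pattern x = (x_1, …, x_k) in terms of item parameters and a parameter for the total score t(x),

\[
\log m_{\mathbf{x}} = \mu + \sum_{i=1}^{k} x_i \lambda_i + \lambda_{t(\mathbf{x})},
\]

so that equating amounts to fitting such models in each group with the parameters of the overlapping items constrained to be equal.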
Mellenbergh, Gideon J.; van der Linden, Wim J. – Evaluation in Education: International Progress, 1982
Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)
Descriptors: Criterion Referenced Tests, Educational Testing, Item Analysis, Latent Trait Theory