Peer reviewed: Adema, Jos J. – Journal of Educational Measurement, 1990
Mixed integer linear programming models for customizing two-stage tests are presented. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. The models can be modified for use in the construction of multistage tests. (Author/TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Linear Programing
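The selection problem the abstract describes can be pictured with a toy sketch: choose a fixed number of items from a bank so that summed information is maximized while total administration time stays within a limit. The brute-force search below is only a stand-in for a real mixed integer linear programming solver, and the item bank numbers are hypothetical.

```python
from itertools import combinations

# Hypothetical item bank: (information at the target ability, seconds to administer)
items = [(0.9, 60), (0.7, 45), (0.8, 90), (0.4, 30), (0.6, 50), (0.5, 40)]

def assemble(bank, length, max_time):
    """Pick `length` items maximizing summed information subject to a
    total-time constraint -- a brute-force stand-in for an MILP search."""
    best, best_info = None, -1.0
    for combo in combinations(range(len(bank)), length):
        time = sum(bank[i][1] for i in combo)
        info = sum(bank[i][0] for i in combo)
        if time <= max_time and info > best_info:
            best, best_info = combo, info
    return best, best_info

picked, info = assemble(items, length=3, max_time=160)
```

A real assembly model would add the kinds of constraints the abstract mentions (test composition, inter-item dependencies) as extra linear inequalities over the 0/1 selection variables.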
Peer reviewed: Baker, Frank B. – Applied Psychological Measurement, 1988
The form of the item log-likelihood surface was investigated under two-parameter and three-parameter logistic models. Results confirm that the LOGIST program procedures used to locate the maximum of the likelihood function are consistent with the form of the item log-likelihood surface. (SLD)
Descriptors: Estimation (Mathematics), Factor Analysis, Graphs, Latent Trait Theory
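The item log-likelihood surface in question is just the log-likelihood of an item's 0/1 responses viewed as a function of the item parameters. A minimal sketch under the two-parameter logistic model, with made-up abilities and responses, shows the surface is higher at a data-consistent difficulty than at a poor one:

```python
import math

def p2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_loglik(responses, thetas, a, b):
    """Log-likelihood of one item's 0/1 responses given known abilities."""
    ll = 0.0
    for u, th in zip(responses, thetas):
        p = p2pl(th, a, b)
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

thetas = [-2.0, -1.0, 0.0, 1.0, 2.0]     # hypothetical known abilities
responses = [0, 0, 1, 1, 1]              # toy data consistent with b near 0

good = item_loglik(responses, thetas, a=1.0, b=0.0)
poor = item_loglik(responses, thetas, a=1.0, b=2.0)
```

A program like LOGIST climbs this surface numerically; the toy above only evaluates it at two points.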
Peer reviewed: Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that give examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Peer reviewed: Wainer, Howard; And Others – Journal of Educational Measurement, 1991
A testlet is an integrated group of test items presented as a unit. The concept of testlet differential item functioning (testlet DIF) is defined, and a statistical method is presented to detect testlet DIF. Data from a testlet-based experimental version of the Scholastic Aptitude Test illustrate the methodology. (SLD)
Descriptors: College Entrance Examinations, Definitions, Graphs, Item Bias
Peer reviewed: Samejima, Fumiko – Psychometrika, 1993
An approximation for the bias function of the maximum likelihood estimate of the latent trait or ability is developed for the general case where item responses are discrete, which includes the dichotomous response level, the graded response level, and the nominal response level. (SLD)
Descriptors: Ability, Equations (Mathematics), Estimation (Mathematics), Item Response Theory
Peer reviewed: Nandakumar, Ratna – Journal of Educational Measurement, 1993
The phenomenon of simultaneous differential item functioning (DIF) amplification and cancellation and the role of the SIBTEST approach in detecting DIF are investigated with a variety of simulated test data. The effectiveness of SIBTEST is supported, and the implications of DIF amplification and cancellation are discussed. (SLD)
Descriptors: Computer Simulation, Elementary Secondary Education, Equal Education, Equations (Mathematics)
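DIF amplification and cancellation are easy to see numerically: items whose group differences share a sign amplify at the bundle level, while opposite signs cancel. The per-item numbers below are hypothetical illustrations, not SIBTEST output:

```python
# Toy per-item DIF: signed difference in proportion-correct between two
# matched groups. Opposite signs can cancel at the test level even though
# individual items show sizeable DIF.
item_dif = [0.08, -0.07, 0.05, -0.06, 0.04, -0.05]

bundle_dif = sum(item_dif)                   # near zero: cancellation
total_abs = sum(abs(d) for d in item_dif)    # large: item-level DIF present
```

A bundle-level statistic that only looks at `bundle_dif` would miss the item-level DIF that `total_abs` reveals, which is one reason simulation studies of this kind matter.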
Roberts, James S.; Laughlin, James E. – 1996
Binary or graded disagree-agree responses to attitude items are often collected for the purpose of attitude measurement. Although such data are sometimes analyzed with cumulative measurement models, recent investigations suggest that unfolding models are more appropriate (J. S. Roberts, 1995; W. H. Van Schuur and H. A. L. Kiers, 1994). Advances in…
Descriptors: Attitude Measures, Estimation (Mathematics), Item Response Theory, Mathematical Models
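The cumulative-versus-unfolding distinction can be sketched with two toy response functions: a cumulative model's agree probability rises monotonically with the trait, while an unfolding model's is single-peaked at the item's position, since respondents far above *or* far below a moderate statement both disagree with it. The squared-exponential peak below is only an illustrative stand-in, not the authors' model:

```python
import math

def cumulative(theta, b):
    """Monotone (cumulative) agree probability: rises with theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def unfolding(theta, delta):
    """Single-peaked (unfolding) agree probability: highest when the
    person's position matches the item's position delta."""
    return math.exp(-(theta - delta) ** 2)

# A moderate attitude item at delta = 0: agreement peaks at theta = 0 and
# falls off for strong positions on either side.
near = unfolding(0.0, 0.0)
far = unfolding(3.0, 0.0)
```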
De Champlain, Andre; Gessaroli, Marc E. – 1991
A new index for assessing the dimensionality underlying a set of test items was investigated. The incremental fit index (IFI) is based on the sum of squares of the residual covariances. Purposes of the study were to: (1) examine the distribution of the IFI in the null situation, with truly unidimensional data; (2) examine the rejection rate of the…
Descriptors: Equations (Mathematics), Factor Analysis, Foreign Countries, Item Response Theory
Schumacker, Randall E.; Fluke, Rickey – 1991
Three methods of factor analyzing dichotomously scored item performance data were compared using two raw score data sets of 20-item tests, one reflecting normally distributed latent traits and the other reflecting uniformly distributed latent traits. This comparison was accomplished by using phi and tetrachoric correlations among dichotomous data…
Descriptors: Comparative Analysis, Equations (Mathematics), Estimation (Mathematics), Factor Analysis
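The phi-versus-tetrachoric comparison at the heart of this study can be illustrated on a single 2x2 table: phi is just the Pearson correlation of the 0/1 scores, while the tetrachoric correlation estimates the correlation of assumed underlying normal variables. The cell counts are hypothetical, and the cosine-pi formula below is only a rough classical approximation to the tetrachoric, not a full estimator:

```python
import math

# Hypothetical 2x2 table for two dichotomous items:
#            item2=1  item2=0
# item1=1       a        b
# item1=0       c        d
a, b, c, d = 40, 10, 10, 40

# Phi coefficient (Pearson r computed on the 0/1 scores):
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Cosine-pi approximation to the tetrachoric correlation:
tet = math.cos(math.pi / (1.0 + math.sqrt(a * d / (b * c))))
```

The tetrachoric value exceeds phi for the same table, which is why factoring phi correlations of dichotomous items can understate the underlying latent association.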
Muraki, Eiji – 1991
Multiple group factor analysis is described and illustrated through a simulation involving 5,000 examinees. The estimation of the group factors was implemented using the TESTFACT program of Wilson and others (1987). Group factor analysis is described as a special case of confirmatory factor analysis. Group factors can be computed based on…
Descriptors: Data Analysis, Difficulty Level, Equations (Mathematics), Estimation (Mathematics)
Ackerman, Terry A. – 1991
Many researchers have suggested that the main cause of item bias is the misspecification of the latent ability space. That is, items that measure multiple abilities are scored as though they are measuring a single ability. If two different groups of examinees have different underlying multidimensional ability distributions and the test items are…
Descriptors: Equations (Mathematics), Item Bias, Item Response Theory, Mathematical Models
Gibbons, Robert D.; And Others – 1990
A plausible "s"-factor solution for many types of psychological and educational tests is one in which there is one general factor and "s - 1" group- or method-related factors. The bi-factor solution results from the constraint that each item has a non-zero loading on the primary dimension "alpha(sub j1)" and at most…
Descriptors: Equations (Mathematics), Estimation (Mathematics), Factor Analysis, Item Analysis
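The bi-factor constraint the abstract describes has a simple structural reading: every item loads on the general factor and on at most one of the s - 1 group factors. A small loading matrix (hypothetical values) makes the pattern concrete:

```python
# Bi-factor loading pattern for 6 items, one general factor plus two
# group factors: each row has a non-zero general loading and at most one
# non-zero group loading (all values are hypothetical).
loadings = [
    # general, group1, group2
    [0.7, 0.4, 0.0],
    [0.6, 0.5, 0.0],
    [0.8, 0.3, 0.0],
    [0.5, 0.0, 0.6],
    [0.7, 0.0, 0.4],
    [0.6, 0.0, 0.5],
]

def is_bifactor(mat):
    """Check the bi-factor constraint: a non-zero general loading plus at
    most one non-zero group loading per item."""
    return all(row[0] != 0 and sum(1 for x in row[1:] if x != 0) <= 1
               for row in mat)
```

This zero pattern is what distinguishes the bi-factor solution from an unrestricted s-factor solution.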
Samejima, Fumiko – 1990
Test validity is a concept that has often been ignored in the context of latent trait models and in modern test theory, particularly as it relates to computerized adaptive testing. Some considerations about the validity of a test and of a single item are proposed. This paper focuses on measures that are population-free and that will provide local…
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Item Response Theory
van der Linden, Wim J.; Boekkooi-Timminga, Ellen – 1987
A "maximin" model for item response theory based test design is proposed. In this model only the relative shape of the target test information function is specified. It serves as a constraint subject to which a linear programming algorithm maximizes the information in the test. In the practice of test construction there may be several…
Descriptors: Algorithms, Foreign Countries, Item Banks, Latent Trait Theory
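The maximin idea in the abstract can be written out as a small optimization model; the formulation below is a sketch consistent with the abstract's description, with x_i the 0/1 item-selection variables, I_i(θ_k) the item information at ability point θ_k, and r_k the specified relative shape of the target information function:

```latex
\max_{x,\,y} \; y
\quad \text{subject to} \quad
\sum_{i=1}^{N} I_i(\theta_k)\, x_i \;\ge\; r_k\, y,
\qquad k = 1, \dots, K,
```

together with a test-length constraint \(\sum_i x_i = n\), \(x_i \in \{0, 1\}\), and \(y \ge 0\). Because only the relative shape \(r_k\) is fixed, maximizing the common multiplier \(y\) pushes the whole information function up as far as the item pool allows.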
Knol, Dirk L. – 1989
Two iterative procedures for constructing Rasch scales are presented. A log-likelihood ratio test based on a quasi-loglinear formulation of the Rasch model is given by which one item at a time can be deleted from or added to an initial item set. In the so-called "top-down" algorithm, items are stepwise deleted from a relatively large…
Descriptors: Algorithms, Item Banks, Latent Trait Theory, Mathematical Models


