Showing all 15 results
Peer reviewed
Ratcliff, Roger; And Others – Psychological Review, 1992
Four global memory models were evaluated in three recognition memory experiments with 30 college students. The experiments provide receiver operating characteristic (ROC) curves, and the data give a clear picture of the behavior of signal and noise distributions in recognition memory. Ways in which the results support revision of current models are discussed. (SLD)
Descriptors: College Students, Estimation (Mathematics), Higher Education, Mathematical Models
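The signal-and-noise framing in this abstract comes from signal detection theory. A minimal sketch of how an ROC curve is traced from assumed familiarity distributions — the equal-variance Gaussian distributions, the criterion values, and the function names here are illustrative choices, not taken from the study:

```python
from statistics import NormalDist

# Hypothetical equal-variance Gaussian signal-detection setup.
noise = NormalDist(mu=0.0, sigma=1.0)    # familiarity of "new" items
signal = NormalDist(mu=1.0, sigma=1.0)   # familiarity of "old" items

def roc_point(criterion):
    """Hit and false-alarm rates for one yes/no response criterion."""
    hit = 1.0 - signal.cdf(criterion)
    false_alarm = 1.0 - noise.cdf(criterion)
    return hit, false_alarm

# Sweeping the criterion from lenient to strict traces the ROC curve.
curve = [roc_point(c) for c in (-1.0, 0.0, 1.0, 2.0)]
```

Plotting hit rate against false-alarm rate for many criteria yields the ROC curves the experiments provide; the curve's shape is what constrains the signal and noise distributions.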
Ackerman, Terry A.; Evans, John A. – 1993
A didactic example is provided, using a Monte Carlo method, of how differential item functioning (DIF) can be eliminated (and thus better understood) when the complete latent space is used. The main source of DIF is that the single matching criterion used in some DIF procedures, Mantel-Haenszel or Simultaneous Item Bias (SIBTEST), does not account…
Descriptors: Computer Simulation, Equations (Mathematics), Item Bias, Item Response Theory
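The Mantel-Haenszel procedure named in this abstract matches reference- and focal-group examinees on a criterion (typically total score) and pools a 2x2 table per score stratum. A minimal sketch of the common odds ratio it estimates — the function name and the data layout are illustrative assumptions:

```python
def mantel_haenszel_odds(strata):
    """Mantel-Haenszel common odds ratio across matched score strata.

    Each stratum is a tuple (ref_correct, ref_wrong, focal_correct,
    focal_wrong) for examinees at one level of the matching criterion.
    A ratio near 1.0 indicates no DIF on this item.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n   # ref-correct x focal-wrong
        den += b * c / n   # ref-wrong x focal-correct
    return num / den

# Two strata where both groups have identical odds of success: no DIF.
no_dif = mantel_haenszel_odds([(30, 10, 30, 10), (20, 20, 20, 20)])
```

The paper's point is that when the latent space is multidimensional, a single matching score can make a fair item look biased; conditioning on the complete latent space removes that artifact.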
Peer reviewed
Baker, Frank B. – Applied Psychological Measurement, 1988
The form of the item log-likelihood surface was investigated under the two-parameter and three-parameter logistic models. Results confirm that the LOGIST program procedures used to locate the maximum of the likelihood functions are consistent with the form of the item log-likelihood surface. (SLD)
Descriptors: Estimation (Mathematics), Factor Analysis, Graphs, Latent Trait Theory
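The surface studied here is the log-likelihood of one item's parameters given examinee abilities and responses. A minimal two-parameter logistic sketch — the function names are illustrative, and real programs such as LOGIST work with the full joint likelihood rather than this toy:

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic item response function:
    probability of a correct response at ability theta,
    discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_log_likelihood(a, b, thetas, responses):
    """Log-likelihood of one item's (a, b) given known abilities
    and 0/1 responses; the (a, b) surface of this function is what
    maximum-likelihood item estimation climbs."""
    ll = 0.0
    for theta, u in zip(thetas, responses):
        p = p_2pl(theta, a, b)
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
    return ll
```

Evaluating `item_log_likelihood` over a grid of (a, b) values displays the surface whose shape the paper compares against LOGIST's maximization procedures.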
Peer reviewed
Prien, Borge – Studies in Educational Evaluation, 1989
Under certain conditions it may be possible to determine the difficulty of previously untested test items. Although no recipe can be provided, reflections on this topic are presented, drawing on concepts of item banking. A functional constructive method is suggested as having the most potential. (SLD)
Descriptors: Difficulty Level, Educational Assessment, Foreign Countries, Item Analysis
Reckase, Mark D.; McKinley, Robert L. – 1982
This paper reviews the existing multidimensional item response theory (IRT) models and demonstrates how one of the models can be applied to estimation of abilities from a test measuring more than one dimension. The purposes of this paper were threefold. First, the fundamental concepts required when considering multidimensional models for the…
Descriptors: Estimation (Mathematics), Higher Education, Latent Trait Theory, Mathematical Models
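A common form of the multidimensional IRT models reviewed here is the compensatory multidimensional two-parameter logistic, in which the logit is a weighted sum over ability dimensions. A minimal sketch — the function name and parameter layout are illustrative, not specific to the model the paper applies:

```python
import math

def p_m2pl(theta, a, d):
    """Compensatory multidimensional 2PL: probability of a correct
    response for an ability vector theta, discrimination vector a,
    and scalar intercept d."""
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))
```

"Compensatory" means high ability on one dimension can offset low ability on another: theta = (2, -2) with equal discriminations gives the same probability as (0, 0), which is exactly why estimating abilities from a test measuring more than one dimension is nontrivial.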
Ackerman, Terry A. – 1987
One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…
Descriptors: Computer Software, Correlation, Estimation (Mathematics), Latent Trait Theory
DeAyala, R. J.; Koch, William R. – 1987
A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
Peer reviewed
Wainer, Howard; And Others – Journal of Educational Measurement, 1991
Hierarchical (adaptive) and linear methods of testlet construction were compared. The performance of 2,080 ninth and tenth graders on a 4-item testlet was used to predict performance on the entire test. The adaptive test was slightly superior as a predictor, but the cost of obtaining that superiority was considerable. (SLD)
Descriptors: Adaptive Testing, Algebra, Comparative Testing, High School Students
Mislevy, Robert J. – 1987
Standard procedures for estimating item parameters in Item Response Theory models make no use of auxiliary information about test items, such as their format or content, or the skills they require for solution. This paper describes a framework for exploiting this information, thereby enhancing the precision and stability of item parameter…
Descriptors: Bayesian Statistics, Difficulty Level, Estimation (Mathematics), Intermediate Grades
Gustafsson, Jan-Eric – 1979
Problems and procedures in assessing and obtaining fit of data to the Rasch model are treated, and the assumptions embodied in the Rasch model are made explicit. It is concluded that statistical tests are needed that are sensitive to deviations for which more than one item parameter would be needed for each item, and more than one person parameter would…
Descriptors: Ability, Difficulty Level, Goodness of Fit, Item Analysis
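The Rasch model discussed here has a single difficulty parameter per item and a single ability parameter per person, and fit testing asks whether observed responses deviate from that form. A minimal sketch of the model plus one standard residual-based fit statistic — the function names are illustrative, and this unweighted mean-square ("outfit") statistic is only one example of the kind of test the paper calls for:

```python
import math

def p_rasch(theta, b):
    """Rasch (one-parameter logistic) probability of a correct
    response at ability theta for an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def outfit(thetas, responses, b):
    """Unweighted mean-square fit for one item: the average squared
    standardized residual. Values near 1.0 indicate Rasch-consistent
    data; large values flag misfit."""
    z2 = []
    for theta, u in zip(thetas, responses):
        p = p_rasch(theta, b)
        z2.append((u - p) ** 2 / (p * (1.0 - p)))
    return sum(z2) / len(z2)
```

A deviation such as guessing or varying discrimination inflates these residuals, which is the sense in which fitting the data would require more than one parameter per item.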
Peer reviewed
Mislevy, Robert J.; And Others – Journal of Educational Measurement, 1993
This paper illustrates how, in the item-response theory framework, collateral information about test items can augment or replace examinee responses when linking or equating new tests to established scales, using data from the Pre-Professional Skills Test for approximately 40,000 examinees. Collateral information can predict item operating…
Descriptors: College Students, Equated Scores, Equations (Mathematics), Higher Education
Wise, Lauress L. – 1986
A primary goal of this study was to determine the extent to which item difficulty was related to item position and, if a significant relationship was found, to suggest adjustments to predicted item difficulty that reflect differences in item position. Item response data from the Medical College Admission Test (MCAT) were analyzed. A data set was…
Descriptors: College Entrance Examinations, Difficulty Level, Educational Research, Error of Measurement
Green, Kathy E.; Kluever, Raymond C. – 1991
Item components that might contribute to the difficulty of items on the Raven Colored Progressive Matrices (CPM) and the Standard Progressive Matrices (SPM) were studied. Subjects providing responses to CPM items were 269 children aged 2 years 9 months to 11 years 8 months, most of whom were referred for testing as potentially gifted. A second…
Descriptors: Academically Gifted, Children, Comparative Testing, Difficulty Level
Kirisci, Levent; Hsu, Tse-Chi – 1992
A predictive adaptive testing (PAT) strategy was developed based on statistical predictive analysis, and its feasibility was studied by comparing PAT performance to that of the Flexilevel, Bayesian modal, and expected a posteriori (EAP) strategies in a simulated environment. The proposed adaptive test is based on the idea of using item difficulty…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Computer Assisted Testing
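The adaptive testing strategies compared in entries like this one differ mainly in how the next item is chosen and how ability is updated. The classic baseline is maximum-information selection under a logistic model; a minimal 2PL sketch — the function names and the item-pool layout are illustrative assumptions, not the PAT procedure itself:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_item(theta, items, used):
    """Maximum-information item selection: from the unused items
    (each an (a, b) pair), pick the most informative at the current
    ability estimate."""
    return max((i for i in range(len(items)) if i not in used),
               key=lambda i: info_2pl(theta, *items[i]))
```

Information for a 2PL item peaks at theta = b, so this rule keeps choosing items whose difficulty tracks the current ability estimate; difficulty-driven strategies like Flexilevel approximate the same behavior without the full model.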
Smith, Richard M.; Mitchell, Virginia P. – 1979
To improve the accuracy of college placement, Rasch scoring and person-fit statistics on the Comparative Guidance and Placement test (CGP) were compared to traditional right-only scoring. Correlations were calculated between English and mathematics course grades and scores of 1,448 entering freshmen on the reading, writing, and mathematics…
Descriptors: Academic Ability, Computer Programs, Difficulty Level, Goodness of Fit