Showing 1 to 15 of 34 results
Peer reviewed
Wind, Stefanie A.; Ge, Yuan – Measurement: Interdisciplinary Research and Perspectives, 2023
In selected-response assessments such as attitude surveys with Likert-type rating scales, examinees often select from rating scale categories to reflect their locations on a construct. Researchers have observed that some examinees exhibit "response styles," which are systematic patterns of responses in which examinees are more likely to…
Descriptors: Goodness of Fit, Responses, Likert Scales, Models
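One common response style, an extreme response style, can be screened for descriptively by counting how often a respondent selects the endpoint categories of a Likert-type scale. A minimal sketch (function and variable names hypothetical, not from the article):

```python
# Flag a possible extreme response style (ERS) on a 1-5 Likert scale
# by computing each respondent's share of endpoint (1 or 5) answers.

def endpoint_share(responses, low=1, high=5):
    """Proportion of answers falling in the extreme categories."""
    extremes = sum(1 for r in responses if r in (low, high))
    return extremes / len(responses)

respondent_a = [1, 5, 5, 1, 5, 1, 5, 5]   # mostly endpoint categories
respondent_b = [3, 2, 4, 3, 3, 2, 4, 3]   # mostly middle categories

print(endpoint_share(respondent_a))  # 1.0
print(endpoint_share(respondent_b))  # 0.0
```

A high endpoint share alone does not prove a response style, since a respondent with genuinely extreme attitudes produces the same pattern; model-based approaches such as those discussed in the article are needed to separate the two.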
Ackerman, Terry A.; Spray, Judith A. – 1986
A model of test item dependency is presented and used to illustrate the effect that violations of local independence have on the behavior of item characteristic curves. The dependency model is flexible enough to simulate the interaction of a number of factors including item difficulty and item discrimination, varying degrees of item dependence,…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
Reckase, Mark D.; McKinley, Robert L. – 1984
A new indicator of item difficulty, which identifies effectiveness ranges, overcomes the limitations of other item difficulty indexes in describing the difficulty of an item or a test as a whole and in aiding the selection of appropriate ability level items for a test. There are three common uses of the term "item difficulty": (1) the probability…
Descriptors: Difficulty Level, Evaluation Methods, Item Analysis, Latent Trait Theory
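The first sense of "item difficulty" noted above, the proportion of correct responses in a group, depends on which group is tested, whereas an IRT location parameter does not. This distinction can be illustrated with a three-parameter logistic item response function (all parameter values below are illustrative, not taken from the paper):

```python
import math

def p_correct(theta, a, b, c):
    """3PL item response function: probability of a correct response
    at ability theta, with discrimination a, difficulty (location) b,
    and lower asymptote (pseudo-guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# The classical p-value for a group is the average of these
# probabilities over that group's ability distribution:
group = [-1.0, -0.5, 0.0, 0.5, 1.0]
p_value = sum(p_correct(t, a=1.2, b=0.0, c=0.2) for t in group) / len(group)
print(round(p_value, 3))  # 0.6 for this symmetric group
```

Shifting `group` toward higher abilities raises the p-value while `b` stays fixed, which is exactly why a group-dependent index can mislead when selecting items for examinees at a particular ability level.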
Ackerman, Terry A. – 1987
Concern has been expressed over the item response theory (IRT) assumption that a person's ability can be estimated in a unidimensional latent space. To examine whether or not the response to an item requires only a single latent ability, unidimensional ability estimates were compared for data generated from the multidimensional item response…
Descriptors: Ability, Computer Simulation, Difficulty Level, Item Analysis
Holland, Paul W.; Thayer, Dorothy T. – 1985
An alternative definition has been developed of the delta scale of item difficulty used at Educational Testing Service. The traditional delta scale uses an inverse normal transformation based on normal ogive models developed years ago; however, typical uses of item deltas make no use of this fact. It is simply one way to make the probability…
Descriptors: Difficulty Level, Error Patterns, Estimation (Mathematics), Item Analysis
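In its traditional formulation, the ETS delta for an item with proportion correct p is an inverse normal transform of 1 − p, rescaled to mean 13 and standard deviation 4 so that harder items receive larger deltas. A sketch of that transformation (the rescaling constants are the conventional ones, not taken from this paper):

```python
from statistics import NormalDist

def delta(p):
    """Traditional ETS delta scale: an inverse normal transform of
    the proportion correct p, rescaled (mean 13, SD 4) so that
    harder items get larger delta values."""
    return 13.0 + 4.0 * NormalDist().inv_cdf(1.0 - p)

print(round(delta(0.50), 2))  # 13.0: an item of average difficulty
print(round(delta(0.84), 2))  # ~9:  an easy item sits below 13
print(round(delta(0.16), 2))  # ~17: a hard item sits above 13
```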
Reckase, Mark D.; McKinley, Robert L. – 1984
The purpose of this paper is to present a generalization of the concept of item difficulty to test items that measure more than one dimension. Three common definitions of item difficulty were considered: the proportion of correct responses for a group of individuals; the probability of a correct response to an item for a specific person; and the…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
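One widely used generalization along these lines, associated with Reckase's later work on compensatory multidimensional IRT, summarizes an item with a multidimensional discrimination (the norm of the discrimination vector) and a multidimensional difficulty (the signed distance from the origin to the item's point of steepest slope). A sketch under those definitions (parameter values illustrative):

```python
import math

def mdisc(a):
    """Multidimensional discrimination: the Euclidean norm of the
    item's discrimination vector a."""
    return math.sqrt(sum(ak * ak for ak in a))

def mdiff(a, d):
    """Multidimensional difficulty: distance from the origin to the
    point of steepest slope, along the item's direction of best
    measurement (d is the item's intercept)."""
    return -d / mdisc(a)

a = [1.2, 0.8]   # illustrative discriminations on two dimensions
d = -0.6         # illustrative intercept
print(round(mdisc(a), 3))
print(round(mdiff(a, d), 3))
```

When the item loads on a single dimension, `mdisc` reduces to the usual discrimination a and `mdiff` to the usual location b, so the unidimensional definitions fall out as a special case.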
Rubin, Lois S.; Mott, David E. W. – 1984
The effect of an item's position within a test on its difficulty value was investigated. Using a 60-item operational test composed of 5 subtests, 60 items were placed as experimental items on a number of spiralled test forms in three different positions (first, middle, last) within the subtest composed of like items.…
Descriptors: Difficulty Level, Item Analysis, Minimum Competency Testing, Reading Tests
Yen, Wendy M. – 1984
Unidimensional item response theory (IRT) is widely used in the analysis and equating of educational achievement tests. If an IRT model is true, item responses must be locally independent when the trait is held constant. Several measures of local dependence which are used in conjunction with the three-parameter logistic model in the analysis of…
Descriptors: Achievement Tests, Difficulty Level, Elementary Education, Equated Scores
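One such measure, Yen's Q3, is the correlation across examinees of two items' residuals (observed score minus model-expected score); under local independence it should be near zero for every item pair. A minimal sketch (data values hypothetical):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def q3(scores_i, scores_j, expected_i, expected_j):
    """Yen's Q3: correlation of two items' residuals (observed minus
    model-expected score) over examinees."""
    resid_i = [u - p for u, p in zip(scores_i, expected_i)]
    resid_j = [u - p for u, p in zip(scores_j, expected_j)]
    return pearson(resid_i, resid_j)

# Illustrative data for four examinees (values hypothetical):
obs_i = [1, 0, 1, 1]
obs_j = [1, 0, 1, 0]
exp_i = [0.8, 0.3, 0.7, 0.6]   # model-expected scores, item i
exp_j = [0.7, 0.2, 0.6, 0.5]   # model-expected scores, item j
print(round(q3(obs_i, obs_j, exp_i, exp_j), 3))
```

In practice the expected scores come from fitted IRT item parameters and ability estimates; a large positive Q3 for a pair of items (for instance, two questions about the same reading passage) flags a likely violation of local independence.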
Reid, Jerry B. – 1985
This report investigates an area of uncertainty in using the Angoff method for setting standards, namely whether or not a judge's conceptualizations of borderline group performance are realistic. Ratings are usually made with reference to the performance of this hypothetical group; the Angoff method's success therefore depends on this point.…
Descriptors: Certification, Cutting Scores, Difficulty Level, Interrater Reliability
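In the Angoff procedure, each judge estimates, for every item, the probability that a borderline (minimally competent) examinee would answer it correctly; each judge's ratings sum to an implied raw cut score, and the recommended cut is typically the mean across judges. A minimal sketch (all ratings hypothetical):

```python
def angoff_cut_score(ratings_by_judge):
    """Angoff standard setting: each judge rates, per item, the
    probability that a borderline examinee answers correctly.
    Each judge's ratings are summed to an implied raw cut score;
    the recommended cut is the mean across judges."""
    judge_cuts = [sum(ratings) for ratings in ratings_by_judge]
    return sum(judge_cuts) / len(judge_cuts)

# Hypothetical ratings from three judges on a four-item test:
ratings = [
    [0.6, 0.7, 0.5, 0.8],
    [0.5, 0.8, 0.4, 0.9],
    [0.7, 0.6, 0.6, 0.7],
]
print(round(angoff_cut_score(ratings), 2))  # 2.6 out of 4 raw points
```

The method's validity hinges on the realism of the judges' ratings, which is precisely the question the report examines: if judges misjudge how a borderline examinee would actually perform, the resulting cut score inherits that error.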
Winsberg, Suzanne; And Others – 1984
In most item response theory models a particular mathematical form is assumed for all item characteristic curves, e.g., a logistic function. It could be desirable, however, to estimate the shape of the item characteristic curves without prior restrictive assumptions about their mathematical form. We have developed a practical method of estimating…
Descriptors: Difficulty Level, Estimation (Mathematics), Goodness of Fit, Item Analysis
Peer reviewed
Cheng, Tina T.; And Others – AEDS Journal, 1985
Presents a validation procedure for the Computer Literacy Examination: Cognitive Aspect, a test assessing high school students' computer literacy levels. Steps in the test's construction process are explained, data collected during its validation phase are analyzed, and conclusions on its validity and reliability are discussed. The final test…
Descriptors: Achievement Gains, Computer Literacy, Content Analysis, Difficulty Level
Reckase, Mark D.; And Others – 1985
Factor analysis is the traditional method for studying the dimensionality of test data. However, under common conditions, the factor analysis of tetrachoric correlations does not recover the underlying structure of dichotomous data. The purpose of this paper is to demonstrate that the factor analysis of tetrachoric correlations is unlikely to…
Descriptors: Correlation, Difficulty Level, Factor Analysis, Item Analysis
Peer reviewed
Garrison, Wayne; And Others – American Annals of the Deaf, 1992
This study examined characteristics of multiple-choice reading comprehension tasks suspected of influencing their difficulty, through administration of the California Achievement Tests to 158 deaf college students. Problem components evaluated included manifest content, psychologically salient features, and processing demands. Variation in item…
Descriptors: Cognitive Processes, College Students, Deafness, Difficulty Level
Drasgow, Fritz; Parsons, Charles K. – 1982
The effects of a multidimensional latent trait space on estimation of item and person parameters by the computer program LOGIST are examined. Several item pools were simulated that ranged from truly unidimensional to an inconsequential general latent trait. Item pools with intermediate levels of prepotency of the general latent trait were also…
Descriptors: Computer Simulation, Computer Software, Difficulty Level, Item Analysis
Muraki, Eiji – 1984
The TESTFACT computer program and full-information factor analysis of test items were used in a computer simulation conducted to correct for the guessing effect. Full-information factor analysis also corrects for omitted items. The present version of TESTFACT handles up to five factors and 150 items. A preliminary smoothing of the tetrachoric…
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Correlation