Showing 346 to 360 of 492 results
Tollefson, Nona; Tripp, Alice – 1986
The item difficulty and item discrimination of three multiple-choice item formats were compared in experimental and non-experimental settings. In the experimental study, 104 graduate students were randomly assigned to complete one of three forms of a multiple-choice test: (1) a complex alternative ("none of the above") as the correct answer; (2) a…
Descriptors: Achievement Tests, Difficulty Level, Discriminant Analysis, Graduate Students
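For readers who want the indices above in concrete terms: classical item difficulty is the proportion of examinees answering an item correctly, and discrimination is typically the item's correlation with performance on the rest of the test. A minimal sketch in Python (the score matrix is invented for illustration, not data from the study):

    import numpy as np

    # Rows are examinees, columns are items; 1 = correct, 0 = incorrect.
    # Hypothetical score matrix, invented purely for illustration.
    X = np.array([[1, 0, 1],
                  [1, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0],
                  [1, 1, 1],
                  [0, 0, 1]])

    total = X.sum(axis=1)

    # Item difficulty: proportion correct (higher means easier).
    difficulty = X.mean(axis=0)

    # Item discrimination: point-biserial correlation between the item
    # score and the total on the remaining items (item-rest correlation).
    for j in range(X.shape[1]):
        rest = total - X[:, j]
        r_pb = np.corrcoef(X[:, j], rest)[0, 1]
        print(f"item {j}: p = {difficulty[j]:.2f}, r_pb = {r_pb:.2f}")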
Forster, Fred; And Others – 1978
Research on the Rasch model of test and item analysis was applied to tests constructed from reading and mathematics item banks, addressing five practical problems in scaling items and equating test forms. The questions were: (1) Does the Rasch model yield the same scale value regardless of the student sample? (2) How many students are…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores
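For context on the questions above: the Rasch model gives the probability of a correct response as a function of the gap between person ability and item difficulty alone, which is why sample-invariant scale values are plausible in principle. A minimal sketch (illustrative values only):

    import math

    def rasch_p(theta, b):
        # Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # The same ability-difficulty gap gives the same probability wherever
    # the pair sits on the scale, the basis of sample-free calibration.
    print(round(rasch_p(1.0, 0.0), 3))   # one-logit gap: ~0.731
    print(round(rasch_p(2.5, 1.5), 3))   # same gap, shifted scale: ~0.731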
Harris, Dickie A.; Penell, Roger J. – 1977
This study used a series of simulations to answer questions about the efficacy of adaptive testing raised by empirical studies. The first study showed that for reasonably high entry points, parameters estimated from paper-and-pencil test protocols cross-validated remarkably well to groups actually tested at a computer terminal. This suggested that…
Descriptors: Adaptive Testing, Computer Assisted Testing, Cost Effectiveness, Difficulty Level
Peer reviewed
O'Brien, Michael L. – Studies in Educational Evaluation, 1986
A test score can be used for individual instructional diagnosis after determining whether: (1) difficulty of the test items was consistent with the complexity of the content measured; (2) items measuring the same underlying process were about equally difficult; and (3) partial credit scoring would increase the reliability of the diagnosis. (LMO)
Descriptors: Behavioral Objectives, Difficulty Level, Educational Diagnosis, Error Patterns
Peer reviewed
Cheng, Tina T.; And Others – AEDS Journal, 1985
Presents a validation procedure for the Computer Literacy Examination: Cognitive Aspect, a test assessing high school students' computer literacy levels. Steps in the test's construction process are explained, data collected during its validation phase are analyzed, and conclusions on its validity and reliability are discussed. The final test…
Descriptors: Achievement Gains, Computer Literacy, Content Analysis, Difficulty Level
Peer reviewed
Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1989
Causes of differential item difficulty for blind students taking the braille edition of the Scholastic Aptitude Test's mathematical section were studied. Data for 261 blind students were compared with data for 8,015 non-handicapped students. Results show an association between selected item categories and differential item functioning. (TJH)
Descriptors: Braille, College Entrance Examinations, Comparative Analysis, Difficulty Level
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with three options are more difficult than those with four options, and that items including a none-of-these option are more difficult than those without it. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
Reckase, Mark D.; And Others – 1985
Factor analysis is the traditional method for studying the dimensionality of test data. However, under common conditions, the factor analysis of tetrachoric correlations does not recover the underlying structure of dichotomous data. The purpose of this paper is to demonstrate that the factor analysis of tetrachoric correlations is unlikely to…
Descriptors: Correlation, Difficulty Level, Factor Analysis, Item Analysis
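The simulation design behind such a demonstration can be sketched briefly: generate strictly unidimensional dichotomous data with a wide spread of item difficulties and inspect the eigenvalues of the inter-item correlation matrix. The sketch below uses ordinary Pearson (phi) correlations as a stand-in, since tetrachoric estimation requires specialized routines; all parameters are invented:

    import numpy as np

    rng = np.random.default_rng(0)
    n_persons, n_items = 2000, 20

    theta = rng.normal(0.0, 1.0, n_persons)      # a single latent trait
    b = np.linspace(-2.0, 2.0, n_items)          # wide spread of difficulties

    # Unidimensional Rasch-type data by construction.
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    X = (rng.random((n_persons, n_items)) < p).astype(float)

    # Eigenvalues of the inter-item (phi) correlation matrix. With one
    # latent trait we would hope for a single dominant eigenvalue, but
    # the difficulty spread tends to inflate apparent dimensionality.
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(X.T))[::-1]
    print(np.round(eigenvalues[:5], 2))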
Schmitt, Alicia P.; Bleistein, Carole A. – 1987
The purpose of this investigation was to identify item factors that may contribute to differential item functioning (DIF) for black examinees on Scholastic Aptitude Test (SAT) analogy items. Initially, items were classified according to several possible explanatory factors. Preliminary analyses identified several factors that seemed to affect DIF…
Descriptors: Analogy, Black Students, College Entrance Examinations, Difficulty Level
Ervin, Nancy S. – 1988
This study examined how accurately deltas (statistics measuring item difficulty) established on pre-test populations reflect the deltas obtained from final-form populations, and consequently how useful pre-test deltas are for constructing final (operational) test forms that meet developed statistical specifications. Data were examined from five subject…
Descriptors: Achievement Tests, College Entrance Examinations, Difficulty Level, Higher Education
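For context, the delta here is ETS's item difficulty index: the proportion answering correctly is converted to a normal deviate and rescaled to a mean of 13 and standard deviation of 4, so harder items receive higher deltas. A quick sketch (the p-values are invented):

    from statistics import NormalDist

    def delta(p_correct):
        # ETS delta index: 13 + 4z, where z is the normal deviate of the
        # proportion answering incorrectly. Higher delta = harder item.
        return 13.0 + 4.0 * NormalDist().inv_cdf(1.0 - p_correct)

    for p in (0.84, 0.50, 0.16):
        print(f"p = {p:.2f} -> delta = {delta(p):.1f}")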
Choppin, Bruce – 1982
A strategy for overcoming the Rasch model's inability to handle missing data involves a pairwise algorithm that manipulates the data matrix to separate out the information needed to estimate item difficulty parameters in a test. The method of estimation compares two or three items at a time, separating out the ability…
Descriptors: Difficulty Level, Estimation (Mathematics), Goodness of Fit, Item Analysis
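The pairwise idea is compact enough to sketch: for each pair of items, count the examinees who got the first right and the second wrong, and vice versa; the log of that ratio estimates the difficulty difference, and only examinees who attempted both items contribute, which is how missing data is tolerated. A minimal illustration (not Choppin's actual code; the response matrix is invented, with NaN marking missing responses):

    import numpy as np

    # 1 = correct, 0 = incorrect, NaN = item not taken (missing data).
    # Invented response matrix, purely for illustration.
    X = np.array([[1.0, 0.0, np.nan],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [np.nan, 1.0, 0.0]])

    def pairwise_diff(X, i, j):
        # Estimate b_j - b_i from examinees who attempted both items:
        # under the Rasch model, ln(n_ij / n_ji) converges to b_j - b_i,
        # where n_ij counts "i right, j wrong" and n_ji the reverse.
        both = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
        n_ij = np.sum((X[both, i] == 1) & (X[both, j] == 0))
        n_ji = np.sum((X[both, i] == 0) & (X[both, j] == 1))
        return np.log(n_ij / n_ji)

    print(round(pairwise_diff(X, 0, 1), 2))   # positive: item 1 harder here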
Ironson, Gail H.; Craig, Robert – 1982
This study was designed to increase knowledge of the functioning of item bias techniques in detecting biased items. Previous studies have used computer-generated data or real data with unknown amounts of bias. The present project extends previous studies by using items that are logically generated and subjectively evaluated a priori to be biased…
Descriptors: Ability Grouping, Difficulty Level, Higher Education, Item Analysis
Berk, Ronald A. – 1978
Sixteen item statistics recommended for use in the development of criterion-referenced tests were evaluated. There were two major criteria: (1) practicability in terms of ease of computation and interpretation and (2) meaningfulness in the context of the development process. Most of the statistics were based on a comparison of performance changes…
Descriptors: Achievement Tests, Criterion Referenced Tests, Difficulty Level, Guides
Cohen, Allan S.; Kappy, Kathleen A. – 1980
The ability of the Rasch model to provide item difficulties and achievement test scores which are invariant is studied. Data for the study were obtained from students in grades 3 through 7 who took the Sequential Tests of Educational Progress (STEP III) Reading and Mathematics Concepts tests during a spring norming study. Each test contained 50…
Descriptors: Achievement Tests, Difficulty Level, Elementary Education, Item Analysis
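The invariance question can be checked in a few lines: calibrate item difficulties separately in two subsamples and see whether the estimates agree up to an arbitrary origin shift. A rough sketch using centered log-odds of classical p-values as a crude stand-in for Rasch calibration (the study itself used Rasch estimates; the data below are simulated):

    import numpy as np

    rng = np.random.default_rng(1)
    theta = rng.normal(0.0, 1.0, 4000)
    b = np.linspace(-1.5, 1.5, 10)
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    X = (rng.random((4000, 10)) < p).astype(float)

    def crude_difficulty(scores):
        # Log-odds of an incorrect response, centered to remove the
        # arbitrary origin of the difficulty scale.
        pc = scores.mean(axis=0)
        d = np.log((1.0 - pc) / pc)
        return d - d.mean()

    half = len(X) // 2
    d1 = crude_difficulty(X[:half])
    d2 = crude_difficulty(X[half:])
    print(round(np.corrcoef(d1, d2)[0, 1], 3))   # near 1.0 when invariant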
Bratfisch, Oswald – 1972
Nine studies are summarized which investigated the relation between attributes of performance as perceived by the subject and corresponding objective measurements. The attributes studied were: (1) intellectual activity perceived to be involved when dealing with a task (Studies 1 and 2), and (2) perceived difficulty (Studies 4 to 9). Study 3…
Descriptors: Cognitive Measurement, Correlation, Difficulty Level, Intellectual Experience