Showing 3,151 to 3,165 of 5,131 results
Peer reviewed
Rudner, Lawrence M. – Educational and Psychological Measurement, 1983
The relationship between item parameter values obtained from independent Birnbaum (1968) model calibrations of the same item set from two different samples is examined. (Author)
Descriptors: Achievement Tests, Aptitude Tests, Item Analysis, Latent Trait Theory
Peer reviewed
Masters, Geoffrey N. – Educational and Psychological Measurement, 1984
DICOT, a computer program for the Rasch analysis of classroom tests, is described. Results are presented in a self-explanatory form. Person ability and item difficulty estimates are expressed in a familiar metric. Person and item fit statistics provide a diagnosis of individual children and identification of problematic items. (Author/DWH)
Descriptors: Classroom Techniques, Foreign Countries, Item Analysis, Latent Trait Theory
Peer reviewed
Wilcox, Rand R. – Psychometrika, 1983
A procedure is presented for determining the reliability of concluding that an examinee knows k out of n possible multiple-choice items, given his or her performance on those items. A scoring procedure for determining which items an examinee knows is also presented. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Measurement Techniques, Multiple Choice Tests
Peer reviewed
Stegelmann, Werner – Psychometrika, 1983
The Rasch model is generalized to a multicomponent model, so that observations of component events are not needed to apply the model. It is shown that the generalized model maintains the property of specific objectivity of the Rasch model. An application to a mathematics test is provided. (Author/JKS)
Descriptors: Estimation (Mathematics), Item Analysis, Latent Trait Theory, Mathematical Models
Vacc, Nicholas A.; Loesch, Larry C.; Lubik, Ruth E. – 2001
Multiple choice tests are widely viewed as the most effective and objective means of assessment. Item development is the central component of creating an effective test, but test developers often lack a background in item development. This document describes recall, application, and analysis, the three cognitive levels of test items. It…
Descriptors: Educational Assessment, Evaluation, Item Analysis, Measures (Individuals)
Wang, Jianjun; Staver, John – 1999
Development of the test instrument in the Third International Mathematics and Science Study (TIMSS) was based on the expertise of many researchers, including "distinguished scholars from 10 countries" who participated on the TIMSS Subject Matter Advisory Committee. However, a close examination of the TIMSS Science items suggests that not…
Descriptors: Achievement Rating, Elementary Secondary Education, Foreign Countries, Item Analysis
Liu, Jinghua; Schuppan, Fred; Walker, Michael E. – College Board, 2005
This study explored whether the addition of the items with more advanced math content to the SAT Reasoning Test™ (SAT®) would impact test-taker performance. Two sets of SAT math equating sections were modified to form four subforms each. Different numbers of items with advanced content, taken from the SAT II: Mathematics Level IC Test (Math IC),…
Descriptors: College Entrance Examinations, Mathematics Tests, Test Items, Difficulty Level
Peer reviewed
Bennett, S. N. – British Journal of Educational Psychology, 1973
The JEPI was re-analysed by item, factor and cluster analysis. (Editor)
Descriptors: Cluster Analysis, Correlation, Educational Psychology, Educational Research
Peer reviewed
McQuitty, Louis L. – Educational and Psychological Measurement, 1973
Paper analyzes additive variance in such a fashion that it supports both a theory of types and a cognitive-frustration theory of behavior in the development of tests designed to assess "mental disturbance" versus "normality" among college students. (Author)
Descriptors: Emotional Disturbances, Item Analysis, Psychological Testing, Tables (Data)
Peer reviewed
Board, Cynthia; Whitney, Douglas R. – Journal of Educational Measurement, 1972
For the principles studied here, poor item-writing practices serve to obscure (or attenuate) differences between good and poor students. (Authors)
Descriptors: College Students, Item Analysis, Multiple Choice Tests, Test Construction
Peer reviewed
Pyrczak, Fred – Reading Research Quarterly, 1972
Descriptors: Item Analysis, Multiple Choice Tests, Reading Comprehension, Reading Research
Peer reviewed
Gerst, Marvin S.; Moos, Rudolf H. – Journal of Educational Psychology, 1972
The development, initial standardization, and substantive data of the University Residence Environment Scales (URES) are presented. (Authors)
Descriptors: Comparative Analysis, Dormitories, Environment, Evaluation Criteria
Peer reviewed
Oosterhof, Albert C.; Kocher, A. Thel – Educational and Psychological Measurement, 1972
A listing of this program which includes illustrative input and output can be obtained by writing either author, Bureau of Educational Research, The University of Kansas, Lawrence, Kansas. (Authors)
Descriptors: Computer Programs, Feedback, Input Output, Item Analysis
Bishop, A. J.; and others – Int J Educ Sci, 1969
Descriptors: Item Analysis, Multiple Choice Tests, Questioning Techniques, Test Construction
Peer reviewed
MacGregor, Ronald – Studies in Art Education, 1972
The author developed an instrument, called the Perceptual Index, designed to provide teachers of art with a valid and reliable measure of response to visual stimuli. (Author/MB)
Descriptors: Art Education, Elementary School Students, Item Analysis, Measurement Instruments