Showing 3,736 to 3,750 of 5,169 results
Peer reviewed
Newland, T. Ernest – Journal of School Psychology, 1973
Error in assessing learning aptitude inheres much more in the users of the tests than in the tests themselves. Assumptions fundamental to such assessment are considered. It is particularly important that the tester constantly be sensitive to the nature of the relationship between the psychological demands of test items or tests and the learning…
Descriptors: Academic Aptitude, Examiners, Item Analysis, Learning Processes
Peer reviewed
Oller, John W., Jr.; And Others – Language Learning, 1972
Descriptors: Cloze Procedure, English (Second Language), High School Students, Item Analysis
Peer reviewed
Cranney, A. Garr – Journal of Reading Behavior, 1972
Descriptors: Cloze Procedure, College Students, Item Analysis, Multiple Choice Tests
Peer reviewed
Herold, Edward S. – Adolescence, 1973
Support for the concurrent validity of the scale was given when four indicators of direct dating experience were found to be significantly related to the scale scores. (Author)
Descriptors: College Students, Dating (Social), Item Analysis, Measurement Instruments
Peer reviewed
Costin, Frank – Educational and Psychological Measurement, 1972
This study confirmed the practical benefits of three-choice items. (Author)
Descriptors: Achievement Tests, Cues, Item Analysis, Multiple Choice Tests
Peer reviewed
Ace, Merle C.; Dawis, Rene V. – Educational and Psychological Measurement, 1973
Because no previous study was found in which both blank position in the item stem and positional placement of the correct response were studied simultaneously, it was decided to investigate the influence of these two factors, alone and in combination, on the difficulty level of verbal analogy items. (Authors)
Descriptors: Analysis of Variance, Data Analysis, Difficulty Level, Disadvantaged
Peer reviewed
Sax, Gilbert; And Others – Journal of Experimental Education, 1972
One purpose of this study was to investigate the effects of different levels of item complexity on subsequent student achievement to help clarify the effectiveness of the transfer and hierarchical models. (Authors/MB)
Descriptors: Achievement, Cognitive Processes, Difficulty Level, Interaction Process Analysis
Criscuolo, Nicholas P. – Minn Reading Quart, 1970
Descriptors: Achievement Tests, Disadvantaged, Inner City, Instructional Improvement
Ratter, George S.; Tinkleman, Vera – Educ Psychol Meas, 1970
The placement or ordering of items on behavior rating scales which elicit extreme responses (anchor stimulus items) affects the responses given to neutral items. (DG)
Descriptors: Behavior Rating Scales, Content Analysis, Item Analysis, Measurement Techniques
Peer reviewed
Finkbeiner, Daniel T.; And Others – Mathematics Teacher, 1971
Descriptors: Advanced Placement Programs, Calculus, College Entrance Examinations, Item Analysis
Peer reviewed
Oller, John W., Jr.; Inal, Nevin – TESOL Quarterly, 1971
Descriptors: Cloze Procedure, Comparative Analysis, Educational Experiments, English (Second Language)
Peer reviewed
Schrock, Timothy J.; Mueller, Daniel J. – Journal of Educational Research, 1982
Three item-construction principles for multiple-choice tests were studied to determine how they affected test results for high school students: (1) use of incomplete sentence stem; (2) location of blank in the stem; and (3) presence of noncueing material. Differences in item construction had a slight effect on test results. (Authors/CJ)
Descriptors: Cues, High School Students, High Schools, Item Analysis
Stratton, N. J. – Teaching at a Distance, 1981
A study of recurrent faults in multiple-choice items in the computer-marked tests of Britain's Open University has led to a procedure for avoiding these faults. A description of the study covers the incidence and sources of faults (obviousness, memorization, unclear instruction, ambiguity, distractors, inter-item effects, and structure) and…
Descriptors: Error Patterns, Foreign Countries, Higher Education, Item Analysis
Peer reviewed
Scheuneman, Janice – Journal of Educational Measurement, 1979
This paper presents a chi square method for assessing bias in test items. In this procedure an unbiased item is defined as an item for which the probability of a correct response is the same for any person of a given ability level regardless of that person's ethnic group. (Author/CTM)
Descriptors: Cultural Differences, Culture Fair Tests, Item Analysis, Primary Education
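The Scheuneman abstract above defines an unbiased item as one whose probability of a correct response is the same, at a given ability level, across ethnic groups. As a minimal illustrative sketch (not the paper's own implementation), the Python function below groups examinees into total-score bands and compares each group's observed count of correct responses with the count expected if correctness depended only on the band; the function name, quantile banding, and degrees-of-freedom handling are assumptions added for illustration.

import numpy as np
from scipy.stats import chi2

def item_bias_chi_square(correct, total_score, group, n_bands=5):
    # Hypothetical inputs: correct = 0/1 responses to the studied item,
    # total_score = ability proxy, group = group label per examinee.
    correct = np.asarray(correct, dtype=float)
    total_score = np.asarray(total_score, dtype=float)
    group = np.asarray(group)

    # Partition examinees into ability bands by quantiles of the total score.
    edges = np.quantile(total_score, np.linspace(0, 1, n_bands + 1))
    bands = np.clip(np.searchsorted(edges, total_score, side="right") - 1,
                    0, n_bands - 1)

    stat, df = 0.0, 0
    for b in range(n_bands):
        in_band = bands == b
        if not in_band.any():
            continue
        p_band = correct[in_band].mean()  # pooled proportion correct in the band
        if p_band in (0.0, 1.0):
            continue  # a degenerate band contributes no discrepancy
        groups_in_band = np.unique(group[in_band])
        for g in groups_in_band:
            mask = in_band & (group == g)
            observed = correct[mask].sum()   # correct responses in group g
            expected = mask.sum() * p_band   # expected if only ability matters
            stat += (observed - expected) ** 2 / expected
        df += len(groups_in_band) - 1

    p_value = 1.0 - chi2.cdf(stat, df) if df > 0 else float("nan")
    return stat, df, p_value

A real analysis would also require decisions about band boundaries and minimum cell sizes, which this sketch glosses over.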
Peer reviewed
Ekstrom, Ruth B.; And Others – Educational Horizons, 1979
Content analyses for sex bias were conducted on items from three widely used achievement tests which together span grade levels 1-12. A significant but modest correlation was found between an item's content bias and performance on that item by male and female students. (SJL)
Descriptors: Academic Achievement, Achievement Tests, Content Analysis, Elementary Secondary Education