Showing 3,166 to 3,180 of 5,131 results
Michael, Joan J.; Michael, William B. – Educ Psychol Meas, 1969
Descriptors: Academic Achievement, College Mathematics, Item Analysis, Multiple Choice Tests
Peer reviewed
McKenzie, Gary R. – American Educational Research Journal, 1972
This study offers one bit of evidence that quizzes written to require reasoning are more effective in attaining "thinking" objectives than are recall quizzes. (Author)
Descriptors: Abstract Reasoning, Cognitive Processes, Comparative Testing, Grade 8
Peer reviewed
Huck, Schuyler W.; Bowers, Norman D. – Journal of Educational Measurement, 1972
Study investigated whether the proportion of examinees who answer an item correctly may be influenced by the difficulty of the immediately preceding item. (Authors/MB)
Descriptors: Achievement Tests, Difficulty Level, Hypothesis Testing, Item Analysis
Adams, Don; Farrell, Joseph P. – Comp Educ, 1969
The nature of societal differentiation and the meaning and measurement of educational differentiation are studied. (Editor)
Descriptors: Classification, Educational Change, Educational Development, Educational Objectives
Entwistle, N. J.; Entwistle, Dorothy – Brit J Educ Psychol, 1970
Descriptors: Academic Achievement, College Students, Item Analysis, Personality Studies
Sawyer, R. N. – Measurement and Evaluation in Guidance, 1971
This study investigated the reliability and validity of the Philosophical Belief Inventory (PBI). With the exception of the relationships between idealism and pragmatism and between realism and existentialism, the PBI scales appear to be assessing independent facets of belief. (Author)
Descriptors: College Students, Counselors, Item Analysis, Psychological Testing
Anderson, Gary J.; Walberg, Herbert J. – Educational Sciences - An International Journal, 1968
Descriptors: Academic Achievement, Educational Environment, Group Dynamics, Item Analysis
Fremer, John; Anastasio, Ernest J. – J Educ Meas, 1969
Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, Illinois, February 1968.
Descriptors: Computer Assisted Instruction, Computer Programs, Item Analysis, Programed Instructional Materials
Bannatyne, Alex D. – J Learning Disabilities, 1969
Descriptors: Cerebral Dominance, Exceptional Child Research, Item Analysis, Memory
Peer reviewed
Harnisch, Delwyn L. – Journal of Educational Measurement, 1983
The Student-Problem (S-P) methodology is described using an example of 24 students on a test of 44 items. Information based on the students' test score and the modified caution index is put to diagnostic use. A modification of the S-P methodology is applied to domain-referenced testing. (Author/CM)
Descriptors: Academic Achievement, Educational Practices, Item Analysis, Responses
Peer reviewed
Carlson, Jerry S.; Jensen, C. Mark – Journal of Consulting and Clinical Psychology, 1981
Reliabilities for the Raven Colored Progressive Matrices Test (CPM) are reported for three age groups (5 1/2-6 1/2, 6 1/2-7 1/2, and 7 1/2-8 1/2 years) and three ethnic groups (Anglo, Black, and Hispanic). Results indicate the CPM is not equally reliable for all age groups, but appears equally reliable for the three ethnic groups. (Author)
Descriptors: Age Differences, Blacks, Children, Comparative Analysis
Peer reviewed
Haynes, Stephen N.; And Others – Journal of Consulting and Clinical Psychology, 1981
Examined the discriminant validity, criterion-related validity, and between-spouse agreement for a marital intake interview as a function of interview method and valence of interview items. Results suggested a high degree of discriminant validity, higher interspouse correlations during joint interviews, and higher indices of criterion-related…
Descriptors: Comparative Analysis, Data Collection, Evaluation Methods, Interviews
Peer reviewed
Barker, Douglas; Ebel, Robert L. – Contemporary Educational Psychology, 1982
Two forms of an undergraduate examination were constructed. Tests varied with respect to item truth value (true, false) and method of phrasing (positive, negative). Negatively stated items were more difficult but not more discriminating than positively stated items. False items were not more difficult but were more discriminating than true items.…
Descriptors: Difficulty Level, Higher Education, Item Analysis, Response Style (Tests)
Peer reviewed
Beuchert, A. Kent; Mendoza, Jorge L. – Journal of Educational Measurement, 1979
Ten item discrimination indices, across a variety of item analysis situations, were compared, based on the validities of tests constructed by using each of the indices to select 40 items from a 100-item pool. Item score data were generated by a computer program and included a simulation of guessing. (Author/CTM)
Descriptors: Item Analysis, Simulation, Statistical Analysis, Test Construction
Peer reviewed
Sachar, Jane; Suppes, Patrick – Educational and Psychological Measurement, 1980
The present study compared six methods, two of which utilize the content structure of items, to estimate total-test scores using 450 students and 60 items of the 110-item Stanford Mental Arithmetic Test. Three methods yielded fairly good estimates of the total-test score. (Author/RL)
Descriptors: Content Analysis, Correlation, Item Analysis, Item Sampling