Showing 1,906 to 1,920 of 3,126 results
Peer reviewed
Pajares, Frank; Miller, M. David – Journal of Experimental Education, 1997
The mathematics self-efficacy and problem-solving performance of 327 middle school students were assessed with multiple-choice and open-ended methods. No differences in self-efficacy resulted from the different forms of assessment, although those who took the multiple-choice test had higher scores and better calibration of ability. (SLD)
Descriptors: Ability, Educational Assessment, Mathematics, Middle School Students
Peer reviewed
Dochy, Filip; Moerkerke, George; De Corte, Erik; Segers, Mien – European Journal of Psychology of Education, 2001
Focuses on the discussion of whether "none of the above" (NOTA) questions should be used on tests. Discusses a study in which a protocol analysis was conducted on examinees' written statements while they answered NOTA items. Explains that a multiple-choice test was given to university students, finding that NOTA options seem to be more attractive.…
Descriptors: College Students, Educational Research, Higher Education, Skill Development
Peer reviewed
Good, Frances – Educational Studies, 1989
Considers issues surrounding the use of differentiated examinations. Discusses how differentiation may be provided, the wording of questions, and how marks should be given. Highlights some pitfalls of using this approach. Concludes that, although differentiated examinations are possible, they will not always meet the needs of the end range of test…
Descriptors: Educational Research, Elementary Secondary Education, Evaluation, Foreign Countries
Sternberg, Robert J. – Learning, 1989
Standardized tests that measure a narrow span of intelligence unfairly penalize students whose strengths fall outside that range. Three kinds of intelligence (analytical, creative, practical) are discussed. Sternberg's Triarchic Abilities Test, currently being piloted, assesses all three aspects of intelligence in contrast to current…
Descriptors: Accountability, Cognitive Processes, Creativity, Elementary Secondary Education
Griffiths, Sue – Use of English, 1989
Examines the format and nature of the two required writing tasks in the General Certificate of Secondary Education (GCSE) English Examination. Asserts that the test assignments do not properly assess writing ability and are problematic, tedious, and ineptly conceived ordeals for pupils. Discusses the use of writing folders as an alternative. (KEH)
Descriptors: Educational Testing, English Instruction, Foreign Countries, Secondary Education
Peer reviewed
Zoller, Uri; Ben-Chaim, David – Journal of Research in Science Teaching, 1989
Surveys the trait anxiety profiles and examination-type preferences of preservice science teachers. Reports that the students prefer examinations that call for understanding and analysis rather than knowing and remembering, and that students' anxiety states correlate with the type of examination and with sex. (Author)
Descriptors: Achievement Tests, College Science, Preservice Teacher Education, Science Education
Peer reviewed
Balch, William R. – Teaching of Psychology, 1989
Studies the effect of item order on test scores and completion time. Students scored slightly higher when test items were ordered sequentially (following the text and lectures) than when items were grouped by text chapter but ordered randomly, or were ordered entirely at random. Found no differences in completion time. (Author/LS)
Descriptors: Educational Research, Higher Education, Performance, Psychology
Peer reviewed
Wainer, Howard; And Others – Journal of Educational Measurement, 1994
The comparability of scores on test forms that are constructed through examinee item choice is examined in an item response theory framework. The approach is illustrated with data from the College Board's Advanced Placement Test in Chemistry taken by over 18,000 examinees. (SLD)
Descriptors: Advanced Placement, Chemistry, Comparative Analysis, Constructed Response
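The comparability question in the Wainer et al. abstract can be made concrete with a standard IRT sketch; the two-parameter logistic form below is an illustrative assumption, since the abstract does not say which model the authors fit. Once all items, chosen or not, are calibrated onto a common scale,

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]},

an examinee's ability can be estimated from whatever subset C_i of items that examinee elected to answer,

L(\theta_i) = \prod_{j \in C_i} P_{ij}^{x_{ij}} (1 - P_{ij})^{1 - x_{ij}},

and scores are comparable across forms only to the extent that choosing an item leaves its parameters a_j and b_j unchanged.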
Peer reviewed
Bruno, James E.; Dirkzwager, A. – Educational and Psychological Measurement, 1995
Determining the optimal number of choices on a multiple-choice test is explored analytically from an information theory perspective. The analysis revealed that, in general, three choices seem optimal. This finding is in agreement with previous statistical and psychometric research. (SLD)
Descriptors: Distractors (Tests), Information Theory, Multiple Choice Tests, Psychometrics
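A rough way to see why a small number of options can be efficient, sketched under assumptions that are mine rather than the article's: suppose each extra option adds a roughly constant amount of reading time, while the information a single response can carry grows only as log2 of the option count. The timing constants below are purely illustrative.

import math

# Illustrative timing assumptions (not taken from Bruno & Dirkzwager):
# a fixed cost for reading the stem plus a constant cost per option.
STEM_MINUTES = 0.50
MINUTES_PER_OPTION = 0.75

def bits_per_minute(num_options: int) -> float:
    """Upper bound on information rate: log2(k) bits per response / time per item."""
    bits = math.log2(num_options)  # max bits one k-choice response can carry
    minutes = STEM_MINUTES + MINUTES_PER_OPTION * num_options
    return bits / minutes

for k in range(2, 7):
    print(f"{k} options: {bits_per_minute(k):.3f} bits per minute")

With these particular constants the rate peaks at three options, which is consistent with, though not a derivation of, the article's conclusion.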
Peer reviewed
Wilson, Mark; Wang, Wen-chung – Applied Psychological Measurement, 1995
Data from the California Learning Assessment System mathematics assessment were used to examine issues that arise when scores from different assessment modes are combined. Multiple-choice, open-ended, and investigation items were combined in a test across three test forms. Results illustrate the difficulties faced in evaluating combined…
Descriptors: Educational Assessment, Equated Scores, Evaluation Methods, Item Response Theory
Peer reviewed
Mellenbergh, Gideon J. – Multivariate Behavioral Research, 1994
A general linear latent trait model for continuous item responses is described. Its special unidimensional case for continuous item responses is K. G. Joreskog's (1971) congeneric model. The correspondence between models for continuous and dichotomous item responses is shown to be closer than usually supposed. (SLD)
Descriptors: Attitude Measures, Item Bias, Item Response Theory, Personality Measures
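For readers unfamiliar with the congeneric model the Mellenbergh abstract invokes, Joreskog's (1971) form writes each continuous item response as a linear function of a single latent trait (the notation here is generic, not necessarily the article's):

X_j = \mu_j + \lambda_j \Theta + E_j, \qquad E_j \sim N(0, \sigma_j^2),

with item-specific intercepts \mu_j, loadings \lambda_j, and error variances \sigma_j^2. One standard way to see the correspondence with dichotomous items is to keep the same linear structure on a latent response and add a threshold, which yields a normal-ogive item response function such as P(X_j = 1 \mid \Theta) = \Phi(\lambda_j \Theta - \tau_j).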
Peer reviewed
Prewett, Peter N. – Psychological Assessment, 1995
Examined the concurrent validity of two brief intelligence tests, the Matrix Analogies Test-Short Form (MAT) and the Kaufman Brief Intelligence Test (K-BIT), with the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) using a sample of 50 urban students. The MAT and K-BIT appeared equally useful as screening tests. (SLD)
Descriptors: Children, Comparative Analysis, Concurrent Validity, Correlation
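Concurrent validity in a study like Prewett's is usually summarized as the correlation between each screening test and the criterion measure given at roughly the same time. A minimal sketch with entirely hypothetical scores (the article's data are not reproduced here):

import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (sx * sy)

# Hypothetical screening and criterion scores for a handful of students.
kbit_iq = [95, 102, 88, 110, 99, 120, 85, 105]
wisc3_fsiq = [98, 100, 90, 112, 96, 118, 88, 101]

print(f"K-BIT vs. WISC-III FSIQ: r = {pearson_r(kbit_iq, wisc3_fsiq):.2f}")

The same function would be applied to the MAT scores and the two correlations compared.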
Peer reviewed
Downing, Steven M.; And Others – Applied Measurement in Education, 1995
The criterion-related validity evidence and other psychometric characteristics of multiple-choice and multiple true-false (MTF) items in medical specialty certification examinations were compared using results from 21,346 candidates. Advantages of MTF items and implications for test construction are discussed. (SLD)
Descriptors: Cognitive Ability, Licensing Examinations (Professions), Medical Education, Objective Tests
Peer reviewed
Benshoff, James M.; Thomas, Wayne P. – Counselor Education and Supervision, 1992
Reexamined the Counselor Evaluation Rating Scale (CERS) using confirmatory factor analysis. Analyzed 185 CERS self-ratings. Findings suggest that, when counselors use the CERS to rate themselves, factors may emerge that differ from those found when experienced supervisors use the CERS to evaluate supervisee progress and performance. (Author/NB)
Descriptors: Counselor Training, Evaluation Methods, Factor Structure, Graduate Students
Peer reviewed
Frisbie, David A.; Becker, Douglas F. – Applied Measurement in Education, 1990
Seventeen educational measurement textbooks were reviewed to analyze current perceptions regarding true-false achievement testing. A synthesis of the rules for item writing is presented, and the purported advantages and disadvantages of the true-false format derived from those texts are reviewed. (TJH)
Descriptors: Achievement Tests, Higher Education, Methods Courses, Objective Tests