Showing 1,996 to 2,010 of 3,093 results
Peer reviewed
Kobrin, Jennifer L.; Young, John W. – Applied Measurement in Education, 2003
Studied the cognitive equivalence of computerized and paper-and-pencil reading comprehension tests using verbal protocol analysis. Results for 48 college students indicate that the only significant difference between the computerized and paper-and-pencil tests was in the frequency of identifying important information in the passage. (SLD)
Descriptors: Cognitive Processes, College Students, Computer Assisted Testing, Difficulty Level
Peer reviewed
McCallum, R. Steve; Karnes, Frances A. – Journal of School Psychology, 1990
Compared area scores from a short-form version of the Stanford-Binet Intelligence Scale (Fourth Edition) with those from the long form for 33 gifted children. Found that three of five mean difference contrasts were significantly different and that correlation coefficients between corresponding area scores and the Test Composite were statistically significant. Suggests that…
Descriptors: Academically Gifted, Comparative Testing, Elementary Education, Elementary School Students
Watanabe, Addison; Algozzine, Bob – Diagnostique, 1989
This article discusses ways teachers may vary and modify teacher-made tests to better facilitate prescriptive programming for special education students. Formats featuring statements, object identification, gestures, and writing are discussed, as are item alterations in terms of both item presentation and student responses. (PB)
Descriptors: Achievement Tests, Disabilities, Elementary Secondary Education, Evaluation Methods
Peer reviewed
Veloski, J. Jon; And Others – Evaluation and the Health Professions, 1990
Part III of the National Board Examination--a certifying examination of medical knowledge and patient management abilities--was assessed using 1,866 first-year residents. This 15-year study comparing Part III results with those of Parts I and II and with superiors' ratings indicates Part III's validity and provides a model for future research.
Descriptors: Analysis of Covariance, Clinical Diagnosis, Computer Assisted Testing, Licensing Examinations (Professions)
Peer reviewed
Tamir, Pinchas – Journal of Biological Education, 1989
Students' justifications for their responses in the three item formats provided were compared. The study confirmed the usefulness of justifications as a diagnostic tool and offered recommendations regarding the use of justifications, including their use in the construction of two-tier items. (Author/CW)
Descriptors: Foreign Countries, Science Education, Science Instruction, Science Tests
Peer reviewed
Henly, Susan J.; And Others – Applied Psychological Measurement, 1989
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive versions of the Differential Aptitude Test (DAT). Results for 332 students indicate that the computerized version of the DAT is an adequate representation of the conventional test battery. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed
Nickerson, Raymond S. – Educational Researcher, 1989
Discusses issues involved in the construction, validity, and use of tests that evaluate educational progress, especially those that assess higher-order cognitive functioning. Reviews the four articles in this special issue. (FMW)
Descriptors: Cognitive Measurement, Educational Testing, Elementary Secondary Education, Evaluation
Peer reviewed
Schriesheim, Chester A.; And Others – Educational and Psychological Measurement, 1989
Three studies explored the effects of grouped versus randomized questionnaire items on internal consistency and test-retest reliability, with samples of 80, 80, and 100 university students and undergraduates, respectively. The two correlational and one experimental studies were reasonably consistent in demonstrating that neither format was…
Descriptors: Classification, College Students, Evaluation Methods, Higher Education
Peer reviewed
Statman, Stella – SYSTEM, 1988
Multiple-choice items formatted as a question followed by four options, one of which is correct, are a clearer and more valid way of testing the reading comprehension of foreign learners of English than is the common format in which the testee must complete a sentence stem by choosing among four options. (Author/CB)
Descriptors: Cloze Procedure, English (Second Language), Language Tests, Multiple Choice Tests
Bracey, Gerald W. – High School Magazine, 1993
Describes four criteria that can be used to evaluate methods of assessment: (1) "What are the consequences of using the test?" (2) "Is this assessment fair?" (3) "Do the skills and knowledge of this assessment transfer or generalize?" and (4) "Does this assessment cover cognitively complex tasks?" (KDP)
Descriptors: Alternative Assessment, Evaluation Methods, High Schools, Performance Based Assessment
Peer reviewed
Page, Gordon; And Others – Academic Medicine, 1995
An approach to testing medical students' clinical decision-making skills identifies key features (critical steps in resolution of a clinical problem) and presents a clinical case scenario followed by questions focusing on those key features. Key-feature problems provide flexibility on issues of question format, multiple responses to questions, and…
Descriptors: Clinical Diagnosis, Decision Making, Evaluation Methods, Higher Education
Peer reviewed
Vogel, Lora Ann – Journal of Educational Computing Research, 1994
Reports on a study conducted to evaluate how individual differences in anxiety levels affect performance on computer versus paper-and-pencil forms of verbal sections of the Graduate Record Examination. Contrary to the research hypothesis, analysis of scores revealed that extroverted and less computer anxious subjects scored significantly lower on…
Descriptors: Comparative Analysis, Computer Anxiety, Computer Assisted Testing, Computer Attitudes
Peer reviewed
Dunham, Trudy C.; Davison, Mark L. – Applied Measurement in Education, 1990
The effects of packing or skewing the response options of a scale on the common measurement problems of leniency and range restriction in instructor ratings were assessed. Results from a sample of 130 undergraduate education students indicate that packing reduced leniency but had no effect on range restriction. (TJH)
Descriptors: Education Majors, Higher Education, Professors, Rating Scales
Peer reviewed
Cizek, Gregory J. – Educational and Psychological Measurement, 1994
Examined the performance of a common set of test items on an examination in which the order of options for one test form was experimentally manipulated. Results for 759 medical specialty board examinees find that reordering item options results in significant but unpredictable effects on item difficulty. (SLD)
Descriptors: Change, Difficulty Level, Equated Scores, Licensing Examinations (Professions)
Peer reviewed
Arthur, Winfred, Jr.; Day, David V. – Educational and Psychological Measurement, 1994
The development of a short form of the Raven Advanced Progressive Matrices Test is reported. Results from 3 studies with 663 college students indicate that the short form demonstrates psychometric properties similar to the long form yet requires a substantially shorter administration time. (SLD)
Descriptors: Cognitive Ability, College Students, Educational Research, Higher Education