Showing 4,141 to 4,155 of 4,790 results
Smith, Steven H.; Jorgenson, Olaf – American School Board Journal, 1997
Despite educators' well-founded misgivings about standardized multiple-choice testing, this practice is firmly entrenched in American life. One of educators' major responsibilities is preparing students to make wise choices in school and the "real world." With college and career opportunities hinging on standardized test scores, schools are…
Descriptors: Administrator Attitudes, Elementary Secondary Education, Guidelines, High Stakes Tests
Peer reviewed
Israel, Glenn D.; Taylor, C. L. – Evaluation and Program Planning, 1990
Mail questionnaire items that are susceptible to order effects were examined using data from 168 questionnaires in a Florida Cooperative Extension Service evaluation. Order effects were found for multiple-response and attributive questions but not for single-response items. Order also interacted with question complexity, social desirability, and…
Descriptors: Adult Farmer Education, Difficulty Level, Educational Assessment, Error of Measurement
Peer reviewed
Farr, Roger; And Others – Journal of Educational Measurement, 1990
A study of 26 college seniors used introspective and retrospective interviewing to assess cognitive processes used during multiple-choice reading comprehension tests. Categories of processing identified included an overall approach to the test task, reading strategies, and test-taking strategies. (TJH)
Descriptors: Cognitive Processes, College Seniors, Higher Education, Interviews
Peer reviewed
Haladyna, Thomas A. – Applied Measurement in Education, 1992
Several multiple-choice item formats are examined in the current climate of test reform. The reform movement is discussed as it affects use of the following formats: (1) complex multiple-choice; (2) alternate choice; (3) true-false; (4) multiple true-false; and (5) the context dependent item set. (SLD)
Descriptors: Cognitive Psychology, Comparative Testing, Context Effect, Educational Change
Bracey, Gerald W. – Principal, 1993
Describes recent efforts of the Center for Research on Evaluation, Standards, and Student Testing (CRESST) to evaluate authentic assessment methods, such as portfolios and performance tests. When comparing the merits of authentic versus multiple-choice testing, it is wise to consider validity, reliability, consequences, fairness, generalization,…
Descriptors: Cost Effectiveness, Efficiency, Elementary Education, Evaluation Criteria
Peer reviewed
Caudill, Steven B.; Gropper, Daniel M. – Journal of Economic Education, 1991
Presents a study of the effect of question order on student performance on economics tests. Reports that question order has no statistically significant effect on examination scores, even after including variables that reflect differential human capital characteristics. Concludes that instructors need not worry that some examination versions give…
Descriptors: Economics Education, Educational Research, Higher Education, Human Capital
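The analysis Caudill and Gropper describe is, in essence, a regression of exam score on an exam-version indicator plus human-capital controls, with the order effect judged by whether the version coefficient differs from zero. A minimal sketch with simulated data (the variable names and values below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
gpa = rng.normal(3.0, 0.4, n)                     # hypothetical human-capital control
scrambled = rng.integers(0, 2, n)                 # 1 = scrambled question order
score = 50.0 + 10.0 * gpa + rng.normal(0, 8, n)   # simulate: order truly has no effect

# OLS of score on intercept, order indicator, and control
X = np.column_stack([np.ones(n), scrambled, gpa])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

resid = score - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print(f"order effect = {beta[1]:.2f}, t = {beta[1] / se[1]:.2f}")
```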
Peer reviewed
Samejima, Fumiko – Applied Psychological Measurement, 1994
The Level-11 vocabulary subtest of the Iowa Tests of Basic Skills was analyzed using a two-stage latent trait approach and a data set of 2,356 examinees, approximately 11 years of age. It is concluded that the nonparametric approach leads to efficient estimation of the latent trait. (SLD)
Descriptors: Achievement Tests, Distractors (Tests), Elementary Education, Elementary School Students
Peer reviewed
Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1991
The relationship of multiple-choice and free-response items on the College Board's Advanced Placement Computer Science Examination was studied using confirmatory factor analysis. Results with 2 samples of 1,000 high school students suggested that the most parsimonious fit was achieved using a single factor. Implications for construct validity are…
Descriptors: Chi Square, College Entrance Examinations, Comparative Testing, Computer Science
Peer reviewed
Woehr, David J.; And Others – Educational and Psychological Measurement, 1991
Methods for setting cutoff scores based on criterion performance, normative comparison, and absolute judgment were compared for scores on a multiple-choice psychology examination taken by 121 undergraduates, with 251 undergraduates serving as a comparison group. Cutoff scores from all methods fell within the standard error of measurement of one another. Implications of the differences for decision…
Descriptors: Comparative Analysis, Concurrent Validity, Content Validity, Cutting Scores
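The three standard-setting approaches compared by Woehr and others can be illustrated with a rough sketch: a normative cutoff fixed by the score distribution, a criterion-based cutoff here approximated with a contrasting-groups comparison, and an absolute cutoff set by judgment. All data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(70, 10, size=121)                    # hypothetical exam scores
meets_criterion = rng.random(121) < (scores - 40) / 60   # hypothetical external criterion

# Normative cutoff: pass a fixed proportion, e.g. the top 80 percent of examinees
normative_cut = np.percentile(scores, 20)

# Criterion-based (contrasting-groups) cutoff: midpoint of the mean scores of
# examinees who did and did not meet the external criterion
criterion_cut = (scores[meets_criterion].mean() + scores[~meets_criterion].mean()) / 2

# Absolute-judgment cutoff: a value fixed in advance by an expert panel (Angoff-style)
absolute_cut = 65.0

print(f"normative={normative_cut:.1f}  criterion={criterion_cut:.1f}  absolute={absolute_cut:.1f}")
```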
Peer reviewed
Skaggs, Gary; Lissitz, Robert W. – Journal of Educational Measurement, 1992
The consistency of several item bias detection methods was studied across different administrations of the same mathematics test items, using data from a total of approximately 6,600 eighth-grade students. The Mantel-Haenszel and item-response-theory-based sum-of-squares methods were the most consistent. (SLD)
Descriptors: Comparative Testing, Grade 8, Item Bias, Item Response Theory
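The Mantel-Haenszel procedure referenced above pools 2x2 (group by correct/incorrect) tables across total-score strata into a common odds ratio, often reported on the ETS delta scale as MH D-DIF. A minimal sketch with made-up stratum counts:

```python
import math

# Hypothetical per-stratum counts: (ref_correct, ref_wrong, focal_correct, focal_wrong)
strata = [
    (40, 10, 35, 15),
    (30, 20, 25, 25),
    (20, 30, 15, 35),
]

num = den = 0.0
for a, b, c, d in strata:
    n = a + b + c + d
    num += a * d / n   # reference-correct x focal-wrong
    den += b * c / n   # reference-wrong x focal-correct

alpha_mh = num / den                   # Mantel-Haenszel common odds ratio
mh_d_dif = -2.35 * math.log(alpha_mh)  # ETS delta-scale DIF index
print(f"alpha_MH = {alpha_mh:.2f}, MH D-DIF = {mh_d_dif:.2f}")
```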
Peer reviewed
Birenbaum, Menucha; And Others – Applied Psychological Measurement, 1992
The effect of multiple-choice (MC) or open-ended (OE) response format on diagnostic assessment of algebra test performance was investigated with 231 eighth and ninth graders in Tel Aviv (Israel) using bug or rule space analysis. Both analyses indicated closer similarity between parallel OE subsets than between stem-equivalent OE and MC subsets.…
Descriptors: Algebra, Comparative Testing, Educational Assessment, Educational Diagnosis
Peer reviewed
Spires, Hiller A.; And Others – Journal of Experimental Education, 1992
Effects of schema-based cues (previewing) and structure-based cues (structure cueing) on comprehension of expository prose were studied with 74 fourth graders. Results indicate a significant pattern of higher performance by the previewing group, in comparison with the structure-cued group and a control group. (SLD)
Descriptors: Comparative Testing, Control Groups, Cues, Elementary School Students
Peer reviewed
Kim, Jwa K.; Nicewander, W. Alan – Psychometrika, 1993
Bias, standard error, and reliability of five ability estimators were evaluated using Monte Carlo estimates of the unknown conditional means and variances of the estimators. Results indicate that estimates based on Bayesian modal, expected a posteriori, and weighted likelihood estimators were reasonably unbiased with relatively small standard…
Descriptors: Ability, Bayesian Statistics, Equations (Mathematics), Error of Measurement
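Of the estimators compared by Kim and Nicewander, the expected a posteriori (EAP) estimator is the posterior mean of ability over a quadrature grid. A minimal sketch under a two-parameter logistic model with a standard normal prior; the item parameters and response pattern below are invented:

```python
import numpy as np

def eap_theta(responses, a, b, n_quad=61):
    """EAP ability estimate and posterior SD under a 2PL model with a N(0,1) prior."""
    theta = np.linspace(-4, 4, n_quad)                       # quadrature points
    prior = np.exp(-0.5 * theta**2)                          # unnormalized normal prior
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta - b[:, None])))  # P(correct), items x points
    r = np.asarray(responses)[:, None]
    like = np.prod(np.where(r == 1, p, 1.0 - p), axis=0)     # likelihood of the response pattern
    post = prior * like
    post /= post.sum()
    est = (theta * post).sum()                               # posterior mean = EAP estimate
    psd = np.sqrt(((theta - est) ** 2 * post).sum())         # posterior standard deviation
    return est, psd

a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])    # hypothetical item discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # hypothetical item difficulties
print(eap_theta([1, 1, 1, 0, 0], a, b))
```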
Peer reviewed
Yi'an, Wu – Language Testing, 1998
Investigated the test-taking processes of English-as-a-Foreign-Language (EFL) testees taking a multiple-choice listening-comprehension test, evaluating the effectiveness of immediate retrospection for accessing such processes. Linguistic and non-linguistic knowledge was activated in a parallel manner for all subjects with input of various…
Descriptors: Cognitive Processes, College Students, English (Second Language), Evaluation Methods
Peer reviewed
Klein, Stephen P. – Journal of Personnel Evaluation in Education, 1998
Discusses how recent court decisions and the move toward performance assessment may affect the adverse impact, reliability, validity, and pass-fail standards of teacher-certification tests. Recommendations are made for tests that combine multiple-choice items with open-ended tasks. (SLD)
Descriptors: Constructed Response, Court Litigation, Elementary Secondary Education, Multiple Choice Tests