Showing 211 to 225 of 426 results
Peer reviewed
Luebke, Stephen; Lorie, James – Journal of Applied Testing Technology, 2013
This article is a brief account of the use of Bloom's Taxonomy of Educational Objectives (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) by staff of the Law School Admission Council in the 1990 development of redesigned specifications for the Reading Comprehension section of the Law School Admission Test. Summary item statistics for the…
Descriptors: Classification, Educational Objectives, Reading Comprehension, Law Schools
Peer reviewed
Sadler, Philip M.; Coyle, Harold; Cook Smith, Nancy; Miller, Jaimie; Mintzes, Joel; Tanner, Kimberly; Murray, John – CBE - Life Sciences Education, 2013
We report on the development of an item test bank and associated instruments based on the National Research Council (NRC) K-8 life sciences content standards. Utilizing hundreds of studies in the science education research literature on student misconceptions, we constructed 476 unique multiple-choice items that measure the degree to which test…
Descriptors: National Standards, Knowledge Level, Biological Sciences, Item Banks
Peer reviewed
Crisp, Victoria; Grayson, Rebecca – Research Papers in Education, 2013
"Item difficulty modelling" is a technique used for a number of purposes such as to support future item development, to explore validity in relation to the constructs that influence difficulty and to predict the difficulty of items. This research attempted to explore the factors influencing question difficulty in a general qualification…
Descriptors: Test Items, Physics, Science Instruction, Test Construction
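For readers unfamiliar with the technique named in the Crisp and Grayson abstract, item difficulty modelling typically regresses empirical item difficulties on coded item features. The sketch below is an illustration only, not the authors' method: the feature names and data are invented, and NumPy is assumed.

```python
import numpy as np

# Hypothetical coded item features: number of solution steps, whether a
# diagram is present, and word count of the question stem. Values are invented.
features = np.array([
    [1, 0, 25],
    [2, 1, 40],
    [3, 1, 60],
    [2, 0, 35],
    [4, 1, 80],
], dtype=float)

# Empirical difficulty of each item (proportion of candidates answering correctly).
difficulty = np.array([0.82, 0.65, 0.48, 0.70, 0.35])

# Ordinary least-squares fit: difficulty ~ intercept + coded features.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)

# Predicted difficulties for the same (or newly written) items.
predicted = X @ coef
```

The fitted coefficients indicate which coded features are associated with harder items, which is how such models are used to support item writing and to predict the difficulty of new questions.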
Peer reviewed (PDF available on ERIC)
Leal, Johanna P. – Latin American Journal of Content and Language Integrated Learning, 2016
Ongoing bilingual programs implemented without needs analysis, little research on the actual effects of CLIL in Colombia, and vague awareness of the considerations necessary for effective CLIL programs underpin the need to address a particular curricular issue: summative assessment. This small-scale study takes place in a…
Descriptors: Science Instruction, Second Language Learning, Second Language Instruction, Language Proficiency
Peer reviewed (PDF available on ERIC)
Deane, Paul; Lawless, René R.; Li, Chen; Sabatini, John; Bejar, Isaac I.; O'Reilly, Tenaha – ETS Research Report Series, 2014
We expect that word knowledge accumulates gradually. This article draws on earlier approaches to assessing depth, but focuses on one dimension: richness of semantic knowledge. We present results from a study in which three distinct item types were developed at three levels of depth: knowledge of common usage patterns, knowledge of broad topical…
Descriptors: Vocabulary, Test Items, Language Tests, Semantics
Peer reviewed
Hohensinn, Christine; Kubinger, Klaus D. – Educational and Psychological Measurement, 2011
In aptitude and achievement tests, different response formats are usually used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats applying traditional statistical approaches, but these influences can also…
Descriptors: Item Response Theory, Multiple Choice Tests, Responses, Test Format
Peer reviewed (PDF available on ERIC)
Kulkarni, Vasudeo Digambar; Tambade, Popat Savaleram – European Journal of Physics Education, 2013
In this study, a Thermodynamic Concept Test (TCT) was designed to assess students' conceptual understanding of heat and thermodynamics at the undergraduate level. Statistical indices such as the item difficulty index, item discrimination index, and point-biserial coefficient were used to evaluate the TCT. For each item of the test these indices were…
Descriptors: Science Instruction, Thermodynamics, Undergraduate Study, Scientific Concepts
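The statistics named in the Kulkarni and Tambade abstract have standard classical test theory definitions. Below is a minimal sketch, not the authors' code, of how they are commonly computed for a dichotomously scored item; the function name and the upper/lower 27% convention are assumptions, and NumPy is assumed.

```python
import numpy as np

def item_statistics(scores: np.ndarray, item: int, top_frac: float = 0.27):
    """Classical test theory indices for one dichotomously scored item.

    scores: 2-D array (examinees x items) of 0/1 responses.
    item:   column index of the item to analyse.
    """
    item_scores = scores[:, item]
    totals = scores.sum(axis=1)

    # Difficulty index p: proportion of examinees answering the item correctly.
    p = item_scores.mean()

    # Discrimination index D: difference in proportion correct between the
    # upper and lower groups (commonly the top/bottom 27% by total score).
    n = len(totals)
    k = max(1, int(round(top_frac * n)))
    order = np.argsort(totals)
    lower, upper = order[:k], order[-k:]
    d = item_scores[upper].mean() - item_scores[lower].mean()

    # Point-biserial coefficient: Pearson correlation between the 0/1 item
    # score and the total test score.
    r_pb = np.corrcoef(item_scores, totals)[0, 1]

    return p, d, r_pb
```

In practice the point-biserial is often corrected by excluding the studied item from the total score; the sketch keeps the uncorrected form for brevity.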
Peer reviewed
Hudson, Ross D.; Treagust, David F. – Research in Science & Technological Education, 2013
Background: This study developed from observations of apparent achievement differences between male and female chemistry performances in a state university entrance examination. Male students performed more strongly than female students, especially in higher scores. Apart from the gender of the students, two other important factors that might…
Descriptors: Chemistry, College Entrance Examinations, State Universities, Gender Differences
Peer reviewed
Dankbaar, Mary E. W.; Alsma, Jelmer; Jansen, Els E. H.; van Merrienboer, Jeroen J. G.; van Saase, Jan L. C. M.; Schuit, Stephanie C. E. – Advances in Health Sciences Education, 2016
Simulation games are becoming increasingly popular in education, but more insight into their critical design features is needed. This study investigated the effects of the fidelity of open patient cases, used as an adjunct to an instructional e-module, on students' cognitive skills and motivation. We set up a three-group randomized post-test-only design: a…
Descriptors: Experimental Groups, Thinking Skills, Computer Games, Motivation
Abedlaziz, Nabeel; Ismail, Wail; Hussin, Zaharah – Online Submission, 2011
Test items are designed to provide information about the examinees. Difficult items are designed to be more demanding and easy items less so. However, test items sometimes carry with them demands other than those intended by the test developer (Scheuneman & Gerritz, 1990). When personal attributes such as gender systematically affect…
Descriptors: Test Bias, Test Items, Difficulty Level, Gender Differences
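The situation described in this abstract, items behaving differently for groups matched on ability, is usually analysed as differential item functioning (DIF). As an illustration only (the abstract does not specify the authors' method), one common screening statistic is the Mantel-Haenszel common odds ratio, sketched below; the variable names are hypothetical and NumPy is assumed.

```python
import numpy as np

def mantel_haenszel_dif(item_scores, group, total_scores):
    """Mantel-Haenszel common odds ratio for one dichotomous item.

    item_scores:  0/1 responses to the studied item.
    group:        0 = reference group, 1 = focal group.
    total_scores: total test scores used as the matching variable.
    """
    item_scores = np.asarray(item_scores)
    group = np.asarray(group)
    total_scores = np.asarray(total_scores)

    num, den = 0.0, 0.0
    for s in np.unique(total_scores):          # stratify by total score
        mask = total_scores == s
        ref, foc = mask & (group == 0), mask & (group == 1)
        a = np.sum(item_scores[ref] == 1)      # reference correct
        b = np.sum(item_scores[ref] == 0)      # reference incorrect
        c = np.sum(item_scores[foc] == 1)      # focal correct
        d = np.sum(item_scores[foc] == 0)      # focal incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n

    alpha_mh = num / den if den > 0 else np.nan  # >1 favours the reference group
    delta_mh = -2.35 * np.log(alpha_mh)          # ETS delta scale
    return alpha_mh, delta_mh
```

An odds ratio near 1 (delta near 0) suggests the item functions similarly for both groups once total score is controlled.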
Peer reviewed
Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C. – Educational Assessment, 2011
Both multiple-choice and constructed-response items have known advantages and disadvantages in measuring scientific inquiry. In this article we explore the function of explanation multiple-choice (EMC) items and examine how EMC items differ from traditional multiple-choice and constructed-response items in measuring scientific reasoning. A group…
Descriptors: Science Tests, Multiple Choice Tests, Responses, Test Items
Sas, Ioan Ciprian – ProQuest LLC, 2010
This research attempted to bridge the gap between cognitive psychology and educational measurement (Mislevy, 2008; Leighton & Gierl, 2007; Nichols, 1994; Messick, 1989; Snow & Lohman, 1989) by using cognitive theories from working memory (Baddeley, 1986; Miyake & Shah, 1999; Grimley & Banner, 2008), multimedia learning (Mayer, 2001), and cognitive…
Descriptors: Multiple Choice Tests, Concept Mapping, Computer Assisted Testing, Time on Task
Peer reviewed (PDF available on ERIC)
Mokhtari, Elahe; Talebinezhad, Mohammed Reza – Advances in Language and Literary Studies, 2014
The aim of this research was to probe whether using supplementary readings (short stories containing idioms) increases the conceptual fluency of L2 learners. In line with the goal of the study, the researcher first selected a sample of 30 female lower-intermediate L2 learners from Sadr Private Language Centre in Isfahan. She selected them based on…
Descriptors: Foreign Countries, Second Language Instruction, Second Language Learning, English (Second Language)
Peer reviewed (PDF available on ERIC)
Han, Kyung T. – Practical Assessment, Research & Evaluation, 2012
For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing…
Descriptors: Statistical Analysis, Models, Multiple Choice Tests, Guessing (Tests)
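For reference, the three-parameter logistic model discussed by Han gives the probability of a correct response as a function of examinee ability and three item parameters. A minimal sketch with the usual parameterisation is shown below; it is not tied to any particular IRT package.

```python
import numpy as np

def three_pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic (3PL) item response function.

    P(correct | theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b)))
    theta: examinee ability; a: discrimination; b: difficulty;
    c: lower asymptote (often called the "pseudo-guessing" parameter);
    D: conventional scaling constant.
    """
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))
```

The formula also makes the point raised in the abstract visible: c is simply the lower asymptote of the response curve as ability decreases, which is why reading it as the probability of guessing can be misleading.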
Hamzah, Mohd Sahandri Gani; Abdullah, Saifuddin Kumar – Online Submission, 2011
The evaluation of learning is a systematic process involving testing, measuring and evaluation. In the testing step, a teacher needs to choose the instrument best able to probe students' understanding. Testing will produce scores or marks with many variations, in either homogeneous or heterogeneous form, that will be used to categorize the scores…
Descriptors: Test Items, Item Analysis, Difficulty Level, Testing