Showing 31 to 45 of 546 results
Peer reviewed
Vázquez-García, Mario – Advances in Physiology Education, 2018
The present study examined the relationship between second-year medical students' group performance and individual performance in a collaborative-learning environment. In recent decades, university professors in the scientific and humanistic disciplines have successfully put into practice different modalities of collaborative approaches to…
Descriptors: Medical Students, Medical Education, Physiology, Human Body
Peer reviewed
Bulut, Okan; Guo, Qi; Gierl, Mark J. – Large-scale Assessments in Education, 2017
Position effects may occur in both paper-and-pencil tests and computerized assessments when examinees respond to the same test items located in different positions on the test. To examine position effects in large-scale assessments, previous studies often used multilevel item response models within the generalized linear mixed modeling framework.…
Descriptors: Structural Equation Models, Educational Assessment, Measurement, Test Items
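The position-effect idea in the entry above can be made concrete: in a long-format response dataset, item correctness is modeled with item effects plus a term for the position at which the item was administered. The sketch below is only an illustration of that idea under assumed simulated data and variable names, using a plain fixed-effects logistic model rather than the multilevel item response and structural equation models the study actually compares.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format responses: one row per examinee-item pair,
# recording the position at which the item was administered.
rng = np.random.default_rng(0)
n_examinees, n_items = 500, 20
rows = []
for person in range(n_examinees):
    ability = rng.normal()
    positions = rng.permutation(n_items)      # each item lands in a shuffled position
    for item, pos in enumerate(positions):
        difficulty = (item - n_items / 2) / 5
        # Assumed small negative position effect: later items are slightly harder.
        logit = ability - difficulty - 0.02 * pos
        rows.append({"person": person, "item": item, "position": pos,
                     "correct": int(rng.random() < 1 / (1 + np.exp(-logit)))})
df = pd.DataFrame(rows)

# Item dummies plus a linear position term; the coefficient on `position`
# estimates the log-odds change in correctness per position shift.
model = smf.logit("correct ~ C(item) + position", data=df).fit(disp=False)
print(model.params["position"])
```

A negative position coefficient would indicate that the same item becomes harder when it appears later in the booklet, which is the kind of effect the study examines with more elaborate models.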
Peer reviewed
Craig, Tracy S. – International Journal of Mathematical Education in Science and Technology, 2017
The notation for vector analysis has a contentious nineteenth century history, with many different notations describing the same or similar concepts competing for use. While the twentieth century has seen a great deal of unification in vector analysis notation, variation still remains. In this paper, the two primary notations used for expressing…
Descriptors: College Mathematics, Mathematics Instruction, Mathematical Concepts, Algebra
Peer reviewed
Potter, Kyle; Lewandowski, Lawrence; Spenceley, Laura – Assessment & Evaluation in Higher Education, 2016
Standardised and other multiple-choice examinations often require the use of an answer sheet with fill-in bubbles (i.e. "bubble" or Scantron sheet). Students with disabilities causing impairments in attention, learning and/or visual-motor skill may have difficulties with multiple-choice examinations that employ such a response style.…
Descriptors: Testing Accommodations, Disabilities, Multiple Choice Tests, Vocabulary
Peer reviewed
Prevost, Luanna B.; Lemons, Paula P. – CBE - Life Sciences Education, 2016
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this…
Descriptors: Biology, Undergraduate Students, Problem Solving, Multiple Choice Tests
Peer reviewed
Full text (PDF) available on ERIC
Kelly, Michael D. – Journal of International Education and Leadership, 2016
This study compares School Leaders Licensure Assessment (SLLA) sub-scores with principal interns' self-assessment sub-scores (ISA) for a principal internship evaluation instrument in one educational leadership graduate program. The results of the study will be used to help establish the effectiveness of the current principal internship program,…
Descriptors: Licensing Examinations (Professions), Scores, Principals, Internship Programs
Pawade, Yogesh R.; Diwase, Dipti S. – Journal of Educational Technology, 2016
Item analysis of Multiple Choice Questions (MCQs) is the process of collecting, summarizing and utilizing information from students' responses to evaluate the quality of test items. Difficulty Index (p-value), Discrimination Index (DI) and Distractor Efficiency (DE) are the parameters which help to evaluate the quality of MCQs used in an…
Descriptors: Test Items, Item Analysis, Multiple Choice Tests, Curriculum Development
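The three item-analysis parameters named in the entry above have standard classical test theory definitions, sketched below. The 27% upper/lower group split and the 5% cutoff for a functional distractor are common conventions assumed here, not values taken from the article, and the function and option names are illustrative.

```python
import numpy as np

def item_analysis(total_scores, choices, key, options=("A", "B", "C", "D"),
                  group_frac=0.27, functional_cutoff=0.05):
    """Classical item statistics for one multiple-choice item.

    total_scores : (n,) total test scores, used to form upper/lower groups
    choices      : (n,) option selected by each examinee
    key          : the correct option
    """
    choices = np.asarray(choices)
    correct = (choices == key).astype(float)

    # Difficulty index (p-value): proportion of examinees answering correctly.
    p_value = correct.mean()

    # Discrimination index (DI): proportion correct in the top score group
    # minus proportion correct in the bottom score group.
    order = np.argsort(total_scores)
    k = max(1, int(round(group_frac * len(order))))
    di = correct[order[-k:]].mean() - correct[order[:k]].mean()

    # Distractor efficiency (DE): share of distractors that are functional,
    # i.e. chosen by at least 5% of examinees.
    distractors = [opt for opt in options if opt != key]
    functional = sum((choices == d).mean() >= functional_cutoff for d in distractors)
    de = functional / len(distractors)

    return {"p_value": p_value, "discrimination": di, "distractor_efficiency": de}
```

Called with one item's response vector and the examinees' total scores, this reproduces the three quality indicators the abstract lists; full item-analysis packages add point-biserial correlations and significance tests on top.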
Peer reviewed
Liu, Ming; Rus, Vasile; Liu, Li – IEEE Transactions on Learning Technologies, 2018
Automatic question generation can help teachers to save the time necessary for constructing examination papers. Several approaches were proposed to automatically generate multiple-choice questions for vocabulary assessment or grammar exercises. However, most of these studies focused on generating questions in English with a certain similarity…
Descriptors: Multiple Choice Tests, Regression (Statistics), Test Items, Natural Language Processing
Peer reviewed
Leal, Acácia Gonçalves Ferreira; Vancini, Rodrigo Luiz; Gentil, Paulo; Benedito-Silva, Ana Amélia; da Silva, Antonio Carlos; Campos, Mário Hebling; Andrade, Marilia Santos; de Lira, Claudio Andre Barbosa – Health Education, 2018
Purpose: The purpose of this paper was to assess the knowledge on sport and exercise science held by a sample of Brazilian physiotherapists, nutritionists and physical educators. Design/methodology/approach: A cross-sectional research design was used. The answers given by 1,147 professionals (300 physiotherapists, 705 physical educators and 142…
Descriptors: Foreign Countries, Physiology, Allied Health Personnel, Therapy
Peer reviewed
Ting, Mu Yu – EURASIA Journal of Mathematics, Science & Technology Education, 2017
Using the capabilities of expert knowledge structures, the researcher prepared test questions on the university calculus topic of "finding the area by integration." The quiz is divided into two types of multiple choice items (one out of four and one out of many). After the calculus course was taught and tested, the results revealed that…
Descriptors: Calculus, Mathematics Instruction, College Mathematics, Multiple Choice Tests
Peer reviewed
Steedle, Jeffrey T.; Ferrara, Steve – Applied Measurement in Education, 2016
As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…
Descriptors: Essays, Scoring, Comparative Analysis, Evaluators
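The entry above describes comparative judgment only in outline. A common way to aggregate pairwise "which essay is better" decisions into scale scores is a Bradley-Terry style fit; the simple iterative estimator below is a sketch under that assumption, not the aggregation method the article necessarily uses, and the small eps regularization is added only to keep the toy estimator numerically stable.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_essays, iters=100, eps=1e-6):
    """Aggregate pairwise 'which essay is better' judgments into scores.

    comparisons : iterable of (winner, loser) essay indices
    Returns a list of strengths; higher means judged better more often.
    """
    wins = defaultdict(float)        # wins[i]: comparisons essay i won
    pairs = defaultdict(float)       # pairs[(i, j)]: times i and j met (i < j)
    for w, l in comparisons:
        wins[w] += 1
        pairs[(min(w, l), max(w, l))] += 1

    strength = [1.0] * n_essays
    for _ in range(iters):
        new = []
        for i in range(n_essays):
            denom = 0.0
            for (a, b), n in pairs.items():
                if i in (a, b):
                    j = b if i == a else a
                    denom += n / (strength[i] + strength[j])
            # eps keeps never-winning or never-compared essays from collapsing to 0.
            new.append((wins[i] + eps) / (denom + eps))
        mean = sum(new) / n_essays
        strength = [s / mean for s in new]   # normalize for identifiability
    return strength
```

With enough overlapping comparisons, essays that beat strong opponents earn higher strengths than essays that beat weak ones, which is what lets comparative judgment stand in for rubric-based scoring.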
Peer reviewed
Walstad, William B.; Wagner, Jamie – Journal of Economic Education, 2016
This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…
Descriptors: Value Added Models, Scores, Economics Education, Economics
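The four learning types in the entry above are derived from each student's pre/post response pattern on an item. Reading the categories in the usual pre/post sense (a reading of the abstract, not a quotation from the article), a minimal sketch of the disaggregation looks like this, with illustrative function names and dichotomous item scores assumed:

```python
def classify_learning(pre_correct, post_correct):
    """Classify one student-item pre/post response pattern."""
    if not pre_correct and post_correct:
        return "positive"   # missed on the pretest, correct on the posttest
    if pre_correct and post_correct:
        return "retained"   # correct on both administrations
    if pre_correct and not post_correct:
        return "negative"   # correct before, incorrect after
    return "zero"           # incorrect on both administrations


def disaggregate(pre_matrix, post_matrix):
    """Count the four learning types over students x items 0/1 score matrices."""
    counts = {"positive": 0, "retained": 0, "negative": 0, "zero": 0}
    for pre_row, post_row in zip(pre_matrix, post_matrix):
        for pre, post in zip(pre_row, post_row):
            counts[classify_learning(bool(pre), bool(post))] += 1
    return counts
```

Summing these counts item by item is what turns a single value-added difference score into the richer four-way breakdown the study reports.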
Peer reviewed
Full text (PDF) available on ERIC
Büyükturan, Esin Bagcan; Sireci, Ayse – Journal of Education and Training Studies, 2018
The item discrimination index, which indicates how well an item distinguishes individuals who have acquired the qualities being evaluated from those who have not, is essentially a validity measure; it is estimated by examining the fit between the item score and the total test score. Based on the definition of the item discrimination index, classroom observation…
Descriptors: Foreign Countries, Classroom Observation Techniques, Scores, Test Items
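The entry above frames discrimination as the fit between an item score and the total test score. A common way to operationalize that fit is the corrected point-biserial correlation, i.e. the item score correlated with the total score after removing that item; the correction and the convention of reporting 0 for zero-variance items are assumptions of this sketch, not details from the article.

```python
import numpy as np

def corrected_item_total_correlation(item_scores, total_scores):
    """Point-biserial discrimination for one dichotomous item.

    item_scores  : (n,) array of 0/1 item scores
    total_scores : (n,) array of total test scores (including this item)
    """
    item = np.asarray(item_scores, dtype=float)
    rest = np.asarray(total_scores, dtype=float) - item  # drop the item from the total
    if item.std() == 0 or rest.std() == 0:
        return 0.0  # no variance: discrimination is undefined, report 0 by convention
    return float(np.corrcoef(item, rest)[0, 1])
```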
Peer reviewed
Full text (PDF) available on ERIC
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement