Showing 901 to 915 of 4,798 results
Peer reviewed
PDF on ERIC (full text available)
Haberman, Shelby J.; Liu, Yang; Lee, Yi-Hsuan – ETS Research Report Series, 2019
Distractor analyses are routinely conducted in educational assessments with multiple-choice items. In this research report, we focus on three item response models for distractors: (a) the traditional nominal response (NR) model, (b) a combination of a two-parameter logistic model for item scores and an NR model for selections of incorrect…
Descriptors: Multiple Choice Tests, Scores, Test Reliability, High Stakes Tests
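For readers less familiar with the two models named in this abstract, the standard textbook forms of the nominal response model and the two-parameter logistic model are sketched below. These are the conventional parameterisations, not necessarily the exact ones used in the ETS report.

```latex
% Nominal response (NR) model: probability that an examinee with latent
% trait \theta selects response category k of an item with K categories,
% each category having slope a_k and intercept c_k.
P(Y = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{h=1}^{K} \exp(a_h \theta + c_h)}

% Two-parameter logistic (2PL) model: probability of a correct item score
% for an item with discrimination a and difficulty b.
P(X = 1 \mid \theta) = \frac{\exp\big(a(\theta - b)\big)}{1 + \exp\big(a(\theta - b)\big)}
```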
Peer reviewed
Direct link
Spencer, Mercedes; Gilmour, Allison F.; Miller, Amanda C.; Emerson, Angela M.; Saha, Neena M.; Cutting, Laurie E. – Reading and Writing: An Interdisciplinary Journal, 2019
In the current study, we examined how student characteristics and cognitive skills, differing levels of text complexity (cohesion, decoding, vocabulary, and syntax), and reading comprehension question types (literal, inferential, critical analysis, and reading strategy) affected different types of reading outcomes (multiple-choice reading…
Descriptors: Student Characteristics, Cognitive Ability, Reading Comprehension, Item Response Theory
Peer reviewed
PDF on ERIC (full text available)
Khoshsima, Hooshang; Hashemi Toroujeni, Seyyed Morteza; Thompson, Nathan; Reza Ebrahimi, Mohammad – Teaching English with Technology, 2019
The current study was conducted to investigate whether test scores of Iranian English as Foreign Language (EFL) learners were equivalent across CBT and PBT modes, with 58 intermediate learners studying at a private language academy located in Behshahr city in northern Iran. Moreover, test takers' computer familiarity, attitudes, aversion, and…
Descriptors: Computer Assisted Testing, English (Second Language), Second Language Learning, Second Language Instruction
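A common first check of the cross-mode score equivalence that this study investigates is a paired comparison of the same examinees' scores under both modes. The sketch below uses made-up scores and a SciPy paired t-test purely as an illustration; it is not the analysis reported in the study.

```python
# Paired comparison of computer-based (CBT) and paper-based (PBT) scores
# for the same examinees, a common first check of mode equivalence.
# The score arrays are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

cbt_scores = np.array([71, 64, 80, 58, 77, 69, 73, 66])  # hypothetical
pbt_scores = np.array([69, 66, 78, 60, 75, 70, 71, 64])  # hypothetical

t_stat, p_value = stats.ttest_rel(cbt_scores, pbt_scores)
mean_diff = np.mean(cbt_scores - pbt_scores)

print(f"mean CBT minus PBT difference: {mean_diff:.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
# A non-significant difference is consistent with (but does not by itself
# establish) score equivalence across the two testing modes.
```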
Peer reviewed
Direct link
Schuwirth, Lambert W. T.; Van Der Vleuten, Cees P. M. – Journal of Applied Testing Technology, 2019
Programmatic assessment is both a philosophy and a method for assessment. It has been developed in medical education as a response to the limitations of the dominant testing or measurement approaches and to better align with changes in how medical competence was conceptualised. It is based on continual collection of assessment and feedback…
Descriptors: Program Evaluation, Medical Education, Competency Based Education, Feedback (Response)
Neupane, Ramesh – ProQuest LLC, 2019
The research purpose was to investigate the differences between learning preferences (i.e., Active-Reflective, Sensing-Intuitive, Visual-Verbal, and Sequential-Global) determined by the Index of Learning Style and gender (i.e., Male and Female) in regards to standardized achievement multiple-choice test performance determined by the Illinois Nurse…
Descriptors: Nurses, Allied Health Personnel, Cognitive Style, Preferences
Wu, Wen – ProQuest LLC, 2019
Based on Afflerbach and Cho's (2009) theoretical model of Constructively Responsive Reading on the Internet (CRRI model), this study aims to explore the patterns of reading strategies that 40 proficient, college-level, Chinese English as a Foreign Language (EFL) learners use while reading online. It also seeks to identify the strategies' relations…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Reading Strategies
Peer reviewed
PDF on ERIC (full text available)
Ibbett, Nicole L.; Wheldon, Brett J. – e-Journal of Business Education and Scholarship of Teaching, 2016
In 2014 Central Queensland University (CQU) in Australia banned the use of multiple choice questions (MCQs) as an assessment tool. One of the reasons given for this decision was that MCQs provide an opportunity for students to "pass" by merely guessing their answers. The mathematical likelihood of a student passing by guessing alone can…
Descriptors: Foreign Countries, Multiple Choice Tests, Item Banks, Guessing (Tests)
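The "mathematical likelihood of a student passing by guessing alone" that this abstract refers to follows from the binomial distribution. The sketch below illustrates the calculation; the exam length, number of options per item, and pass mark are assumed values, not figures taken from the paper.

```python
# Probability of reaching a pass mark purely by random guessing on a
# multiple-choice exam. n_items, n_options and pass_fraction are
# illustrative assumptions, not values from Ibbett and Wheldon (2016).
from math import comb, ceil

def prob_pass_by_guessing(n_items: int, n_options: int, pass_fraction: float) -> float:
    p = 1 / n_options                         # chance of guessing one item correctly
    k_needed = ceil(pass_fraction * n_items)  # correct answers needed to pass
    # Binomial upper tail: P(X >= k_needed) with X ~ Binomial(n_items, p)
    return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
               for k in range(k_needed, n_items + 1))

print(prob_pass_by_guessing(n_items=40, n_options=4, pass_fraction=0.5))
# With 40 four-option items and a 50% pass mark, this comes out to roughly
# 6 in 10,000, i.e., passing by guessing alone is very unlikely.
```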
Peer reviewed
PDF on ERIC (full text available)
Zhang, Xinxin; Gierl, Mark – Journal of Educational Issues, 2016
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Descriptors: Test Items, Automation, Content Validity, Test Validity
Peer reviewed
Direct link
Luo, Yi; Pan, Rui; Choi, Jea H.; Strobel, Johannes – Journal of Educational Computing Research, 2018
Introducing a new theoretical framework of chronotypes (inner biological clock), this article presents a study examining students' choices, participation, and performance in two discussion-heavy online history courses. The study comprised two major parts: a repetition study and an exploratory study. The survey adopted in the repetition study…
Descriptors: Discussion (Teaching Technique), History Instruction, Computer Mediated Communication, Student Participation
Peer reviewed
Direct link
Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
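Studies of MTF scoring typically contrast an all-or-nothing (dichotomous) rule with partial-credit rules. The two functions below are generic illustrations of that contrast, not necessarily the specific algorithms analysed by Lahner et al.

```python
# Two common ways to score a multiple true-false (MTF) item:
# all-or-nothing (dichotomous) versus simple proportional partial credit.
# These are generic illustrations, not necessarily the algorithms
# compared by Lahner et al. (2018).
from typing import List

def dichotomous_score(responses: List[bool], key: List[bool]) -> float:
    """1 point only if every true/false statement is marked correctly."""
    return 1.0 if responses == key else 0.0

def partial_credit_score(responses: List[bool], key: List[bool]) -> float:
    """Fraction of statements marked correctly (0.0 to 1.0)."""
    correct = sum(r == k for r, k in zip(responses, key))
    return correct / len(key)

key = [True, False, True, True]          # answer key for a 4-statement item
responses = [True, False, False, True]   # examinee marks 3 of 4 correctly

print(dichotomous_score(responses, key))     # 0.0
print(partial_credit_score(responses, key))  # 0.75
```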
Peer reviewed
Direct link
Chu, Hye-Eun; Chandrasegaran, A. L.; Treagust, David F. – School Science Review, 2018
The purpose of this research was to investigate an efficient method to assess year 8 (age 13-14) students' conceptual understanding of heat and temperature concepts. Two different types of instruments were used in this study: Type 1, consisting of multiple-choice items with open-ended justifications; and Type 2, consisting of two-tier…
Descriptors: Comparative Analysis, Item Analysis, Test Items, Science Tests
Peer reviewed
Direct link
Teplitski, Max; Irani, Tracy; Krediet, Cory J.; Di Cesare, Mariachiara; Marvasi, Massimiliano – Journal of Food Science Education, 2018
This multiyear study helps elucidate how the instructional practice of student-generated questions supports learning in a blended classroom in science, technology, engineering, and mathematics subjects. Students designed multiple-choice pre-exam questions aimed at higher levels of learning, according to Bloom's taxonomy. Student-generated questions…
Descriptors: Foods Instruction, Teaching Methods, Questioning Techniques, Blended Learning
Peer reviewed
PDF on ERIC (full text available)
Zandvakili, Elham; Washington, Ernest; Gordon, Edmund; Wells, Craig – Journal of Education and Learning, 2018
M3CA is a model of mastery learning developed in a college classroom, and an acronym for Mastery, Concept Maps, Critical Thinking, Collaboration and Assessment. The M3CA model is a formative model of skill-based mastery learning and assessment that produces high levels of academic achievement and empowers students with a feeling of fairness.…
Descriptors: Mastery Learning, Multiple Choice Tests, Concept Mapping, Critical Thinking
Peer reviewed
PDF on ERIC (full text available)
Eser, Mehmet Taha; Dogan, Nuri – Eurasian Journal of Educational Research, 2017
Purpose: The objective of this study is to develop the Inventory of Motive of Preference for Conventional Paper-And-Pencil Tests and to evaluate students' motives for preferring written tests, short-answer tests, true/false tests or multiple-choice tests. This will add a measurement tool to the literature with valid and reliable results to help…
Descriptors: Preferences, Measures (Individuals), Objective Tests, Multiple Choice Tests
Peer reviewed
Direct link
Wolfe, Michael B.; Kurby, Christopher A. – Discourse Processes: A multidisciplinary journal, 2017
We examined subjects' ability to judge the soundness of informal arguments. The argument claims matched or did not match subject beliefs. In all experiments subjects indicated beliefs about spanking and television violence in a prescreening. Subjects read one-sentence arguments consisting of a claim followed by a reason and then judged the…
Descriptors: Persuasive Discourse, Beliefs, Validity, Abstract Reasoning