Showing 1 to 15 of 34 results
Peer reviewed
PDF on ERIC
Chahna Gonsalves – Journal of Learning Development in Higher Education, 2023
Multiple-choice quizzes (MCQs) are a popular form of assessment. A rapid shift to online assessment during the COVID-19 pandemic in 2020 drove the uptake of MCQs, yet limited invigilation and wide access to material on the internet allow students to solve the questions via internet search. ChatGPT, an artificial intelligence (AI) agent trained on…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Multiple Choice Tests
Peer reviewed
Direct link
Yaneva, Victoria; Clauser, Brian E.; Morales, Amy; Paniagua, Miguel – Journal of Educational Measurement, 2021
Eye-tracking technology can create a record of the location and duration of visual fixations as a test-taker reads test questions. Although the cognitive process the test-taker is using cannot be directly observed, eye-tracking data can support inferences about these unobserved cognitive processes. This type of information has the potential to…
Descriptors: Eye Movements, Test Validity, Multiple Choice Tests, Cognitive Processes
Peer reviewed
Direct link
Little, Jeri L.; Frickey, Elise A.; Fung, Alexandra K. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2019
Taking a test improves memory for that tested information, a finding referred to as the testing effect. Multiple-choice tests tend to produce smaller testing effects than do cued-recall tests, and this result is largely attributed to the different processing that the two formats are assumed to induce. Specifically, it is generally assumed that the…
Descriptors: Multiple Choice Tests, Memory, Cognitive Processes, Recall (Psychology)
Peer reviewed
PDF on ERIC
Asquith, Steven – TESL-EJ, 2022
Although an accurate measure of vocabulary size is integral to understanding the proficiency of language learners, the validity of multiple-choice (M/C) vocabulary tests for determining this has been questioned because test-takers can guess correct answers, which inflates scores. In this paper, the nature of guessing and partial knowledge used when taking the…
Descriptors: Guessing (Tests), English (Second Language), Second Language Learning, Language Tests
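As background on why guessing inflates M/C vocabulary scores: on a k-option item, blind guessing succeeds with probability 1/k, and the classical formula score subtracts an offsetting penalty for wrong answers. A minimal sketch in Python of that textbook correction (the standard formula, not necessarily the model used in this paper; the test figures below are invented for illustration):

def corrected_score(right: int, wrong: int, k: int) -> float:
    # Classical correction for guessing: each wrong answer costs 1/(k-1)
    # of a point, so pure blind guessing nets an expected score of zero.
    return right - wrong / (k - 1)

# Hypothetical 40-item test with 4 options per item:
# a pure guesser expects about 10 right and 30 wrong.
print(corrected_score(10, 30, 4))  # 0.0 -- the guessing gains cancel out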
Peer reviewed
Direct link
Dönmez, Onur; Akbulut, Yavuz; Telli, Esra; Kaptan, Miray; Özdemir, Ibrahim H.; Erdem, Mukaddes – Education and Information Technologies, 2022
In the current study, we aimed to develop a reliable and valid scale to address individual cognitive load types. Existing scale development studies involved a limited number of items without adequate convergent, discriminant, and criterion validity checks. Through a multistep correlational study, we proposed a three-factor scale with 13 items to…
Descriptors: Test Construction, Content Validity, Construct Validity, Test Reliability
Peer reviewed
Direct link
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Measurement: Issues and Practice, 2019
The current study investigated how item formats and their inherent affordances influence test-takers' cognition under uncertainty. Adult participants solved content-equivalent math items in multiple-selection multiple-choice and four alternative grid formats. The results indicated that participants' affirmative response tendency (i.e., judge the…
Descriptors: Affordances, Test Items, Test Format, Test Wiseness
Peer reviewed
PDF on ERIC
Lenchuk, Iryna; Ahmed, Amer – Arab World English Journal, 2021
This article describes the results of action research conducted in an ESP classroom at Dhofar University in Oman. Following the call of Oman Vision 2040 to emphasize educational practices that promote the development of higher-order cognitive processes, this study raises the following question: Can an online multiple choice question (MCQ)…
Descriptors: Taxonomy, Thinking Skills, Cognitive Processes, Multiple Choice Tests
Peer reviewed
PDF on ERIC
Akbay, Lokman; Terzi, Ragip; Kaplan, Mehmet; Karaaslan, Katibe Gizem – Journal on Mathematics Education, 2017
In this study, we describe the methodology used to identify and validate a set of expert-defined fraction subtraction related attributes. These attributes are expected to be mastered by 6th grade students toward proficiency in fraction subtraction. This research argues and demonstrates that state standards guiding subject instruction play an…
Descriptors: Mathematics Instruction, Teaching Methods, Fractions, Subtraction
Peer reviewed
PDF on ERIC
Scully, Darina – Practical Assessment, Research & Evaluation, 2017
Across education, certification and licensure, there are repeated calls for the development of assessments that target "higher-order thinking," as opposed to mere recall of facts. A common assumption is that this necessitates the use of constructed response or essay-style test questions; however, empirical evidence suggests that this may…
Descriptors: Test Construction, Test Items, Multiple Choice Tests, Thinking Skills
Peer reviewed
PDF on ERIC
Tonekaboni, Fateme Roohani; Ravand, Hamdollah; Rezvani, Reza – International Journal of Language Testing, 2021
Investigating the processes underlying test performance is a major source of data supporting the explanation inference in the validity argument (Chapelle, 2021). One way of modeling the cognitive processes underlying test performance is by constructing a Q-matrix, which is essentially about summarizing the attributes explaining test-takers'…
Descriptors: Reading Comprehension, Reading Tests, High Stakes Tests, Inferences
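For readers unfamiliar with the term: a Q-matrix is simply a binary item-by-attribute table marking which cognitive attributes each test item demands. A minimal sketch in Python with invented items and attributes (illustrative only, not the matrix from this study):

# Hypothetical Q-matrix: rows = test items, columns = attributes.
# Q[i][k] = 1 means item i requires attribute k; all entries are invented.
attributes = ["vocabulary", "syntax", "inference"]
Q = [
    [1, 0, 0],  # item 1: vocabulary only
    [1, 1, 0],  # item 2: vocabulary + syntax
    [0, 1, 1],  # item 3: syntax + inference
    [0, 0, 1],  # item 4: inference only
]

# Attributes required by item 2:
print([a for a, needed in zip(attributes, Q[1]) if needed])  # ['vocabulary', 'syntax']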
Warner, Zachary B. – ProQuest LLC, 2013
This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be…
Descriptors: Comparative Analysis, Algebra, Test Results, Measurement
Peer reviewed
Direct link
Tractenberg, Rochelle E.; Gushta, Matthew M.; Mulroney, Susan E.; Weissinger, Peggy A. – Advances in Health Sciences Education, 2013
Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important, but widely known to…
Descriptors: Multiple Choice Tests, Critical Thinking, Graduate Study, Physiology
Peer reviewed
Direct link
Schroeder, Jacob; Murphy, Kristen L.; Holme, Thomas A. – Journal of Chemical Education, 2012
General chemistry tests from the Examinations Institute of the Division of Chemical Education of the American Chemical Society have been analyzed to identify factors that may influence how individual test items perform. In this paper, issues of item order (position within a set of items that comprise a test) and answer order (position of correct…
Descriptors: Chemistry, Test Items, Individual Testing, Test Construction
Peer reviewed
Direct link
Crisp, Victoria; Grayson, Rebecca – Research Papers in Education, 2013
"Item difficulty modelling" is a technique used for a number of purposes such as to support future item development, to explore validity in relation to the constructs that influence difficulty and to predict the difficulty of items. This research attempted to explore the factors influencing question difficulty in a general qualification…
Descriptors: Test Items, Physics, Science Instruction, Test Construction
National Assessment Governing Board, 2012
As the ongoing national indicator of what American students know and can do, the National Assessment of Educational Progress (NAEP) in Reading regularly collects achievement information on representative samples of students in grades 4, 8, and 12. Through The Nation's Report Card, the NAEP Reading Assessment reports how well students perform in…
Descriptors: Reading Achievement, National Competency Tests, Reading Comprehension, Grade 4