Showing all 8 results
Peer reviewed
Chahna Gonsalves – Journal of Learning Development in Higher Education, 2023
Multiple-choice quizzes (MCQs) are a popular form of assessment. A rapid shift to online assessment during the COVID-19 pandemic in 2020 drove the uptake of MCQs, yet limited invigilation and wide access to material on the internet allow students to answer the questions via internet search. ChatGPT, an artificial intelligence (AI) agent trained on…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Multiple Choice Tests
Peer reviewed
Scully, Darina – Practical Assessment, Research & Evaluation, 2017
Across education, certification and licensure, there are repeated calls for the development of assessments that target "higher-order thinking," as opposed to mere recall of facts. A common assumption is that this necessitates the use of constructed response or essay-style test questions; however, empirical evidence suggests that this may…
Descriptors: Test Construction, Test Items, Multiple Choice Tests, Thinking Skills
Peer reviewed
Al Fraidan, Abdullah; Al-Khalaf, Khadija – International Education Studies, 2012
Many studies have focused on the function of learners' strategies in a variety of EFL domains. However, research on test-taking strategies (TTSs) has been limited, even though such strategies might influence test scores and, as a result, test validity. Motivated by this gap and in light of our own experience as EFL test-makers, this article will…
Descriptors: Test Wiseness, Multiple Choice Tests, Arabs, English Language Learners
Wu, Yuh-Yin; Guei, I-Fen – 2000
A study was conducted to investigate: (1) the relationships between the results from various forms of assessment and the patterns of correlation across content areas; (2) how cognitive components correlate with the test results from different classroom assessments; and (3) how content areas affected the relationships. Data were collected from a…
Descriptors: Cognitive Processes, Cognitive Tests, Correlation, Elementary School Students
Martinez, Michael E.; Katz, Irvin R. – 1992
Contrasts between constructed response items and stem-equivalent multiple-choice counterparts typically have involved averaging item characteristics, and this aggregation has masked differences in statistical properties at the item level. Moreover, even aggregated format differences have not been explained in terms of differential cognitive…
Descriptors: Architecture, Cognitive Processes, Construct Validity, Constructed Response
Westers, Paul; Kelderman, Henk – 1990
In multiple-choice items the response probability on an item may be viewed as the result of two distinct latent processes--a cognitive process to solve the problem, and another random process that leads to the choice of a certain alternative (the process of giving the actual response). An incomplete latent class model is formulated that describes…
Descriptors: Cognitive Processes, Estimation (Mathematics), Foreign Countries, Guessing (Tests)
Pearson, P. David; Garavaglia, Diane R. – National Center for Education Statistics, 2003
The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the…
Descriptors: Measurement, National Competency Tests, Test Items, Performance
Peer reviewed
Rupp, Andre A.; Ferne, Tracy; Choi, Hyeran – Language Testing, 2006
This article provides renewed converging empirical evidence for the hypothesis that asking test-takers to respond to text passages with multiple-choice questions induces response processes that are strikingly different from those that respondents would draw on when reading in non-testing contexts. Moreover, the article shows that the construct of…
Descriptors: Foreign Countries, Language Tests, Reading Comprehension, Evaluation Methods