Publication Date
| Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 6 |
| Since 2017 (last 10 years) | 8 |
| Since 2007 (last 20 years) | 8 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Models | 8 |
| Multiple Choice Tests | 8 |
| Natural Language Processing | 8 |
| Artificial Intelligence | 4 |
| Test Items | 4 |
| Test Construction | 3 |
| Accuracy | 2 |
| Automation | 2 |
| Bayesian Statistics | 2 |
| Computer Software | 2 |
| Evaluation Methods | 2 |
Source
| Source | Count |
| --- | --- |
| Grantee Submission | 3 |
| ACM Transactions on Computing… | 1 |
| British Journal of… | 1 |
| IEEE Transactions on Learning… | 1 |
| IGI Global | 1 |
| Journal of Applied Testing… | 1 |
Author
| Author | Count |
| --- | --- |
| Andreea Dutulescu | 1 |
| Andrew M. Olney | 1 |
| Azevedo, Ana, Ed. | 1 |
| Azevedo, José, Ed. | 1 |
| Azevedo, Roger | 1 |
| Danielle S. McNamara | 1 |
| Denis Iorga | 1 |
| Emerson, Andrew | 1 |
| Lester, James | 1 |
| Mead, Alan D. | 1 |
| Mihai Dascalu | 1 |
Publication Type
| Type | Count |
| --- | --- |
| Reports - Research | 7 |
| Journal Articles | 4 |
| Speeches/Meeting Papers | 3 |
| Books | 1 |
| Collected Works - General | 1 |
Education Level
| Level | Count |
| --- | --- |
| Higher Education | 5 |
| Postsecondary Education | 4 |
Audience
| Audience | Count |
| --- | --- |
| Administrators | 1 |
| Researchers | 1 |
| Students | 1 |
| Teachers | 1 |
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Olney, Andrew M. – Grantee Submission, 2022
Multi-angle question answering models have recently been proposed that promise to perform related tasks like question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully-controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Mead, Alan D.; Zhou, Chenxuan – Journal of Applied Testing Technology, 2022
This study fit a Naïve Bayesian classifier to the words of exam items to predict the Bloom's taxonomy level of the items. We addressed five research questions, showing that reasonably good prediction of Bloom's level was possible, but accuracy varies across levels. In our study, performance for Level 2 was poor (Level 2 items were misclassified…
Descriptors: Artificial Intelligence, Prediction, Taxonomy, Natural Language Processing
Emerson, Andrew; Min, Wookhee; Azevedo, Roger; Lester, James – British Journal of Educational Technology, 2023
Game-based learning environments hold significant promise for facilitating learning experiences that are both effective and engaging. To support individualised learning and support proactive scaffolding when students are struggling, game-based learning environments should be able to accurately predict student knowledge at early points in students'…
Descriptors: Game Based Learning, Natural Language Processing, Prediction, Student Evaluation
Nabor C. Mendonça – ACM Transactions on Computing Education, 2024
The recent integration of visual capabilities into Large Language Models (LLMs) has the potential to play a pivotal role in science and technology education, where visual elements such as diagrams, charts, and tables are commonly used to improve the learning experience. This study investigates the performance of ChatGPT-4 Vision, OpenAI's most…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Foreign Countries
Rao, Dhawaleswar; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2020
Automatic multiple choice question (MCQ) generation from a text is a popular research area. MCQs are widely accepted for large-scale assessment in various domains and applications. However, manual generation of MCQs is expensive and time-consuming. Therefore, researchers have been attracted toward automatic MCQ generation since the late '90s.…
Descriptors: Multiple Choice Tests, Test Construction, Automation, Computer Software
Azevedo, Ana, Ed.; Azevedo, José, Ed. – IGI Global, 2019
E-assessments of students profoundly influence their motivation and play a key role in the educational process. Adapting assessment techniques to current technological advancements allows for effective pedagogical practices, learning processes, and student engagement. The "Handbook of Research on E-Assessment in Higher Education"…
Descriptors: Higher Education, Computer Assisted Testing, Multiple Choice Tests, Guides

