Publication Date
| Date Range | Results |
|---|---|
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 2 |
Descriptor
| Descriptor | Results |
|---|---|
| Automation | 3 |
| Test Format | 3 |
| Test Items | 2 |
| Algebra | 1 |
| Architecture | 1 |
| Artificial Intelligence | 1 |
| Assignments | 1 |
| Computer Assisted Testing | 1 |
| Computer Science | 1 |
| Computer Science Education | 1 |
| Computer Security | 1 |
Author
| Author | Results |
|---|---|
| Bennett, Randy Elliot | 1 |
| Bin Tan | 1 |
| Elisabetta Mazzullo | 1 |
| Figueira, Álvaro | 1 |
| Leal, José Paulo | 1 |
| Mark J. Gierl | 1 |
| Martinez, Michael E. | 1 |
| Nour Armoush | 1 |
| Okan Bulut | 1 |
| Paiva, José Carlos | 1 |
Publication Type
| Type | Results |
|---|---|
| Information Analyses | 3 |
| Journal Articles | 3 |
Bin Tan; Nour Armoush; Elisabetta Mazzullo; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2025
This study reviews existing research on the use of large language models (LLMs) for automatic item generation (AIG). We performed a comprehensive literature search across seven research databases, selected studies based on predefined criteria, and summarized 60 relevant studies that employed LLMs in the AIG process. We identified the most commonly…
Descriptors: Artificial Intelligence, Test Items, Automation, Test Format
Paiva, José Carlos; Leal, José Paulo; Figueira, Álvaro – ACM Transactions on Computing Education, 2022
Practical programming competencies are critical to success in computer science (CS) education and to the go-to-market readiness of fresh graduates. Acquiring the required level of skill is a long journey of discovery, trial and error, and optimization through a broad range of programming activities that learners must perform themselves. It is not…
Descriptors: Automation, Computer Assisted Testing, Student Evaluation, Computer Science Education
Peer reviewed
Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992
New developments in the use of automatically scorable constructed response item types for large-scale assessment are reviewed for five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science
