Publication Date
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 13 |
| Since 2007 (last 20 years) | 21 |
Descriptor
| Computer Assisted Testing | 42 |
| Multiple Choice Tests | 42 |
| Test Items | 18 |
| Test Construction | 13 |
| Test Format | 13 |
| Foreign Countries | 11 |
| College Students | 8 |
| Higher Education | 8 |
| Difficulty Level | 7 |
| Comparative Analysis | 6 |
| Student Attitudes | 6 |
Source
| International Association for… | 9 |
| Grantee Submission | 6 |
| Online Submission | 2 |
| International Working Group… | 1 |
| Mathematics Education… | 1 |
| North American Chapter of the… | 1 |
| Research in Learning… | 1 |
| Research-publishing.net | 1 |
Author
| Anderson, Paul S. | 4 |
| Clariana, Roy B. | 2 |
| Hyers, Albert D. | 2 |
| Kuneshka, Loreta | 2 |
| Teneqexhi, Romeo | 2 |
| Adam C. Sales | 1 |
| Aizawa, Kazumi | 1 |
| Aksenova, Victoria Y. | 1 |
| Albacete, Patricia | 1 |
| Alsubait, Tahani | 1 |
| Alvaro, Rosaria | 1 |
Publication Type
| Speeches/Meeting Papers | 42 |
| Reports - Research | 26 |
| Reports - Evaluative | 10 |
| Reports - Descriptive | 5 |
| Information Analyses | 1 |
| Journal Articles | 1 |
Education Level
| Higher Education | 8 |
| Secondary Education | 7 |
| Postsecondary Education | 6 |
| High Schools | 4 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Elementary Education | 1 |
| Grade 11 | 1 |
| Two Year Colleges | 1 |
Audience
| Researchers | 1 |
Location
| Albania | 2 |
| Arkansas | 1 |
| Australia | 1 |
| Hong Kong | 1 |
| Italy (Rome) | 1 |
| Japan | 1 |
| Netherlands | 1 |
| Russia | 1 |
| Sweden | 1 |
| Texas | 1 |
| United Kingdom | 1 |
Laws, Policies, & Programs
| No Child Left Behind Act 2001 | 1 |
Assessments and Surveys
| SAT (College Admission Test) | 3 |
| ACT Assessment | 1 |
| Group Embedded Figures Test | 1 |
| Preliminary Scholastic… | 1 |
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open and close-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize close-ended activities (i.e. Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Joe Olsen; Amy Adair; Janice Gobert; Michael Sao Pedro; Mariel O'Brien – Grantee Submission, 2022
Many national science frameworks (e.g., Next Generation Science Standards) argue that developing mathematical modeling competencies is critical for students' deep understanding of science. However, science teachers may be unprepared to assess these competencies. We are addressing this need by developing virtual lab performance assessments that…
Descriptors: Mathematical Models, Intelligent Tutoring Systems, Performance Based Assessment, Data Collection
Teneqexhi, Romeo; Kuneshka, Loreta; Naço, Adrian – International Association for Development of the Information Society, 2018
Organizing exams or competitions with multiple choice questions and technology-based assessment is now common practice in many educational institutions around the world. As a rule, these kinds of exams or tests are done by answering questions on a so-called answer sheet form. On this form, each student or participant in the exam is obliged to…
Descriptors: Foreign Countries, Competition, Multiple Choice Tests, Computer Assisted Testing
Galeoto, Giovanni; D'Elpidio, Giuliana; Alvaro, Rosaria; Zicari, Anna Maria; Valente, Donatella; Riccio, Marianna – International Association for Development of the Information Society, 2021
The Italian Disciplinary section of the Test of Competences (TECO-D) project is an important longitudinal study used to analyze students' learning outcomes and to measure the quality of the educational process. The aim of the present study was to evaluate the psychometric properties of the TECO-D in students enrolled in the Bachelor's Degree in…
Descriptors: Case Studies, Nursing Education, Psychometrics, Longitudinal Studies
Papasalouros, Andreas; Chatzigiannakou, Maria – International Association for Development of the Information Society, 2018
Automating the production of questions for assessment and self-assessment has recently become an active field of study. The use of Semantic Web technologies has certain advantages over other methods for question generation and is thus one of the most important lines of research for this problem. The aim of this paper is to provide an overview of…
Descriptors: Computer Assisted Testing, Web 2.0 Technologies, Test Format, Multiple Choice Tests
Teneqexhi, Romeo; Qirko, Margarita; Sharko, Genci; Vrapi, Fatmir; Kuneshka, Loreta – International Association for Development of the Information Society, 2017
Exam assessment is one of the most tedious tasks for university teachers all over the world. Multiple choice tests make exam assessment a little easier, but the teacher cannot prepare more than 3-4 variants; in this case, the possibility of students cheating from one another becomes a risk for "objective assessment outcome." On…
Descriptors: Testing, Computer Assisted Testing, Test Items, Test Construction
El Rassi, Mary Ann Barbour – International Association for Development of the Information Society, 2019
It has long been debated whether the Open-Book-Open-Web (OBOW) exam is as useful and efficient as the traditional closed-book exam. Some scholars and practitioners have doubted the efficiency of the OBOW exam and raised the possibility of cheating, as it is not directly monitored. This paper investigates the effectiveness of OBOW exams by comparing them with…
Descriptors: Developing Nations, Test Format, Tests, Cheating
Nygren, Thomas; Guath, Mona – International Association for Development of the Information Society, 2018
In this study we investigate the ability of 532 teenagers to determine the credibility of digital news. Using an online test, we assess to what extent teenagers are able to determine the credibility of different sources, evaluate credible and biased uses of evidence, and corroborate information. Many respondents fail to identify the credibility of…
Descriptors: Credibility, Information Sources, Information Literacy, News Reporting
Albacete, Patricia; Silliman, Scott; Jordan, Pamela – Grantee Submission, 2017
Intelligent tutoring systems (ITS), like human tutors, try to adapt to students' knowledge level so that instruction is tailored to their needs. One aspect of this adaptation relies on the ability to understand a student's initial knowledge so as to build on it, avoiding teaching what the student already knows and focusing on…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Multiple Choice Tests, Computer Assisted Testing
Barsukova, Anastasia A.; Aksenova, Victoria Y. – International Association for Development of the Information Society, 2018
The article aims to outline and analyze the most demanding parts of computer-based exams for students of the Ural Federal University named after the first President of Russia, B.N. Yeltsin (Ekaterinburg, Russia), during one exam session in spring of the 2016-2017 academic year. For this research, a quantitative method of mean scores for reading, listening, and…
Descriptors: Barriers, Computer Assisted Testing, Universities, English (Second Language)
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Zoanetti, Nathan; Les, Magdalena; Leigh-Lancaster, David – Mathematics Education Research Group of Australasia, 2014
From 2011 to 2013, the VCAA conducted a trial aligning the use of computers in curriculum, pedagogy, and assessment, culminating in a group of 62 volunteer students sitting their end-of-Year-12 technology-active Mathematical Methods (CAS) Examination 2 as a computer-based examination. This paper reports on statistical modelling undertaken to compare the…
Descriptors: Computer Assisted Testing, Comparative Analysis, Mathematical Concepts, Mathematics Tests
Aizawa, Kazumi; Iso, Tatsuo – Research-publishing.net, 2013
The present study aims to demonstrate how the estimation of vocabulary size might be affected by two neglected factors in vocabulary size tests. The first factor is randomization of question sequence, as opposed to the traditional high-to-low frequency sequencing. The second factor is learners' confidence in choosing the correct meaning for a…
Descriptors: Vocabulary, Computer Assisted Testing, Scores, Multiple Choice Tests