Publication Date
| In 2026 | 0 |
| Since 2025 | 59 |
Author
| Abdullah Al Fraidan | 2 |
| Gyeonggeon Lee | 2 |
| Min Li | 2 |
| Okan Bulut | 2 |
| Selcuk Acar | 2 |
| Xiaoming Zhai | 2 |
| Xiaoxiao Liu | 2 |
| Yizhu Gao | 2 |
| Abdullah Ali Khan | 1 |
| Adam B. Wilson | 1 |
| Ahmed Al-Badri | 1 |
Publication Type
| Journal Articles | 54 |
| Reports - Research | 48 |
| Information Analyses | 7 |
| Reports - Evaluative | 6 |
| Tests/Questionnaires | 3 |
| Books | 1 |
| Collected Works - General | 1 |
| Reports - Descriptive | 1 |
Audience
| Teachers | 2 |
| Administrators | 1 |
| Policymakers | 1 |
| Researchers | 1 |
Location
| Iran | 3 |
| United Kingdom | 3 |
| Malaysia | 2 |
| Saudi Arabia | 2 |
| South Korea | 2 |
| Taiwan | 2 |
| Turkey | 2 |
| United Kingdom (England) | 2 |
| Chile | 1 |
| China | 1 |
| Europe | 1 |
Tom Benton – Practical Assessment, Research & Evaluation, 2025
This paper proposes an extension of linear equating that may be useful in one of two fairly common assessment scenarios. One is where different students have taken different combinations of test forms. This might occur, for example, where students have some free choice over the exam papers they take within a particular qualification. In this…
Descriptors: Equated Scores, Test Format, Test Items, Computation
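Benton's paper extends linear equating; as background, the classical mean-sigma form of linear equating can be sketched as follows. This is a generic illustration of the standard technique, not the paper's proposed extension, and the score data are invented.

```python
# Mean-sigma linear equating: map a score x from form X onto the scale
# of form Y by matching the two forms' means and standard deviations.
from statistics import mean, pstdev

def linear_equate(x, form_x_scores, form_y_scores):
    """Return the form-Y equivalent of a raw score x from form X."""
    mu_x, sd_x = mean(form_x_scores), pstdev(form_x_scores)
    mu_y, sd_y = mean(form_y_scores), pstdev(form_y_scores)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Hypothetical score distributions: form X runs lower than form Y,
# so a raw 20 on X maps to a higher equivalent on Y.
form_x = [12, 15, 18, 20, 25]
form_y = [15, 18, 21, 23, 28]
print(linear_equate(20.0, form_x, form_y))  # → 23.0
```

The paper's contribution lies in adapting this kind of transformation to designs where students sit different combinations of forms; that extension is not reproduced here.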
Bin Tan; Nour Armoush; Elisabetta Mazzullo; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2025
This study reviews existing research on the use of large language models (LLMs) for automatic item generation (AIG). We performed a comprehensive literature search across seven research databases, selected studies based on predefined criteria, and summarized 60 relevant studies that employed LLMs in the AIG process. We identified the most commonly…
Descriptors: Artificial Intelligence, Test Items, Automation, Test Format
Meltem Acar Güvendir; Seda Donat Bacioglu; Hasan Özgür; Sefa Uyanik; Fatmagül Gürbüz Akçay; Emre Güvendir – International Journal of Psychology and Educational Studies, 2025
Different types of test items influence students' test anxiety, and physiological measures such as heart rate provide a means of measuring this anxiety. This study aimed to explore the connection between test anxiety and examination item formats. It centered on 20 junior university students in Western Türkiye. The research monitored students'…
Descriptors: Foreign Countries, Test Anxiety, Measurement Techniques, Physiology
Hao Lei; Libing Chen; Ming Ming Chiu; Longyue Fang; Yuxin Ding – Educational Psychology Review, 2025
Adding illustrations to texts might improve students' science achievement. This meta-analysis of 121 effect sizes from 63 studies of 7,621 students across five decades determines both the overall effect and moderators that account for differences across studies. Our random-effects model shows a positive effect of adding illustrations to texts on…
Descriptors: Illustrations, Textbooks, Science Achievement, Effect Size
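The meta-analysis above uses a random-effects model to pool its 121 effect sizes. A minimal sketch of the standard DerSimonian-Laird random-effects pooling is shown below; the effect sizes and variances are invented for illustration and are not the study's data.

```python
# DerSimonian-Laird random-effects pooling: weight each study by the
# inverse of (within-study variance + estimated between-study variance).
def random_effects_pool(effects, variances):
    """Pool effect sizes under a random-effects model."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Q statistic measures heterogeneity around the fixed-effect mean.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)

effects = [0.10, 0.80, 0.30, 0.90]         # hypothetical Hedges' g values
variances = [0.02, 0.05, 0.03, 0.04]
print(round(random_effects_pool(effects, variances), 3))
```

When heterogeneity is high, the between-study variance term pulls the study weights closer together, which is why random-effects pooling is the usual choice for a five-decade literature like this one.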
Yi-Ling Wu; Yao-Hsuan Huang; Chia-Wen Chen; Po-Hsi Chen – Journal of Educational Measurement, 2025
Multistage testing (MST), a variant of computerized adaptive testing (CAT), differs from conventional CAT in that it is adapted at the module level rather than at the individual item level. Typically, all examinees begin the MST with a linear test form in the first stage, commonly known as the routing stage. In 2020, Han introduced an innovative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Measurement
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
Jonathan Hoseana; Andy Leonardo Louismono; Oriza Stepanus – International Journal of Mathematical Education in Science and Technology, 2025
We describe and evaluate a method to mitigate unwanted student collaborations in assessments, which we recently implemented in a second-year undergraduate mathematics module. The method requires a list of specific pairs of students to be prevented from collaborating, which we constructed based on the results of previous assessments. We converted…
Descriptors: Graphs, Color, College Mathematics, Undergraduate Students
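The "Graphs, Color" descriptors suggest the pair list was treated as a graph-coloring problem. One plausible reading, sketched here with invented names and pairs (this is an assumption, not the authors' published algorithm): students are vertices, each forbidden pair is an edge, and a greedy coloring assigns exam variants so paired students never share one.

```python
# Greedy graph coloring: color index = exam variant number.
# Students joined by an edge (a forbidden pair) get different variants.
def assign_variants(students, forbidden_pairs):
    """Return a dict mapping each student to a variant number."""
    neighbors = {s: set() for s in students}
    for a, b in forbidden_pairs:
        neighbors[a].add(b)
        neighbors[b].add(a)
    variant = {}
    for s in students:                     # fixed order keeps it deterministic
        taken = {variant[n] for n in neighbors[s] if n in variant}
        v = 0
        while v in taken:                  # smallest variant not used nearby
            v += 1
        variant[s] = v
    return variant

students = ["Ana", "Ben", "Cho", "Dee"]
pairs = [("Ana", "Ben"), ("Ben", "Cho"), ("Ana", "Cho")]
print(assign_variants(students, pairs))   # → {'Ana': 0, 'Ben': 1, 'Cho': 2, 'Dee': 0}
```

The number of distinct colors used is the number of exam variants needed, which the greedy heuristic keeps small without guaranteeing the minimum.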
Ata Jahangir Moshayedi; Atanu Shuvam Roy; Zeashan Hameed Khan; Hong Lan; Habibollah Lotfi; Xiaohong Zhang – Education and Information Technologies, 2025
In this paper, a secure exam proctoring assistant 'EMTIHAN' (which means exam in Arabic/Persian/Urdu/Turkish languages) is developed to address concerns related to online exams for handwritten topics by allowing students to submit their answers online securely via their mobile devices. This system is designed with an aim to lessen the student's…
Descriptors: Computer Assisted Testing, Distance Education, MOOCs, Virtual Classrooms
Harpreet Auby; Namrata Shivagunde; Vijeta Deshpande; Anna Rumshisky; Milo D. Koretsky – Journal of Engineering Education, 2025
Background: Analyzing student short-answer written justifications to conceptually challenging questions has proven helpful to understand student thinking and improve conceptual understanding. However, qualitative analyses are limited by the burden of analyzing large amounts of text. Purpose: We apply dense and sparse Large Language Models (LLMs)…
Descriptors: Student Evaluation, Thinking Skills, Test Format, Cognitive Processes
Guadalupe Elizabeth Morales-Martinez; Ricardo Jesus Villarreal-Lozano; Maria Isolde Hedlefs-Aguilar – International Journal of Emotional Education, 2025
This research study explored the systematic thinking modes underlying test anxiety in 706 engineering students through an experiment centred on the cognitive algebra paradigm. The participants had to read 36 experimental scenarios that narrated an imaginary academic assessment situation one by one and then judge the level of anxiety they…
Descriptors: Engineering Education, Cognitive Style, College Students, Student Attitudes
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data results from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring, and the use of suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Vahe Permzadian; Kit W. Cho – Teaching in Higher Education, 2025
When administering an in-class exam, a common decision that confronts every instructor is whether the exam format should be closed book or open book. The present review synthesizes research examining the effect of administering closed-book or open-book assessments on long-term learning. Although the overall effect of assessment format on learning…
Descriptors: College Students, Tests, Test Format, Long Term Memory
Hasibe Yahsi Sari; Hulya Kelecioglu – International Journal of Assessment Tools in Education, 2025
The aim of the study is to examine the effect of polytomous item ratio on ability estimation in different conditions in multistage tests (MST) using mixed tests. The study is simulation-based research. In the PISA 2018 application, the ability parameters of the individuals and the item pool were created by using the item parameters estimated from…
Descriptors: Test Items, Test Format, Accuracy, Test Length