| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 59 |
| Since 2022 (last 5 years) | 385 |
| Since 2017 (last 10 years) | 828 |
| Since 2007 (last 20 years) | 1342 |
| Audience | Count |
| --- | --- |
| Practitioners | 195 |
| Teachers | 161 |
| Researchers | 93 |
| Administrators | 50 |
| Students | 34 |
| Policymakers | 15 |
| Parents | 12 |
| Counselors | 2 |
| Community | 1 |
| Media Staff | 1 |
| Support Staff | 1 |
| Location | Count |
| --- | --- |
| Canada | 62 |
| Turkey | 59 |
| Germany | 40 |
| Australia | 36 |
| United Kingdom | 36 |
| Japan | 35 |
| China | 33 |
| United States | 32 |
| California | 25 |
| Iran | 25 |
| United Kingdom (England) | 25 |
Cox, Julie; Martin, Angi – Advances in Special Education, 2021
Assessing learning outcomes for students who are d/Deaf or hard of hearing presents ongoing challenges given the largely verbal nature of classrooms and the presentation of information. Educators should carefully consider several factors when assessing their students who are d/Deaf or hard of hearing. Those factors include, but are not limited to:…
Descriptors: Students with Disabilities, Deafness, Hearing Impairments, Student Evaluation
Sibic, Okan; Sesen, Burcin Acar – International Journal of Assessment Tools in Education, 2022
One of the main goals of science education is to help students acquire science process skills, so it is important to measure whether students have gained those skills. Various tests have been developed and used in studies for this purpose. This study aims to examine the science process skills tests that have been used in the theses produced…
Descriptors: Foreign Countries, Science Education, Science Process Skills, Masters Theses
Alamri, Aeshah; Higham, Philip A. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
Corrective feedback is often touted as a critical benefit to learning, boosting testing effects when retrieval is poor and reducing negative testing effects. Here, we explore the dark side of corrective feedback. In three experiments, we found that corrective feedback on multiple-choice (MC) practice questions is later endorsed as the answer to…
Descriptors: Feedback (Response), Multiple Choice Tests, Cues, Recall (Psychology)
Giofrè, D.; Allen, K.; Toffalini, E.; Caviola, S. – Educational Psychology Review, 2022
This meta-analysis reviews 79 studies (N = 46,605) that examined gender differences in intelligence in school-aged children. To do so, we limited the literature search to works that assessed the construct of intelligence through the Wechsler Intelligence Scales for Children (WISC) batteries, evaluating possible gender differences…
Descriptors: Gender Differences, Cognitive Processes, Children, Intelligence Tests
Grajzel, Katalin; Dumas, Denis; Acar, Selcuk – Journal of Creative Behavior, 2022
One of the best-known and most frequently used measures of creative idea generation is the Torrance Test of Creative Thinking (TTCT). The TTCT Verbal, assessing verbal ideation, contains two forms created to be used interchangeably by researchers and practitioners. However, the parallel forms reliability of the two versions of the TTCT Verbal has…
Descriptors: Test Reliability, Creative Thinking, Creativity Tests, Verbal Ability
Wilson, Joseph; Pollard, Benjamin; Aiken, John M.; Lewandowski, H. J. – Physical Review Physics Education Research, 2022
Surveys have long been used in physics education research to understand student reasoning and inform course improvements. However, to make analysis of large sets of responses practical, most surveys use a closed-response format with a small set of potential responses. Open-ended formats, such as written free response, can provide deeper insights…
Descriptors: Natural Language Processing, Science Education, Physics, Artificial Intelligence
Erdem-Kara, Basak; Dogan, Nuri – International Journal of Assessment Tools in Education, 2022
Recently, adaptive test approaches have become a viable alternative to traditional fixed-item tests. The main advantage of adaptive tests is that they reach the desired measurement precision with fewer items. However, using fewer items means that each item has a greater effect on ability estimation, and therefore such tests are open to more…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Test Construction
Pengelley, James; Whipp, Peter R.; Rovis-Hermann, Nina – Educational Psychology Review, 2023
The aim of the present study is to reconcile previous findings (a) that testing mode has no effect on test outcomes or cognitive load (Comput Hum Behav 77:1-10, 2017) and (b) that younger learners' working memory processes are more sensitive to computer-based test formats (J Psychoeduc Assess 37(3):382-394, 2019). We addressed key methodological…
Descriptors: Scores, Cognitive Processes, Difficulty Level, Secondary School Students
Cui, Ying; Chen, Fu; Lutsyk, Alina; Leighton, Jacqueline P.; Cutumisu, Maria – Assessment in Education: Principles, Policy & Practice, 2023
With the exponential increase in the volume of data available in the 21st century, data literacy skills have become vitally important in workplaces and everyday life. This paper provides a systematic review of available data literacy assessments targeted at different audiences and educational levels. The results can help researchers and…
Descriptors: Data, Information Literacy, 21st Century Skills, Competence
Kimpo, Rhea R.; Puder, Barb – Anatomical Sciences Education, 2023
The traditional format for neuroanatomy lab practical exams involves stations with a time limit for each station and no opportunity to revisit stations. Timed exams have been associated with anxiety, which can lead to poor performance. In alignment with the universal design for learning (UDL), "Timed Image Question" and "Untimed Image…
Descriptors: Anatomy, Neurosciences, Comparative Analysis, Laboratory Experiments
Chan, Cecilia Ka Yuk – Assessment & Evaluation in Higher Education, 2023
With advances in technology, possessing digital and information literacy has become crucial to how employers select candidates in this digital AI era. For most students, receiving and producing electronic text has become the norm, so examinations whose writing components are completed by hand may not accurately reflect their abilities. It…
Descriptors: Test Format, Handwriting, Stakeholders, Feedback (Response)
van den Broek, Gesa S. E.; Gerritsen, Suzanne L.; Oomen, Iris T. J.; Velthoven, Eva; van Boxtel, Femke H. J.; Kester, Liesbeth; van Gog, Tamara – Journal of Educational Psychology, 2023
Multiple-choice questions (MCQs) are popular in vocabulary software because they can be scored automatically and are compatible with many input devices (e.g., touchscreens). Answering MCQs is beneficial for learning, especially when learners retrieve knowledge from memory to evaluate plausible answer alternatives. However, such retrieval may not…
Descriptors: Multiple Choice Tests, Vocabulary Development, Test Format, Cues
Ivan D. Mardini G.; Christian G. Quintero M.; César A. Viloria N.; Winston S. Percybrooks B.; Heydy S. Robles N.; Karen Villalba R. – Education and Information Technologies, 2024
Today, reading comprehension is considered an essential skill in modern life; therefore, higher education students require more specific skills to understand, interpret, and evaluate texts effectively. Short answer questions (SAQs) are among the relevant and appropriate tools for assessing reading comprehension skills. Unlike multiple-choice questions,…
Descriptors: Reading Comprehension, Reading Tests, Learning Strategies, Grading
Celeste Combrinck – SAGE Open, 2024
We have less time and focus than ever before, while demands on our attention keep increasing. It is therefore no surprise that when answering questionnaires, we often simply choose to strongly agree or remain neutral, producing problematic and unusable data. The current study investigated a forced-choice (ipsative) format compared to the same questions on a…
Descriptors: Likert Scales, Test Format, Surveys, Design
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
