Showing all 13 results
Liunian Li – ProQuest LLC, 2024
To build an Artificial Intelligence system that can assist us in our daily lives, the ability to understand the world around us through visual input is essential. Prior studies train visual perception models by defining concept vocabularies and annotating data against the fixed vocabulary. It is hard to define a comprehensive set that covers everything, and thus…
Descriptors: Artificial Intelligence, Visual Stimuli, Visual Perception, Models
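The Li abstract contrasts training against a fixed concept vocabulary with the harder goal of recognizing concepts that were never enumerated up front. As a rough illustration of that contrast only (not the dissertation's method), the sketch below compares a closed-vocabulary classifier with an open-vocabulary matcher that scores arbitrary text queries; embed_image and embed_text are hypothetical placeholder encoders invented for the example.
```python
import numpy as np

FIXED_VOCABULARY = ["dog", "cat", "car"]  # closed label set fixed before training

def embed_image(image_id: str) -> np.ndarray:
    # Hypothetical visual encoder: a deterministic random vector stands in for real features.
    rng = np.random.default_rng(len(image_id))
    return rng.normal(size=64)

def embed_text(text: str) -> np.ndarray:
    # Hypothetical text encoder, same stand-in trick.
    rng = np.random.default_rng(sum(map(ord, text)))
    return rng.normal(size=64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_fixed(image_id: str) -> str:
    # Closed vocabulary: the answer can only ever be one of the predefined labels.
    return max(FIXED_VOCABULARY,
               key=lambda label: cosine(embed_image(image_id), embed_text(label)))

def match_open(image_id: str, queries: list[str]) -> str:
    # Open vocabulary: any free-form text query can be scored at inference time.
    return max(queries, key=lambda q: cosine(embed_image(image_id), embed_text(q)))

print(classify_fixed("photo_001"))
print(match_open("photo_001", ["a red fire hydrant", "a person riding a bicycle"]))
```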
Peer reviewed
Direct link
Ethan O. Nadler; Douglas Guilbeault; Sofronia M. Ringold; T. R. Williamson; Antoine Bellemare-Pepin; Iulia M. Comșa; Karim Jerbi; Srini Narayanan; Lisa Aziz-Zadeh – Cognitive Science, 2025
Can metaphorical reasoning involving embodied experience--such as color perception--be learned from the statistics of language alone? Recent work finds that colorblind individuals robustly understand and reason abstractly about color, implying that color associations in everyday language might contribute to the metaphorical understanding of color.…
Descriptors: Color, Painting (Visual Arts), Natural Language Processing, Figurative Language
Peer reviewed
Direct link
Stefan Depeweg; Constantin A. Rothkopf; Frank Jäkel – Cognitive Science, 2024
More than 50 years ago, Bongard introduced 100 visual concept learning problems as a challenge for artificial vision systems. These problems are now known as Bongard problems. Although they are well known in cognitive science and artificial intelligence, very little progress has been made toward building systems that can solve a substantial…
Descriptors: Visual Learning, Problem Solving, Cognitive Science, Artificial Intelligence
Peer reviewed
Direct link
Harris, Anthony M.; Eayrs, Joshua O.; Lavie, Nilli – Cognitive Research: Principles and Implications, 2023
Highly automated technologies are increasingly incorporated into existing systems, for instance in advanced car models. Although highly automated modes permit non-driving activities (e.g., internet browsing), drivers are expected to reassume control upon a 'take over' signal from the automation. To assess a person's readiness for takeover,…
Descriptors: Eye Movements, Attention, Cognitive Processes, Reaction Time
Peer reviewed
Direct link
Taylor, Tessa; Lanovaz, Marc J. – Journal of Applied Behavior Analysis, 2022
Behavior analysts typically rely on visual inspection of single-case experimental designs to make treatment decisions. However, visual inspection is subjective, which has led to the development of supplemental objective methods such as the conservative dual-criteria method. To replicate and extend a study conducted by Wolfe et al. (2018) on the…
Descriptors: Visual Perception, Artificial Intelligence, Decision Making, Evaluators
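For readers unfamiliar with the conservative dual-criteria (CDC) method named in the Taylor and Lanovaz abstract, the sketch below illustrates the decision rule as it is usually described: baseline mean and trend lines, each raised by 0.25 baseline standard deviations, plus a binomial criterion on the count of treatment-phase points above both lines. This is a minimal illustration under those assumptions, not the analysis code from the study; the function and parameter names are invented for the example, and the target behavior is assumed to increase.
```python
# Rough sketch of the conservative dual-criteria (CDC) decision rule (illustration only).
import math
from statistics import mean, stdev

def ols_trend(xs, ys):
    """Ordinary least-squares slope and intercept for the baseline trend line."""
    x_bar, y_bar = mean(xs), mean(ys)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return slope, y_bar - slope * x_bar

def cdc_decision(baseline, treatment, shift_sd=0.25, alpha=0.05):
    n_base = len(baseline)
    xs = list(range(n_base))
    shift = shift_sd * stdev(baseline)
    # Criterion lines: baseline mean and baseline trend, both raised by 0.25 SD.
    mean_line = mean(baseline) + shift
    slope, intercept = ols_trend(xs, baseline)
    # Count treatment points that fall above BOTH projected criterion lines.
    k = 0
    for i, y in enumerate(treatment, start=n_base):
        trend_line = slope * i + intercept + shift
        if y > mean_line and y > trend_line:
            k += 1
    # Binomial criterion: is k larger than chance (p = 0.5) would predict?
    n_treat = len(treatment)
    p_value = sum(math.comb(n_treat, j) for j in range(k, n_treat + 1)) / 2 ** n_treat
    return p_value < alpha, k, p_value

# Tiny usage example with made-up data points.
decision, k, p = cdc_decision(baseline=[2, 3, 2, 4, 3], treatment=[5, 6, 5, 7, 6, 7])
print(decision, k, round(p, 4))
```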
Willis, Athena S. – ProQuest LLC, 2023
Recent research shows that deaf signers show increased behavioral and neural sensitivity to certain types of movement, such as biological motion, human actions, and signing avatars. However, other work suggests that in deaf signers exposed to signed language before age five, the mirror mechanism has minimal involvement during the perception of…
Descriptors: Deafness, Sign Language, Young Children, Cognitive Processes
Peer reviewed
Direct link
Janet H. Hsiao; Jeehye An; Veronica Kit Sum Hui; Yueyuan Zheng; Antoni B. Chan – npj Science of Learning, 2022
A greater eyes-focused eye movement pattern during face recognition is associated with better performance in adults but not in children. We test the hypothesis that higher eye movement consistency across trials, rather than a greater eyes-focused pattern, predicts better performance in children, since it reflects capacity in developing visual…
Descriptors: Eye Movements, Recognition (Psychology), Human Body, Visual Perception
Peer reviewed
Sami Baral; Li Lucy; Ryan Knight; Alice Ng; Luca Soldaini; Neil T. Heffernan; Kyle Lo – Grantee Submission, 2024
In real-world settings, vision language models (VLMs) should robustly handle naturalistic, noisy visual content as well as domain-specific language and concepts. For example, K-12 educators using digital learning platforms may need to examine and provide feedback across many images of students' math work. To assess the potential of VLMs to support…
Descriptors: Visual Learning, Visual Perception, Natural Language Processing, Freehand Drawing
Peer reviewed
Direct link
Mason, Blake; Rau, Martina A.; Nowak, Robert – Cognitive Science, 2019
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful…
Descriptors: Visual Aids, STEM Education, Task Analysis, Competence
Peer reviewed
PDF on ERIC Download full text
Sen, Ayon; Patel, Purav; Rau, Martina A.; Mason, Blake; Nowak, Robert; Rogers, Timothy T.; Zhu, Xiaojin – International Educational Data Mining Society, 2018
In STEM domains, students are expected to acquire domain knowledge from visual representations that they may not yet be able to interpret. Such learning requires perceptual fluency: the ability to intuitively and rapidly see which concepts visuals show and to translate among multiple visuals. Instructional problems that engage students in…
Descriptors: Visual Aids, Visual Perception, Data Analysis, Artificial Intelligence
Peer reviewed
PDF on ERIC Download full text
Moreno-Esteva, Enrique Garcia; White, Sonia L. J.; Wood, Joanne M.; Black, Alex A. – Frontline Learning Research, 2018
In this research, we aimed to investigate the visual-cognitive behaviours of a sample of 106 children in Year 3 (8.8 ± 0.3 years) while completing a mathematics bar-graph task. Eye movements were recorded while children completed the task and the patterns of eye movements were explored using machine learning approaches. Two different techniques of…
Descriptors: Artificial Intelligence, Man Machine Systems, Mathematics Education, Eye Movements
Baker, Jason R. – ProQuest LLC, 2017
The goals of the present action research study were to understand intelligence analysts' perceptions of weapon systems visual recognition ("vis-recce") training and to determine the impact of a Critical Thinking Training (CTT) Seminar and Formative Assessments on unit-level intelligence analysts' "vis-recce" performance at a…
Descriptors: Critical Thinking, Thinking Skills, Skill Development, Military Personnel
Peer reviewed
PDF on ERIC Download full text
Simonson, Michael, Ed.; Seepersaud, Deborah, Ed. – Association for Educational Communications and Technology, 2019
For the forty-second time, the Association for Educational Communications and Technology (AECT) is sponsoring the publication of these Proceedings. Papers published in this volume were presented at the annual AECT Convention in Las Vegas, Nevada. The Proceedings of AECT's Convention are published in two volumes. Volume 1 contains 37 papers dealing…
Descriptors: Educational Technology, Technology Uses in Education, Research and Development, Elementary Education