| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 8 |
| Since 2007 (last 20 years) | 15 |
| Descriptor | Count |
| --- | --- |
| Bayesian Statistics | 16 |
| Inferences | 16 |
| Models | 10 |
| Learning Processes | 6 |
| Classification | 4 |
| Cognitive Processes | 4 |
| Decision Making | 4 |
| Logical Thinking | 4 |
| Computation | 3 |
| Language Processing | 3 |
| Young Children | 3 |
| Source | Count |
| --- | --- |
| Cognitive Science | 16 |
| Author | Count |
| --- | --- |
| Lagnado, David A. | 2 |
| Lee, Michael D. | 2 |
| Austerweil, Joseph L. | 1 |
| Burns, Patrick | 1 |
| Chen, Dawn | 1 |
| Fenton, Norman | 1 |
| Franke, Michael | 1 |
| Frosch, Caren A. | 1 |
| Gershman, Samuel J. | 1 |
| Gopnik, Alison | 1 |
| Griffiths, Thomas L. | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 16 |
| Reports - Research | 12 |
| Reports - Evaluative | 3 |
| Reports - Descriptive | 1 |
Austerweil, Joseph L.; Sanborn, Sophia; Griffiths, Thomas L. – Cognitive Science, 2019
Generalization is a fundamental problem solved by every cognitive system in essentially every domain. Although it is known that how people generalize varies in complex ways depending on the context or domain, it is an open question how people "learn" the appropriate way to generalize for a new context. To understand this capability, we…
Descriptors: Generalization, Logical Thinking, Inferences, Bayesian Statistics
Kangasrääsiö, Antti; Jokinen, Jussi P. P.; Oulasvirta, Antti; Howes, Andrew; Kaski, Samuel – Cognitive Science, 2019
This paper addresses a common challenge with computational cognitive models: identifying parameter values that are both theoretically plausible and generate predictions that match well with empirical data. While computational models can offer deep explanations of cognition, they are computationally complex and often out of reach of traditional…
Descriptors: Inferences, Computation, Cognitive Processes, Models
Lloyd, Kevin; Sanborn, Adam; Leslie, David; Lewandowsky, Stephan – Cognitive Science, 2019
Algorithms for approximate Bayesian inference, such as those based on sampling (i.e., Monte Carlo methods), provide a natural source of models of how people may deal with uncertainty with limited cognitive resources. Here, we consider the idea that individual differences in working memory capacity (WMC) may be usefully modeled in terms of the…
Descriptors: Short Term Memory, Bayesian Statistics, Cognitive Ability, Individual Differences
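The Lloyd et al. entry above describes approximate Bayesian inference by sampling, with limited cognitive resources modeled as a limit on the number of samples. As a hedged illustration of that general idea only (not the authors' model), the sketch below estimates a posterior mean by self-normalized importance sampling; the parameter `n_samples` is a hypothetical stand-in for the resource limit.

```python
# Minimal sketch (illustrative, not the paper's model): approximate Bayesian
# inference by sampling, where fewer samples stand in for tighter resources.
import numpy as np

def approx_posterior_mean(data, n_samples, rng=np.random.default_rng(0)):
    """Importance-sample the posterior mean of a coin's bias.

    Prior: theta ~ Uniform(0, 1); likelihood: Bernoulli(theta).
    `n_samples` is a hypothetical resource parameter for this sketch.
    """
    theta = rng.uniform(0.0, 1.0, size=n_samples)        # draws from the prior
    heads, n = sum(data), len(data)
    weights = theta**heads * (1.0 - theta)**(n - heads)   # likelihood of the data
    return np.sum(weights * theta) / np.sum(weights)      # self-normalized estimate

data = [1, 1, 0, 1, 1]   # toy observations
for k in (5, 50, 5000):  # fewer samples -> noisier, more variable inferences
    print(k, round(approx_posterior_mean(data, k), 3))
```

With few samples the estimate fluctuates from run to run; with many it settles near the exact posterior mean (about 0.71 here), which is the kind of resource-dependent variability the entry alludes to.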
Mayrhofer, Ralf; Waldmann, Michael R. – Cognitive Science, 2016
Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems do not only manifest themselves in estimations of causal strength or the selection of causes but also when…
Descriptors: Causal Models, Bayesian Statistics, Inferences, Probability
Chen, Dawn; Lu, Hongjing; Holyoak, Keith J. – Cognitive Science, 2017
A key property of relational representations is their "generativity": From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from…
Descriptors: Inferences, Abstract Reasoning, Learning Processes, Models
Roettger, Timo B.; Franke, Michael – Cognitive Science, 2019
Intonation plays an integral role in comprehending spoken language. Listeners can rapidly integrate intonational information to predictively map a given pitch accent onto the speaker's likely referential intentions. We use mouse tracking to investigate two questions: (a) how listeners draw predictive inferences based on information from…
Descriptors: Cues, Intonation, Language Processing, Speech Communication
Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D. – Cognitive Science, 2018
Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
Descriptors: Classification, Conditioning, Inferences, Novelty (Stimulus Dimension)
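The Jarecki, Meder, and Nelson entry refers to assuming class-conditional independence of features, which lets a joint likelihood be computed as a product of per-feature terms. The sketch below is a generic illustration of that simplification under made-up numbers, not the paper's task or code.

```python
# Minimal sketch (illustrative only): class-conditional independence turns
# P(features | class) into a product of single-feature likelihoods.
import numpy as np

def class_posterior(x, priors, feature_likelihoods):
    """Posterior over classes for a binary feature vector `x`.

    priors[c] = P(class = c)
    feature_likelihoods[c][i] = P(feature_i = 1 | class = c)
    """
    posts = []
    for c, prior in enumerate(priors):
        lik = 1.0
        for xi, p in zip(x, feature_likelihoods[c]):
            lik *= p if xi == 1 else (1.0 - p)   # independence: multiply per-feature terms
        posts.append(prior * lik)
    posts = np.array(posts)
    return posts / posts.sum()                   # normalize to get P(class | features)

# Hypothetical two-class task with three binary features
priors = [0.5, 0.5]
feature_likelihoods = [[0.8, 0.7, 0.2],   # class 0
                       [0.3, 0.4, 0.9]]   # class 1
print(class_posterior([1, 1, 0], priors, feature_likelihoods))
```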
Schouwstra, Marieke; Swart, Henriëtte; Thompson, Bill – Cognitive Science, 2019
Natural languages make prolific use of conventional constituent-ordering patterns to indicate "who did what to whom," yet the mechanisms through which these regularities arise are not well understood. A series of recent experiments demonstrates that, when prompted to express meanings through silent gesture, people bypass native language…
Descriptors: Nonverbal Communication, Language Acquisition, Bayesian Statistics, Preferences
Gershman, Samuel J.; Pouncy, Hillard Thomas; Gweon, Hyowon – Cognitive Science, 2017
We routinely observe others' choices and use them to guide our own. Whose choices influence us more, and why? Prior work has focused on the effect of perceived similarity between two individuals (self and others), such as the degree of overlap in past choices or explicitly recognizable group affiliations. In the real world, however, any dyadic…
Descriptors: Social Influences, Social Cognition, Inferences, Models
Phillips, Lawrence; Pearl, Lisa – Cognitive Science, 2015
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…
Descriptors: Language Acquisition, Models, Computational Linguistics, Credibility
Jenkins, Gavin W.; Samuelson, Larissa K.; Smith, Jodi R.; Spencer, John P. – Cognitive Science, 2015
It is unclear how children learn labels for multiple overlapping categories such as "Labrador," "dog," and "animal." Xu and Tenenbaum (2007a) suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and…
Descriptors: Generalization, Young Children, Inferences, Models
Fenton, Norman; Neil, Martin; Lagnado, David A. – Cognitive Science, 2013
A Bayesian network (BN) is a graphical model of uncertainty that is especially well suited to legal arguments. It enables us to visualize and model dependencies between different hypotheses and pieces of evidence and to calculate the revised probability beliefs about all uncertain factors when any piece of new evidence is presented. Although BNs…
Descriptors: Networks, Bayesian Statistics, Persuasive Discourse, Models
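The Fenton, Neil, and Lagnado entry describes revising probability beliefs when a new piece of evidence is presented. As a hedged, two-node illustration of that updating step (a single hypothesis-evidence link with invented numbers, nothing like the paper's full legal-argument networks), the sketch below applies Bayes' rule directly.

```python
# Minimal two-node sketch (hypothesis -> evidence) with illustrative numbers;
# the legal-argument BNs discussed in the paper link many such variables.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule once evidence E is observed."""
    joint_h = prior_h * p_e_given_h                   # P(H, E)
    joint_not_h = (1.0 - prior_h) * p_e_given_not_h   # P(not H, E)
    return joint_h / (joint_h + joint_not_h)

# Hypothetical values: weak prior belief in H, evidence much likelier under H
print(posterior(prior_h=0.1, p_e_given_h=0.7, p_e_given_not_h=0.05))  # ~0.61
```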
Weisberg, Deena S.; Gopnik, Alison – Cognitive Science, 2013
Young children spend a large portion of their time pretending about non-real situations. Why? We answer this question by using the framework of Bayesian causal models to argue that pretending and counterfactual reasoning engage the same component cognitive abilities: disengaging with current reality, making inferences about an alternative…
Descriptors: Causal Models, Bayesian Statistics, Young Children, Imagination
Frosch, Caren A.; McCormack, Teresa; Lagnado, David A.; Burns, Patrick – Cognitive Science, 2012
The application of the formal framework of causal Bayesian Networks to children's causal learning provides the motivation to examine the link between judgments about the causal structure of a system, and the ability to make inferences about interventions on components of the system. Three experiments examined whether children are able to make…
Descriptors: Bayesian Statistics, Intervention, Inferences, Attribution Theory
Lee, Michael D.; Vanpaemel, Wolf – Cognitive Science, 2008
This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in…
Descriptors: Computation, Inferences, Cognitive Science, Models
Pages: 1 | 2
