Wil Cunningham
Department of Psychology, The Ohio State University
Title: The Motivated Brain: Goals Shape Activation
Although early research implicated the amygdala in the automatic processing of negative information, more recent research suggests that it plays a more general role in processing the motivational relevance of stimuli. In several studies, we demonstrate that the relationship between valence and amygdala activity varies with both chronic and situational goals. Implications for the social neuroscience of prejudice will be explored.
Susan Johnson
Department of Psychology, The Ohio State University
Title: Attachment and Social Information Processing in Human Infants: From Genes to Beliefs
My research examines traditional topics in infant social development through the lens of cognitive science in order to characterize the representational mechanisms that drive early social reasoning and behavior. I show that infants' attachment-related behaviors in the traditional Strange Situation are directly related to both their conceptualization of social interactions and their attention to emotional expressions. I also argue that these conceptualizations are the subjective outcomes of objective experience as filtered through genetic biases in social information processing.
Professor Yuko Munakata
Department of Psychology, University of Colorado, Boulder
Morning Session Keynote Speaker
Title: Developing Cognitive Control
We carry out habitual behaviors day after day. How do we break out of them? Our studies of the development of cognitive control suggest three key transitions to more flexible behavior. The first is an increasing ability to overcome habits in response to changes in the environment. The second is a shift from reactive to proactive control. The third is a shift from externally driven to internally generated cognitive control. These transitions can be understood in terms of the development of increasingly active and abstract task representations, as evidenced by our studies of children's card-sorting, continuous-performance, and verbal-fluency tasks.
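To make the reactive-to-proactive contrast concrete, here is a toy simulation of the critical trials of a continuous-performance task; all names and probabilities are illustrative assumptions, not values from the talk.

```python
import random

# Toy simulation of reactive vs. proactive control on the critical "BX"
# trials of an AX-CPT-style continuous-performance task. All names and
# probabilities here are illustrative assumptions, not values from the talk.
# On a BX trial the habitual response to probe "X" is "target", so a
# correct rejection requires access to the cue "B".

def run_bx_trial(proactive, maintain_p=0.9, retrieve_p=0.6):
    """Return the agent's response on one BX trial."""
    # Proactive agent: cue maintained across the delay (may lapse).
    # Reactive agent: cue retrieved only at probe onset (may fail).
    cue_available = random.random() < (maintain_p if proactive else retrieve_p)
    if cue_available:
        return "nontarget"  # cue "B" overrides the habitual response to "X"
    return "target"         # habit wins

def bx_accuracy(proactive, n_trials=10000):
    hits = sum(run_bx_trial(proactive) == "nontarget" for _ in range(n_trials))
    return hits / n_trials

print("proactive BX accuracy:", bx_accuracy(True))   # ~0.90
print("reactive  BX accuracy:", bx_accuracy(False))  # ~0.60
```

The point of the sketch is simply that an agent maintaining the cue in advance outperforms one that consults it only when the probe arrives.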
William Schuler
Department of Linguistics, The Ohio State University
Title: A Simple Computational Model of Interactive Language Comprehension
Probabilistic time-series models are widely used in computational linguistics to model the human capacity to anticipate and resolve ambiguities that arise in language comprehension. But many studies (e.g., by Tanenhaus and colleagues) suggest that these disambiguation decisions depend not only on word frequencies, learned from a lifetime of language use, but also on the interpretations of hypothesized words in some immediate discourse or environment context. In some cases, words appear to be interpreted and contextually constrained even while they are still being pronounced. This timely access to referential information about input utterances may allow listeners to adjust their preferences among likely interpretations of noisy or ambiguous utterances to favor those that make sense in the current environment or discourse context, before any lower-level disambiguation decisions have been made. This talk will describe a probabilistic time-series model that allows familiar generative syntactic structures and model-theoretic semantic constraints to interactively influence disambiguation decisions in incremental speech comprehension. The model is defined over a formal notion of incomplete constituents, related to ordinary (complete) phrase structure constituents through a reversible 'right-corner' grammar transform. These incomplete constituents form predominantly left-branching tree structures which, in ordinary incremental recognition, allow new incomplete constituents to be semantically composed immediately upon being hypothesized. The time-series model is then factored to define probability distributions over sets of stacked-up incomplete constituents, each further factored into syntactic and referential states. Experiments on the large syntactically annotated Penn Treebank corpus show that this transformed model can parse the vast majority of sentences using only three stacked-up incomplete constituents, in line with estimates of human short-term working memory capacity. The referential states in this model are then further factored into a vectorial representation of latent concepts contributing to meaning. These concepts can be intersected, negated, or unioned using block matrix operations, supporting either vague archetypes that probabilistically constrain meaning (as in other vectorial representations), or specific entities in a familiar world model (as in symbolic constraint-based representations), or a mixture of the two (e.g., latent entities induced from observations). Experiments on an implementation of this model in a real-time spoken language interface show that basing probability estimates of hypothesized words on interpretations of these words in an environment context creates a human-like capacity to exploit overspecification (or redundancy) in spoken descriptions to improve recognition accuracy.
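As a concrete illustration of composable concept vectors, here is a minimal Python sketch over an assumed toy feature space; the elementwise fuzzy-set operations below are a simplified stand-in for the block matrix formulation described in the talk.

```python
import numpy as np

# Illustrative only: concepts as graded vectors over a tiny latent feature
# space (round, red, edible, bouncy). The talk's model composes concepts
# with block matrix operations; the elementwise fuzzy-set operations below
# are a simplified stand-in to show intersection, union, and negation.

def intersect(a, b):
    """Fuzzy 'and': a feature holds only if both concepts support it."""
    return a * b

def union(a, b):
    """Fuzzy 'or': a feature holds if either concept supports it."""
    return a + b - a * b

def negate(a):
    """Complement of a concept."""
    return 1.0 - a

apple = np.array([0.9, 0.8, 0.9, 0.1])   # a vague archetype
ball  = np.array([0.95, 0.3, 0.05, 0.9])

print(intersect(apple, ball))           # apple-like AND ball-like
print(intersect(apple, negate(ball)))   # apple-like AND NOT ball-like
print(union(apple, ball))               # apple-like OR ball-like
```

Graded values behave like the vague archetypes described in the abstract, vectors of exact 0s and 1s behave like specific entities in a familiar world model, and mixtures of the two fall in between.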
Mikhail Belkin
Department of Computer Science and Engineering, The Ohio State University
Title: Supervised and Unsupervised Learning in High Dimension
One of the key challenges in machine learning and, arguably, a key to understanding natural learning, is dealing with the high dimensionality of data. While standard linear methods, such as Principal Components Analysis, are well known and widely used, their underlying assumption of linearity is restrictive in many situations. In recent years a class of algorithms called manifold learning has been developed to deal with non-linear high-dimensional data, and it has seen success in dimensionality reduction, semi-supervised learning, and other applications. In my talk I will discuss the problem of high dimensionality, some of our work on manifold learning, and applications to learning from labeled and unlabeled data.
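As one concrete instance of this class of algorithms, below is a minimal dense sketch of Laplacian Eigenmaps (the manifold-learning method of Belkin and Niyogi). The toy data, neighborhood size, and binary edge weights are assumptions for illustration; a practical implementation would use sparse matrices and heat-kernel weights.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_neighbors=8, n_components=2):
    """Embed the rows of X into n_components dimensions (dense toy version)."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    # Symmetrized k-nearest-neighbor graph with binary edge weights
    idx = np.argsort(dist2, axis=1)[:, 1:n_neighbors + 1]  # column 0 is self
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), n_neighbors), idx.ravel()] = 1.0
    W = np.maximum(W, W.T)
    # Graph Laplacian L = D - W; solve the generalized problem L f = lambda D f
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W, D)
    # Skip the trivial constant eigenvector at eigenvalue 0
    return vecs[:, 1:n_components + 1]

# Toy usage: unroll a noisy 1-D helix embedded in 3-D
t = np.linspace(0, 4 * np.pi, 200)
X = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
X += 0.05 * np.random.randn(*X.shape)
Y = laplacian_eigenmaps(X)
print(Y.shape)  # (200, 2)
```

The embedding coordinates are the smoothest non-constant functions on the neighborhood graph, which is why nearby points on the underlying manifold land near each other in the low-dimensional space.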
Professor Randy O'Reilly
Department of Psychology, University of Colorado, Boulder
Afternoon Session Keynote Speaker
Title: From Spikes to Object Recognition and Beyond: Building an Embodied Brain
One of the great unsolved questions in our field is how the human brain, and simulations thereof, can achieve the kind of common-sense understanding that is widely believed to be essential for robust intelligence. Many have argued that embodiment is important for developing common-sense understanding, but exactly how this occurs at a mechanistic level remains unclear. In the process of building an embodied cognitive agent that learns from experience in a virtual environment, my colleagues and I have developed several insights into this process. First, embodiment provides access to a rich, continuous source of training signals that, in conjunction with the proper neural structures, naturally support the learning of complex sensory-motor abilities. Second, there is an intriguing developmental cascade of learning, where initial learning to fixate (foveate) a target enables subsequent learning to reach for that target, and also to recognize it within a cluttered visual environment. Finally, there are important functional differences in the learning mechanisms required for different brain areas and associated domains, which converge well with bottom-up biological data, including data on spike-timing-dependent plasticity (STDP).
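For reference, the STDP finding mentioned at the end has a standard pair-based formalization; the sketch below shows that textbook rule with assumed constants, not the learning mechanism used in the talk's own models.

```python
import math

# Textbook pair-based STDP rule, shown only to illustrate the biological
# learning mechanism referenced above; constants are standard illustrative
# values, and this is not the learning rule used in the talk's models.

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pre-before-post pairings potentiate the synapse; post-before-pre
    pairings depress it, each within an exponentially decaying window.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)    # LTP
    return -a_minus * math.exp(dt / tau_minus)      # LTD

print(stdp_dw(t_pre=10.0, t_post=15.0))   # small potentiation
print(stdp_dw(t_pre=15.0, t_post=10.0))   # small depression
```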