COGFEST 2011 Abstracts



Joshua B. Tenenbaum

Brain and Cognitive Sciences, MIT

Title: How to Grow a Mind: Statistics, Structure, and Abstraction

In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
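To make the idea of probabilistic inference over structured hypotheses concrete, here is a minimal sketch of hierarchical Bayesian learning (an illustration of the general technique, not the models described in the talk): an abstract prior over how bags of marbles tend to be composed is inferred from a few observed bags, and that abstract knowledge then licenses a confident prediction about a new bag from a single example. The marble data, grid ranges, and variable names are illustrative assumptions.

import numpy as np
from scipy.special import betaln

# Hypothetical observations: (black marbles, bag size) for a few bags.
# Every bag is uniform in color, which is the abstract pattern to learn.
bags = [(10, 10), (0, 10), (10, 10), (0, 10)]

# Grid over the hyperparameters (a, b) of a shared Beta(a, b) prior that
# generates each bag's own color proportion theta.
grid = np.linspace(0.1, 10.0, 100)
A, B = np.meshgrid(grid, grid, indexing="ij")

def log_marginal(k, n, a, b):
    # Log beta-binomial marginal of k black out of n, with theta integrated
    # out under Beta(a, b), up to a binomial coefficient that is constant
    # in (a, b) and therefore cancels when the posterior is normalized.
    return betaln(k + a, n - k + b) - betaln(a, b)

# Posterior over (a, b) given the observed bags (uniform prior on the grid).
log_post = sum(log_marginal(k, n, A, B) for k, n in bags)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Sparse-data prediction: having seen one black marble from a brand-new bag,
# how likely is the next marble from that bag to be black?
# For fixed (a, b) this is (a + 1) / (a + b + 1); average over the posterior.
p_next_black = np.sum(post * (A + 1.0) / (A + B + 1.0))
print(f"P(next marble black | one black seen) = {p_next_black:.2f}")

Because the observed bags are all uniform, the inferred prior favors near-uniform bags, and the model generalizes strongly from a single new observation.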


Dirk Bernhardt-Walther

Department of Psychology, Ohio State University

Title: Deciphering the neural codes of natural scene categories

How do people see the natural world around them? How does our brain activity differ when we see a forest versus a highway? What properties of natural scenes let us recognize and categorize them so quickly and efficiently? We address these questions with fMRI experiments in which participants view natural scenes. We then look for activity patterns in the brain that tell us what kind of scene the participants saw. To associate brain activity patterns with scene categories, we borrow advanced statistical learning techniques from electrical engineering and computer science. Using these techniques, we can observe how the activity patterns change when we modify the images shown to our participants. For instance, showing the images upside down impairs people's ability to categorize scenes. Accordingly, we see a weakening of the activity patterns associated with scene categories in one particular brain area (the parahippocampal place area, or PPA), but not in other brain areas. Recently we have started to use line drawings of natural scenes in our experiments. Line drawings pervade the history of art in most cultures on earth. Although line drawings lack many of the defining characteristics of the real world (color, most texture, most shading), they nevertheless appear to capture some essential structure, which makes them useful as a way to depict the world for artistic expression or as a visual record. In fact, children use "boundary lines" or "embracing lines" to define the shapes of objects and object parts in their first attempts to depict the world around them. In our fMRI experiments we have found that line drawings of natural scenes elicit activation patterns very similar to those for color photographs in the PPA and several other visually responsive brain regions. This tells us that these regions contain information about the content of the scenes, not just about their visual attributes (e.g., green for forests, horizontal lines for highways).
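The decoding approach described above can be sketched in a few lines. This is a generic illustration with synthetic data, not the authors' analysis pipeline: a linear classifier is trained to predict the viewed scene category from a pattern of voxel responses and is evaluated on held-out trials. Category labels, trial counts, and voxel counts are assumptions for the demo.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

categories = ["beach", "forest", "highway", "mountain"]   # illustrative labels
n_trials_per_cat = 40   # hypothetical scene presentations per category
n_voxels = 200          # hypothetical voxels from a region such as the PPA

# Synthetic stand-in for fMRI data: each category has its own mean voxel
# pattern, and individual trials add noise around that mean.
means = rng.normal(0.0, 1.0, size=(len(categories), n_voxels))
X = np.vstack([
    means[c] + rng.normal(0.0, 2.0, size=(n_trials_per_cat, n_voxels))
    for c in range(len(categories))
])
y = np.repeat(np.arange(len(categories)), n_trials_per_cat)

# Cross-validated decoding accuracy; chance is 1 / number of categories.
clf = LinearSVC(max_iter=10000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1 / len(categories):.2f})")

Above-chance cross-validated accuracy is the usual evidence that the region's activity patterns carry category information; the manipulations described in the abstract (e.g., inversion, line drawings) correspond to comparing such accuracies across stimulus conditions.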


David Huron

School of Music, Ohio State University

Title: How Music Produces Goosebumps and Why Listeners Enjoy It

One of the most sublime experiences induced by music entails the distinctive feeling of shivers, chills, or "frisson." Music is not alone in evoking pleasant "hair-raising" experiences: climbing into a warm bath, riding a roller coaster, or being unexpectedly touched by a potential romantic partner can all lead to frisson. Conversely, there are unpleasant "hair-raising" experiences, including fingernails scratching a blackboard, encountering a wild animal, or hearing someone scream. What distinguishes pleasant goosebumps from unpleasant goosebumps? And how is music able to evoke this memorable response? This presentation summarizes three decades of scientific research and proposes the theory that pleasant goosebumps are linked to cortical inhibition of the amygdala.


Andrea Sims

Slavic & East European, Ohio State University

Title: Productivity and the Paradigmatic Organization of Words

How do speakers coin new words? A long-standing mystery related to word structure (morphology) is the productivity of morphological patterns, or more specifically, their potential lack of productivity (e.g. piglet, booklet exist, but new words cannot be coined with -let: *birdlet, *houselet). What determines whether a morphological pattern will be productive, and thus available to form new words? And why should variable productivity be a property of morphology? Classical deductive methods of linguistic discovery have identified structural and stylistic restrictions on the application of morphological rules, but it is by now clear that the productivity of a rule cannot be equated with its restrictiveness. Some rules have virtually no restrictions, yet they still cannot be used to coin new words. The traditional approach has thus not been able to adequately solve the problem of variable productivity. However, a recent shift in the field of morphology (and in linguistics more generally) towards a dynamic systems perspective opens the door to asking whether, and how, differences in morphological productivity depend on initial conditions and the nature of the interaction between linguistic elements. Along these lines, my approach emphasizes the paradigmatic, rather than the syntagmatic, organization of words. Among working morphologists, the trend has been towards viewing the morpheme (the ultimate syntagma) as a convenient fiction. Many morphologists now argue that the word, not the morpheme, is the most basic meaningful unit of language. In this view, the meaning of a word is defined by oppositions that it enters into with related words (e.g. cats is plural not because -s means 'plurality' in any direct way, but rather because cats is paradigmatically opposed to cat). Morphemes thus have no necessary theoretical status, but the paradigm – a network of implicational relations holding between words – becomes a central theoretical concept. Through experimental and simulation data, I show that thinking about the paradigm as a dynamic system of interactions in the lexicon helps us to understand both (differences in) inflectional productivity and its apparent opposite, inflectional defectiveness.
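As a toy illustration of the paradigmatic idea (hypothetical data and a deliberately simple analogy rule, not the author's simulations), an unseen inflected form can be predicted from the forms of related words in the lexicon rather than by assembling morphemes:

# Miniature lexicon of known (singular, plural) pairs.
lexicon = {
    "cat": "cats", "dog": "dogs", "book": "books",
    "bus": "buses", "glass": "glasses", "box": "boxes",
}

def shared_suffix_len(a: str, b: str) -> int:
    # Length of the final substring that two words share.
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def predict_plural(singular: str) -> str:
    # Pick the known word whose singular looks most like the new word,
    # then reuse that word's singular-to-plural relation.
    exemplar = max(lexicon, key=lambda w: shared_suffix_len(w, singular))
    plural = lexicon[exemplar]
    suffix = plural[len(exemplar):] if plural.startswith(exemplar) else plural
    return singular + suffix

print(predict_plural("frog"))   # -> "frogs", by analogy with "dog"
print(predict_plural("kiss"))   # -> "kisses", by analogy with "glass"

The point of the sketch is only that the prediction flows from relations among whole word forms in the lexicon, in the spirit of the paradigm as a network of implicational relations.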

