
Talk by Aina Puce (Indiana University)

How does the brain respond when we see a crowd of faces, or a collection of objects?

Wednesday, May 30, 4pm

Room Théodule Ribot

ENS, 29 rue d'Ulm, 75005 Paris

Aina Puce is invited by the SAN team (ICM) and the Social Group (LNC² - ENS).

Abstract

In everyday life we are continually on the move, as are the people and objects around us. We regularly encounter groups or crowds of people and collections of objects, and we are able to decode their properties and, in the case of people, their intentions. How do we do this? Somewhat surprisingly, social and cognitive neuroscience research has not yet really tackled this question. Over the last decade or so, a growing number of studies have used naturalistic viewing paradigms; these studies have largely confirmed earlier findings of selectivity in visual brain areas and, more recently, have begun to chart how brain networks work together in response to rapidly changing visual (and associated auditory) input. Despite these forays into more ecologically valid tasks, very little work has been devoted to how we process more than one human face or object seen simultaneously. When we look at a group of people (or at a crowd), at what point do we stop seeing individuals and instead perceive a collective unit such as a crowd? Similarly, how do we decode the collective emotions of a crowd? We are currently exploring these research questions in my laboratory.


I will describe a set of four basic experiments examining how neural responses are modulated by the numerosity of a complex stimulus such as a face, using event-related potentials (ERPs) and electroencephalographic (EEG) power in data collected from healthy subjects. In Experiments 1 and 2, we varied the number of items in the visual display from 1 to 36. In Experiments 3 and 4, a 36-item display was used, but the ratio of concurrent faces to scrambled faces was manipulated. N170 ERP amplitude increased as a function of stimulus number, reaching a plateau at about 25 faces. We postulate that this is the point at which individual faces are no longer seen as such, but are instead processed as a crowd. Intriguingly, when faces are presented in a mixed display with another stimulus category (scrambled faces), the N170 response linearizes. Increasing face number was best described by a linear increase in evoked EEG power in the theta, alpha, and beta bands for all types of visual display. Future experiments with mixed arrays of faces and everyday objects will be needed to test explicitly whether and when objects and faces are no longer processed as individual items, but rather as a group or a collection. This work has implications for naturalistic studies of social interactions, suggesting that the number of individuals in the scene should be controlled for, or at least monitored.
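To make the contrast between a plateauing and a linear response concrete, here is a minimal Python sketch comparing the two kinds of fit. The amplitude values, the exponential-rise-to-plateau model form, and all names below are illustrative assumptions only; they are not data or models from these experiments.

    # Hypothetical illustration: fit a saturating vs. a linear model to
    # invented N170 amplitudes as the number of faces in the display grows.
    # The plateau loosely mirrors the ~25-face saturation described above.
    import numpy as np
    from scipy.optimize import curve_fit

    n_faces = np.array([1, 4, 9, 16, 25, 36], dtype=float)
    # Invented amplitude values (arbitrary units), not real data.
    amplitude = np.array([2.0, 4.5, 6.5, 7.8, 8.4, 8.5])

    def saturating(n, a, k):
        # Exponential rise toward a plateau 'a' with rate constant 'k'.
        return a * (1.0 - np.exp(-k * n))

    def linear(n, b0, b1):
        return b0 + b1 * n

    p_sat, _ = curve_fit(saturating, n_faces, amplitude, p0=[8.0, 0.1])
    p_lin, _ = curve_fit(linear, n_faces, amplitude)

    def sse(model, params):
        # Sum of squared residuals: smaller means a better fit.
        return float(np.sum((amplitude - model(n_faces, *params)) ** 2))

    print("saturating fit SSE:", sse(saturating, p_sat))
    print("linear fit SSE:", sse(linear, p_lin))

With these invented values the saturating model fits much better, which is the pattern one would expect for faces-only displays; for the mixed face/scrambled-face displays described above, the linear model would be the better description.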

Aina Puce

My current research focuses on the neural basis of nonverbal communication, with a particular emphasis on the dynamic face and on the processing of facial emotions and crowds.

I have previously investigated how the brain responds to static and dynamic faces, hands, and bodies. I use mainly EEG and fMRI methods in my studies, but have also used intracranial EEG recordings, MEG, infrared eye tracking, and mouse tracking. I am currently the Eleanor Cox Riggs Professor in the Department of Psychological & Brain Sciences at Indiana University, where I conduct my research and teach graduate and undergraduate students. From January to July 2018, I am on sabbatical from Indiana University and am very fortunate to be working in the SAN lab of Nathalie George at the ICM.

