Neural Correlates of Face Recognition

Bernie C. Till

Allison, T, Puce, A, & McCarthy, G, 2000: Social perception from visual cues: Role of the STS region, Trends in Cognitive Sciences, vol 4, no 7, pp 267-278.

Social perception refers to initial stages in the processing of information that culminates in the accurate analysis of the dispositions and intentions of other individuals. Single-cell recordings in monkeys, and neurophysiological and neuroimaging studies in humans, reveal that cerebral cortex in and near the superior temporal sulcus (STS) region is an important component of this perceptual system. In monkeys and humans, the STS region is activated by movements of the eyes, mouth, hands and body, suggesting that it is involved in analysis of biological motion. However, it is also activated by static images of the face and body, suggesting that it is sensitive to implied motion and more generally to stimuli that signal the actions of another individual. Subsequent analysis of socially relevant stimuli is carried out in the amygdala and orbitofrontal cortex, which supports a three-structure model proposed by Brothers. The homology of human and monkey areas involved in social perception, and the functional interrelationships between the STS region and the ventral face area, are unresolved issues.

Allison, T, Puce, A, Spencer, D D, & McCarthy, G, 1999: Electrophysiological Studies of Human Face Perception I: Potentials Generated in Occipitotemporal Cortex by Face and Non-face Stimuli, Cerebral Cortex, vol 9, pp 415-430.

This and the following two papers describe event-related potentials (ERPs) evoked by visual stimuli in 98 patients in whom electrodes were placed directly upon the cortical surface to monitor medically intractable seizures. Patients viewed pictures of faces, scrambled faces, letter-strings, number-strings, and animate and inanimate objects. This paper describes ERPs generated in striate and peristriate cortex, evoked by faces, and evoked by sinusoidal gratings, objects and letter-strings. Short-latency ERPs generated in striate and peristriate cortex were sensitive to elementary stimulus features such as luminance. Three types of face-specific ERPs were found: (i) a surface-negative potential with a peak latency of ~200 ms (N200) recorded from ventral occipitotemporal cortex, (ii) a lateral surface N200 recorded primarily from the middle temporal gyrus, and (iii) a late positive potential (P350) recorded from posterior ventral occipitotemporal, posterior lateral temporal and anterior ventral temporal cortex. Face-specific N200s were preceded by P150 and followed by P290 and N700 ERPs. N200 reflects initial face-specific processing, while P290, N700 and P350 reflect later face processing at or near N200 sites and in anterior ventral temporal cortex. Face-specific N200 amplitude was not significantly different in males and females, in the normal and abnormal hemisphere, or in the right and left hemisphere. However, cortical patches generating ventral face-specific N200s were larger in the right hemisphere. Other cortical patches in the same region of extrastriate cortex generated grating-sensitive N180s and object-specific or letter-string-specific N200s, suggesting that the human ventral object recognition system is segregated into functionally discrete regions.

Andrews, T J, & Schluppeck, D, 2004: Neural responses to Mooney images reveal a modular representation of faces in human visual cortex, NeuroImage, vol 21, pp 91-98.

The way in which information about objects is represented in visual cortex remains controversial. One model of human object recognition posits that information is processed in modules highly specialised for different categories of objects; an opposing model appeals to a distributed representation across a large network of visual areas. We addressed this debate by monitoring activity in face- and object-selective areas while human subjects viewed ambiguous face stimuli (Mooney faces). The measured neural response in the face-selective region of the fusiform gyrus was greater when subjects reported seeing a face than when they perceived the image as a collection of blobs. In contrast, there was no difference in magnetic resonance response between face and no-face perceived events in either the face-selective voxels of the superior temporal sulcus or the object-selective voxels of the parahippocampal gyrus and lateral occipital complex. These results challenge the concept that the neural representation of faces is distributed and overlapping, and suggest that the fusiform gyrus is tightly linked to the awareness of faces.

Bartlett, J C, & Searcy, J, 1993: Inversion and Configuration of Faces, Cognitive Psychology, vol 25, pp 281-316.

If the mouth and eyes of a face are inverted, the altered construction appears grotesque when upright, but not when upside-down. Three studies of this "Thatcher illusion" employed faces that were grotesque when upright because: (a) their eyes and mouths had been inverted ("Thatcherized" faces), (b) their eyes and mouths had been moved (spatially distorted faces), or (c) they had grotesque posed expressions. Inversion reduced the apparent grotesqueness of both Thatcherized and spatially distorted faces, but not grotesque-expression faces. Moreover, Thatcherized and distorted faces, although not grotesque-expression faces, were judged as more similar to normal, smiling faces when face-pairs were inverted than when they were upright. Similarity ratings to inverted face-pairs were correlated with latencies of response to these pairs in a task that encouraged attention to components (e.g., mouths, eyes) rather than wholistic properties. Similarity ratings to upright face-pairs showed no such correlation, and this and other findings suggested that although similarity ratings to upright faces are based on wholistic information, similarity ratings to inverted faces are based largely on components. The Thatcher illusion reflects a disruption of encoding of wholistic information when faces are inverted.

Batty, M, & Taylor, M J, 2003: Early processing of the six basic facial emotional expressions, Cognitive Brain Research, vol 17, pp 613-620.

Facial emotions represent an important part of non-verbal communication used in everyday life. Recent studies on emotional processing have implicated differing brain regions for different emotions, but little has been determined about the timing of this processing. Here we presented a large number of unfamiliar faces expressing the six basic emotions, plus neutral faces, to 26 young adults while recording event-related potentials (ERPs). Subjects were naive with respect to the specific questions investigated; it was an implicit emotional task. ERPs showed global effects of emotion from 90 ms (P1), while latency and amplitude differences among emotional expressions were seen from 140 ms (N170 component). Positive emotions evoked the N170 significantly earlier than negative emotions, and the amplitude of the N170 evoked by fearful faces was larger than that evoked by neutral or surprised faces. At longer latencies (330-420 ms) at fronto-central sites, we also found a different pattern of effects among emotions. Localization analyses confirmed the superior and middle-temporal regions for early processing of facial expressions; the negative emotions elicited later, distinctive activations. The data support a model of automatic, rapid processing of emotional expressions.

Begleiter, H, Porjesz, B, & Wang, W, 1995: Event-related brain potentials differentiate priming and recognition to familiar and unfamiliar faces, Electroencephalography and Clinical Neurophysiology, vol 94, no 1, pp 41-49.

Recent studies from our laboratory have resulted in the identification of an event-related potential (ERP) correlate of a visual memory process. This memory process is reflected by a reduction in the voltage of the visual memory potential (VMP) to repeated pictures of unfamiliar faces compared to novel pictures of faces. In the current experiment we used unfamiliar and famous faces in an identical repetition priming paradigm, while the subject differentially recognized famous from non-famous faces. Significant differences in response times were obtained between primed and unprimed familiar faces, but not between primed and unprimed unfamiliar faces. The VMP was reduced to primed unfamiliar faces and significantly diminished to primed familiar faces compared to unprimed stimuli. Priming was typically reflected by a reduction of the VMP at the occipito-temporal region, whereas recognition resulted in a diminution of the VMP at both the occipito-temporal region and the frontal region. These data support the involvement of differential neural systems for priming and recognition of visual stimuli.

Bentin, S, Allison, T, Puce, A, Perez, E, & McCarthy, G, 1996: Electrophysiological Studies of Face Perception in Humans, Journal of Cognitive Neuroscience, vol 8, no 6, pp 551-565.

Event-related potentials (ERPs) were recorded with scalp electrodes from normal volunteers. Subjects performed a visual target detection task in which they mentally counted the number of occurrences of pictorial stimuli from a designated category such as butterflies. In separate experiments, target stimuli were embedded within a series of other stimuli including unfamiliar human faces and isolated face components, inverted faces, distorted faces, animal faces and other non-face stimuli. Human faces evoked a negative potential at 172 msec (N170), which was absent from ERPs elicited by other animate and inanimate non-face stimuli. N170 was largest over the posterior temporal scalp and was larger over the right than the left hemisphere. N170 was delayed when faces were presented upside-down, but its amplitude did not change. When presented in isolation, eyes elicited an N170 that was significantly larger than that elicited by whole faces, while noses and lips elicited small negative ERPs about 50 msec later than N170. Distorted human faces, in which the locations of inner face components were altered, elicited an N170 similar in amplitude to that elicited by normal faces. However, faces of animals, human hands, cars, and items of furniture did not evoke N170. N170 may reflect the operation of a neural mechanism tuned to detect (as opposed to identify) human faces, similar to the "structural encoder" suggested by Bruce and Young (1986). A similar function has been proposed for the face-selective N200 recorded from the middle fusiform and posterior inferior temporal gyri using subdural electrodes in humans (Allison, McCarthy, Nobre, Puce and Belger, 1994c). However, the differential sensitivity of N170 to eyes in isolation suggests that N170 may reflect the activation of an eye-sensitive region of cortex.
The voltage distribution of N170 over the scalp is consistent with a neural generator located in the occipitotemporal sulcus lateral to the fusiform/inferior temporal region that generates N200.

Bentin, S, & Carmel, D, 2002: Accounts for the N170 face-effect - a reply to Rossion, Curran, & Gauthier, Cognition, vol 85, pp 197-202.

In their commentary, Rossion, Curran, and Gauthier (Rossion, B., Curran, T., & Gauthier, I. (2002). A defense of the subordinate-level expertise account for the N170 component. Cognition, 85, 189-196) (RC&G) raise a series of arguments against the domain-specificity account for the N170 face effect (Carmel, D., & Bentin, S. (2002). Domain specificity versus expertise: factors influencing distinct processing of faces. Cognition, 83, 1-29). This effect consists of a large (and consistently significant) difference in the amplitude of a negative component, peaking at the lower posterior temporal sites, in response to human faces relative to many other stimulus categories. As an alternative to domain specificity, RC&G advocate a "subordinate-level expertise" account, by which the N170 effect can be obtained for any type of stimulus in whose individual identification the perceiver is an expert. While considering some of their arguments well taken and interesting, we believe that, overall, RC&G's interpretation of our current data (as well as some of theirs), and of our position, ignores several important aspects; their critique is therefore not persuasive.

Bentin, S, & Deouell, L Y, 2000: Structural Encoding and Identification in Face Processing: ERP Evidence for Separate Mechanisms, Cognitive Neuropsychology, vol 17, pp 35-54.

The present study had two aims. The first aim was to explore the possible top-down effect of face recognition and/or face identification processes on the formation of structural representation of faces, as indexed by the N170 ERP component. The second aim was to examine possible ERP manifestations of face identification processes as an initial step for assessing their time course and functional neuroanatomy. Identical N170 potentials were elicited by famous and unfamiliar faces in Experiment 1, when both were irrelevant to the task, suggesting that face familiarity does not affect structural encoding processes. Small but significant differences were observed, however, during later-occurring epochs of the ERPs. In Experiment 2 the participants were instructed to count occasionally occurring portraits of famous politicians while rejecting faces of famous people who were not politicians and faces of unfamiliar people. Although an attempt to identify each face was required, no differences were found in the N170 elicited by faces of unfamiliar people and faces of familiar non-politicians. Famous faces, however, elicited a negative potential that was significantly larger than that elicited by unfamiliar faces between about 250 and 500 msec from stimulus onset. This negative component was tentatively identified as an N400 analogue elicited by faces. Both the absence of an effect of familiarity on the N170 and the familiarity face-N400 effect were replicated in Experiment 3, in which the participants made speeded button-press responses in each trial, distinguishing among faces of politicians and faces of famous and unfamiliar non-politicians. In addition, ERP components later than the N400 were found to be associated with the speed of the response but not with face familiarity. 
We concluded that (1) although reflected by the N170, the structural encoding mechanism is not influenced by the face recognition and identification processes, and (2) the negative component modulated by face familiarity is associated with the semantic activity involved in the identification of familiar faces.

Bobes, M A, Lopera, F, Díaz-Comas, L, Galan, L, Carbonell, F, Bringas, M L, & Valdés-Sosa, M, 2004: Brain Potentials Reflect Residual Face Processing in a Case of Prosopagnosia, Cognitive Neuropsychology, vol 21, no 7, pp 691-718.

Here, ERPs were employed to characterise the residual face processing of FE, a patient with extensive damage to the ventral temporal-occipital cortex and a dense prosopagnosia. A large N170 was present in FE and he performed well in tests of face structural processing. Covert recognition of the faces of personal acquaintances was demonstrated with P300 oddball experiments. The onset latency of the P300 effect was normal, indicating fast availability of covert memory. The scalp topography of this component in FE was different from that of the P3b, presenting a centro-frontal maximum. FE also presented larger skin conductance responses (SCRs) to familiar than to unfamiliar faces. The amplitudes of both the single-trial P300s and the SCRs triggered by familiar faces were positively correlated with the degree of person-familiarity that FE had for the poser. He performed at chance when asked to select between the face of a familiar person and that of an unfamiliar person on the basis of explicit recognition, whereas he selected the previously known face more often if the forced choice was based on trustworthiness or a vague sense of familiarity. The results suggest that in FE, early face processing was relatively intact and covert recognition was fast. Neural structures involved in the processing of emotional or social cues possibly mediate the covert recognition present in FE.

Böhm, S, & Sommer, W, 2005: Neural correlates of intentional and incidental recognition of famous faces, Cognitive Brain Research, in press.

Event-related potentials (ERPs) were used to study the relationship between intentional and incidental recognition of famous faces. Intentional and incidental recognition were operationally defined as repeated presentations of targets and nontargets within a modified Sternberg task. These repetitions elicited temporally and topographically distinct ERP modulations. A repetition effect around 300 ms (ERE/N250r) and a preceding modulation did not differ between intentional and incidental recognition, whereas a later repetition effect (LRE/N400) around 500 ms showed differences between incidental and intentional recognition. These results show that during the first few hundred milliseconds intentional and incidental face recognition rely on similar processing, indicating that familiar faces are recognized even when their identification is not required.

Bruce, V, Burton, A M, & Craw, I, 1992: Modelling face recognition, Phil. Trans. Roy. Soc., B, vol 335, no 1273, pp 121-128.

Much early work in the psychology of face processing was hampered by a failure to think carefully about task demands. Recently our understanding of the processes involved in the recognition of familiar faces has been both encapsulated in, and guided by, functional models of the processes involved in processing and recognizing faces. The specification and predictive power of such theory has been increased with the development of an implemented model, based upon an 'interactive activation and competition' architecture. However, a major deficiency in most accounts of face processing is their failure to spell out the perceptual primitives that form the basis of our representations for faces. Possible representational schemes are discussed, and the potential role of three-dimensional representations of the face is emphasized.

Burton, A M, Bruce, V, & Hancock, P J B, 1999: From pixels to people: A model of familiar face recognition, Cognitive Science, vol 23, no 1, pp 1-31.

Research in face recognition has largely been divided between those projects concerned with front-end image processing and those projects concerned with memory for familiar people. These perceptual and cognitive programmes of research have proceeded in parallel, with only limited mutual influence. In this paper we present a model of human face recognition which combines both a perceptual and a cognitive component. The perceptual front-end is based on principal components analysis of face images, and the cognitive back-end is based on a simple interactive activation and competition architecture. We demonstrate that this model has a much wider predictive range than either perceptual or cognitive models alone, and we show that this type of combination is necessary in order to analyse some important effects in human face recognition. In sum, the model takes varying images of "known" faces and delivers information about these people.

Caharel, S, Poiroux, S, Bernard, C, Thibaut, F, Lalonde, R, & Rebai, M, 2002: ERPs Associated with Familiarity and Degree of Familiarity During Face Recognition, International Journal of Neuroscience, vol 112, pp 1531-1544.

Event-related potentials (ERPs) triggered by three different faces (unfamiliar, famous, and the subject's own) were analyzed during passive viewing. A familiarity effect was defined as a significant difference between the two familiar faces as opposed to the unfamiliar face. A degree of familiarity effect was defined as a significant difference between all three conditions. The results show a familiarity effect 170 ms after stimulus onset (N170), with larger amplitudes seen for both familiar faces. Conversely, a degree of familiarity effect arose approximately 250 ms after stimulus onset (P2) in the form of progressively smaller amplitudes as a function of familiarity (subject's face < famous face < unfamiliar). These results demonstrate that the structural encoding of faces, as reflected by N170 activities, can be modulated by familiarity and that facial representations acquire specific properties as a result of experience. Moreover, these results confirm the hypothesis that N170 is sensitive to face vs. object discriminations and to the discrimination among faces.

Caldara, R, Rossion, B, Bovet, P, & Hauert, C-A, 2004: Event-related potentials and time course of the 'other-race' face classification advantage, NeuroReport, vol 15, pp 905-910.

Other-race faces are less accurately recognized than same-race faces but are classified faster by race. Using event-related potentials (ERPs), we captured the brain temporal dynamics of face classification by race in 12 Caucasian participants. As expected, participants were faster to classify Asian than Caucasian faces by race. ERP results identified the occurrence of the other-race face classification advantage at around 240 ms, in a stage related to the processing of visual information at the semantic level. The elaboration of an individual face structural representation, reflected in the N170 face-sensitive component, was insufficient to achieve this process. Altogether, these findings suggest that lesser experience with other-race faces engenders fewer semantic representations, which in turn accelerates their processing.

Campanella, S, Hanoteau, C, Dépy, D, Rossion, B, Bruyer, R, Crommelinck, M, & Guérit, J M, 2000: Right N170 modulation in a face discrimination task - An account for categorical perception of familiar faces, Psychophysiology, vol 37, pp 796-806.

Behavioral studies have shown that two different morphed faces belonging to the same identity are harder to discriminate than two faces stemming from two different identities. The temporal course of this categorical perception effect has been explored through event-related potentials. Three kinds of pairs were presented in a matching task: (1) two different morphed faces representing the same identity (within), (2) two other faces representing two different identities (between), and (3) two identical morphed faces (same). Following the second face onset in the pair, the amplitude of the right occipitotemporal negativity (N170) was reduced for within and same pairs as compared with between pairs, suggesting an identity priming effect. We also observed a modulation of the P3b wave, as the amplitude of the responses for within pairs was higher than for between and same pairs, suggesting a higher complexity of the task for within pairs. These results indicate that categorical perception of human faces has a perceptual origin in the right occipitotemporal hemisphere.

Curran, T, Tanaka, J W, & Weiskopf, D M, 2002: An electrophysiological comparison of visual categorization and recognition memory, Cognitive, Affective, & Behavioral Neuroscience, vol 2, no 1, pp 1-18.

Object categorization emphasizes the similarities that bind exemplars into categories, whereas recognition memory emphasizes the specific identification of previously encountered exemplars. Mathematical modeling has highlighted similarities in the computational requirements of these tasks, but neuropsychological research has suggested that categorization and recognition may depend on separate brain systems. Following training with families of novel visual shapes (blobs), event-related brain potentials (ERPs) were recorded during both categorization and recognition tasks. ERPs related to early visual processing (N1, 156-200 msec) were sensitive to category membership. Middle latency ERPs (FN400 effects, 300-500 msec) were sensitive to both category membership and old/new differences. Later ERPs (parietal effects, 400-800 msec) were primarily affected by old/new differences. Thus, there was a temporal transition so that earlier processes were more sensitive to categorical discrimination and later processes were more sensitive to recognition-related discrimination. Aspects of these results are consistent with both mathematical modeling and neuropsychological perspectives.

de Gelder, B, & Rouw, R, 2001: Beyond localization - A dynamical dual route account of face recognition, Acta Psychologica, vol 107, pp 183-207.

After decades of research, the notion that faces are special is still at the heart of heated debates. New techniques like brain imaging have advanced some of the arguments, but empirical data from brain-damaged patients, such as paradoxical recognition effects, have required more complex explanations aside from localisation of the face area in normal adults. In this paper we focus on configural face processes and discuss configural processes in prosopagnosics in the light of findings obtained in brain imaging studies. In order to account for data like paradoxical face recognition effects, we propose a dual route model of face recognition. The model is based on the distinction between two separate aspects of face recognition, detection and identification, considered as dynamical and interrelated. In this perspective the face detection system appears as the stronger candidate for face-specific processes. The face identification system, on the other hand, is part of the object recognition system but derives its specificity in part from interaction with the face-specific detection system. The fact that face detection appears intact in some patients provides us with a possible explanation for the interference of configural processes on feature-based identification.

de Gelder, B, & Stekelenburg, J J, 2005: Naso-temporal asymmetry of the N170 for processing faces in normal viewers but not in developmental prosopagnosia, Neuroscience Letters, vol 376, pp 40-45.

Some elementary aspects of faces can be processed before cortical maturation or after lesion of primary visual cortex. Recent findings suggesting a role of an evolutionarily ancient visual system in face processing have exploited the relative advantage of the temporal hemifield (nasal hemiretina). Here, we investigated whether under some circumstances face processing also shows a temporal hemifield advantage. We measured the face-sensitive N170 to laterally presented faces viewed passively under monocular conditions and compared face recognition in the temporal and nasal hemiretina. An N170 response to upright faces was observed that was larger for projections to the nasal hemiretina/temporal hemifield. This pattern was not observed in a developmental prosopagnosic. These results point to the importance of the early stages of face processing for normal face recognition abilities and suggest a potentially important factor in the origins of developmental prosopagnosia.

de Haan, M, Pascalis, O, & Johnson, M H, 2002: Specialization of Neural Mechanisms Underlying Face Recognition in Human Infants, Journal of Cognitive Neuroscience, vol 14, no 2, pp 199-209.

Newborn infants respond preferentially to simple face-like patterns, raising the possibility that the face-specific regions identified in the adult cortex are functioning from birth. We sought to evaluate this hypothesis by characterizing the specificity of infants' electrocortical responses to faces in two ways: (1) comparing responses to faces of humans with those to faces of nonhuman primates; and (2) comparing responses to upright and inverted faces. Adults' face-responsive N170 event-related potential (ERP) component showed specificity to upright human faces that was not observable at any point in the ERPs of infants. A putative 'infant N170' did show sensitivity to the species of the face, but the orientation of the face did not influence processing until a later stage. These findings suggest a process of gradual specialization of cortical face processing systems during postnatal development.

Eimer, M, 2000: Event-Related Brain Potentials Distinguish Processing Stages Involved in Face Perception and Recognition, Clinical Neurophysiology, vol 111, pp 694-705.

Objectives: An event-related brain potential (ERP) study investigated how different processing stages involved in face identification are reflected by ERP modulations, and how stimulus repetitions and attentional set influence such effects.
Methods: ERPs were recorded in response to photographs of familiar faces, unfamiliar faces, and houses. In Part I, participants had to detect infrequently presented targets (hands), in Part II, attention was either directed towards or away from the pictorial stimuli.
Results: The face-specific N170 component, elicited maximally at lateral temporal electrodes, was not affected by face familiarity. When compared with unfamiliar faces, familiar faces elicited an enhanced negativity between 300 and 500 ms ('N400f'), which was followed by an enhanced positivity beyond 500 ms post-stimulus ('P600f'). In contrast to the 'classical' N400, these effects were parietocentrally distributed. They were attenuated, but still reliable, for repeated presentations of familiar faces. When attention was directed to another demanding task, no 'N400f' was elicited, but the 'P600f' effect remained present.
Conclusions: While the N170 reflects the pre-categorical structural encoding of faces, the 'N400f' and 'P600f' are likely to indicate subsequent processes involved in face recognition. Impaired structural encoding can result in the disruption of face identification. This is illustrated by a neuropsychological case study demonstrating the absence of the N170 and later ERP indicators of face recognition in a prosopagnosic patient.

Eimer, M, 2000: The face-specific N170 component reflects late stages in the structural encoding of faces, NeuroReport, vol 11, pp 2319-2324.

To investigate which stages in the structural encoding of faces are reflected by the face-specific N170 component, ERPs (event-related brain potentials) were recorded in response to different types of face and non-face stimuli. The N170 was strongly attenuated for cheek and back views of faces relative to front and profile views, demonstrating that it is not merely triggered by head detection. Attenuated and delayed N170 components were elicited for faces lacking internal features as well as for faces without external features, suggesting that it is not exclusively sensitive to salient internal features. It is suggested that the N170 is linked to late stages of structural encoding, where representations of global face configurations are generated in order to be utilised by subsequent face recognition processes.

Emery, N J, 2000: The eyes have it: the neuroethology, function and evolution of social gaze, Neuroscience & Biobehavioral Reviews, vol 24, pp 581-604.

Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is "hard-wired" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.

Gauthier, I, Curran, T, Curby, K M, & Collins, D, 2003: Perceptual interference supports a non-modular account of face processing, Nature Neuroscience, vol 6, no 4, pp 428-432.

The perception of faces and of nonface objects shares common early visual processing stages. Some argue, however, that the brain eventually processes faces separately from other objects, within a domain-specific module dedicated to face perception. This apparent specialization for faces could, alternatively, result from people's expertise with this category of stimuli. Here we used behavioral and electrophysiological measures of interference to address the functional independence of face and object processing. If the expert processing of faces and cars depends on mechanisms related to holistic perception (obligatory processing of all parts), then for human subjects, who are presumed to be face experts, car perception should interfere with concurrent face perception. Furthermore, such interference should increase with greater expertise in car identification, and indeed this is what we found. Event-related potentials (ERPs) suggest that this interference arose from perceptual processes contributing to the holistic processing of both objects of expertise and faces.

Gauthier, I, & Nelson, C A, 2001: The development of face expertise, Current Opinion in Neurobiology, vol 11, pp 219-224.

Recent neuroimaging studies in adults indicate that visual areas selective for recognition of faces can be recruited through expertise for nonface objects. This reflects a new emphasis on experience in theories of visual specialization. In addition, novel work infers differences between categories of nonface objects, allowing a re-interpretation of differences seen between recognition of faces and objects. Whether there are experience-independent precursors of face expertise remains unclear; indeed, parallels between literature for infants and adults suggest that methodological issues need to be addressed before strong conclusions can be drawn regarding the origins of face recognition.

Gauthier, I, Skudlarski, P, Gore, J C, & Anderson, A W, 2000: Expertise for Cars and Birds Recruits Brain Areas Involved in Face Recognition, Nature Neuroscience, vol 3, no 2, pp 191-197.

Expertise with unfamiliar objects ('greebles') recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.

Goffaux, V, Gauthier, I, & Rossion, B, 2003: Spatial scale contribution to early visual differences between face and object processing, Cognitive Brain Research, vol 16, pp 416-424.

Event-related potential (ERP) studies have highlighted an occipito-temporal potential, the N170, which is larger for faces than for other categories and delayed by stimulus inversion of faces, but not of other objects. We examined how high-pass and low-pass filtering modulate such early differences between the processing of faces and objects. Sixteen grey-scale pictures of faces and cars were filtered to preserve only relatively low (LSF) or high (HSF) spatial frequencies and were presented upright or upside-down. Subjects reported the orientation of the faces and cars in broad-pass and filtered conditions. In the broad-pass condition, we replicated typical N170 face-specific effects of amplitude and delay with inversion. These effects were also present in the LSF condition. However, a completely different pattern was revealed by the HSF condition: (1) a similar N170 amplitude for cars as compared to faces and (2) an absence of N170 latency delay with face inversion. These results show that the source of early processing differences between faces and objects is related to the extraction of information contained mostly in the LSF, which conveys coarse configuration cues particularly salient for face processing.
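The LSF/HSF manipulation described above can be sketched as a hard radial mask in the Fourier domain; the cutoff value and mask shape here are illustrative assumptions, not the filter parameters Goffaux et al. actually used:

```python
import numpy as np

def sf_filter(img, cutoff, keep="low"):
    """Keep only spatial frequencies below (keep="low") or above
    (keep="high") `cutoff` cycles/image, via a hard FFT mask.
    Illustrative sketch only."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # radial frequency of each FFT bin, measured from the DC component
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r <= cutoff if keep == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

Because the two masks partition the spectrum, the LSF and HSF versions at the same cutoff sum back to the original image, which is what makes the conditions complementary.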

Goshen-Gottstein, Y, & Ganel, T, 2000: Repetition Priming for Familiar and Unfamiliar Faces in a Sex-Judgment Task: Evidence for a Common Route for the Processing of Sex and Identity, J. Experimental Psychology: Learning, Memory, and Cognition, vol 26, no 5, pp 1198-1214.

Repetition priming for faces was examined in a sex-judgment task given at test. Priming was found for edited, hair-removed photos of unfamiliar and familiar faces after a single presentation at study. Priming was also observed for the edited photos when study and test faces were different exemplars. Priming was not observed, however, when sex judgments were made at test to photos of complete, hair-included faces. These findings were interpreted by assuming that, for edited faces, internal features are attended, thereby activating face-recognition units that support performance. With complete faces, however, participants provided speeded judgments based primarily on the hairstyle. It is suggested that, for both familiar and unfamiliar faces, a common locus exists for the processing of the identity of a face and its sex. A single face-recognition model for the processing of familiar and unfamiliar faces is advocated.

Grabowski, T J, Damasio, H, Tranel, D, Ponto, L L B, Hichwa, R D, & Damasio, A R, 2001: A role for left temporal pole in the retrieval of words for unique entities, Human Brain Mapping, vol 13, no 4, pp 199-212.

Both lesion and functional imaging studies have implicated sectors of high-order association cortices of the left temporal lobe in the retrieval of words for objects belonging to varied conceptual categories. In particular, the cortices located in the left temporal pole have been associated with naming unique persons from faces. Because this neuroanatomical-behavioral association might be related to either the specificity of the task (retrieving a name at unique level) or to the possible preferential processing of faces by anterior temporal cortices, we performed a PET imaging experiment to test the hypothesis that the effect is related to the specificity of the word retrieval task. Normal subjects were asked to name at unique level entities from two conceptual categories: famous landmarks and famous faces. In support of the hypothesis, naming entities in both categories was associated with increases in activity in the left temporal pole. No main effect of category (faces vs. landmarks/buildings) or interaction of task and category was found in the left temporal pole. Retrieving names for unique persons and for unique landmarks thus activates the same brain region. These findings are consistent with the notion that activity in the left temporal pole is linked to the level of specificity of word retrieval rather than the conceptual class to which the stimulus belongs.

Grill-Spector, K, 2004: The functional organization of the ventral visual pathway and its relationship to object recognition, in Kanwisher, N, & Duncan, J, eds: Functional Neuroimaging of Visual Cognition, Oxford University Press.

Functional neuroimaging has greatly enhanced our knowledge of the brain, and has been one of the most important tools in cognitive neuroscience. At the same time, the full power of neuroimaging can be realized only if there is convergence with theories based on other approaches including computational modelling, behavioural experiments, and electrophysiology in the behaving animal. In this book, Nancy Kanwisher and John Duncan have brought together leading cognitive neuroscientists to present groundbreaking new research on the neural bases of vision.

Grill-Spector, K, & Kanwisher, N, 2005: Visual Recognition: as soon as you know it is there, you know what it is, Psychological Science, vol 16, no 2, pp 152-160.

What is the sequence of processing steps involved in visual object recognition? We varied the exposure duration of natural image stimuli and measured subjects' performance on three different tasks, each designed to tap a different candidate component process of object recognition. For each exposure duration, accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds) than on a perceptual categorization task (e.g., birds versus cars). However, strikingly, subjects performed just as quickly and accurately at each exposure duration on the categorization task as they did on a task requiring only object detection: by the time subjects knew an image contained an object at all, they already knew its category. These findings place powerful constraints on theories of object recognition.

Hanley, J R, Smith, S T, & Hadfield, J, 1998: I Recognise You but I Can't Place You: An Investigation of Familiar-only Experiences During Tests of Voice and Face Recognition, Quart. J. Experimental Psychology, vol 51A, no 1, pp 179-195.

In this paper, we examine in detail the situation in which a subject finds that a face or voice is familiar but is unable to retrieve any biographical information about the person concerned. In two experiments, subjects were asked to identify a set of 40 celebrities either from hearing their voice or from seeing their face. Although many more celebrities were identified and named in response to their face than their voice, the results showed that there was a very large number of occasions when a celebrity's voice was felt to be familiar but the subject was unable to retrieve any biographical information about the person. This situation occurred less frequently in response to seeing a celebrity's face; when a face was found familiar, the subject was much more likely to be able to recall the celebrity's occupation. The possibility that these results might have come about because subjects were using different criteria to determine familiarity in the face and voice conditions was investigated and discounted. An additional finding was that when subjects found a face to be familiar-only, they were able to recall significantly more additional information about the person when they were cued by the person's voice than when they simply saw the face again. These results are discussed in relation to the models of person recognition put forward by Bruce and Young (1986) and Burton, Bruce, and Johnston (1990).

Haxby, J V, Hoffman, E A, & Gobbini, M I, 2000: The distributed neural system for face perception, Trends in Cognitive Sciences, vol 4, no 6, pp 223-233.

Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system comprising multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system comprises occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system comprises regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.

Herzmann, G, Schweinberger, S R, Sommer, W, & Jentzsch, I, 2004: What's special about personally familiar faces? A multimodal approach, Psychophysiology, vol 41, pp 688-701.

Dual-route models of face recognition suggest separate cognitive and affective routes. The predictions of these models were assessed in recognition tasks with unfamiliar, famous, and personally familiar faces. Whereas larger autonomic responses were only triggered for personally familiar faces, priming effects in reaction times to these faces, presumably reflecting cognitive recognition processes, were equal to those of famous faces. Activation of stored structural representations of familiar faces (face recognition units) was assessed by recording the N250r component in event-related brain potentials. Face recognition unit activation increased from unfamiliar over famous to personally familiar faces, suggesting that there are stronger representations for personally familiar than for famous faces. Because the topographies of the N250r for personally familiar and famous faces were indistinguishable, a similar network of face recognition units can be assumed for both types of faces.

Ishai, A, Ungerleider, L G, Martin, A, Schouten, J L, & Haxby, J V, 1999: Distributed Representation of Objects in the Human Ventral Visual Pathway, Proc. Natl. Acad. Sci. USA, vol 96, pp 9379-9384.

Brain imaging and electrophysiological recording studies in humans have reported discrete cortical regions in posterior ventral temporal cortex that respond preferentially to faces, buildings, and letters. These findings suggest a category-specific anatomically segregated modular organization of the object vision pathway. Here we present data from a functional MRI study in which we found three distinct regions of ventral temporal cortex that responded preferentially to faces and two categories of other objects, namely houses and chairs, and had a highly consistent topological arrangement. Although the data could be interpreted as evidence for separate modules, we found that each category also evoked significant responses in the regions that responded maximally to other stimuli. Moreover, each category was associated with its own differential pattern of response across ventral temporal cortex. These results indicate that the representation of an object is not restricted to a region that responds maximally to that object, but rather is distributed across a broader expanse of cortex. We propose that the functional architecture of the ventral visual pathway is not a mosaic of category-specific modules but instead is a continuous representation of information about object form that has a highly consistent and orderly topological arrangement.

Itier, R J, & Taylor, M J, 2004: N170 or N1? Spatiotemporal Differences between Object and Face Processing Using ERPs, Cerebral Cortex, vol 14, no 2, pp 132-142.

The ERP component N170 is face-sensitive, yet its specificity for faces is controversial. We recorded ERPs while subjects viewed upright and inverted faces and seven object categories. Peak, topography and segmentation analyses were performed. N170 was earlier and larger to faces than to all objects. The classic increase in amplitude and latency was found for inverted faces on N170 but also on P1. Segmentation analyses revealed an extra map found only for faces, reflecting an extra cluster of activity compared to objects. While the N1 for objects seems to reflect the return to baseline from the P1, the N170 for faces reflects supplementary activity. The electrophysiological 'specificity' of faces could lie in the involvement of extra generators for face processing compared to objects, and the N170 for faces seems qualitatively different from the N1 for objects. Object and face processing also differed as early as 120 ms.

Itier, R J, & Taylor, M J, 2004: Effects of repetition learning on upright, inverted and contrast-reversed face processing using ERPs, NeuroImage, vol 21, pp 1518-1532.

The effects of short-term learning on memory for inverted, contrast-reversed and upright faces were investigated using event-related potentials (ERPs) in a target/nontarget discrimination task following a learning phase of the target. Subjects were equally accurate for all three face types although responding more slowly to inverted and negative faces compared to upright faces. Face type affected both early ERP components P1 and N170, and long-latency components at frontal and parietal sites, reflecting the difficulty of processing inverted faces. Different effects of face type were found for P1 and N170 latencies and amplitudes, suggesting face processing could start around 100-120 ms and is sensitive to facial configuration. Repetition effects were also found on both early and long-latency components. Reduced N170 latency and amplitude for repeated targets are likely due to perceptual priming. Repetition effects on the N250 were delayed for inverted and negative faces, suggesting delayed access to stored facial representations for these formats. Increased frontopolar positivity at 250-300 ms and parietal positivity from 300 to 500 ms reflected familiarity 'old-new' repetition effects that were of similar magnitude for all three face types, indexing the accurate recognition of all faces. Thus, while structural encoding was disrupted by inversion and contrast-reversal, the learning phase was sufficient to abolish the effects of these configural manipulations behaviourally; all three face types were equally well recognised and this was reflected as equally large parietal old-new effects.

Itier, R J, Taylor, M J, & Lobaugh, N J, 2004: Spatiotemporal analysis of event-related potentials to upright, inverted, and contrast-reversed faces: Effects on encoding and recognition, Psychophysiology, vol 41, pp 643-653.

In an n-back face recognition task where subjects responded to repeated stimuli, ERPs were recorded to upright, inverted, and contrast-reversed faces. The effects of inversion and contrast reversal on face encoding and recognition were investigated using the multivariate spatiotemporal partial least squares (PLS) analysis. The configural manipulations affected early processing (100-200 ms) at posterior sites: Inversion effects were parietal and lateral, whereas contrast-reversal effects were more occipital and medial, suggesting different underlying generators. A later reactivation of face processing areas was unique to inverted faces, likely due to processing difficulties. PLS also indicated that the 'old-new' repetition effect was maximal for upright faces and likely involved frontotemporal areas. Marked processing differences between inverted and contrast-reversed faces were seen, but these effects were similar at encoding and recognition.

Jenkins, R, Lavie, N, & Driver, J, 2003: Ignoring famous faces: Category-specific dilution of distractor interference, Perception & Psychophysics, vol 65, no 2, pp 298-309.

The extent to which famous distractor faces can be ignored was assessed in six experiments. Subjects categorized famous printed target names as those of pop stars or politicians, while attempting to ignore a flanking famous face distractor that could be congruent (e.g., a politician's name and face) or incongruent (e.g., a politician's name with a pop star's face). Congruency effects on reaction times indicated distractor intrusion. An additional, response-neutral flanker (neither pop star nor politician) could also be present. Congruency effects from the critical distractor face were reduced (diluted) by the presence of an intact anonymous face, but not by phase-shifted versions, inverted faces, or meaningful nonface objects. By contrast, congruency effects from other types of distracting objects (musical instruments, fruits), when printed names for these classes were categorized, were diluted equivalently by intact faces, phase-shifted faces, or meaningful nonface objects. Our results suggest that distractor faces act differently from other types of distractors, suffering from only face-specific capacity limits.

Joassin, F, Campanella, S, Debatisse, S, Guerit, J M, Bruyer, R, & Crommelinck, M, 2004: The electrophysiological correlates sustaining the retrieval of face-name associations: An ERP study, Psychophysiology, vol 41, pp 625-635.

An ERP study on 9 healthy participants was carried out to temporally constrain the neural network proposed by Campanella et al. (2001) in a PET study investigating the cerebral areas involved in the retrieval of face-name associations. Three learning sessions served to familiarize the participants with 24 face-name associations grouped in 12 male/female couples. During EEG recording, participants were confronted with four experimental conditions, requiring the retrieval of previously learned couples on the basis of the presentation of name-name (NN), face-face (FF), name-face (NF), or face-name (FN) pairs of stimuli. The main analysis of this experiment consisted in the subtraction of the nonmixed conditions (NN and FF) from the mixed conditions (NF and FN). It revealed two main ERP components: a negative wave peaking at left parieto-occipital sites around 285 ms and its positive counterpart recorded at left centro-frontal electrodes around 300 ms. Moreover, a dipole modeling using three dipoles whose localization corresponded to the three cerebral areas observed in the PET study (left inferior frontal gyrus, left medial frontal gyrus, left inferior parietal lobe) explained more than 90% of the variance of the results. The complementarity between anatomical and neurophysiological techniques allowed us to discuss the temporal course of these cerebral activities and to propose an interactive and original anatomo-temporal model of the retrieval of face-name associations.

Joyce, C A, & Cottrell, G W, 2004: Solving the Visual Expertise Mystery, Proc. 8th Neur. Computation & Psychology Workshop.

Through brain imaging studies and studies of brain-lesioned patients with face or object recognition deficits, the fusiform face area (FFA) has been identified as a face-specific processing area. Recent work, however, illustrates that the FFA is also responsive to a wide variety of non-face objects if levels of discrimination and expertise are controlled. The mystery is why an expertise area, whose initial domain of expertise is presumably faces, would be recruited for these other domains. Here we show that features tuned for fine-level discrimination within one visually homogeneous class have high-variance responses across that class. This variability generalizes to other homogeneous classes, providing a foothold for learning.

Liu, J, Harris, A, & Kanwisher, N, 2002: Stages of processing in face perception: an MEG study, Nature Neuroscience, vol 5, no 9, pp 910-916.

Here we used magnetoencephalography (MEG) to investigate stages of processing in face perception in humans. We found a face-selective MEG response occurring only 100 ms after stimulus onset (the 'M100'), 70 ms earlier than previously reported. Further, the amplitude of this M100 response was correlated with successful categorization of stimuli as faces, but not with successful recognition of individual faces, whereas the previously-described face-selective 'M170' response was correlated with both processes. These data suggest that face processing proceeds through two stages: an initial stage of face categorization, and a later stage at which the identity of the individual face is extracted.

Loftus, G R, Oberg, M A, & Dillon, A M, 2004: Linear Theory, Dimensional Theory, and the Face-Inversion Effect, Psychological Review, vol 111, no 4, pp 835-863.

We contrast 2 theories within whose context problems are conceptualized and data interpreted. By traditional linear theory, a dependent variable is the sum of main-effect and interaction terms. By dimensional theory, independent variables yield values on internal dimensions that in turn determine performance. We frame our arguments within an investigation of the face-inversion effect, the greater processing disadvantage of inverting faces compared with non-faces. We report data from 3 simulations and 3 experiments wherein faces or non-faces are studied upright or inverted in a recognition procedure. The simulations demonstrate that (a) critical conclusions depend on which theory is used to interpret data and (b) dimensional theory is the more flexible and consistent in identifying underlying psychological structures, because dimensional theory subsumes linear theory as a special case. The experiments demonstrate that by dimensional theory, there is no face-inversion effect for unfamiliar faces but a clear face-inversion effect for celebrity faces.
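The linear-theory decomposition the authors contrast with dimensional theory can be made concrete for a 2 x 2 design: each cell mean is the grand mean plus two main effects plus an interaction, and the face-inversion effect is identified with the interaction term. A minimal sketch with hypothetical cell means (the numbers are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical mean recognition scores: rows = faces/non-faces,
# columns = upright/inverted (illustrative numbers only).
cell_means = np.array([[0.90, 0.70],
                       [0.80, 0.75]])

grand = cell_means.mean()
row_eff = cell_means.mean(axis=1) - grand   # main effect of stimulus type
col_eff = cell_means.mean(axis=0) - grand   # main effect of orientation
# Interaction: what remains after removing the grand mean and main effects;
# under linear theory this residual IS the face-inversion effect.
interaction = cell_means - grand - row_eff[:, None] - col_eff[None, :]
```

Reassembling `grand + row_eff + col_eff + interaction` recovers the cell means exactly, which is the sense in which dimensional theory can subsume this model as a special case.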

Lüschow, A, Sander, T, Böhm, S G, Nolte, G, Trahms, L, & Curio, G, 2004: Looking for faces: Attention modulates early occipitotemporal object processing, Psychophysiology, vol 41, pp 350-360.

Looking for somebody's face in a crowd is one of the most important examples of visual search. For this goal, attention has to be directed to a well-defined perceptual category. When this categorically selective process starts is, however, still unknown. To this end, we used magnetoencephalography (MEG) recorded over right human occipitotemporal cortex to investigate the time course of attentional modulation of perceptual processes elicited by faces and by houses. The first face-distinctive MEG response was observed at 160-170 ms (M170). Nevertheless, attention did not start to modulate face processing before 190 ms. The first house-distinctive MEG activity was also found around 160-170 ms. However, house processing was not modulated by attention before 280 ms (90 ms later than face processing). Further analysis revealed that the attentional modulation of face processing is not due to later, for example, back-propagated activation of the M170 generator. Rather, subsequent stages of occipitotemporal object processing were modulated in a category-specific manner and with preferential access to face processing.

Maurer, D, Le Grand, R, & Mondloch, C J, 2002: The many faces of configural processing, Trends in Cognitive Sciences, vol 6, no 6, pp 255-260.

Adults' expertise in recognizing faces has been attributed to configural processing. We distinguish three types of configural processing: detecting the first-order relations that define faces (i.e. two eyes above a nose and mouth), holistic processing (glueing the features together into a gestalt), and processing second-order relations (i.e. the spacing among features). We provide evidence for their separability based on behavioral marker tasks, their sensitivity to experimental manipulations, and their patterns of development. We note that inversion affects each type of configural processing, not just sensitivity to second-order relations, and we review evidence on whether configural processing is unique to faces.

McCarthy, G, Puce, A, Belger, A, & Allison, T, 1999: Electrophysiological Studies of Human Face Perception II: Response Properties of Face-specific Potentials Generated in Occipitotemporal Cortex, Cerebral Cortex, vol 9, pp 431-444.

In the previous paper the locations and basic response properties of N200 and other face-specific event-related potentials (ERPs) were described. In this paper responsiveness of N200 and related ERPs to the perceptual features of faces and other images was assessed. N200 amplitude did not vary substantially, whether evoked by colored or grayscale faces; normal, blurred or line-drawing faces; or by faces of different sizes. Human hands evoked small N200s at face-specific sites, but evoked hand-specific ERPs at other sites. Cat and dog faces evoked N200s that were 73% as large as to human faces. Hemifield stimulation demonstrated that the right hemisphere is better at processing information about upright faces and transferring it to the left hemisphere, whereas the left hemisphere is better at processing information about inverted faces and transferring it to the right hemisphere. N200 amplitude was largest to full faces and decreased progressively to eyes, face contours, lips and noses viewed in isolation. A region just lateral to face-specific N200 sites was more responsive to internal face parts than to faces, and some sites in ventral occipitotemporal cortex were face-part-specific. Faces with eyes averted or closed evoked larger N200s than those evoked by faces with eyes forward. N200 amplitude and latency were affected by the joint effects of eye and head position in the right but not in the left hemisphere. Full and three-quarter views of faces evoked larger N200s than did profile views. The results are discussed in relation to behavioral studies in humans and single-cell recordings in monkeys.

Mnatsakanian, E V, & Tarkka, I M, 2004: Familiar-face recognition and comparison: source analysis of scalp-recorded event-related potentials, Clinical Neurophysiology, vol 115, no 4, pp 880-886.

Objective: We studied the event-related potentials elicited by categorical matching of faces. The purpose was to find cortical sources responsible for face recognition and comparison.
Methods: Nineteen healthy volunteers participated in the study. Each trial began with one of the two cues (S1) followed by consecutive pictures (S2 and S3). Each picture was a photograph of a familiar face with a superimposed abstract dot pattern. One cue directed attention to compare faces and another to compare patterns. 128-channel electroencephalogram was recorded. Spatio-temporal multiple dipole source models were generated using Brain Electromagnetic Source Analysis 2000, for the window of 80-600 ms from S3 onset.
Results: The obtained model for face recognition and comparison contained 8 dipoles explaining 97% of grand average and about 90% of individual data and showing temporal and spatial separation of sources: in the frontal region, in the occipital cortex, and in the bilateral medial temporal and inferotemporal regions. Different faces elicited larger components than same person's faces around 400 ms, mainly explained by frontal dipoles.
Conclusions: The sources in our models estimate the activity common for both Face task conditions (the recognition of a familiar person) and also differential activity, related to the match/mismatch item processing.

Moscovitch, M, Winocur, G, & Behrmann, M, 1997: What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition, J. Cognitive Neuroscience, vol 9, no 5, pp 555-604.

In order to study face recognition in relative isolation from visual processes that may also contribute to object recognition and reading, we investigated CK, a man with normal face recognition but with object agnosia and dyslexia caused by a closed-head injury. We administered recognition tests of upright faces, of family resemblance, of age-transformed faces, of caricatures, of cartoons, of inverted faces, of face features, of disguised faces, of perceptually degraded faces, of fractured faces, of face parts, and of faces whose parts were made of objects. We compared CK's performance with that of at least 12 control participants. We found that CK performed as well as controls as long as the face was upright and retained the configurational integrity among the internal facial features, the eyes, nose, and mouth. This held regardless of whether the face was disguised or degraded and whether the face was represented as a photo, a caricature, a cartoon, or a face composed of objects. In the last case, CK perceived the face but, unlike controls, was rarely aware that it was composed of objects. When the face, or just the internal features, were inverted or when the configurational gestalt was broken by fracturing the face or misaligning the top and bottom halves, CK's performance suffered far more than that of controls. We conclude that face recognition normally depends on two systems: (1) a holistic, face-specific system that is dependent on orientation-specific coding of second-order relational features (internal), which is intact in CK, and (2) a part-based object-recognition system, which is damaged in CK and which contributes to face recognition when the face stimulus does not satisfy the domain-specific conditions needed to activate the face system.

Münte, T, Brack, M, Grootheer, O, Matzke, M, & Johannes, S, 1998: Brain potentials reveal the timing of face identity and expression judgments, Neuroscience Research, vol 30, pp 25-34.

Event-related brain potentials (ERPs) were recorded from multiple scalp locations from young human subjects while they performed two different face processing tasks. The first task entailed the presentation of pairs of faces in which the second face was either a different view of the first face or a different view of a different face. The subjects had to decide whether or not the two faces depicted the same person. In the second task, pairs of faces (frontal views) were presented with the task of judging whether the expression of the second face matched that of the first face. Incongruous faces in the view (identity) matching task gave rise to a negativity peaking at about 350 ms with a frontocentral maximum. This effect was similar to the N400 obtained in linguistic tasks. ERP effects in the expression matching task were much later and had a different distribution. This pattern of results corresponds well with neuropsychological and neuroimaging data suggesting specialized neuronal populations subserving identity and expression analysis but adds a temporal dimension to previous investigations.

O'Toole, A J, Bartlett, J C, & Abdi, H, 2000: A signal detection model applied to the stimulus: Understanding covariances in face recognition experiments in the context of face sampling distributions, Visual Cognition, vol 7, no 4, pp 437-463.

We provide a description and interpretation of signal detection theory as applied to the analysis of an individual stimulus in a recognition experiment. Despite the common use of signal detection theory in this context, especially in the face recognition literature, the assumptions of the model have rarely been made explicit. In a series of simulations, we first varied the stability of d' and C in face sampling distributions and report the pattern of correlations between the hit and false alarm rate components of the model across the simulated experiments. These kinds of correlation measures have been reported in recent face recognition papers and have been considered to be theoretically important. The simulation data we report revealed widely different correlation expectations as a function of the parameters of the face sampling distribution, making claims of theoretical importance for any particular correlation questionable. Next, we report simulations aimed at exploring the effects of face sampling distribution parameters on correlations between individual components of the signal detection model (i.e. hit and false alarm rates), and other facial measures such as typicality ratings. These data indicated that valid interpretations of such correlations need to make reference to the parameters of the relevant face sampling distribution.
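As a quick reminder of the signal detection quantities the simulations vary, d' and the criterion C can be computed from a stimulus's hit and false-alarm rates under the standard equal-variance Gaussian model. This sketch is a textbook formulation, not code from the paper, and the rates used are illustrative:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance Gaussian signal detection model:
    d' = z(H) - z(F) measures sensitivity;
    C  = -0.5 * (z(H) + z(F)) measures response bias."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF (z-transform)
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -0.5 * (zh + zf)

# Illustrative values only: 87% hits, 9% false alarms
d, c = dprime_and_criterion(0.87, 0.09)
```

Correlating the H and F components separately with face-level measures, as the simulated experiments do, then amounts to working with zh and zf before they are combined.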

Paller, K A, Gonsalves, B, Grabowecky, M, Bozic, V S, & Yamada, S, 2000: Electrophysiological correlates of recollecting faces of known and unknown individuals, NeuroImage, vol 11, no 2, pp 98-110.

We recorded brain potentials from healthy human subjects during a recognition test in order to monitor neural processing associated with face recollection. Subjects first attempted to memorize 40 faces; half were accompanied by a voice simulating that person speaking (e.g., "I'm Jimmy and I was a roadie for the Grateful Dead") and half were presented in silence. In the test phase, subjects attempted to discriminate both types of old faces (i.e., "named" and "unnamed" faces) from new faces. Recognition averaged 87% correct for named faces, 74% correct for unnamed faces, and 91% correct for new faces. Potentials to old faces were more positive than those to new faces from 300 to 600 ms after face onset. For named faces, the old-new ERP difference was observed at anterior and posterior scalp locations. For unnamed faces, the old-new ERP difference was observed only at posterior scalp locations. Results from a prior experiment suggest that these effects do not reflect perceptual priming of faces. The posterior portion of the old-new ERP difference was thus interpreted as a neural correlate of retrieval of visual face information and the anterior portion as an indication of retrieval of person-specific semantic information.

Pickering, E, & Schweinberger, S, 2003: N200, N250r, and N400 Event-Related Brain Potentials Reveal Three Loci of Repetition Priming for Familiar Names, Journal of Experimental Psychology: Learning, Memory, and Cognition, vol 29, no 6, pp 1298-1311.

The authors assessed immediate repetition effects on event-related potentials (ERPs) while participants performed familiarity decisions for written personal names. For immediately repeated familiar names, the authors observed 3 distinct ERP modulations. At 180-220 ms, a posterior N200 effect occurred for names preceded by same-font primes only. In addition, an increased left temporal negativity (N250r, 220-300 ms) and a reduced central-parietal negativity (N400, 300-400 ms) were seen both for same-font and different-font repetitions. In a 2nd experiment, when names were preceded by either their corresponding face or the face of a different celebrity, only the N400 effect was preserved. These findings suggest that the N200, N250r, and N400 effects reflect facilitated processing at font-specific featural, lexical, and semantic levels of processing, respectively.

Pizzagalli, D A, Lehmann, D, Hendrick, A M, Regard, M, Pascual-Marqui, R D, & Davidson, R J, 2002: Affective Judgments of Faces Modulate Early Activity (~160 ms) within the Fusiform Gyri, NeuroImage, vol 16, pp 663-677.

Functional neuroimaging studies have implicated the fusiform gyri (FG) in structural encoding of faces, while event-related potential (ERP) and magnetoencephalography studies have shown that such encoding occurs approximately 170 ms poststimulus. Behavioral and functional neuroimaging studies suggest that processes involved in face recognition may be strongly modulated by socially relevant information conveyed by faces. To test the hypothesis that affective information indeed modulates early stages of face processing, ERPs were recorded to individually assessed liked, neutral, and disliked faces and checkerboard-reversal stimuli. At the N170 latency, the cortical three-dimensional distribution of current density was computed in stereotactic space using a tomographic source localization technique. Mean activity was extracted from the FG, defined by structure-probability maps, and a meta-cluster delineated by the coordinates of the voxel with the strongest face-sensitive response from five published functional magnetic resonance imaging studies. In the FG, 160 ms poststimulus, liked faces elicited stronger activation than disliked and neutral faces and checkerboard-reversal stimuli. Further, confirming recent results, affect-modulated brain electrical activity started very early in the human brain (~112 ms). These findings suggest that affective features conveyed by faces modulate structural face encoding. Behavioral results from an independent study revealed that the stimuli were not biased toward particular facial expressions and confirmed that liked faces were rated as more attractive. Increased FG activation for liked faces may thus be interpreted as reflecting enhanced attention due to their saliency.

Pourtois, G, Schwartz, S, Seghier, M L, Lazeyras, F, & Vuilleumier, P, 2005: View-independent coding of face identity in frontal and temporal cortices is modulated by familiarity: an event-related fMRI study, NeuroImage, vol 24, pp 1214-1224.

Face recognition is a unique visual skill enabling us to recognize a large number of person identities, despite many differences in the visual image from one exposure to another due to changes in viewpoint, illumination, or simply passage of time. Previous familiarity with a face may facilitate recognition when visual changes are important. Using event-related fMRI in 13 healthy observers, we studied the brain systems involved in extracting face identity independent of modifications in visual appearance during a repetition priming paradigm in which two different photographs of the same face (either famous or unfamiliar) were repeated at varying delays. We found that functionally defined face-selective areas in the lateral fusiform cortex showed no repetition effects for faces across changes in image views, irrespective of pre-existing familiarity, suggesting that face representations formed in this region do not generalize across different visual images, even for well-known faces. Repetition of different but easily recognizable views of an unfamiliar face produced selective repetition decreases in a medial portion of the right fusiform gyrus, whereas distinct views of a famous face produced repetition decreases in left middle temporal and left inferior frontal cortex selectively, but no decreases in fusiform cortex. These findings reveal that different views of the same familiar face may not be integrated within a single representation at initial perceptual stages subserved by the fusiform face areas, but rather involve later processing stages where more abstract identity information is accessed.

Puce, A, Allison, T, & McCarthy, G, 1999: Electrophysiological Studies of Human Face Perception III: Effects of Top-down Processing on Face-specific Potentials, Cerebral Cortex, vol 9, pp 445-458.

This is the last in a series of papers dealing with intracranial event-related potential (ERP) correlates of face perception. Here we describe the results of manipulations that may exert top-down influences on face recognition and face-specific ERPs, and the effects of cortical stimulation at face-specific sites. Ventral face-specific N200 was not evoked by affective stimuli; showed little or no habituation; was not affected by the familiarity or unfamiliarity of faces; showed no semantic priming; and was not affected by face-name learning or identification. P290 and N700 were affected by semantic priming and by face-name learning and identification. The early fraction of N700 and face-specific P350 exhibited significant habituation. About half of the AP350 sites exhibited semantic priming, whereas the VP350 and LP350 sites did not. Cortical stimulation evoked a transient inability to name familiar faces or evoked face-related hallucinations at two-thirds of face-specific N200 sites. These results are discussed in relation to human behavioral studies and monkey single-cell recordings. Discussion of results of all three papers concludes that: face-specific N200 reflects the operation of a module specialized for the perception of human faces; ventral and lateral occipitotemporal cortex are composed of a complex mosaic of functionally discrete patches of cortex of variable number, size and location; in ventral cortex there is a posterior-to-anterior trend in the location of patches in the order letter-strings, form, hands, objects, faces and face parts; P290 and N700 at face-specific N200 sites, and face-specific P350, are subject to top-down influences.

Rakover, S S, 2002: Featural vs. Configurational Information in Faces: a Conceptual and Empirical Analysis, British J. Psychology, vol 93, pp 1-30.

The perception and memory of faces have been accounted for by the processing of two kinds of facial information: featural and configurational. The starting point of this article is the definition and accepted usage of these two concepts of facial information. I discuss these definitions and their various ramifications from three aspects: methodological, theoretical and empirical. In the section on methodology, I review several of the basic manipulations for changing facial information. In the theoretical section, I consider four fundamental hypotheses associated with these two kinds of facial information: the featural, configurational, holistic and norm hypotheses (the norm-based hypothesis and the 'hierarchy of schemas' hypothesis). In the section on empirical evidence, I survey relevant studies on the topic and consider these hypotheses through a description of various empirical phenomena that carry clear implications for the subject of the study. In conclusion, I propose two alternative directions for future research: first, a 'task-information' approach, which involves specifying what information is used for different tasks; and secondly, taking a different approach to the definition of the visual features for face processing, for example by using principal components analysis (PCA).
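Rakover's second proposed direction, defining visual features via PCA, is the idea behind "eigenface" decompositions: principal components of a set of face images serve as data-driven features. A minimal sketch of that decomposition follows; the dataset here is random synthetic data standing in for flattened face images, and all array sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a face dataset: 100 "images" of 32x32 pixels, flattened to rows.
faces = rng.normal(size=(100, 32 * 32))

# Center the data, then take the SVD; rows of Vt are the principal
# components ("eigenfaces"), ordered by explained variance.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# A face is then described by its coordinates on the first k components,
# and can be approximately reconstructed from them.
k = 20
coeffs = centered[0] @ Vt[:k].T          # k-dimensional feature vector
reconstruction = mean_face + coeffs @ Vt[:k]
```

On real face images the leading components capture global configural variation (lighting, head shape) rather than nameable parts, which is precisely why PCA offers an alternative to hand-defined featural/configurational distinctions.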

Rossion, B, Caldara, R, Seghier, M, Schuller, A-M, Lazeyras, F, & Mayer, E, 2003: A network of occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing, Brain, vol 126, pp 2381-2395.

Neuroimaging studies have identified at least two bilateral areas of the visual extrastriate cortex that respond more to pictures of faces than objects in normal human subjects in the middle fusiform gyrus [the 'fusiform face area' (FFA)] and, more posteriorly, in the inferior occipital cortex ['occipital face area' (OFA)], with a right hemisphere dominance. However, it is not yet clear how these regions interact with each other and whether they are all necessary for normal face perception. It has been proposed that the right hemisphere FFA acts as an isolated ('modular') processing system for faces or that this region receives its face-sensitive inputs from the OFA in a feedforward hierarchical model of face processing. To test these proposals, we report a detailed neuropsychological investigation combined with a neuroimaging study of a patient presenting a deficit restricted to face perception, consecutive to bilateral occipito-temporal lesions. Due to the asymmetry of the lesions, the left middle fusiform gyrus and the right inferior occipital cortex were damaged but the right middle fusiform gyrus was structurally intact. Using functional MRI, we disclosed a normal activation of the right FFA in response to faces in the patient despite the absence of any feedforward inputs from the right OFA, located in a damaged area of cortex. Together, these findings show that the integrity of the right OFA is necessary for normal face perception and suggest that the face-sensitive responses observed at this level in normal subjects may arise from feedback connections from the right FFA. In agreement with the current literature on the anatomical basis of prosopagnosia, it is suggested that the FFA and OFA in the right hemisphere and their re-entrant integration are necessary for normal face processing.

Rossion, B, Curran, T, & Gauthier, I, 2002: A defense of the subordinate-level expertise account for the N170 component, Cognition, vol 85, pp 189-196.

A recent paper in this journal reports two event-related potential (ERP) experiments interpreted as supporting the domain specificity of the visual mechanisms implicated in processing faces (Cognition 83 (2002) 1). The authors argue that because a large neurophysiological response to faces (N170) is less influenced by the task than the response to objects, and because the response for human faces extends to ape faces (for which we are not expert), we should reject the hypothesis that the face-sensitivity reflected by the N170 can be accounted for by the subordinate-level expertise model of object recognition (Nature Neuroscience 3 (2000) 764). In this commentary, we question this conclusion based on some of our own ERP work on expert object recognition as well as the work of others.

Rossion, B, Gauthier, I, Delvenne, J-F, Tarr, M J, Bruyer, R, & Crommelinck, M, 1999: Does the N170 occipito-temporal component reflect a face-specific structural encoding stage?, Proc. 7th Annual Workshop on Object Perception and Memory.

Many neuroimaging studies have increased our knowledge of the neural correlates of face processing (e.g. 1-3), but the temporal aspects of this function remain largely unclear. Recently, it was suggested in several event-related potentials studies (4-12) that face processing differs from visual object processing at 170 ms following stimulus onset. The electrophysiological component at which this dissociation takes place is best recorded at occipitotemporal sites, bilaterally, and has been termed the N170 (4). The N170 has been interpreted as reflecting a face-specific "structural encoding stage", performed prior to the recognition of a face as familiar or not (4-8). This interpretation is based on findings that: i) the N170 component is either absent (4) or strongly reduced (7) for non-face objects; ii) it is considered to be insensitive to scrambling of a face's features (4) and to face inversion (5); and iii) it is unaffected by face familiarity (6,11). Here we test the claims that the N170 component is face-specific, and that it reflects a "structural encoding stage" for faces.

Rossion, B, Gauthier, I, Tarr, M J, Despland, P, Bruyer, R, Linotte, S, & Crommelinck, M, 2000: The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects, NeuroReport, vol 11, pp 69-74.

Behavioral studies have shown that picture-plane inversion impacts face and object recognition differently, thereby suggesting face-specific processing mechanisms in the human brain. Here we used event-related potentials to investigate the time course of this behavioral inversion effect in both faces and novel objects. ERPs were recorded for 14 subjects presented with upright and inverted visual categories, including human faces and novel objects (Greebles). A N170 was obtained for all categories of stimuli, including Greebles. However, only inverted faces delayed and enhanced the N170 (bilaterally). These observations indicate that the N170 is not specific to faces, as has been previously claimed. In addition, the amplitude difference between faces and objects does not reflect face-specific mechanisms since it can be smaller than between non-face object categories. There do exist some early differences in the time-course of categorization for faces and non-faces across inversion. This may be attributed either to stimulus category per se (e.g. face-specific mechanisms) or to differences in the level of expertise between these categories.

Rossion, B, Joyce, C A, Cottrell, G W, & Tarr, M J, 2003: Early lateralization and orientation tuning for face, word and object processing in the visual cortex, Neuroimage, vol 20, no 3, pp 1609-1624.

Event-related potential (ERP) studies of the human brain have shown that object categories can be reliably distinguished as early as 130-170 ms on the surface of occipito-temporal cortex, peaking at the level of the N170 component. Consistent with this finding, neuropsychological and neuroimaging studies suggest major functional distinctions within the human object recognition system, particularly in hemispheric advantage, between the processing of words (left), faces (right) and objects (bilateral). Given these observations, our aim was to (1) characterize the differential response properties of the N170 to pictures of faces, objects and words across hemispheres; and (2) test whether an effect of inversion for highly familiar and mono-oriented non-face stimuli such as printed words can be observed at the level of the N170. Scalp EEG (53 channels) was recorded in 15 subjects performing an orientation decision task with pictures of faces, words and cars presented upright or inverted. All three categories elicited at the same latency a robust N170 component associated with a positive counterpart at centro-frontal sites (vertex positive potential, VPP). While there were minor amplitude differences at the level of the occipital medial P1 between linguistic and non-linguistic categories, scalp topographies and source analyses indicated strong hemispheric and orientation effects starting at the level of the N170, which was right lateralized for faces, smaller and bilateral for cars, and as large for printed words in the left hemisphere as for faces. The entire N170/VPP complex was accounted for by two dipolar sources located in the lateral inferior occipital cortex/posterior fusiform gyrus. These two locations were roughly equivalent across conditions but differed in strength and lateralization. Inversion delayed the N170 (and VPP) response for all categories, with an increasing delay for cars, words, and faces respectively, as suggested by source modeling analysis. Such results show that early processes in object recognition respond to category-specific visual information, and are associated with strong lateralization and orientation bias.

Rossion, B, Kung, C-C, & Tarr, M J, 2004: Visual expertise with nonface objects leads to competition with the early perceptual processing of faces in the human occipitotemporal cortex, Proc. Nat'l. Acad. Sci. USA, vol 101, no 40, pp 14521-14526.

Human electrophysiological studies have found that the processing of faces and other objects differs reliably at ~150 ms after stimulus onset, faces giving rise to a larger occipitotemporal field potential on the scalp, termed the N170. We hypothesize that visual expertise with nonface objects leads to the recruitment of early face-related categorization processes in the occipitotemporal cortex, as reflected by the N170. To test this hypothesis, the N170 in response to laterally presented faces was measured while subjects concurrently viewed centrally presented, novel, nonface objects (asymmetric "Greebles"). The task was simply to report the side of the screen on which each face was presented. Five subjects were tested during three event-related potential sessions interspersed throughout a training protocol during which they became experts with Greebles. After expertise training, the N170 in response to faces was substantially decreased (~20% decrease in signal relative to that when subjects were novices) when concurrently processing a nonface object in the domain of expertise, but not when processing untrained objects of similar complexity. Thus, faces and nonface objects in a domain of expertise compete for early visual categorization processes in the occipitotemporal cortex.

Rotshtein, P, Henson, R N, Treves, A, Driver, J, & Dolan, R J, 2005: Morphing Marilyn into Maggie dissociates physical and identity face representations in the brain, Nature Neuroscience, vol 8, pp 107-113.

How the brain represents different aspects of faces remains controversial. Here we presented subjects with stimuli drawn from morph continua between pairs of famous faces. In the paired presentations, a second face could be identical to the first, could share perceived identity but differ physically (30% along the morph continuum), or could differ physically by the same distance along the continuum (30%) but in the other direction. We show that, behaviorally, subjects are more likely to classify face pairs in the third paired presentation as different and that this effect is more pronounced for subjects who are more familiar with the faces. In functional magnetic resonance imaging (fMRI), inferior occipital gyrus (IOG) shows sensitivity to physical rather than to identity changes, whereas right fusiform gyrus (FFG) shows sensitivity to identity rather than to physical changes. Bilateral anterior temporal regions show sensitivity to identity change that varies with the subjects' pre-experimental familiarity with the faces. These findings provide neurobiological support for a hierarchical model of face perception.

Schwaninger, A, Carbon, C-C, & Leder, H, 2003: Expert Face Processing: Specialization and Constraints, in Schwarzer, G, & Leder, H, eds: The Development of Face Processing, Göttingen: Hogrefe & Huber.

This book draws together, for the first time, the latest scientific findings from leading international researchers on how face recognition develops. It is only in recent years that methods acceptable in experimental psychology have been developed for studying this vital and unique process. While other publications have concentrated on computer modeling of face processing and the like, this one is unique in that it looks at fundamental (and so far unanswered) questions such as: What are the roots of and reasons for our ability to recognize faces? How much of this ability is learned and how much innate?

Schweinberger, S R, & Burton, A M, 2003: Covert recognition and the neural substrate for face processing, Cortex, vol 39, pp 9-30.

In this viewpoint, we discuss the new evidence on covert face recognition in prosopagnosia presented by Bobes et al. (2003, this issue) and by Sperber and Spinnler (2003, this issue). Contrary to earlier hypotheses, both papers agree that covert and overt face recognition are based on the same mechanism. In line with this suggestion, an analysis of reported cases with prosopagnosia indicates that a degree of successful encoding of facial representations is a prerequisite for covert recognition to occur. While we agree with this general conclusion as far as Bobes et al.'s and Sperber and Spinnler's data are concerned, we also discuss evidence for a dissociation between different measures of covert recognition. Specifically, studies in patients with Capgras delusion and patients with prosopagnosia suggest that skin conductance and behavioural indexes of covert face recognition are mediated by partially different mechanisms. We also discuss implications of the new data for models of normal face recognition that have been successful in simulating covert recognition phenomena (e.g., Young and Burton, 1999, and O'Reilly et al., 1999). Finally, in reviewing recent neurophysiological and brain imaging evidence concerning the neural system for face processing, we argue that the relationship between ERP components (specifically, N170, N250r, and N400) and different cognitive processes in face recognition is beginning to emerge.

Schweinberger, S R, Huddy, V, & Burton, A M, 2004: N250r: a Face-Selective Brain Response to Stimulus Repetitions, NeuroReport, vol 15, pp 1501-1505.

We investigated event-related brain potentials elicited by repetitions of cars, ape faces, and upright and inverted human faces. A face-selective N250r response to repetitions emerged over right temporal regions, consistent with a source in the fusiform gyrus. N250r was largest for human faces, clear for ape faces, non-significant for inverted faces, and completely absent for cars. Our results suggest that face-selective neural activity starting at ~200 ms and peaking at ~250-300 ms is sensitive to repetition and relates to individual recognition.

Schweinberger, S R, Pickering, E C, Jentzsch, I, Burton, A M, & Kaufmann, J M, 2002: Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions, Cognitive Brain Research, vol 14, pp 398-409.

We investigated immediate repetition effects in the recognition of famous faces by recording event-related brain potentials (ERPs) and reaction times (RTs). Participants recognized celebrities' faces that were preceded by either the same picture, a different picture of the same celebrity, or a different famous face. Face repetition caused two distinct ERP modulations. Repetitions elicited a strong modulation of an N250 component (~200-300 ms) over inferior temporal regions. The N250 modulation showed a degree of image specificity in that it was still significant for repetitions across different pictures, though reduced in amplitude. ERPs to repeated faces were also more positive than those to unprimed faces at parietal sites from 400 to 600 ms, but these later effects were largely independent of whether the same or a different image of the celebrity had served as prime. Finally, no influence of repetition was observed for the N170 component. Dipole source modelling suggested that the N250 repetition effect (N250r) may originate from the fusiform gyrus. In contrast, source localisation of the N170 implicated a significantly more posterior location, corresponding to a lateral occipitotemporal source outside the fusiform gyrus.

Schyns, P G, Jentzsch, I, Johnson, M, Schweinberger, S R, & Gosselin, F, 2003: A principled method for determining the functionality of brain responses, NeuroReport, vol 14, pp 1665-1669.

A challenging issue in relating brain function to perception and cognition concerns the functional interpretation of brain responses. For example, while there is agreement that the N170 component of event-related potentials is sensitive to face processing, there is considerable debate about whether its response reflects a structural encoder for faces, a feature (e.g. eye) detector, or something else. We introduce a principled approach to determine the stimulus features driving brain responses. Our analyses on two observers resolving different face categorization tasks (gender and expressive or not) reveal that the N170 responds to the eyes within a face irrespective of task demands. This suggests a new methodology to attribute function to different components of the neural system for perceiving complex stimuli.

Shah, N J, Marshall, J C, Zafiris, O, Schwab, A, Zilles, K, Markowitsch, H J, & Fink, G R, 2001: Neural correlates of person familiarity - A functional magnetic resonance imaging study with clinical implications, Brain, vol 124, pp 804-815.

Neural activity was measured in 10 healthy volunteers by functional MRI while they viewed familiar and unfamiliar faces and listened to familiar and unfamiliar voices. The familiar faces and voices were those of people personally known to the subjects; they were not people who are more widely famous in the media. Changes in neural activity associated with stimulus modality irrespective of familiarity were observed in modules previously demonstrated to be activated by faces (fusiform gyrus bilaterally) and voices (superior temporal gyrus bilaterally). Irrespective of stimulus modality, familiar faces and voices (relative to unfamiliar faces and voices) were associated with increased neural activity in the posterior cingulate cortex, including the retrosplenial cortex. Our results suggest that recognizing a person involves information flow from modality-specific modules in the temporal cortex to the retrosplenial cortex. The latter area has recently been implicated in episodic memory and emotional salience, and now seems to be a key area involved in assessing the familiarity of a person. We propose that disturbances in the information flow described may underlie neurological and psychiatric disorders of the recognition of familiar faces, voices and persons (prosopagnosia, phonagnosia and Capgras delusion, respectively).

Smith, M L, Gosselin, F, & Schyns, P G, 2004: Receptive Fields for Flexible Face Categorizations, Psychological Science, vol 15, no 11, pp 753-761.

Examining the receptive fields of brain signals can elucidate how information impinging on the former modulates the latter. We applied this time-honored approach in early vision to the higher-level brain processes underlying face categorizations. Electroencephalograms in response to face-information samples were recorded while observers resolved two different categorizations (gender, expressive or not). Using a method with low bias and low variance, we compared, in a common space of information states, the information determining behavior (accuracy and reaction time) with the information that modulates emergent brain signals associated with early face encoding and later category decision. Our results provide a time line for face processing in which selective attention to diagnostic information for categorizing stimuli (the eyes and their second-order relationships in gender categorization; the mouth in expressive-or-not categorization) correlates with late electrophysiological (P300) activity, whereas early face-sensitive occipitotemporal (N170) activity is mainly driven by the contralateral eye, irrespective of the categorization task.

Stekelenburg, J J & de Gelder, B, 2004: The neural correlates of perceiving human bodies: an ERP study on the body-inversion effect, NeuroReport, vol 15, pp 777-780.

The present study investigated the neural correlates of perceiving human bodies. Focussing on the N170 as an index of structural encoding, we recorded event-related potentials (ERPs) to images of bodies and faces (either neutral or expressing fear) and objects, while subjects viewed the stimuli presented either upright or inverted. The N170 was enhanced and delayed to inverted bodies and faces, but not to objects. The emotional content of faces affected the left N170, the occipito-parietal P2, and the frontocentral N2, whereas body expressions affected the frontal vertex positive potential (VPP) and a sustained fronto-central negativity (300-500 ms). Our results indicate that, like faces, bodies are processed configurally, and that within each category qualitative differences are observed for emotional as opposed to neutral images.

Sugiura, M, Kawashima, R, Nakamura, K, Sato, N, Nakamura, A, Kato, T, Hatano, K, Schormann, T, Zilles, K, Sato, K, Ito, K, & Fukuda, H, 2001: Activation Reduction in Anterior Temporal Cortices during Repeated Recognition of Faces of Personal Acquaintances, NeuroImage, vol 13, pp 877-890.

Repeated recognition of the face of a familiar individual is known to show a semantic repetition priming effect. In this study, normal subjects were repeatedly presented with faces of their colleagues, and the effect of repetition on the regional cerebral blood flow change was measured using positron emission tomography. They repeated a set of three tasks: the familiar-face detection (F) task, the facial direction discrimination (D) task, and the perceptual control (C) task. During five repetitions of the F task, familiar faces were presented six times from different views in a pseudorandom order. Activation reduction through the repetition of the F tasks was observed in the bilateral anterior (anterolateral to the polar region) temporal cortices, which are suggested to be involved in access to long-term memory concerning people. The bilateral amygdala, the hypothalamus, and the medial frontal cortices were constantly activated during the F tasks, and considered to be associated with the behavioral significance of the presented familiar faces. Constant activation was also observed in the bilateral occipitotemporal regions and fusiform gyri and the right medial temporal regions during perception of the faces, and in the left medial temporal regions during the facial familiarity detection task, which is consistent with the results of previous functional brain imaging studies. The results provide further information about the functional segregation of the anterior temporal regions in face recognition and long-term memory.

Tanaka, J W, & Curran, T, 2001: A neural basis for expert object recognition, Psychological Science, vol 12, no 1, pp 43-47.

Although most adults are considered to be experts in the recognition of faces, fewer people specialize in the recognition of other objects, such as birds and dogs. In this research, the neurophysiological processes associated with expert bird and dog recognition were investigated using event-related potentials. An enhanced early negative component (N170, 164 ms) was found when bird and dog experts categorized objects in their domain of expertise relative to when they categorized objects outside their domain of expertise. This finding indicates that objects from well-learned categories are neurologically differentiated from objects from lesser-known categories at a relatively early stage of visual processing.

Tanaka, J W, Curran, T, & Sheinberg, D, 2005: The training and transfer of real-world, perceptual expertise, Psychological Science, vol 16, no 2, pp 145-151.

A hallmark of perceptual expertise is that experts classify objects at a more specific, subordinate level of abstraction than novices. To what extent does subordinate-level learning contribute to the transfer of perceptual expertise to novel exemplars and novel categories? In this study, participants learned to classify ten varieties of wading birds and ten varieties of owls at either the subordinate, species (e.g., "white crown heron," "screech owl") or family ("wading bird," "owl") level of abstraction. During the six days of training, the amount of visual exposure was equated such that participants received an equal number of learning trials for wading birds and owls. Pre- and post-training performance was measured in a "same/different" discrimination task in which participants judged whether pairs of bird stimuli belonged to the "same" or "different" species. Participants trained in species-level discrimination demonstrated greater transfer to novel exemplars and novel species categories than participants trained in family-level discrimination. These findings suggest that perceptual categorization, not perceptual exposure per se, is important for the development and generalization of visual expertise.

Tanaka, J W, & Gauthier, I, 1997: Expertise in Object and Face Recognition, in Goldstone, R L, Schyns, P G, & Medin, D L, eds: Psychology of Learning & Motivation, Mechanisms of Perceptual Learning, vol 36, pp 83-125.

Abstract unavailable.

Tempini, M L, Price, C J, Josephs, O, Vandenberghe, R, Cappa, S F, Kapur, N, & Frackowiak, R S J, 1998: The neural systems sustaining face and proper-name processing, Brain, vol 121, no 11, pp 2103-2118.

This PET study has revealed the neural system involved in implicit face, proper-name and object name processing during an explicit visual 'same' versus 'different' matching task. Within the identified system, some areas were equally active irrespective of modality (faces or names) or type of stimuli (famous and non-famous) while other areas exhibited differential effects. Our findings support the hypothesis that faces and names involve differential pre-semantic processing prior to accessing a common neural system of stored knowledge of personal identity which overlaps with the one associated with object knowledge. The areas specialized for the perceptual analysis of faces (irrespective of whether they are famous or non-famous) are the right lingual and bilateral fusiform gyri, while the areas specialized for famous stimuli (irrespective of whether they are faces or names) spread from the left anterior temporal to the left temporoparietal regions. One specific area, the more lateral portion of the left anterior middle temporal gyrus, showed increased activation for famous faces relative to famous proper names and for famous proper names relative to common names. The differential responsiveness of this region when processing familiar people suggests functional segregation of either personal attributes or, more likely, the demands placed on processes that retrieve stored knowledge when stimuli have highly similar visual features but unique semantic associations.

Tong, F, Nakayama, K, Moscovitch, M, Weinrib, O, & Kanwisher, N, 2000: Response Properties of the Human Fusiform Face Area, Cognitive Neuropsychology, vol 17, pp 257-279.

We used functional magnetic resonance imaging to study the response properties of the human fusiform face area (FFA: Kanwisher, McDermott, & Chun, 1997) to a variety of face-like stimuli in order to clarify the functional role of this region. FFA responses were found to be (1) equally strong for cat, cartoon and human faces despite very different image properties, (2) equally strong for entire human faces and faces with eyes occluded but weaker for eyes shown alone, (3) equal for front and profile views of human heads, but declining in strength as faces rotated away from view, and (4) weakest for nonface objects and houses. These results indicate that generalisation of the FFA response across very different face types cannot be explained in terms of a specific response to a salient facial feature such as the eyes, or a more general response to heads. Instead, the FFA appears to be optimally tuned to the broad category of faces.

Tran, B A, Joyce, C A, & Cottrell, G W, 2004: Visual Expertise Depends on How You Slice the Space, Proc. 26th Annual Cognitive Science Conference.

Previous studies using fMRI have found that the Fusiform Face Area (FFA) responds selectively to face stimuli. More recently, however, studies have shown that FFA activation is not face-specific, but can also occur for other objects if the level of experience with the objects is controlled. Our neurocomputational models of visual expertise suggest that the FFA may perform fine-level discrimination by amplifying small differences in visually homogeneous categories. This is reflected in a large spread of the stimuli in the high-dimensional representational space. This view of the FFA as a general, fine-level discriminator has been disputed on a number of counts. It has been argued that the objects used in human and network expertise studies (e.g. cars, birds, Greebles) are too "face-like" to conclude that the FFA is a general-purpose processor. Further, in our previous models, novice networks had fewer output possibilities than expert networks, leaving open the possibility that learning more discriminations, rather than learning fine-level discriminations, may be responsible for the results. To challenge these criticisms, we trained networks to perform fine-level discrimination on fonts, an obviously non-face category, and showed that these font networks learn a new task faster than networks trained to identify letters. In addition, all networks had the same number of output options, illustrating that visual expertise does not rely on number of discriminations, but rather on how the representational space is partitioned.
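The "spread in representational space" that Tran et al. invoke can be quantified in several ways; a common and simple one is the mean pairwise distance between hidden-layer activation vectors. The sketch below (illustrative only; the function name and the choice of Euclidean distance are assumptions, not taken from the paper) shows how such a spread measure could be computed:

```python
import numpy as np

def representational_spread(h):
    """Mean pairwise Euclidean distance between hidden-layer
    representations; a larger value indicates the fine-level
    'amplification of small differences' described in the abstract.

    h: array of shape (n_stimuli, n_units), one activation vector
       per stimulus.
    """
    # Full pairwise distance matrix via broadcasting.
    d = np.linalg.norm(h[:, None, :] - h[None, :, :], axis=-1)
    n = h.shape[0]
    # Average over the n*(n-1) off-diagonal entries.
    return d.sum() / (n * (n - 1))
```

On this measure, an "expert" network trained on fine discriminations would be expected to score higher than a "novice" network shown the same stimuli.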

Trujillo, L T, Peterson, M A, Kaszniak, A W, & Allen, J B, 2005: EEG phase synchrony differences across visual perception conditions may depend on recording and analysis methods, Clinical Neurophysiology, vol 116, no 1, pp 172-189.

Objective: (1) To investigate the neural synchrony hypothesis by examining if there was more synchrony for upright than inverted Mooney faces, replicating a previous study; (2) to investigate whether inverted stimuli evoke neural synchrony by comparing them to a new scrambled control condition, less likely to produce face perception.
Methods: Multichannel EEG was recorded via nose reference while participants viewed upright, inverted, and scrambled Mooney face stimuli. Gamma-range spectral power and inter-electrode phase synchrony were calculated via a wavelet-based method for upright stimuli perceived as faces and inverted/scrambled stimuli perceived as non-faces.
Results: When the frequency of interest was selected from the upright condition exhibiting maximal spectral power responses (as in the previous study), greater phase synchrony was found in the upright than in the inverted/scrambled conditions. However, substantial synchrony was present in all conditions, suggesting that choosing the frequency of interest from the upright condition only may have been biased. In addition, artifacts related to nose-reference contamination by micro-saccades were found to be differentially present across experimental conditions in the raw EEG. When the frequency of interest was selected instead from each experimental condition and the data were transformed to a Laplacian 'reference free' derivation, the between-condition phase synchrony differences disappeared. Spectral power differences were robust to the change in reference, but not to the combined changes in reference and frequency selection criteria.
Conclusions: Synchrony differences between face/non-face perceptions depend upon frequency selection and recording reference. Optimal selection of these parameters abolishes differential synchrony between conditions.
Significance: Neural synchrony is present not just for face percepts for upright stimuli, but also for non-face percepts achieved for inverted/scrambled Mooney stimuli.
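The inter-electrode phase synchrony measure at issue in Trujillo et al. is typically a phase-locking value (PLV): the length of the trial-averaged unit phasor of the phase difference between two channels. The sketch below is a minimal illustration using a Hilbert transform on pre-filtered signals rather than the wavelet method the paper actually used; all names are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two electrodes across trials.

    x, y: arrays of shape (n_trials, n_samples), already band-pass
          filtered to the gamma frequency of interest.
    Returns an (n_samples,) array: 1 = phases perfectly locked across
    trials at that time point, ~0 = phases uniformly random.
    """
    # Instantaneous phase of each trial via the analytic signal.
    phase_x = np.angle(hilbert(x, axis=1))
    phase_y = np.angle(hilbert(y, axis=1))
    # Average the unit phasors of the phase difference over trials;
    # the magnitude of that mean is the PLV.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))
```

Note how the paper's methodological point maps onto this computation: the PLV depends on which frequency band the signals were filtered to, and re-referencing (e.g. to a Laplacian derivation) changes the channel signals themselves, so both choices can create or abolish apparent between-condition synchrony differences.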

Vinette, C, Gosselin, F, & Schyns, P G, 2004: Spatio-Temporal Dynamics of Face Recognition in a Flash: It's in the Eyes, Cognitive Science, vol 28, pp 289-301.

We adapted the Bubbles procedure [Vis. Res. 41 (2001) 2261] to examine the effective use of information during the first 282 ms of face identification. Ten participants each viewed a total of 5100 faces sub-sampled in space-time. We obtained a clear pattern of effective use of information: the eye on the left side of the image became diagnostic between 47 and 94 ms after the onset of the stimulus; after 94 ms, both eyes were used effectively. This preference for the eyes increased with practice, and was not solely due to the informativeness of the eyes for the task at hand. The bias for the eye on the left side of the image is explained in terms of hemispheric specialization. Although there were individual differences, most participants exhibited this pattern of effective use of information. An intriguing finding is that most participants displayed a clear sinusoidal modulation of effective use of attention through time with a frequency of about 10.6 Hz.
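The Bubbles procedure referenced above samples stimuli through randomly placed Gaussian apertures; correlating the apertures with observers' performance across trials reveals which regions (here, the eyes) are diagnostic. The spatial part of that sampling can be sketched as follows (a simplified illustration under assumed parameters, not the authors' spatio-temporal implementation):

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Random revealing mask of Gaussian apertures ('bubbles').

    shape:     (height, width) of the stimulus image
    n_bubbles: number of apertures on this trial
    sigma:     standard deviation of each Gaussian aperture, in pixels
    rng:       a numpy random Generator
    Returns a mask in [0, 1]; the trial stimulus is image * mask.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                       / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)
```

Averaging the masks from correct trials minus those from incorrect trials, per time bin, yields the classification images from which the eye-preference time course reported here is read off.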

Wild, H A, & Busey, T A, 2004: Seeing faces in the noise: Stochastic activity in perceptual regions of the brain may influence the perception of ambiguous stimuli, Psychonomic Bulletin & Review, vol 11, no 3, pp 475-481.

Research on binocular rivalry and motion direction discrimination suggests that stochastic activity early in visual processing influences the perception of ambiguous stimuli. Here, we extend this to higher level tasks of word and face processing. In Experiment 1, we used blocked gender and word discrimination tasks, and in Experiment 2, we used a face versus word discrimination task. Stimuli were embedded in noise, and some trials contained only noise. In Experiment 1, we found a larger response in the N170, an ERP component associated with faces, to the noise-alone stimulus when observers were performing the gender discrimination task. The noise-alone trials in Experiment 2 were binned according to the observer's behavioral response, and there was a greater response in the N170 when they reported seeing a face. After considering various top-down and priming-related explanations, we raise the possibility that seeing a face in noise may result from greater stochastic activity in neural face processing regions.

Xu, Y, Liu, J, & Kanwisher, N, 2005: The M170 is selective for faces, not for expertise, Neuropsychologia, vol 43, pp 588-597.

Are the mechanisms for face perception selectively involved in processing faces per se, or do they also participate in the processing of any class of visual stimuli that share the same basic configuration and for which the observer has gained substantial visual expertise? Here we tested the effects of visual expertise on the face-selective "M170", a magnetoencephalography (MEG) response component that occurs 170 ms after stimulus onset and is involved in the identification of individual faces. In Experiment 1, cars did not elicit a higher M170 response (relative to control objects) in car experts compared with control subjects. In Experiment 2, the M170 amplitude was correlated with successful face identification, but not with successful car identification in car experts. These results indicate that the early face processing mechanisms marked by the M170 are involved in the identification of faces in particular, not in the identification of any objects of expertise.