Representational Dynamics of Facial Viewpoint Encoding

J Cogn Neurosci. 2017 Apr;29(4):637-651. doi: 10.1162/jocn_a_01070. Epub 2016 Oct 28.

Abstract

Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived from a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity.
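The multivariate analysis logic described above — comparing the similarity of cortical response patterns across viewpoints at each time point — can be illustrated with a minimal sketch. This is not the authors' pipeline; it is a hypothetical example using synthetic data, in which a correlation-distance representational dissimilarity matrix (RDM) is computed across viewpoints at a chosen time sample, and mirror-symmetric viewpoint pairs are checked for elevated pattern similarity (the effect the abstract reports around 115 msec). All names, dimensions, and the simulation itself are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 head orientations (-90, -45, 0, +45, +90 degrees),
# 64 channels, 100 time samples. Real data would come from EEG epochs.
viewpoints = [-90, -45, 0, 45, 90]
n_chan, n_time = 64, 100

# Simulate channel patterns in which mirror-symmetric views (+/-45, +/-90)
# share a common component, mimicking the reported symmetry effect.
base = {abs(v): rng.normal(size=n_chan) for v in viewpoints}
data = np.stack([
    np.tile((base[abs(v)] + 0.3 * rng.normal(size=n_chan))[:, None], (1, n_time))
    + 0.5 * rng.normal(size=(n_chan, n_time))
    for v in viewpoints
])  # shape: (n_views, n_chan, n_time)

def rdm_at(t):
    """Correlation-distance RDM across viewpoints at time sample t."""
    patterns = data[:, :, t]            # (n_views, n_chan)
    return 1.0 - np.corrcoef(patterns)  # (n_views, n_views), 0 on diagonal

# Compare mean distance of mirror-symmetric pairs vs. other off-diagonal pairs.
rdm = rdm_at(50)
sym = np.mean([rdm[0, 4], rdm[1, 3]])                    # -90/+90 and -45/+45
nonsym = np.mean([rdm[0, 1], rdm[3, 4], rdm[0, 3], rdm[1, 4]])
print(f"mirror-pair distance {sym:.2f} < other-pair distance {nonsym:.2f}: {sym < nonsym}")
```

In a time-resolved analysis, the same RDM computation would be repeated at every sample, yielding a trajectory of representational geometry that can reveal when view-dependent, mirror-symmetric, and view-invariant coding schemes dominate.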

MeSH terms

  • Adult
  • Electroencephalography / methods*
  • Facial Recognition / physiology*
  • Female
  • Humans
  • Male
  • Signal Processing, Computer-Assisted*
  • Space Perception / physiology*
  • Time Factors
  • Young Adult