Depth in convolutional neural networks solves scene segmentation

PLoS Comput Biol. 2020 Jul 24;16(7):e1008022. doi: 10.1371/journal.pcbi.1008022. eCollection 2020 Jul.

Abstract

Feed-forward deep convolutional neural networks (DCNNs) match and even surpass human performance in object recognition in natural scenes under specific conditions. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feed-forward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs of increasing depth, we explored whether, how, and when object information is differentiated from the backgrounds objects appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence, and systematically occluding parts of the image. Results indicate that with an increase in network depth comes an increase in the distinction between object and background information. For shallower networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that scene segmentation can, de facto, be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or "binding" features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Adult
  • Brain
  • Female
  • Humans
  • Male
  • Neural Networks, Computer*
  • Pattern Recognition, Visual*
  • Recognition, Psychology
  • Reproducibility of Results
  • Signal Processing, Computer-Assisted*
  • Visual Cortex / physiology*
  • Visual Perception*
  • Young Adult

Grants and funding

This work was supported by an Advanced Investigator Grant by the European Research Council (ERC grant FAB4V #339374) to EdH. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.