PT - JOURNAL ARTICLE
AU - Georgin Jacob
AU - R. T. Pramod
AU - Harish Katti
AU - S. P. Arun
TI - Do deep neural networks see the way we do?
AID - 10.1101/860759
DP - 2019 Jan 01
TA - bioRxiv
PG - 860759
4099 - http://biorxiv.org/content/early/2019/12/02/860759.short
4100 - http://biorxiv.org/content/early/2019/12/02/860759.full
AB - Deep neural networks have revolutionized computer vision, and their object representations coarsely match those in the brain. As a result, it is widely believed that any fine-scale differences between deep networks and brains can be fixed with increased training data or minor changes in architecture. But what if there are qualitative differences between brains and deep networks? Do deep networks even see the way we do? To answer this question, we chose a deep neural network optimized for object recognition and asked whether it exhibits well-known perceptual and neural phenomena despite not being explicitly trained to do so. To our surprise, many phenomena were present in the network, including the Thatcher effect, mirror confusion, Weber's law, relative size, multiple object normalization and sparse coding along multiple dimensions. However, some perceptual phenomena were notably absent, including processing of 3D shape, patterns on surfaces, occlusion, natural parts and a global advantage. Our results elucidate the computational challenges of vision by showing that learning to recognize objects suffices to produce some perceptual phenomena but not others, and they reveal the perceptual properties that could be incorporated into deep networks to improve their performance.