Wieland Brendel, Fellow at ELLIS Institute Tübingen; Group Leader, Max Planck Institute for Intelligent Systems. Verified email at tuebingen.mpg.de. Cited by 12843.
Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet
Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven
notoriously difficult to understand how they reach their decisions. We here introduce a high-…
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning
increasingly complex representations of object shapes. Some recent studies suggest a …
On adaptive attacks to adversarial example defenses
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to
adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We …
Self-supervised learning with data augmentations provably isolates content from style
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted transformations …
Shortcut learning in deep neural networks
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of
today’s machine intelligence. Numerous success stories have rapidly spread all over science…
Partial success in closing the gap between human and machine vision
A few years ago, the first CNN surpassed human performance on ImageNet. However, it
soon became clear that machines lack robustness on more challenging test cases, a major …
A simple way to make neural networks robust against diverse image corruptions
The human visual system is remarkably robust against a wide range of naturally occurring
variations and corruptions like rain or snow. In contrast, the performance of modern image …
Five points to check when comparing visual perception in humans and machines
With the rise of machines to human-level performance in complex recognition tasks, a
growing amount of work is directed toward comparing information processing in humans and …
Decision-based adversarial attacks: Reliable attacks against black-box machine learning models
Many machine learning algorithms are vulnerable to almost imperceptible perturbations of
their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety …
On evaluating adversarial robustness
Correctly evaluating defenses against adversarial examples has proven to be extremely
difficult. Despite the significant amount of recent work attempting to design defenses that …