RT Journal Article
SR Electronic
T1 Transformer protein language models are unsupervised structure learners
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2020.12.15.422761
DO 10.1101/2020.12.15.422761
A1 Roshan Rao
A1 Joshua Meier
A1 Tom Sercu
A1 Sergey Ovchinnikov
A1 Alexander Rives
YR 2020
UL http://biorxiv.org/content/early/2020/12/15/2020.12.15.422761.abstract
AB Unsupervised contact prediction is central to uncovering physical, structural, and functional constraints for protein structure determination and design. For decades, the predominant approach has been to infer evolutionary constraints from a set of related sequences. In the past year, protein language models have emerged as a potential alternative, but performance has fallen short of state-of-the-art approaches in bioinformatics. In this paper we demonstrate that Transformer attention maps learn contacts from the unsupervised language modeling objective. We find the highest-capacity models that have been trained to date already outperform a state-of-the-art unsupervised contact prediction pipeline, suggesting these pipelines can be replaced with a single forward pass of an end-to-end model.
Competing Interest Statement: The authors have declared no competing interest.
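
The abstract's claim that contacts can be read out "with a single forward pass of an end-to-end model" can be illustrated with a minimal sketch. This assumes the publicly released fair-esm Python package and its esm1b_t33_650M_UR50S checkpoint; the model name, example sequence, and return_contacts usage here are illustrative assumptions, not a definitive reproduction of the paper's pipeline.

    import torch
    import esm  # assumption: the fair-esm package (pip install fair-esm)

    # Load a pretrained Transformer protein language model and its tokenizer/alphabet.
    # The specific checkpoint is an assumption for illustration.
    model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
    model.eval()

    batch_converter = alphabet.get_batch_converter()
    data = [("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]  # toy sequence
    labels, strs, tokens = batch_converter(data)

    with torch.no_grad():
        # return_contacts=True requests contact maps derived from the model's attention heads,
        # i.e. the unsupervised structural signal described in the abstract.
        results = model(tokens, return_contacts=True)

    contacts = results["contacts"]  # (batch, seq_len, seq_len) residue-residue contact probabilities

Under these assumptions, the contact map comes from one forward pass over a single sequence, with no multiple sequence alignment or separate coevolution pipeline involved.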