bioRxiv
Single Layers of Attention Suffice to Predict Protein Contacts

Nicholas Bhattacharya, Neil Thomas, Roshan Rao, Justas Dauparas, Peter K. Koo, David Baker, Yun S. Song, Sergey Ovchinnikov
doi: https://doi.org/10.1101/2020.12.21.423882
Nicholas Bhattacharya, UC Berkeley
Neil Thomas, UC Berkeley
Roshan Rao, UC Berkeley
Justas Dauparas, University of Washington
Peter K. Koo, Cold Spring Harbor Laboratory
David Baker, University of Washington
Yun S. Song, UC Berkeley
Sergey Ovchinnikov, Harvard University

Abstract

The established approach to unsupervised protein contact prediction estimates co-evolving positions using undirected graphical models. This approach trains a Potts model on a multiple sequence alignment (MSA), then predicts that the edges with highest weight correspond to contacts in the 3D structure. On the other hand, increasingly large Transformers are being pretrained on protein sequence databases but have demonstrated mixed results on downstream tasks, including contact prediction. This has sparked discussion about the role of scale and attention-based models in unsupervised protein representation learning. We argue that attention is a principled model of protein interactions, grounded in real properties of protein family data. We introduce a simplified attention layer, factored attention, and show that it achieves performance comparable to Potts models while sharing parameters both within and across families. Further, we extract contacts from the attention maps of a pretrained Transformer and show that they perform competitively with the other two approaches. This provides evidence that large-scale pretraining can learn meaningful protein features from unlabeled and unaligned data. We contrast factored attention with the Transformer to show that the Transformer leverages hierarchical signal in protein family databases that is not captured by our single-layer models. This raises the exciting possibility of developing powerful structured models of protein family databases.1
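
The sketch below is an illustration of the idea, not the authors' implementation (see the linked repository for their code): a single factored attention layer in PyTorch in which the attention maps depend only on alignment position and are shared across all sequences in an MSA, while the value projection acts on amino-acid identities. All class and variable names, shapes, and hyperparameters here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FactoredAttention(nn.Module):
    """Single-layer factored attention over the columns of one MSA (sketch)."""

    def __init__(self, seq_len, vocab_size=21, num_heads=32, head_dim=64):
        super().__init__()
        # Positional queries/keys: one vector per alignment column and head.
        self.Q = nn.Parameter(0.01 * torch.randn(num_heads, seq_len, head_dim))
        self.K = nn.Parameter(0.01 * torch.randn(num_heads, seq_len, head_dim))
        # Value/output projections act on one-hot amino-acid identities.
        self.W_v = nn.Parameter(0.01 * torch.randn(num_heads, vocab_size, head_dim))
        self.W_o = nn.Parameter(0.01 * torch.randn(num_heads, head_dim, vocab_size))
        self.scale = head_dim ** -0.5

    def attention_maps(self):
        # Attention logits come from positions only, so the maps are
        # identical for every sequence in the family.
        logits = torch.einsum("hid,hjd->hij", self.Q, self.K) * self.scale
        return F.softmax(logits, dim=-1)                  # (heads, L, L)

    def forward(self, x_onehot):                          # (batch, L, vocab)
        attn = self.attention_maps()
        v = torch.einsum("bjv,hvd->bhjd", x_onehot, self.W_v)
        out = torch.einsum("hij,bhjd->bhid", attn, v)
        # Per-position logits over amino acids, e.g. for a masked-token loss.
        return torch.einsum("bhid,hdv->biv", out, self.W_o)

    def contact_scores(self):
        # Score residue pairs by symmetrizing and averaging head attention maps.
        attn = self.attention_maps()
        sym = 0.5 * (attn + attn.transpose(-1, -2))
        return sym.mean(dim=0)                            # (L, L)

In this sketch the layer would be fit per family with a masked-token reconstruction objective and contacts read off the symmetrized, head-averaged attention maps; refinements such as average-product correction, regularization, and the exact training loss are omitted and may differ from the authors' setup.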

Competing Interest Statement

The authors have declared no competing interest.

Footnotes

  • nthomas{at}berkeley.edu, rmrao{at}berkeley.edu, justas{at}uw.edu, koo{at}cshl.edu, dabaker{at}uw.edu, yss{at}berkeley.edu, so{at}g.harvard.edu

  • Fix the last name of Justas Daupras to Justas Dauparas.

  • ↵1 Code available at https://github.com/nickbhat/iclr-2021-factored-attention

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
Posted December 22, 2020.
Subject Area

  • Bioinformatics