
Transformer protein language models are unsupervised structure learners

Roshan Rao, Joshua Meier, Tom Sercu, Sergey Ovchinnikov, Alexander Rives
doi: https://doi.org/10.1101/2020.12.15.422761
Roshan Rao
1 UC Berkeley. Correspondence: roshan_rao@berkeley.edu, rmrao@berkeley.edu
Joshua Meier
2 Facebook AI Research. Correspondence: jmeier@fb.com
Tom Sercu
3 Facebook AI Research. Correspondence: tsercu@fb.com
Sergey Ovchinnikov
4 Harvard University. Correspondence: so@g.harvard.edu
Alexander Rives
5 Facebook AI Research & New York University. Correspondence: arives@cs.nyu.edu

Abstract

Unsupervised contact prediction is central to uncovering physical, structural, and functional constraints for protein structure determination and design. For decades, the predominant approach has been to infer evolutionary constraints from a set of related sequences. In the past year, protein language models have emerged as a potential alternative, but performance has fallen short of state-of-the-art approaches in bioinformatics. In this paper we demonstrate that Transformer attention maps learn contacts from the unsupervised language modeling objective. We find that the highest-capacity models trained to date already outperform a state-of-the-art unsupervised contact prediction pipeline, suggesting these pipelines can be replaced with a single forward pass of an end-to-end model.
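
The code link in the footnotes below (https://github.com/facebookresearch/esm) documents how contacts can be read out of a pre-trained Transformer in one forward pass. The sketch below follows that repository's documented usage pattern; the model name (esm1b_t33_650M_UR50S), the return_contacts argument, and the example sequence are assumptions tied to the public esm release at the time of writing, not details taken from the paper itself.

    import torch
    import esm  # pip install fair-esm

    # Load a pre-trained protein language model and its alphabet/tokenizer.
    model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
    batch_converter = alphabet.get_batch_converter()
    model.eval()

    # An arbitrary example sequence (not from the paper).
    data = [("example_protein",
             "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ")]
    labels, strs, tokens = batch_converter(data)

    # A single forward pass returns per-residue-pair contact probabilities,
    # computed in the esm implementation by a regression over the model's
    # symmetrized, APC-corrected attention maps.
    with torch.no_grad():
        results = model(tokens, return_contacts=True)

    contacts = results["contacts"][0]  # (L, L) matrix for a length-L sequence
    print(contacts.shape)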

Competing Interest Statement

The authors have declared no competing interest.

Footnotes

  • * Work performed during an internship at Facebook.

  • https://github.com/facebookresearch/esm

  • 2 PSICOV fails to converge on 24 sequences with default parameters. Following the suggestion in github.com/psipred/psicov, we increase ρ to 0.005, then to 0.01, and thereafter in increments of 0.01 up to a maximum of 0.1. Even so, PSICOV fails to converge on 6 of the 14,842 sequences; we assign these a score of 0 (a minimal sketch of this fallback schedule follows these footnotes).

  • 3 PSICOV fails to converge on 3 of 15 targets with default parameters. Following the procedure suggested at https://github.com/psipred/psicov, we increase ρ to 0.005 for those domains.
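
For concreteness, here is a minimal sketch of the ρ-escalation fallback described in footnote 2. The run_psicov callable is hypothetical (a stand-in for however PSICOV is invoked on one alignment, assumed to return None on non-convergence); only the schedule of ρ values and the fallback score of 0 come from the footnote.

    # Hypothetical sketch of the rho-escalation fallback from footnote 2.
    # `run_psicov(alignment, rho)` is a placeholder, assumed to return a
    # contact-score matrix on success and None when PSICOV does not converge
    # (rho=None meaning "use PSICOV's default parameters").

    def psicov_with_fallback(alignment, run_psicov):
        # Default parameters first, then rho = 0.005, 0.01, 0.02, ..., 0.1.
        schedule = [None, 0.005, 0.01] + [round(0.01 * k, 2) for k in range(2, 11)]
        for rho in schedule:
            result = run_psicov(alignment, rho=rho)
            if result is not None:
                return result
        return 0  # score assigned to sequences that never converge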

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
Posted December 15, 2020.
Subject Area

  • Synthetic Biology