bioRxiv
TMbed – Transmembrane proteins predicted through Language Model embeddings

Michael Bernhofer, Burkhard Rost
doi: https://doi.org/10.1101/2022.06.12.495804
Michael Bernhofer
1Department of Informatics, Bioinformatics and Computational Biology - i12, Technical University of Munich (TUM), Boltzmannstr. 3, 85748 Garching, Germany
2TUM Graduate School, Center of Doctoral Studies in Informatics and its Applications (CeDoSIA), Boltzmannstr. 11, 85748 Garching, Germany
For correspondence: bernhoferm@rostlab.org
Burkhard Rost
1Department of Informatics, Bioinformatics and Computational Biology - i12, Technical University of Munich (TUM), Boltzmannstr. 3, 85748 Garching, Germany
3Institute for Advanced Study (TUM-IAS), Lichtenbergstr. 2a, 85748 Garching, Germany & TUM School of Life Sciences Weihenstephan (TUM-WZW), Alte Akademie 8, Freising, Germany

Abstract

Background: Despite the immense importance of transmembrane proteins (TMPs) for molecular biology and medicine, experimental 3D structures of TMPs remain about 4-5 times underrepresented compared to non-TMPs. Today's top methods can accurately predict structures for many TMPs, but annotating the transmembrane regions remains a limiting step for proteome-wide predictions.

Results: Here, we present a novel method, dubbed TMbed. Taking embeddings from protein Language Models (in particular ProtT5) as input, TMbed predicts alpha-helical and beta-barrel TMPs for entire proteomes within hours on a single consumer-grade desktop machine, at performance levels similar to or better than those of methods using evolutionary information (extracted from family alignments). On the per-protein level, TMbed correctly identified 61 of the 65 beta-barrel TMPs (94±7%) and 579 of the 593 alpha-helical TMPs (98±1%) in a non-redundant data set, at false positive rates well below 1% (erring on 31 of 5859 non-membrane proteins). On the per-segment level, TMbed correctly placed, on average, 9 of 10 transmembrane segments within five residues of the experimental observation. Although limited by GPU memory, our method can handle sequences of up to 4200 residues on standard graphics cards used in common desktop PCs (e.g., NVIDIA GeForce RTX 3060).

Conclusions: TMbed accurately predicts alpha-helical and beta-barrel TMPs. Utilizing protein Language Models and GPU acceleration, it can predict the entire human proteome in less than an hour.

Availability: Our code, method, and data sets are freely available in the GitHub repository: https://github.com/BernhoferM/TMbed
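The per-segment score above counts a predicted transmembrane segment as correct when it lies within five residues of an experimentally observed segment. The sketch below is an illustrative interpretation of that tolerance criterion, not the authors' evaluation code (the exact matching rule is defined in the paper); function names and the example segments are hypothetical.

```python
# Illustrative sketch of a +/-5 residue tolerance match between predicted
# and observed transmembrane segments. A segment is a (start, end) pair of
# residue indices; each observed segment can be matched at most once.

def segment_matches(pred, obs, tol=5):
    """True if both endpoints of `pred` lie within `tol` residues of `obs`."""
    return abs(pred[0] - obs[0]) <= tol and abs(pred[1] - obs[1]) <= tol

def correctly_placed(predicted, observed, tol=5):
    """Count predicted segments that match a not-yet-used observed segment."""
    used = set()
    hits = 0
    for p in predicted:
        for i, o in enumerate(observed):
            if i not in used and segment_matches(p, o, tol):
                used.add(i)
                hits += 1
                break
    return hits

# Example: two of three predicted helices fall within the tolerance.
observed = [(10, 30), (50, 72), (95, 118)]
predicted = [(12, 31), (49, 70), (130, 150)]
print(correctly_placed(predicted, observed))  # 2
```

With this criterion, "9 of 10 segments within five residues" means roughly 90% of predicted segments satisfy the endpoint tolerance against the experimental annotation.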

Competing Interest Statement

The authors have declared no competing interest.

Footnotes

  • https://github.com/BernhoferM/TMbed

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY 4.0 International license.
Posted June 15, 2022.

Subject Area

  • Bioinformatics