bioRxiv

Adversarial Attacks on Protein Language Models

Ginevra Carbone, Francesca Cuturello, Luca Bortolussi, Alberto Cazzaniga
doi: https://doi.org/10.1101/2022.10.24.513465
Ginevra Carbone
1Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy
  • For correspondence: ginevra.carbone@phd.units.it, alberto.cazzaniga@areasciencepark.it
Francesca Cuturello
2Institute of Research and Technology (RIT), AREA Science Park, Trieste, Italy
Luca Bortolussi
1Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy
3Modeling and Simulation Group, Saarland University, Saarland, Germany
Alberto Cazzaniga
2Institute of Research and Technology (RIT), AREA Science Park, Trieste, Italy

Abstract

Deep learning models for protein structure prediction, such as AlphaFold2, leverage Transformer architectures and their attention mechanism to capture structural and functional properties of amino acid sequences. Despite the high accuracy of their predictions, biologically insignificant perturbations of the input sequences, or even single point mutations, can lead to substantially different 3D structures. On the other hand, protein language models are often insensitive to biologically relevant mutations that induce misfolding or dysfunction (e.g. missense mutations): predictions of the 3D coordinates do not reveal the structure-disruptive effect of these mutations. There is therefore an evident inconsistency between the biological importance of mutations and the resulting change in structural prediction. Motivated by this problem, we introduce the concept of adversarial perturbation of protein sequences in the continuous embedding spaces of protein language models. Our method relies on attention scores to detect the most vulnerable amino acid positions in the input sequences. Adversarial mutations are biologically distinct from their reference sequences and are able to significantly alter the resulting 3D structures.
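The two-step recipe in the abstract (rank residue positions by attention, then pick the single-residue substitution that most perturbs the model in embedding space) can be illustrated with a self-contained toy sketch. The `embed`, `attn`, and `head` modules below are randomly initialized stand-ins for a pretrained protein language model such as ESM, and the first-order (gradient-alignment) selection rule is an assumed simplification for illustration, not the authors' exact implementation:

```python
import torch

torch.manual_seed(0)

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
V, D, L = len(AMINO_ACIDS), 16, 12  # vocabulary size, embedding dim, sequence length

# Toy stand-ins for a pretrained protein language model's components.
embed = torch.nn.Embedding(V, D)
attn = torch.nn.MultiheadAttention(D, num_heads=2, batch_first=True)
head = torch.nn.Linear(D, V)  # masked-LM output head

seq = torch.randint(0, V, (1, L))  # reference sequence as token ids
x = embed(seq)

# Step 1: use attention scores to rank positions by vulnerability.
_, weights = attn(x, x, x, need_weights=True)  # (1, L, L), averaged over heads
scores = weights.mean(dim=1).squeeze(0)        # attention received by each position
target = int(scores.argmax())                  # most-attended position

# Step 2: gradient of the model's own loss w.r.t. the continuous embeddings.
x = x.detach().requires_grad_(True)
out, _ = attn(x, x, x)
logits = head(out)  # (1, L, V)
loss = torch.nn.functional.cross_entropy(logits.squeeze(0), seq.squeeze(0))
loss.backward()
grad = x.grad[0, target]  # (D,)

# Step 3: choose the substitution whose embedding shift best aligns with the
# gradient, i.e. a first-order estimate of the most loss-increasing mutation.
ref = int(seq[0, target])
deltas = embed.weight - embed.weight[ref]  # (V, D) candidate embedding shifts
gain = deltas @ grad
gain[ref] = -float("inf")  # forbid the identity "mutation"
adv = int(gain.argmax())

mutated = seq.clone()
mutated[0, target] = adv
print(f"position {target}: {AMINO_ACIDS[ref]} -> {AMINO_ACIDS[adv]}")
```

With a real protein language model, `embed`, `attn`, and `head` would be replaced by the pretrained encoder, and the adversarial mutation would then be scored against the reference via the downstream 3D structure prediction.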

Acknowledgements

The authors acknowledge the AREA Science Park supercomputing platform ORFEO, made available for conducting the research reported in this paper, and the technical support of the staff of the Laboratory of Data Engineering. F.C. was supported by the grant PNR "FAIR-by-design". A.C. was supported by the ARGO funding program.

Footnotes

  • francesca.cuturello@areasciencepark.it

  • luca.bortolussi@gmail.com

  • Merged main file with supplementary.

  • https://github.com/ginevracoal/adversarial-protein-sequences

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
Posted October 27, 2022.
Subject Area

  • Bioinformatics