bioRxiv

Towards Effective and Generalizable Fine-tuning for Pre-trained Molecular Graph Models

Jun Xia, Jiangbin Zheng, Cheng Tan, Ge Wang, Stan Z. Li
doi: https://doi.org/10.1101/2022.02.03.479055
Jun Xia 1,2,3 (for correspondence: xiajun@westlake.edu.cn)
Jiangbin Zheng 2,3
Cheng Tan 2,3
Ge Wang 2,3
Stan Z. Li 2,3

1 Zhejiang University
2 School of Engineering, Westlake University
3 Institute of Advanced Technology, Westlake Institute for Advanced Study

Abstract

Graph Neural Networks (GNNs) and Transformers have emerged as dominant tools for AI-driven drug discovery. Many state-of-the-art methods first pre-train GNNs, or hybrids of GNNs and Transformers, on a large molecular database and then fine-tune them on downstream tasks. However, unlike other domains such as computer vision (CV) or natural language processing (NLP), obtaining labels for the molecular data of downstream tasks often requires resource-intensive wet-lab experiments. Moreover, the pre-trained models are often extremely complex, with enormous numbers of parameters. Together, these factors often cause the fine-tuned model to over-fit the training data of downstream tasks and significantly deteriorate its performance. To alleviate these critical yet under-explored issues, we propose two straightforward yet effective strategies to attain better generalization: 1. MolAug, which enriches the molecular datasets of downstream tasks with chemical homologues and enantiomers; 2. WordReg, which controls the complexity of the pre-trained models with a smoothness-inducing regularization built on dropout. Extensive experiments demonstrate that our proposed strategies achieve notable and consistent improvements over vanilla fine-tuning and yield multiple state-of-the-art results. These strategies are also model-agnostic and readily pluggable into the fine-tuning of various pre-trained molecular graph models. We will release the code and the fine-tuned models.
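As a concrete illustration of the MolAug idea, the sketch below enriches a downstream dataset with enantiomers by enumerating stereoisomers in RDKit. This is a hypothetical reading of the strategy, not the authors' released code; the function name, the max_isomers cap, and the choice of RDKit's stereoisomer enumerator are illustrative assumptions.

# Hypothetical MolAug-style augmentation: add stereoisomers (enantiomers)
# of each training molecule. Illustrative sketch only, not the paper's code.
from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import (
    EnumerateStereoisomers,
    StereoEnumerationOptions,
)

def augment_with_stereoisomers(smiles, max_isomers=4):
    """Return the input SMILES plus up to max_isomers stereoisomer variants."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # unparsable SMILES: keep the original only
        return [smiles]
    opts = StereoEnumerationOptions(maxIsomers=max_isomers)
    isomers = EnumerateStereoisomers(mol, options=opts)
    variants = {Chem.MolToSmiles(m, isomericSmiles=True) for m in isomers}
    return sorted(variants | {smiles})

# Alanine has one unassigned stereocenter, so both enantiomers appear.
print(augment_with_stereoisomers("CC(N)C(=O)O"))

Similarly, one plausible form of a smoothness-inducing regularization built on dropout, as WordReg is described, is a consistency term between two stochastic forward passes of the same model, added to the task loss. The symmetric-KL choice and the weight alpha below are assumptions; the paper's exact formulation may differ.

# Hypothetical WordReg-style regularizer: penalize divergence between two
# dropout-perturbed predictions of one model (a sketch, not the paper's code).
import torch
import torch.nn.functional as F

def smooth_finetune_loss(model, x, y, alpha=1.0):
    logits1 = model(x)  # dropout active in training mode, so the two
    logits2 = model(x)  # passes sample two different sub-models
    task = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))
    logp1 = F.log_softmax(logits1, dim=-1)
    logp2 = F.log_softmax(logits2, dim=-1)
    # symmetric KL between the two predictive distributions
    consistency = 0.5 * (
        F.kl_div(logp1, logp2, log_target=True, reduction="batchmean")
        + F.kl_div(logp2, logp1, log_target=True, reduction="batchmean")
    )
    return task + alpha * consistency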

Competing Interest Statement

The authors have declared no competing interest.

Footnotes

  • xiajun@westlake.edu.cn, zhengjiangbin@westlake.edu.cn, tancheng@westlake.edu.cn, wangge@westlake.edu.cn, stan.zq.li@westlake.edu.cn

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. All rights reserved. No reuse allowed without permission.
Posted February 06, 2022.
Subject Area

  • Bioinformatics