Abstract
Closing the gap between measurable genetic information and observable traits is a longstanding challenge in genomics. Yet the prediction of molecular phenotypes from DNA sequence alone remains limited and inaccurate, often owing to the scarcity of annotated data and the inability to transfer learnings across prediction tasks. Here, we present an extensive study of foundation models pre-trained on DNA sequences, named the Nucleotide Transformer, ranging from 50M up to 2.5B parameters and integrating information from 3,202 diverse human genomes, as well as 850 genomes selected across diverse phyla, including both model and non-model organisms. These transformer models yield transferable, context-specific representations of nucleotide sequences, which allow for accurate molecular phenotype prediction even in low-data settings. We show that the developed models can be fine-tuned at low cost, even in low-data regimes, to solve a variety of genomics applications. Despite receiving no supervision, the transformer models learned to focus attention on key genomic elements, including those that regulate gene expression, such as enhancers. Lastly, we demonstrate that utilizing model representations can improve the prioritization of functional genetic variants. The training and application of foundation models in genomics explored in this study provide a widely applicable stepping stone toward accurate molecular phenotype prediction from DNA sequence. Code and weights are available at https://github.com/instadeepai/nucleotide-transformer in JAX and at https://huggingface.co/InstaDeepAI in PyTorch. Example notebooks to apply these models to any downstream task are available on Hugging Face.
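As an illustration of how the released checkpoints might be used, the sketch below loads a Nucleotide Transformer checkpoint from Hugging Face with the transformers library and mean-pools the final hidden layer into one embedding per input sequence. The checkpoint identifier and the pooling choice are illustrative assumptions rather than the procedure from the paper, and some checkpoints may additionally require trust_remote_code=True.

```python
# Hedged sketch: extract per-sequence embeddings from a Nucleotide Transformer checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative checkpoint id; see https://huggingface.co/InstaDeepAI for the released models.
model_name = "InstaDeepAI/nucleotide-transformer-500m-human-ref"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)  # some checkpoints may need trust_remote_code=True
model.eval()

sequences = ["ATTCCGATTCCGATTCCG", "ATTTCTCTCTCTCTCTGAGATCGATCGATCGAT"]
inputs = tokenizer(sequences, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Mean-pool the final hidden layer over non-padded positions to get one embedding per sequence.
hidden = outputs.hidden_states[-1]                      # (batch, tokens, dim)
mask = inputs["attention_mask"].unsqueeze(-1).float()   # (batch, tokens, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (num_sequences, embedding_dim)
```

These pooled embeddings can then serve as features for downstream classifiers or regressors, as in the example notebooks referenced above.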
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
- Systematic comparison of our model to five different pre-trained models: DNABERT-1, DNABERT-2, HyenaDNA (1kb and 32kb) and Enformer.
- Two additional controls for our fine-tuning approach: probing from raw tokens and fine-tuning from a randomly initialized checkpoint (a minimal probing sketch follows this list).
- Additional benchmark datasets and comparison with baselines such as SpliceAI.
- Additional interpretation analyses of pre-trained and fine-tuned Nucleotide Transformer models.
- Development of a new version of the Nucleotide Transformer models (NT-v2) that achieves the same performance while being ten times smaller and having a context length twice as large, at 12kb.
- Example notebooks to apply these models to any downstream task, available on Hugging Face.
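The sketch below illustrates one way such a probing control could be set up: a simple linear classifier trained on frozen model embeddings (extracted as in the earlier sketch) versus one trained on raw nucleotide tokens. The arrays, dimensions, and labels here are random placeholders for illustration only, not data from the study.

```python
# Hedged sketch of a linear probing control: frozen embeddings vs. raw tokens.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
n_seqs, emb_dim, seq_len = 1000, 1280, 600

X_emb = rng.normal(size=(n_seqs, emb_dim))           # frozen NT embeddings (placeholder)
X_raw = rng.integers(0, 4, size=(n_seqs, seq_len))   # raw nucleotide tokens (placeholder)
y = rng.integers(0, 2, size=n_seqs)                  # binary labels, e.g. enhancer vs. non-enhancer

for name, X in [("frozen embeddings", X_emb), ("raw tokens", X_raw)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, "MCC:", matthews_corrcoef(y_te, clf.predict(X_te)))
```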
2 https://huggingface.co/spaces/InstaDeepAI/nucleotide_transformer_benchmark
4 http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/1000G_2504_high_coverage/working/20201028_3202_phased/20201028_3202_phased/
6 https://jax.readthedocs.io/en/latest/_autosummary/jax.pmap.html
9 https://git.unistra.fr/nscalzitti/spliceator/-/tree/master/Data/Datasets
10 http://deepsea.princeton.edu/media/code/deepsea_train_bundle.v0.9.tar.gz
16 https://api.wenglab.org/screen_v13/fdownloads/GRCh38-ccREs.bed