Abstract
Pre-trained models have been transformative in natural language processing, computer vision, and now protein sequences by enabling high accuracy with few training examples. We show how to use pre-trained sequence models in Bayesian optimization to design new protein sequences with minimal labels (i.e., few experiments). Pre-trained models provide good predictive accuracy in low-data regimes, and Bayesian optimization guides the choice of which sequences to test next. Pre-trained sequence models also obviate the common requirement of a finite candidate pool: any sequence can be considered. We show that significantly fewer labeled sequences are required across many sequence design tasks, including creating novel peptide inhibitors with AlphaFold. This work should enable calibrated predictions and iterative design in the low-data regime (1–50 examples).
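To make the setup concrete, below is a minimal sketch of one Bayesian optimization round in the spirit of the abstract: sequence features stand in for pre-trained embeddings, a Gaussian process surrogate is fit on the few labeled sequences, and an upper-confidence-bound acquisition scores freshly proposed candidates (no finite pool). The featurizer, the toy fitness function, and all names here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def embed(sequence: str, dim: int = 64, seed: int = 0) -> np.ndarray:
    """Placeholder featurizer: mean of fixed random per-residue vectors.
    Assumption: in the paper's setting this would be embeddings from a
    pre-trained protein sequence model."""
    rng = np.random.default_rng(seed)
    table = {aa: rng.normal(size=dim) for aa in AMINO_ACIDS}
    return np.mean([table[aa] for aa in sequence], axis=0)

def propose_candidates(n: int, length: int, rng) -> list[str]:
    """Sample arbitrary sequences -- no finite candidate pool is required."""
    return ["".join(rng.choice(list(AMINO_ACIDS), size=length)) for _ in range(n)]

def bayes_opt_round(labeled_seqs, labels, n_candidates=500, length=12,
                    beta=2.0, seed=0) -> str:
    """One round: fit a GP surrogate on embeddings, then return the candidate
    maximizing an upper-confidence-bound (UCB) acquisition."""
    rng = np.random.default_rng(seed)
    X = np.stack([embed(s) for s in labeled_seqs])
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True)
    gp.fit(X, labels)
    candidates = propose_candidates(n_candidates, length, rng)
    Xc = np.stack([embed(s) for s in candidates])
    mu, sigma = gp.predict(Xc, return_std=True)
    acq = mu + beta * sigma  # UCB: trade off predicted fitness vs. uncertainty
    return candidates[int(np.argmax(acq))]

# Usage: start from a handful of labeled sequences (the 1-50 example regime),
# then query the "experiment" (here a toy stand-in for an assay) iteratively.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    toy_fitness = lambda s: s.count("W") - s.count("P")  # hypothetical assay
    seqs = propose_candidates(5, 12, rng)
    ys = [toy_fitness(s) for s in seqs]
    for _ in range(10):
        nxt = bayes_opt_round(seqs, ys, seed=int(rng.integers(1 << 30)))
        seqs.append(nxt)
        ys.append(toy_fitness(nxt))
    print("best sequence:", seqs[int(np.argmax(ys))], "fitness:", max(ys))
```

The sketch keeps the surrogate and acquisition deliberately simple; the key point it illustrates is that candidates are generated on the fly rather than drawn from a fixed library, so any sequence can be scored and proposed.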
Competing Interest Statement
The authors have declared no competing interest.