TY - JOUR
T1 - Convolutions are competitive with transformers for protein sequence pretraining
JF - bioRxiv
DO - 10.1101/2022.05.19.492714
SP - 2022.05.19.492714
AU - Kevin K. Yang
AU - Alex X. Lu
AU - Nicolo Fusi
Y1 - 2022/01/01
UR - http://biorxiv.org/content/early/2022/11/21/2022.05.19.492714.abstract
N2 - Pretrained protein sequence language models largely rely on the transformer architecture. However, transformer run-time and memory requirements scale quadratically with sequence length. We investigate the potential of a CNN-based architecture for protein sequence masked language model pretraining and subsequent finetuning. CNNs are competitive on the pretraining task with transformers across several orders of magnitude in parameter size while scaling linearly with sequence length. More importantly, CNNs are competitive with and occasionally superior to transformers across an extensive set of downstream evaluations, including structure prediction, zero-shot mutation effect prediction, and out-of-domain generalization. We also demonstrate strong performance on sequences longer than the positional embeddings allowed in the current state-of-the-art transformer protein masked language models. Finally, we close with a call to disentangle the effects of pretraining task and model architecture when studying pretrained protein sequence models.
ER -