TY  - JOUR
T1  - Interpreting Deep Neural Networks Beyond Attribution Methods: Quantifying Global Importance of Genomic Features
JF  - bioRxiv
DO  - 10.1101/2020.02.19.956896
SP  - 2020.02.19.956896
AU  - Koo, Peter K.
AU  - Ploenzke, Matt
Y1  - 2020/02/20
UR  - http://biorxiv.org/content/early/2020/02/20/2020.02.19.956896.abstract
N2  - Despite deep neural networks (DNNs) having found great success at improving performance on various prediction tasks in computational genomics, it remains difficult to understand why they make any given prediction. In genomics, the main approaches to interpret a high-performing DNN are to visualize learned representations via weight visualizations and attribution methods. While these methods can be informative, each has strong limitations. For instance, attribution methods only uncover the independent contribution of single nucleotide variants in a given sequence. Here we discuss and argue for global importance analysis, which can quantify population-level importance of putative features and their interactions learned by a DNN. We highlight recent work that has benefited from this interpretability approach and then discuss connections between global importance analysis and causality.
ER  - 