Abstract
Recent developments in deep neural network methods have transformed how high-content cellular microscopy images are analyzed. Nonetheless, identifying cellular phenotypic changes caused by chemical or genetic treatments, and elucidating the relationships among treatments in an unsupervised manner, remain challenging due to the large data volume, high phenotypic complexity, and the presence of a priori unknown phenotypes. Here we benchmarked five deep neural network methods and two feature-engineering methods on a well-characterized public data set. In contrast to previous benchmarking efforts, manual annotations were not provided to the methods but were instead used afterwards as evaluation criteria. Each of the seven methods performed feature extraction or representation learning from cellular images, and all were evaluated consistently on downstream phenotype prediction and clustering tasks. We identified the strengths of individual methods across evaluation metrics, and further examined the biological concepts captured by features automatically learned by deep neural networks.
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
mark.bray{at}novartis.com, eric.durand{at}novartis.com, jian.fang{at}novartis.com, daniela.gabriel{at}novartis.com, rens.janssens{at}novartis.com, ioannis.moutsatsos{at}novartis.com, stephan.spiegel{at}novartis.com, adeweck{at}ccia.org.au, xianzhang{at}gmail.com
1 Authors collaborated on the project as a team; names are ordered alphabetically.