PT - JOURNAL ARTICLE
AU - Y.L. Wang
AU - Y.-C. Lin
TI - Traction Force Microscopy by Deep Learning
AID - 10.1101/2020.05.20.107128
DP - 2020 Jan 01
TA - bioRxiv
PG - 2020.05.20.107128
4099 - http://biorxiv.org/content/early/2020/05/22/2020.05.20.107128.short
4100 - http://biorxiv.org/content/early/2020/05/22/2020.05.20.107128.full
AB - Cells interact mechanically with their surroundings by exerting forces and sensing forces or force-induced displacements. Traction force microscopy (TFM), designed to map cell-generated forces or stresses, represents an important tool that has powered the rapid advances in mechanobiology. However, to solve the ill-posed mathematical problem, its implementation has involved regularization and the associated compromises in accuracy and resolution. Here we applied neural network-based deep learning as a novel approach for TFM. We modified a network for processing images to process vector fields of stress and strain. Furthermore, we adapted a mathematical model for cell migration to generate large sets of simulated stresses and strains for training the network. We found that deep learning-based TFM yielded results qualitatively similar to those from conventional methods, but with higher accuracy and resolution. The speed and performance of deep learning TFM make it an appealing alternative to conventional methods for characterizing mechanical interactions between cells and the environment.
Statement of Significance: Traction Force Microscopy has served as a fundamental driving force for mechanobiology. However, its nature as an ill-posed inverse problem has posed serious challenges for conventional mathematical approaches. The present study, facilitated by large sets of simulated stresses and strains, describes a novel approach using deep learning for the calculation of traction stress distribution. By adapting the UNet neural network for handling vector fields, we show that deep learning overcomes many of the limitations of conventional approaches, generating results with high speed, accuracy, and resolution.
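The abstract's training strategy rests on the fact that the forward problem of TFM (tractions to displacements) is well-posed, so unlimited (displacement, stress) pairs can be simulated for supervised learning. The sketch below illustrates that idea only in a simplified form: it paints Gaussian "adhesion" tractions on a grid and superposes the Boussinesq half-space solution for tangential point forces, rather than the authors' actual cell-migration model. All names (`boussinesq_green`, `simulate_pair`), the grid size, and the parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def boussinesq_green(x, y, E=10e3, nu=0.5):
    """2x2 Boussinesq Green's tensor (tangential point force on an
    elastic half-space surface). E in Pa; nu is Poisson's ratio."""
    r = np.sqrt(x**2 + y**2)
    r = np.maximum(r, 0.5)  # clip the singular core at the force center
    pref = (1 + nu) / (np.pi * E * r)
    Gxx = pref * ((1 - nu) + nu * x**2 / r**2)
    Gyy = pref * ((1 - nu) + nu * y**2 / r**2)
    Gxy = pref * (nu * x * y / r**2)
    return Gxx, Gxy, Gyy

def simulate_pair(n=32, n_adhesions=5, rng=None):
    """Return one (displacement, traction) training pair, each (n, n, 2).
    Each adhesion is a Gaussian traction blob; its displacement field is
    approximated by treating the adhesion as a point force at its center."""
    rng = np.random.default_rng(rng)
    xs = np.arange(n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    traction = np.zeros((n, n, 2))
    disp = np.zeros((n, n, 2))
    for _ in range(n_adhesions):
        cx, cy = rng.integers(4, n - 4, size=2)
        fx, fy = rng.normal(0.0, 100.0, size=2)  # traction amplitude (Pa), arbitrary scale
        sigma = 1.5
        blob = np.exp(-((X - cx)**2 + (Y - cy)**2) / (2 * sigma**2))
        traction[..., 0] += fx * blob
        traction[..., 1] += fy * blob
        # forward (well-posed) direction: superpose half-space displacements
        Gxx, Gxy, Gyy = boussinesq_green(X - cx, Y - cy)
        disp[..., 0] += Gxx * fx + Gxy * fy
        disp[..., 1] += Gxy * fx + Gyy * fy
    return disp, traction
```

In a pipeline like the one the abstract describes, pairs generated this way would be fed to a U-Net-style network with two input channels (displacement components) and two output channels (stress components), so the network learns the inverse map without explicit regularization.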