Abstract
Aboveground plant efficiency has improved significantly in recent years, and this improvement has led to a steady increase in global food production. Improving belowground plant efficiency has the potential to further increase food production. However, belowground plant roots are harder to study, owing to the inherent challenges of root phenotyping. Several tools for identifying root anatomical features in root cross-section images have been proposed. However, these tools are not fully automated and require significant human effort to produce accurate results. To address this limitation, we propose a fully automated approach, called Deep Learning for Root Anatomy (DL-RootAnatomy), for identifying anatomical traits in root cross-section images. Using the Faster Region-based Convolutional Neural Network (Faster R-CNN), the DL-RootAnatomy models detect objects such as the root, stele, and late metaxylem, and predict rectangular bounding boxes around them. The bounding boxes are subsequently used to estimate the root diameter, stele diameter, and late metaxylem number and average diameter. Experimental evaluation using standard object detection metrics, such as intersection-over-union and mean average precision, has shown that our models can accurately detect the root, stele, and late metaxylem objects. Furthermore, the measurements estimated from the predicted bounding boxes have very small root mean square error when compared with the corresponding ground-truth values, suggesting that DL-RootAnatomy can be used to accurately detect anatomical features. Finally, a comparison with existing approaches, which involve some degree of human interaction, has shown that the proposed approach is more accurate than those approaches on a subset of our data. A web server for performing root anatomy analysis using our pre-trained deep learning models is available at https://rootanatomy.org, together with a link to a GitHub repository that contains code for re-training or fine-tuning our network on other types of root cross-section images. The labeled images used for training and evaluating our models are also available from the GitHub repository.
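As a rough illustration of the bounding-box-to-trait step summarized above, the sketch below shows one way detections could be converted into the reported measurements. It is a minimal sketch under our own assumptions: the detection record format, the `pixels_per_mm` scale factor, and the approximation of a diameter by the mean of a box's width and height are hypothetical and are not details taken from the paper or its released code.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Detection:
    label: str      # assumed class names: "root", "stele", "late_metaxylem"
    x_min: float    # bounding-box corners in pixels
    y_min: float
    x_max: float
    y_max: float


def box_diameter(d: Detection, pixels_per_mm: float) -> float:
    """Approximate the diameter of a roughly circular object by the mean
    of its bounding-box width and height, converted from pixels to mm."""
    width = d.x_max - d.x_min
    height = d.y_max - d.y_min
    return (width + height) / 2.0 / pixels_per_mm


def summarize_traits(detections: List[Detection],
                     pixels_per_mm: float) -> dict:
    """Turn the detections from one cross-section image into the traits
    mentioned in the abstract: root diameter, stele diameter, and the
    late metaxylem count and average diameter."""
    roots = [d for d in detections if d.label == "root"]
    steles = [d for d in detections if d.label == "stele"]
    lmx = [d for d in detections if d.label == "late_metaxylem"]

    def first_diameter(items: List[Detection]) -> Optional[float]:
        return box_diameter(items[0], pixels_per_mm) if items else None

    return {
        "root_diameter_mm": first_diameter(roots),
        "stele_diameter_mm": first_diameter(steles),
        "late_metaxylem_count": len(lmx),
        "late_metaxylem_avg_diameter_mm": (
            sum(box_diameter(d, pixels_per_mm) for d in lmx) / len(lmx)
            if lmx else None
        ),
    }
```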