Abstract
A continuing challenge in quantitative cell biology is the accurate and robust 3D segmentation of structures of interest from fluorescence microscopy images in an automated, reproducible, and widely accessible manner for subsequent interpretable data analysis. We describe the Allen Cell and Structure Segmenter (Segmenter), a Python-based, open-source toolkit developed for 3D segmentation of cells and intracellular structures in fluorescence microscope images. This toolkit brings together classic image segmentation and iterative deep learning workflows, first to generate initial high-quality 3D intracellular structure segmentations and then to easily curate these results into the ground truths needed to build robust and accurate deep learning models. The toolkit takes advantage of the high-replicate 3D live-cell image data of over 30 endogenously fluorescently tagged human induced pluripotent stem cell (hiPSC) lines collected at the Allen Institute for Cell Science. Each cell line represents a different intracellular structure with one or more distinct localization patterns within undifferentiated hiPS cells and hiPSC-derived cardiomyocytes. The Segmenter consists of two complementary elements: a classic image segmentation workflow with a restricted set of algorithms and parameters, and an iterative deep learning segmentation workflow. We created a collection of 20 classic image segmentation workflows based on 20 distinct and representative intracellular structure localization patterns as a “lookup table” reference and starting point for users. The iterative deep learning workflow can take over when the classic segmentation workflow is insufficient. Two straightforward “human-in-the-loop” curation strategies convert a set of classic image segmentation workflow results into a set of 3D ground truth images for iterative model training without the need for manual painting in 3D. The deep learning model architectures used in this toolkit were designed and tested specifically for 3D fluorescence microscope images and implemented as readable scripts. The Segmenter thus leverages state-of-the-art computer vision algorithms in an accessible way to facilitate their application by the experimental biology researcher.
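To make the classic workflow concrete, the sketch below follows the toolkit's typical pre-processing, core segmentation, and post-processing pattern using the open-source aicssegmentation Python package; the choice of a 3D spot filter and all parameter values are illustrative assumptions and do not reproduce any of the 20 published lookup-table workflows.

# Minimal, illustrative sketch of a classic-workflow-style 3D segmentation
# (assumed parameters; not one of the 20 published lookup-table workflows).
import numpy as np
from skimage.morphology import remove_small_objects
from aicssegmentation.core.pre_processing_utils import (
    intensity_normalization,
    image_smoothing_gaussian_3d,
)
from aicssegmentation.core.seg_dot import dot_3d_wrapper


def segment_dot_like_structure(struct_img: np.ndarray) -> np.ndarray:
    """Segment a hypothetical dot-like structure from a single-channel 3D (ZYX) image."""
    # Pre-processing: suppress intensity outliers and rescale, then smooth in 3D.
    norm_img = intensity_normalization(struct_img, scaling_param=[1, 40])
    smooth_img = image_smoothing_gaussian_3d(norm_img, sigma=1)

    # Core segmentation: multi-scale 3D spot filter; each entry is [scale, cutoff].
    bw = dot_3d_wrapper(smooth_img, [[1, 0.03]])

    # Post-processing: drop objects smaller than a minimum voxel count.
    seg = remove_small_objects(bw > 0, min_size=4, connectivity=1)
    return seg.astype(np.uint8)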
We present two applications that demonstrate how we used the classic image segmentation and iterative deep learning workflows to solve more challenging 3D segmentation tasks. First, we introduce the ‘Training Assay’ approach, a new experimental-computational co-design concept for generating more biologically accurate segmentation ground truths. We combined the iterative deep learning workflow with three Training Assays to develop a robust, scalable cell and nuclear instance segmentation algorithm, which achieved accurate target segmentation for over 98% of individual cells and over 80% of entire fields of view. Second, we demonstrate how to extend the lamin B1 segmentation model built with the iterative deep learning workflow to obtain more biologically accurate lamin B1 segmentations by using multi-channel inputs and combining multiple ML models. The steps and workflows used to develop these algorithms are generalizable to other, similar segmentation challenges. More information, including tutorials and code repositories, is available at allencell.org/segmenter.
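As a purely illustrative sketch of the model-combination idea, and not the published lamin B1 algorithm, the snippet below merges the thresholded outputs of two hypothetical models (one assumed to be trained on the lamin B1 channel, one on a membrane-dye channel) so that each lamin shell is closed within its cell mask; the function name, inputs, and cutoffs are all assumptions.

# Illustrative only: one way to combine outputs of two hypothetical segmentation models.
# `lamin_prob` and `cell_prob` are assumed 3D (ZYX) probability maps in [0, 1], produced
# elsewhere by models trained on the lamin B1 and membrane-dye channels, respectively.
import numpy as np
from scipy.ndimage import binary_fill_holes


def merge_lamin_with_cell_mask(
    lamin_prob: np.ndarray,
    cell_prob: np.ndarray,
    lamin_cutoff: float = 0.5,
    cell_cutoff: float = 0.5,
) -> np.ndarray:
    """Close lamin B1 shells by filling them slice by slice, restricted to the cell mask."""
    lamin_bw = lamin_prob > lamin_cutoff
    cell_bw = cell_prob > cell_cutoff

    # Fill the interior of each shell per z-slice to tolerate small gaps in the 3D shell.
    filled = np.stack([binary_fill_holes(z_slice) for z_slice in lamin_bw])

    # Keep shell voxels plus filled interiors that lie inside a segmented cell.
    return np.logical_or(lamin_bw, np.logical_and(filled, cell_bw)).astype(np.uint8)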
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
Important Method Update:
1. Added the Training Assay approach.
2. Added the DNA dye- and membrane dye-based cell and nuclear segmentation algorithm, which generates instance segmentations of cells and nuclei/mitotic DNA from DNA dye and membrane dye images.
3. Added a new lamin B1 segmentation algorithm that produces more biologically accurate results.