Custom built scanner and simple image processing pipeline enables low-cost, high-throughput phenotyping of maize ears
=====================================================================================================================

* Cedar Warman
* John E Fowler

## Abstract

High-throughput phenotyping systems are becoming increasingly powerful, dramatically changing our ability to document, measure, and detect phenomena. Unfortunately, taking advantage of these trends can be difficult for scientists with few resources, particularly when studying nonstandard biological systems. Here, we describe a powerful, cost-effective combination of a custom-built imaging platform and open-source image processing pipeline. Our maize ear scanner was built with off-the-shelf parts for <$80. When combined with a cellphone or digital camera, videos of rotating maize ears were captured and digitally flattened into projections covering the entire surface of the ear. Segregating GFP and anthocyanin seed markers were clearly distinguishable in ear projections, allowing manual annotation using ImageJ. Using this method, statistically powerful transmission data can be collected for hundreds of maize ears, accelerating the phenotyping process.

## Introduction

High-throughput plant phenotyping is rapidly transforming crop improvement, disease management, and basic research (reviewed in Fahlgren et al., 2015; Mahlein, 2016; Tardieu et al., 2017). However, commercial phenotyping platforms remain out of reach for many laboratories, often requiring large initial investments of thousands or tens of thousands of dollars. While the cost of high-throughput sequencing has rapidly decreased, the cost of high-throughput phenotyping has remained high. New methods of low-cost, large-scale phenotyping are required to fully leverage the increasing availability of large datasets (e.g., genome sequences) and relevant quantitative statistical tools.
High-throughput phenotyping methods have been developed in several agricultural and model plant systems, including Arabidopsis and maize. Arabidopsis is well-suited to high-throughput phenotyping due to its small stature, rapid growth, and simple culture. Various systems have been created to measure Arabidopsis roots (Yazdanbakhsh and Fisahn, 2009; Slovak et al., 2014), rosettes (Arvidsson et al., 2011; Zhang et al., 2012; Awlia et al., 2016) and whole plants (Jiang et al., 2014). Most of these systems require robotic automation, which can drive up costs. Efforts to reduce costs rely on simple cameras and open-source image processing pipelines (Vasseur et al., 2018). Unlike Arabidopsis, maize plants are large, have a long growth cycle, and are typically grown seasonally outdoors. Because of these characteristics, maize is inherently more difficult to phenotype than Arabidopsis. However, as a consequence of its agricultural importance and utility as a model system, there has been substantial progress towards deploying maize phenotyping systems, both in the private (Choudhury et al., 2016) and academic (Miller et al., 2017) realms. As with Arabidopsis, several systems focus on phenotyping maize roots (Clark et al., 2013; Jiang et al., 2019) and above-ground vegetation (Chaivivatrakul et al., 2014; Junker et al., 2014; Choudhury et al., 2016; Zhang et al., 2017). These systems, too, largely depend on costly robotics and cameras. Among maize tissues, ears are another target of interest for high-throughput phenotyping. Ears, with the seeds they carry, contain information about the plant and its progeny. They are easily stored, and do not require phenotyping equipment to be in place in the field or greenhouse at specific times during the growing season.
Ears are a primary agricultural product of maize, which has led the majority of previous phenotyping efforts to focus on aspects of the ear that influence yield, such as ear size, row number, and seed dimensions (Liang et al., 2016; Miller et al., 2017; Makanza et al., 2018). These studies have used techniques ranging from expensive and specialized three-dimensional or line-scanning cameras (Wen et al., 2019; Liang et al., 2016) to relatively low-cost flatbed scanners and digital cameras (Miller et al., 2017; Makanza et al., 2018).

Beyond their agricultural importance, studying maize ears can answer fundamental questions about basic biology. The transmission of mutant genes can be easily tracked in maize seeds by taking advantage of a wide variety of visible endosperm markers (Neuffer et al., 1997; Li et al., 2013), which can be genetically linked to a mutant of interest (e.g. Arthur et al., 2003; Phillips and Evans, 2011; Bai et al., 2016; Huang et al., 2017). On the ear, seeds occur as an ordered array of progeny, which allows the transmission of mutant alleles to be tracked not only by individual cross, but within individual ears. The transmission of marker genes has thus far been quantified by hand, either by counting seeds on ears or after they have been removed. This approach has several limitations, among them a lack of a permanent record of the surface arrangement of seeds on the ear. The same disadvantages apply to most high-throughput seed phenotyping methods, which generally rely on seeds being removed from the ear before scanning and do not typically include marker information.

Here we address this missing aspect of high-throughput phenotyping in maize. Our rotational ear scanner and image processing pipeline together provide a cost-effective method for high-throughput ear phenotyping.
By taking advantage of the cylindrical form of the maize ear, flat projections can be produced that provide a digital record of the surface of the ear, which can then be quantified in a variety of ways to track seed markers. Limiting materials to easily acquired parts and a basic camera makes this approach accessible to most if not all labs.

## Results

### Design and construction of the maize ear scanner

To efficiently phenotype maize ears, we designed a simple, custom-built scanner (Ear Rotational Scanner, ERS, v1.0) centered around a rotating ear. To scan the entire surface of the roughly cylindrical ear, the ear is rotated 360° while a stationary camera records a video, which can then be processed into a cylindrical projection. Materials for constructing the scanner were limited to those that were widely available and cost-effective (Table 1). The frame of the scanner was built from dimensional lumber, with a movable mechanism built from drawer slides that enables a wide range of ear sizes to be accommodated (Figure 1A). A rotisserie motor spins the ear at a constant speed, which is then imaged with a standard digital camera or cell phone (Figure 1B). The scanning process takes approximately 1 minute per ear, including the insertion of the ear into the scanner and video capture. For scanning ears carrying an engineered GFP marker that is highly expressed in the endosperm (Li et al., 2013), ears were illuminated with a blue LED light with an orange filter placed in front of the camera.

Table 1. Materials and costs for scanner construction.

Figure 1. Efficient, cost-effective maize ear phenotyping with rotational scanner.
**(A)** Schematics of rotational ear scanner in closed position (left) and open position (right). Full construction diagrams are available in Supplemental File 1. **(B)** Image of scanner with ear in place. Camera is positioned in front of the ear as shown, with the ear centered in the frame. A video is captured as the ear spins through one full rotation, which is then processed to project the surface of the ear onto a single flat image.

### Processing videos into flat ear projections

The output of the scanner is a video of the rotating ear. This video could be directly quantified, but we found a ‘flat’ image projection most useful for visualizing the entire surface of the ear, as well as for quantifying the distribution of seed markers. To produce this projection, videos were first uploaded to a local computer and annotated with identifying metadata (Figure 2A). Videos were then transferred to a high-performance computing cluster to be processed for generation of the projections; while this video processing step is more efficient on a computing cluster, it can alternatively be completed on a local computer. After processing, the resulting flat images were transferred back to a local computer for assessment and quantification. Video processing consisted of three steps (Figure 2B). In the first, frames were extracted from the video into separate images using the command line utility FFmpeg. Next, images were cropped to the center horizontal row of pixels using the command line utility ImageMagick. Finally, all rows of pixels, one from each frame, were appended sequentially, resulting in the final image.

Figure 2. Processing videos into flat ear projections. **(A)** Image annotation and processing workflow.
Videos are annotated on a local computer, followed by generation of the projection (‘flat image’) via processing either on the same computer or on a high-performance computing cluster for speed improvements. Flat images are small in size, and can be quantified on a local computer. **(B)** The video flattening process begins by extracting individual frames using FFmpeg. After frames are extracted, each frame is cropped to the middle horizontal row of pixels using the command line utility ImageMagick. The resulting collection of pixel rows, one per frame, is then concatenated into a single image depicting the entire surface of the ear.

### Example images, manual quantification, and test case

Our scanner was tested using a variety of maize ears representing several widely used seed markers (Figure 3A). Both anthocyanin (*c1*) and fluorescent (*Ds-GFP*) seed markers were easily distinguishable in the final images, as were other markers such as *bt1*, *a2*, and *pr1*. Color and fluorescent seed markers were manually quantified on the digital projections using the FIJI distribution of ImageJ (Figure 3B). Using this approach, annotation of an entire ear could be completed in 5 to 10 minutes, depending on the size of the ear and the relative experience level of the annotator. In addition to producing total quantities of each seed marker, this process results in coordinates for each annotated seed, which can be further analyzed if desired. Manual annotations of scanner images in ImageJ were compared to manual counts of the seeds on the ear (Figure 3C). We observed a strong correlation between these two methods (R² > 0.999), validating our scanner method. To test the utility of the maize ear scanner, we scanned and quantified over 400 ears with marker-linked mutations in >50 genes.
With these methods, we were able to detect weak but significant transmission defects (∼45% transmission of a marker-linked mutation) for a number of mutant alleles, using both anthocyanin and GFP seed markers.

Figure 3. Examples of ear surface projections and quantification using ImageJ. **(A)** Representative ear projections demonstrating ease of visibility and tracking for several widely-used maize seed markers. From top to bottom: *c1*; *Ds-GFP*; *bt1/a2/pr1* (linked markers on chromosome 5). **(B)** Seed phenotypes were manually annotated on the ear image using the ImageJ Cell Counter plugin (detail top), allowing quantification of relative transmission of each marker. In this process, seed locations as well as marker identities are recorded in an output xml file, allowing for optional downstream analysis of seed distributions (bottom). **(C)** Comparison of manually counting seeds on ears vs manually counting seeds from images.

## Discussion

Maize ears encode a vast amount of information. In agriculture, they provide insights into the value of a crop through yield and seed quality. In basic research, they open a window to molecular biology through mutant phenotypes and the transmission of seed markers. For example, assessing transmission rates of marked mutations on the maize ear (with seed populations of up to 600 progeny from a single cross) can generate statistically powerful data, which can then provide biological evidence for gene function during the haploid gametophyte phase. Our goal was to develop a methodology to capture some of this information via digital imaging to facilitate downstream quantitative analyses.
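As a rough illustration of this statistical power, the departure of an observed transmission rate from the Mendelian 50% expectation can be assessed with an exact binomial test. The sketch below uses only the Python standard library; the specific statistical test used for the transmission data is not stated here, so this choice of test is an illustrative assumption:

```python
# Exact two-sided binomial test against the Mendelian null of 50%
# transmission. Illustrative sketch; the test choice is an assumption.
from math import comb

def binomial_two_sided_p(successes: int, n: int) -> float:
    """Two-sided exact p-value under Binomial(n, 0.5).

    The null distribution is symmetric, so the two-sided p-value is
    twice the probability of the smaller tail."""
    k = min(successes, n - successes)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 270 marker-positive seeds out of 600 (~45% transmission) is a
# significant departure from the expected 300 (p ≈ 0.016).
p = binomial_two_sided_p(270, 600)
print(f"p = {p:.4f}")
```

With smaller seed populations the same 45% rate would not reach significance, which is why the large per-ear progeny counts matter.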
Through capturing a video of a rotating ear, a flat image of the surface of the ear can be created that enables standardized, replicable phenotyping of seed marker distribution, as well as providing a permanent digital record of ears. Our scanner is fast, cost-effective, and capable of bringing digital image phenotyping to any lab interested in maize ears, dramatically scaling up the amount of quantitative data that can be feasibly generated.

While our scanner provides very useful phenotyping data, it has some notable limitations. Cylindrical projections are a convenient way of visualizing the entire surface of an ear in a single image. However, because maize ears are not perfect cylinders, the projections distort regions of the ear that are not cylindrical, typically the top and bottom, resulting in seeds that appear larger than those in the middle of the ear (Figure 3A). Because of these distortions, measuring qualities like seed and ear dimensions can be challenging. While approximate values for these metrics can be calculated, in the future more precise measurements could potentially use the source video as input to model the ear in three dimensions. In addition, curved ears become highly distorted when scanned using this method. Thus, we have limited the use of the projections to relatively straight, uniform-thickness ears.

With low cost as a primary goal, our scanner design has room for optimizations and improvements. Among these is better integration of the mechanisms for ear rotation, video capture, and processing. One improvement would be to drive both a configurable motor and a camera from a simple computer, such as a Raspberry Pi. Although these alterations would add cost and complexity to the process, the efficiency gains would likely offset these drawbacks.

Capturing video data from ears produces a lasting record of experiments.
These data can be used in a variety of ways, such as measuring patterns of seed distribution, quantifying empty space on the ear, and recording other phenotypes such as abnormal or aborted seeds. Ultimately, recording ear data future-proofs experiments, which may benefit from yet-undeveloped methods of quantification. One such future quantification method is automated seed counting. Hand annotation of seeds on ear projections using ImageJ is significantly faster than marking seeds on the ear, but remains a time-consuming and tedious process. However, the resulting data can be used to train machine-learning models to identify seeds. As these models are developed, they are likely to dramatically accelerate the phenotyping process.

## Materials and Methods

### Building the maize ear scanner

The maize ear scanner was built from dimensional lumber and widely available parts. For detailed plans and three-dimensional models, see Supplemental File 1. The base of the scanner was built from a nominal 2×12 (38×286 mm) fir board, while the frame of the scanner was built from nominal 2×2 (38×38 mm) cedar boards. Boards were fastened together with screws. Strict adherence to materials and exact dimensions of the scanner frame is not necessary, as long as the scanner is structurally sound and large enough to accommodate ears of varying sizes.

To rotate the maize ear, a standard rotisserie motor (Minostar universal grill electric replacement rotisserie motor, 120 volt 4 watt) was attached to the base of the scanner by way of a wood enclosure. Rotisserie motors are widely available and require no specialized knowledge of electronics to use. If desired, the efficiency of the scanner could be improved by using a customized motor. The rotation speed of the rotisserie motor used in this scanner (∼2 RPM) was slightly slower than optimal. A faster rotation speed could improve the overall scanning time, which is ultimately dependent on the frame rate of the camera.
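The trade-off between motor speed and camera frame rate can be made concrete: each frame contributes one pixel row to the flattened projection, so the number of frames per rotation determines both the scan time and the projection height. A minimal sketch (the ∼2 RPM motor speed is from the text; the 30 fps frame rate is an assumed example value):

```python
# Relationship between motor speed, camera frame rate, and projection
# size. Only the ~2 RPM motor speed comes from the text; 30 fps is an
# assumed example camera frame rate.

def frames_per_rotation(rpm: float, fps: float) -> int:
    """Frames captured during one full rotation of the ear.

    Each frame contributes one pixel row to the flattened projection,
    so this is also the projection's extent along the rotation axis."""
    return round(fps * 60 / rpm)

def seconds_per_scan(rpm: float) -> float:
    """Time for one full rotation, i.e. the minimum video length per ear."""
    return 60 / rpm

print(frames_per_rotation(rpm=2, fps=30))  # -> 900 pixel rows
print(seconds_per_scan(rpm=2))             # -> 30.0 seconds
```

A 30-second rotation is consistent with the reported ∼1 minute per ear once ear handling is included; doubling the motor speed would halve the rotation time but also halve the projection resolution unless the frame rate increases to match.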
The lower portion of the ear was fastened to the rotisserie motor using a 5/16” (8 mm) steel rod, which can be easily removed from the motor when switching ears. The top of the steel rod was ground to a flattened point with a bench grinder to allow it to be inserted into the pith at the center of the base of the ear. The top of the ear was held in place with an adjustable assembly constructed from a nominal 2×4 board (38×89 mm) fastened to drawer slides (Liberty D80618C-ZP-W 18-inch ball bearing drawer slides) on either side of the scanner frame (Supplemental File 1). In the center of the 2×4, facing down towards the top of the ear, is a steel pin mounted on a pillow block bearing (Letool 12mm mounted housing self-aligning pillow flange block bearing). The steel pin (12 mm) was sharpened to a point to penetrate the top of the ear as it is lowered, temporarily holding it in place as the ear is rotated during scanning. Because the pin can be moved up and down on the drawer slides, a variety of ear sizes can be accommodated in the scanner.

For full spectrum visible light images, ambient lighting was used. To capture GFP fluorescence, a blue light (Clare Chemical HL34T) was used to illuminate the ear. An orange filter (Neewer camera flash color gel kit) was placed in front of the camera lens to partially filter out non-GFP wavelengths. More cost-effective blue light illumination is possible (e.g. Wayllshine blue LED flashlight, $9.00); however, we found achieving sufficient brightness to be challenging without the higher-power LED light.

### Workflow for scanning an ear

The scanning process begins by trimming the top and bottom of the ear to expose the central pith. The bottom pin is then inserted into the bottom of the ear, after which the pin with ear attached is placed in the rotisserie motor. The top of the ear is secured by lowering the top pin into the pith at the top of the ear.
After turning on the rotisserie motor, a video is captured by the camera that encompasses at least one complete rotation of the ear. The ear can then be removed from the scanner, and the next ear scanned. A detailed, illustrated protocol for scanning ears with the maize ear scanner, using ears with GFP seed markers and a Sony DSCWX220, can be found in Supplemental File 2.

### Creating flat images

After videos are imported to a computer, they are processed into flat images. Frames are first extracted from videos to PNG-formatted images using the command line utility FFmpeg ([https://ffmpeg.org](https://ffmpeg.org)) with default options (`ffmpeg -i ./"$file" -threads 4 ./maize_processing_folder/output_%04d.png`). These images are then cropped to the central row of pixels using the command line utility ImageMagick ([https://imagemagick.org/](https://imagemagick.org/)) (`mogrify -verbose -crop 1920x1+0+540 +repage ./maize_processing_folder/*.png`). The collection of single-pixel-row images is then appended in sequential order (`convert -verbose -append +repage ./maize_processing_folder/*.png ./"$name.png"`). Finally, the image is rotated and cropped (`mogrify -rotate "180" +repage ./"$name.png"; mogrify -crop 1920x746+0+40 +repage ./"$name.png"`). We chose the convention of a horizontal flattened image with the top of the ear to the right and the bottom of the ear to the left. Because our videos were captured vertically, a rotation was required after appending the individual frames. Depending on the user's video capture orientation and desired final image orientation, some modification may be necessary.

The dimensions of the final crop of the image depend both on the rotation speed and the video frame rate. It is not necessary to capture exactly one rotation, as long as the video captures at least one full rotation. If the ears rotate at a consistent speed, one full rotation will always correspond to the same number of frames.
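The flattening logic itself, independent of the specific FFmpeg/ImageMagick commands, can be sketched in pure Python. Here frames are stand-in 2-D lists of pixel values rather than extracted PNG files, and the function name is illustrative:

```python
# Pure-Python sketch of the flattening step: take the middle horizontal
# pixel row of every frame and stack the rows in frame order, as the
# mogrify -crop / convert -append pipeline does with PNG files.

def flatten_frames(frames, frames_per_rotation):
    """Build the ear projection from a sequence of video frames,
    cropping to exactly one full rotation."""
    rows = [frame[len(frame) // 2] for frame in frames]  # center row of each frame
    return rows[:frames_per_rotation]                    # one pixel row per frame

# Toy video: 5 frames, each 3 rows tall; one rotation spans 4 frames,
# so the redundant fifth frame is cropped away.
frames = [[[0, 0], [i, i], [9, 9]] for i in range(5)]
projection = flatten_frames(frames, frames_per_rotation=4)
print(projection)  # -> [[0, 0], [1, 1], [2, 2], [3, 3]]
```
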
Because each frame becomes one pixel in height, the image can be cropped to a height corresponding to the number of frames that encompass one rotation. Because rotation speeds and frame rates vary, it is recommended that the user record the dimensions of an ear and compare those dimensions to the final output to ensure that there is no distortion. The final crop can be adjusted to address any possible distortion. For very high frame rates, the FFmpeg frame extraction rate can also be adjusted using the -vf option and setting the extracted frames per second (e.g. to extract 10 frames per second: `-vf fps=10`). A GitHub repository containing the script used to create flat images from videos is located at [https://github.com/fowler-lab-osu/flatten\_all\_videos\_in\_pwd](https://github.com/fowler-lab-osu/flatten_all_videos_in_pwd).

### Quantifying seeds using flat images

Seeds were quantified from flat ear images using the Cell Counter plugin of the FIJI distribution of ImageJ (Schindelin et al., 2012). Counter types were assigned to correspond to different seed markers, after which seeds on ear images were located and annotated by hand. The Cell Counter plugin exports results in an xml file, which contains the coordinates and marker type of every annotated seed. This file can be processed to create a map of seed locations on the ear. A detailed protocol describing the quantification process can be found in Supplemental File 3.

## Supporting information

Supplemental File 1 [[supplements/780650_file02.zip]](pending:yes)

Supplemental File 2 [[supplements/780650_file03.pdf]](pending:yes)

Supplemental File 3 [[supplements/780650_file04.pdf]](pending:yes)

## Supplemental Data

**Supplemental File 1.** Rotational ear scanner construction plans.

**Supplemental File 2.** Scanning fluorescent ears with the rotational ear scanner protocol.

**Supplemental File 3.** Quantifying seeds in flat images using ImageJ protocol.

## Acknowledgements

We thank O. Childress, H.
Fowler, B. Galardi, B. Hamilton, R. Hartman, and C. Lambert for their seed counting assistance. In addition, we thank O. Childress for protocol feedback and J. Preece for useful discussions. This work was supported by NSF grants IOS-1340050 and IOS-1832186, as well as funds from the OSU Department of Botany and Plant Pathology. We also thank the OSU Center for Genome Research and Biocomputing for providing computational infrastructure enabling this project.

## Footnotes

* Revised acknowledgement section.
* Received September 23, 2019.
* Revision received September 26, 2019.
* Accepted September 27, 2019.
* © 2019, Posted by Cold Spring Harbor Laboratory

This pre-print is available under a Creative Commons License (Attribution-NonCommercial 4.0 International), CC BY-NC 4.0, as described at [http://creativecommons.org/licenses/by-nc/4.0/](http://creativecommons.org/licenses/by-nc/4.0/)

## Works Cited

1. Arthur, K.M., Vejlupkova, Z., Meeley, R.B., and Fowler, J.E. (2003). Maize ROP2 GTPase provides a competitive advantage to the male gametophyte. Genetics 165: 2137–2151.
2. Arvidsson, S., Pérez-Rodríguez, P., and Mueller-Roeber, B. (2011). A growth phenotyping pipeline for Arabidopsis thaliana integrating image analysis and rosette area modeling for robust quantification of genotype effects. New Phytol. 191: 895–907.
3. Awlia, M., Nigro, A., Fajkus, J., Schmoeckel, S.M., Negrão, S., Santelia, D., Trtílek, M., Tester, M., Julkowska, M.M., and Panzarová, K. (2016). High-Throughput Non-destructive Phenotyping of Traits that Contribute to Salinity Tolerance in Arabidopsis thaliana. Front. Plant Sci. 7: 1414.
4. Bai, F., Daliberti, M., Bagadion, A., Xu, M., Li, Y., Baier, J., Tseung, C.-W., Evans, M.M.S., and Settles, A.M. (2016). Parent-of-Origin-Effect rough endosperm Mutants in Maize. Genetics 204: 221–231.
5. Chaivivatrakul, S., Tang, L., Dailey, M.N., and Nakarmi, A.D. (2014). Automatic morphological trait characterization for corn plants via 3D holographic reconstruction. Comput. Electron. Agric. 109: 109–123.
6. Choudhury, S.D., Stoerger, V., Samal, A., Schnable, J.C., Liang, Z., and Yu, J.-G. (2016). Automated vegetative stage phenotyping analysis of maize plants using visible light images. In KDD workshop on data science for food, energy and water, San Francisco, California, USA.
7. Clark, R.T., Famoso, A.N., Zhao, K., Shaff, J.E., Craft, E.J., Bustamante, C.D., McCouch, S.R., Aneshansley, D.J., and Kochian, L.V. (2013). High-throughput two-dimensional root system phenotyping platform facilitates genetic analysis of root growth and development.
Plant Cell Environ. 36: 454–466.
8. Fahlgren, N., Gehan, M.A., and Baxter, I. (2015). Lights, camera, action: high-throughput plant phenotyping is ready for a close-up. Curr. Opin. Plant Biol. 24: 93–99.
9. Huang, J.T., Wang, Q., Park, W., Feng, Y., Kumar, D., Meeley, R., and Dooner, H.K. (2017). Competitive Ability of Maize Pollen Grains Requires Paralogous Serine Threonine Protein Kinases STK1 and STK2. Genetics 207: 1361–1370.
10. Jiang, H., Xu, Z., Aluru, M.R., and Dong, L. (2014). Plant chip for high-throughput phenotyping of Arabidopsis. Lab Chip 14: 1281–1293.
11. Jiang, N., Floro, E., Bray, A.L., Laws, B., Duncan, K.E., and Topp, C.N. (2019). Three-Dimensional Time-Lapse Analysis Reveals Multiscale Relationships in Maize Root Systems with Contrasting Architectures. Plant Cell 31: 1708–1722.
12. Junker, A., Muraya, M.M., Weigelt-Fischer, K., Arana-Ceballos, F., Klukas, C., Melchinger, A.E., Meyer, R.C., Riewe, D., and Altmann, T. (2014). Optimizing experimental procedures for quantitative evaluation of crop plant performance in high throughput phenotyping systems. Front. Plant Sci. 5: 770.
13. Liang, X., Wang, K., Huang, C., Zhang, X., Yan, J., and Yang, W. (2016). A high-throughput maize kernel traits scorer based on line-scan imaging. Measurement 90: 453–460.
14. Li, Y., Segal, G., Wang, Q., and Dooner, H.K. (2013). Gene Tagging with Engineered Ds Elements in Maize. In Plant Transposable Elements: Methods and Protocols, T. Peterson, ed (Humana Press: Totowa, NJ), pp. 83–99.
15. Mahlein, A.-K. (2016). Plant Disease Detection by Imaging Sensors – Parallels and Specific Demands for Precision Agriculture and Plant Phenotyping. Plant Dis. 100: 241–251.
16. Makanza, R., Zaman-Allah, M., Cairns, J.E., Eyre, J., Burgueño, J., Pacheco, Á., Diepenbrock, C., Magorokosho, C., Tarekegne, A., Olsen, M., and Prasanna, B.M. (2018). High-throughput method for ear phenotyping and kernel weight estimation in maize using ear digital imaging. Plant Methods 14: 49.
17. Miller, N.D., Haase, N.J., Lee, J., Kaeppler, S.M., de Leon, N., and Spalding, E.P. (2017). A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images. Plant J. 89: 169–178.
18. Neuffer, M.G., Coe, E.H., and Wessler, S.R. (1997). Mutants of maize (Cold Spring Harbor Laboratory Press).
19.
Phillips, A.R. and Evans, M.M.S. (2011). Analysis of stunter1, a maize mutant with reduced gametophyte size and maternal effects on seed development. Genetics 187: 1085–1097.
20. Schindelin, J. et al. (2012). Fiji: an open-source platform for biological-image analysis. Nat. Methods 9: 676–682.
21. Slovak, R., Göschl, C., Su, X., Shimotani, K., Shiina, T., and Busch, W. (2014). A Scalable Open-Source Pipeline for Large-Scale Root Phenotyping of Arabidopsis. Plant Cell 26: 2390–2403.
22. Tardieu, F., Cabrera-Bosquet, L., Pridmore, T., and Bennett, M. (2017). Plant Phenomics, From Sensors to Knowledge. Curr. Biol. 27: R770–R783.
23. Vasseur, F., Bresson, J., Wang, G., Schwab, R., and Weigel, D. (2018). Image-based methods for phenotyping growth dynamics and fitness components in Arabidopsis thaliana. Plant Methods 14: 63.
24.
Wen, W., Guo, X., Lu, X., Wang, Y., and Yu, Z. (2019). Multi-scale 3D Data Acquisition of Maize. In Computer and Computing Technologies in Agriculture XI (Springer International Publishing), pp. 108–115.
25. Yazdanbakhsh, N. and Fisahn, J. (2009). High throughput phenotyping of root growth dynamics, lateral root formation, root architecture and root hair development enabled by PlaRoM. Funct. Plant Biol. 36: 938–946.
26. Zhang, X. et al. (2017). High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth. Plant Physiol. 173: 1554–1564.
27. Zhang, X., Hause, R.J., Jr., and Borevitz, J.O. (2012). Natural Genetic Variation for Growth and Development Revealed by High-Throughput Phenotyping in Arabidopsis thaliana. G3 2: 29–34.