Using cameras for precise measurement of two-dimensional plant features: CASS

Abstract

Images are used frequently in plant phenotyping to capture measurements. This chapter offers a repeatable method for capturing two-dimensional measurements of plant parts in field or laboratory settings using a variety of camera styles (cellular phone, DSLR), with the addition of a printed calibration pattern. The method is based on calibrating the camera using information from the image's EXIF tags, as well as visual information from the pattern. Code is provided to implement the method, as well as a dataset for testing. We include steps to verify protocol correctness by imaging an artifact. The use of this protocol for two-dimensional plant phenotyping will allow data capture from different cameras and environments, with comparison on the same physical scale. We abbreviate this method as CASS, for CAmera aS Scanner.

1 Introduction

Images are used with increasing frequency in plant phenotyping for a variety of reasons. One reason is the ability to remotely capture data without disturbing the plant material, while another is the promise of high-throughput phenotyping via image processing pipelines such as those enabled by PlantCV [1].

However, to acquire precise data suitable for measurements of two-dimensional objects, the prevailing method in the community is to use a flatbed scanner. Shape analysis of leaves has used scanned images for apple, grapevine, Claytonia L., and a mixture of species [2, 3, 4, 5]¹. Scanners have also been used to analyze the shape of pansy petals [6] and Vitis vinifera L. seeds [7].

Cameras have been used to phenotype a range of plant structures and sizes, such as cranberry fruit shape and size [8] and root system architecture [9]. In both of these works, a disk of known diameter is added to the scene for scaling purposes.

1.1 Camera calibration

Figure 1: An aruco calibration pattern. This particular example has been printed on aluminum, so it can be cleaned during experiments, which is convenient in plant research.

The protocol in this paper transforms images acquired from a standard consumer camera such that measurements in pixels are representative of a planar scene. What this means in more detail is that we have emulated a flatbed scanner with a consumer camera; angles between lines are preserved, as are distance ratios. Physical measurements can be recovered from image measurements by dividing by the number of pixels per millimeter, similar to flatbed scanners. We abbreviate this method as CASS, for CAmera aS Scanner, using the tool from [10].

This method is needed because measurements of two-dimensional objects, when made in the image space of camera-acquired images, suffer diminished accuracy from physical perturbations. A small movement of the camera up or down will give the erroneous impression that an object is larger or smaller in terms of pixels. Image pixels are also subject to radial distortion, as well as the projective geometry that maps three-dimensional objects onto a two-dimensional image. In other words, pixels in one portion of the image may not represent the same physical dimensions as pixels in another portion.

The method at the center of this protocol makes use of established camera calibration procedures to mitigate the problems of the preceding paragraph. Camera calibration is the estimation of parameters that relate three coordinate systems to each other: image, camera, and world. This chapter does not have the space to treat this topic in depth, but Hartley and Zisserman [11] is a good text on the subject. When camera calibration is completed, the coordinate systems have been defined relative to a standard, and the relationships of one coordinate system to another are known.

Calibration patterns are used to define coordinate systems relative to a standard. These may take many forms; in this work we use aruco patterns [12]. Laid out in a grid, the patterns define the X-Y plane of the world coordinate system, as in Figure 1. The camera captures an image of the pattern to aid in defining the world coordinate system with respect to the image and camera coordinate systems.

Usually, many views of the pattern are captured to solve an optimization problem to fully calibrate the camera [13]. However, the Structure from Motion (SfM) community [14, 15] began exploiting EXIF (Exchangeable image file format) data, a type of metadata written by most of today's consumer cameras. Within SfM, the camera's sensor size and some of the EXIF data are used to generate an initial solution for some of the camera calibration parameters. We have borrowed this practice for calibrating in the phenotyping context.
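To make this concrete, the following is a minimal sketch, in Python with Pillow, of this SfM-style initialization; it is not the released C++ code of [16]. The EXIF focal length in millimeters, combined with a user-supplied sensor width in millimeters (sensor size is not stored in EXIF and must be looked up), gives an initial estimate of the focal length in pixels. The sensor width constant below is a placeholder.

```python
# A sketch of EXIF-based initialization of the focal length in pixels.
# Assumes Pillow >= 8.2 for Exif.get_ifd(); SENSOR_WIDTH_MM is a
# placeholder the user must replace with their camera's sensor width.
from PIL import Image

SENSOR_WIDTH_MM = 4.8  # placeholder: look up your camera's sensor size

def initial_focal_length_px(image_path):
    """Estimate the focal length in pixels from the EXIF focal length (mm)."""
    img = Image.open(image_path)
    exif_ifd = img.getexif().get_ifd(0x8769)  # Exif sub-IFD
    focal_mm = float(exif_ifd[37386])         # tag 37386 = FocalLength
    # The sensor spans img.width pixels over SENSOR_WIDTH_MM millimeters,
    # so multiply the focal length by the sensor's pixels per millimeter.
    return focal_mm * img.width / SENSOR_WIDTH_MM
```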

1.2 Using CASS, camera as a scanner

Figure 2: Example of a grape cluster. This is a three-dimensional object, but we are interested in measuring aspects of the object where it meets the calibration pattern. Top row: input images of the same grape cluster; the left two images are from an Apple iPhone 6 (cellular phone camera), the right two from a Canon EOS 60D DSLR camera. Bottom row: results of applying the method to the images above, where every 10 pixels equal 1 millimeter. Full images are available in [16].

The original intent of this method, CASS, was to develop a high-throughput substitute for slow flatbed scanners. The steps in Section 3 give details for the user. Briefly, the code provided with this chapter, per image: 1) calibrates the camera, 2) computes the homography that transforms the current image to the X-Y grid of the world coordinate system, and 3) warps the current image to match the world coordinate system's X-Y grid.
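As an illustration of these steps, here is a minimal Python/OpenCV sketch. It assumes the pre-4.7 cv2.aruco API and an illustrative grid layout; the constants are assumptions, not the values used by camera-as-scanner [16], and the per-image calibration and lens-distortion removal of the full method are omitted for brevity.

```python
# A sketch of steps 2 and 3 (homography and warp) with OpenCV's aruco
# module. Grid layout constants are illustrative assumptions, not the
# values used by camera-as-scanner [16].
import cv2
import numpy as np

MARKER_MM, GAP_MM = 20.0, 4.0  # marker side length and spacing (assumed)
COLS = 6                       # markers per row on the printed grid (assumed)
PX_PER_MM = 10.0               # user-chosen output scale

def scan(image):
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_250)
    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
    if ids is None:
        raise ValueError("no aruco markers detected")
    img_pts, world_pts = [], []
    for quad, marker_id in zip(corners, ids.ravel()):
        # Known position of this marker's top-left corner on the board (mm).
        row, col = divmod(int(marker_id), COLS)
        x0, y0 = col * (MARKER_MM + GAP_MM), row * (MARKER_MM + GAP_MM)
        board = [(x0, y0), (x0 + MARKER_MM, y0),
                 (x0 + MARKER_MM, y0 + MARKER_MM), (x0, y0 + MARKER_MM)]
        img_pts.extend(quad.reshape(4, 2))
        world_pts.extend([(x * PX_PER_MM, y * PX_PER_MM) for x, y in board])
    # Map image pixels onto the world X-Y grid, scaled to PX_PER_MM.
    H, _ = cv2.findHomography(np.asarray(img_pts), np.asarray(world_pts),
                              cv2.RANSAC)
    out = int(COLS * (MARKER_MM + GAP_MM) * PX_PER_MM)  # adjust crop as needed
    return cv2.warpPerspective(image, H, (out, out))
```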

Figures 2 and 3 show input images and the output of CASS. From the output images, users can apply their own computer vision techniques to identify the objects of interest. Measurements in pixels can be transformed to physical units by dividing by the user-selected scaling factor.
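For example, under the 10 pixels per millimeter scaling used in Figures 2 and 3, a measurement converts as follows (the pixel value here is a made-up illustration, not a reported result):

```python
# Converting a measurement on the warped image to physical units.
length_px = 635.0                  # measured on the output image, e.g., in ImageJ
px_per_mm = 10.0                   # the user-selected scaling factor
length_mm = length_px / px_per_mm  # 63.5 mm
```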

It is important to note a strong assumption of this method: the object is assumed to be planar. In practical terms, the user should either use objects that are roughly planar, or consider the footprint of the object on the calibration pattern plane. CASS is not suitable for measuring non-planar objects, such as free-standing branches imaged with the calibration pattern behind them.

To verify that the protocol has been performed correctly, we also include instructions for verifying that the measurements are correct by way of an artifact.

2 Materials

The materials needed are:

  1. calibration pattern

  2. camera

  3. artifact

  4. code

The preparation of the calibration pattern is documented in step 1 of Section 3. The style of camera is not specific to CASS and should be chosen for the user's convenience. This method relies on the extraction of EXIF tags, so the camera should write EXIF data; at the time of this writing, this feature is common in consumer and cellular phone cameras. An artifact of known size is needed to check that the protocol has been implemented correctly. In our example, we chose a playing card, as shown in Figure 3. A natural choice for an artifact may be a ruler.

Figure 3: Left: Apple iPhone 6 camera images of a playing card of known size. Right: results of applying the method to the image at left, where every 10 pixels equal 1 millimeter. Black lines indicate measurements of the card in pixels; dividing each measurement by 10 gives the card's dimensions in millimeters as measured by this system.

The code and test datasets are provided in [16], which contains example data and two programs: aruco-pattern-write and camera-as-scanner. To prepare for the experiments, download the example data and install the code (C++ code as well as a Docker image are provided).

3 Methods

  1. Prepare the aruco calibration pattern. The pattern should be printed such that the x and y axes are equally scaled, and attached to a flat surface. A pattern is provided in [16], as well as code for generating a new pattern via aruco-pattern-write and instructions in its README. Considerations when generating a new pattern are given in Note 1. The option of printing patterns on metal is discussed in Note 2.

  2. Arrange the object to be measured on top of the aruco pattern printout. If segmentation of the object from the scene via an image processing technique is desired, we suggest placing a solid-colored paper or fabric between the object and the pattern. See Note 3 for more details.

  3. Acquire images of the object, including at minimum a one-tag border of aruco tags on all four sides of the image. The image should generally be in focus and acquired such that the camera body is parallel to the aruco pattern plane, though the alignment does not have to be exact. See Figures 2 and 3 for examples. If using a cell phone camera, do not zoom. Standard image formats are all acceptable, as long as EXIF tags are generated.

  4. Acquire an image of an artifact (such as a ruler) of known size with the same protocol as in step 3. We suggest that the artifact be rectangular to allow for ease of measurement.

  5. Prepare the image and format information to run camera-as-scanner. This step assumes that the code has been installed according to its instructions, mentioned in Section 2.

    1. The preparation instructions for running CASS on a group of images are given in the README of [16]. Create a test directory.

    2. Look up the camera's sensor size and convert it to millimeters. This information may be found in the manufacturer's documentation or online. Fill in the sensor size parameters in the appropriate file, as indicated in step 5a.

    3. Measure one of the squares of the printed aruco calibration pattern, in millimeters. Fill in the square length parameter of the appropriate file, as indicated in step 5a.

    4. Move the images of the objects and the image of the artifact to a directory named images within the test directory.

    5. Determine the number of pixels per millimeter, s, for the transformed images; s will be an argument for running the code. The choice of s depends on the size of the object, the size of the calibration pattern, and how large a result image one can tolerate. Suppose the aruco calibration pattern print is W mm x H mm; the result images will then be sW x sH pixels (see the sketch after this list). See Note 4 for suggestions. In Figures 2 and 3, s = 10 pixels per millimeter was chosen.

  6. Run the code camera-as-scanner with three, and optionally four, arguments: the test directory with the specified files and subdirectory from step 5, an output directory, and the number of pixels per millimeter, s. The arguments and their format can be found by running the program with the flag --help. The optional fourth argument controls whether intermediate results are written.

  7. Verify that the output is as expected by inspecting the warped image corresponding to the artifact. Measure the width of the artifact in an image manipulation program such as ImageJ, KolourPaint, the GIMP, Adobe Photoshop, etc.; call this measurement w_px, in pixels. Measure the width of the physical artifact in millimeters: w_mm. The following should hold, approximately: w_px = s * w_mm. If not, then recheck the steps; a small helper for this check is sketched after this list. The verification process was demonstrated with the playing card artifact in Figure 3.
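The arithmetic in steps 5e and 7 can be captured in a small helper, sketched here in Python. The function names, tolerance, and numeric values are illustrative, not part of camera-as-scanner [16].

```python
# Helpers for the checks in steps 5e and 7; names and values are illustrative.

def output_size_px(pattern_w_mm, pattern_h_mm, s):
    """Step 5e: predicted size of the warped image at s pixels per mm."""
    return round(pattern_w_mm * s), round(pattern_h_mm * s)

def artifact_check(w_px, w_mm, s, tolerance_px=2.0):
    """Step 7: the artifact's width in pixels should be s times its
    physical width in millimeters, within a small tolerance."""
    return abs(w_px - s * w_mm) <= tolerance_px

print(output_size_px(300.0, 200.0, 10.0))             # (3000, 2000)
print(artifact_check(w_px=889.0, w_mm=88.9, s=10.0))  # True
```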

4 Notes

  1. Note that the pattern can be scaled up or down to suit the data acquisition context, such as the example image provided with aruco-pattern-write. It is not necessary for the camera to view the whole pattern. The patterns are black and white, so they do not need to be printed in color. An illustrative sketch of generating such a grid appears after these notes.

  2. In our experiments, we have ordered prints of the patterns on aluminum. These have been convenient when working with fruit and plant material, because aluminum prints can be washed and cleaned. It is important that the aruco patterns not become occluded with dirt or stains.

  3. Concerning segmentation of the object from the scene of aruco pattern and solid-colored fabric or paper: we suggest choosing a fabric or paper color that contrasts with the target object. The fabric or paper should be cleaned or replaced if dirt or stains appear. Whatever color is chosen, the fabric or paper will not interfere with the detection of the aruco tags.

  4. As s increases, so will the image size. We suggest trying a range of values of s with a small number of images to get a sense of the resulting file size and the resolution of features of interest.
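As a sketch of how such a grid of markers can be rendered, here is an illustrative OpenCV alternative to the aruco-pattern-write program of [16]; the dictionary, layout constants, and output file name are assumptions, and the pre-4.7 cv2.aruco API is assumed.

```python
# An illustrative rendering of a grid of aruco markers with OpenCV's
# pre-4.7 aruco API; constants are assumptions, not the values in [16].
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_250)
side, gap, cols, rows = 200, 40, 6, 4  # pixel dimensions of the rendered file
canvas = np.full((rows * (side + gap) + gap, cols * (side + gap) + gap),
                 255, np.uint8)        # white background
for marker_id in range(cols * rows):
    r, c = divmod(marker_id, cols)
    y, x = gap + r * (side + gap), gap + c * (side + gap)
    canvas[y:y + side, x:x + side] = cv2.aruco.drawMarker(dictionary,
                                                          marker_id, side)
cv2.imwrite("aruco_grid.png", canvas)  # print with x and y equally scaled
```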

Acknowledgments

We acknowledge the support of USDA-NIFA Specialty Crops Research Initiative, VitisGen2 Project (award number 2017-51181-26829).

Footnotes

  1. Note that not all of the data in [5] is from a scanner.

References

  1. N. Fahlgren, M. Feldman, M. Gehan, M. Wilson, C. Shyu, D. Bryant, S. Hill, C. McEntee, S. Warnasooriya, I. Kumar, T. Ficor, S. Turnipseed, K. Gilbert, T. Brutnell, J. Carrington, T. Mockler, and I. Baxter, “A Versatile Phenotyping System and Analytics Platform Reveals Diverse Temporal Responses to Water Availability in Setaria,” Molecular Plant, vol. 8, no. 10, pp. 1520–1535, Oct. 2015. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S1674205215002683
  2. Z. Migicovsky, M. Li, D. H. Chitwood, and S. Myles, “Morphometrics Reveals Complex and Heritable Apple Leaf Shapes,” Frontiers in Plant Science, vol. 8, 2018. [Online]. Available: https://www.frontiersin.org/articles/10.3389/fpls.2017.02185/full
  3. L. L. Klein, M. Caito, C. Chapnick, C. Kitchen, R. O’Hanlon, D. H. Chitwood, and A. J. Miller, “Digital Morphometrics of Two North American Grapevines (Vitis: Vitaceae) Quantifies Leaf Variation between Species, within Species, and among Individuals,” Frontiers in Plant Science, vol. 8, 2017. [Online]. Available: https://www.frontiersin.org/articles/10.3389/fpls.2017.00373/full
  4. T. R. Stoughton, R. Kriebel, D. D. Jolles, and R. L. O’Quinn, “Next-generation lineage discovery: A case study of tuberous Claytonia L.” American Journal of Botany, vol. 105, no. 3, pp. 536–548, 2018. [Online]. Available: https://bsapubs.onlinelibrary.wiley.com/doi/abs/10.1002/ajb2.1061
  5. M. Li, H. An, R. Angelovici, C. Bagaza, A. Batushansky, L. Clark, V. Coneva, M. J. Donoghue, E. Edwards, D. Fajardo, H. Fang, M. H. Frank, T. Gallaher, S. Gebken, T. Hill, S. Jansky, B. Kaur, P. C. Klahs, L. L. Klein, V. Kuraparthy, J. Londo, Z. Migicovsky, A. Miller, R. Mohn, S. Myles, W. C. Otoni, J. C. Pires, E. Rieffer, S. Schmerler, E. Spriggs, C. N. Topp, A. Van Deynze, K. Zhang, L. Zhu, B. M. Zink, and D. H. Chitwood, “Topological Data Analysis as a Morphometric Method: Using Persistent Homology to Demarcate a Leaf Morphospace,” Frontiers in Plant Science, vol. 9, 2018. [Online]. Available: https://www.frontiersin.org/articles/10.3389/fpls.2018.00553
  6. Y. Yoshioka, H. Iwata, N. Hase, S. Matsuura, R. Ohsawa, and S. Ninomiya, “Genetic Combining Ability of Petal Shape in Garden Pansy (Viola wittrockiana Gams) based on Image Analysis,” Euphytica, vol. 151, no. 3, pp. 311–319, Nov. 2006. [Online]. Available: http://link.springer.com/10.1007/s10681-006-9151-2
  7. M. Orrù, O. Grillo, G. Lovicu, G. Venora, and G. Bacchetta, “Morphological characterisation of Vitis vinifera L. seeds by image analysis and comparison with archaeological remains,” Vegetation History and Archaeobotany, vol. 22, no. 3, pp. 231–242, May 2013. [Online]. Available: https://doi.org/10.1007/s00334-012-0362-2
  8. L. Diaz-Garcia, G. Covarrubias-Pazaran, B. Schlautman, E. Grygleski, and J. Zalapa, “Image-based phenotyping for identification of QTL determining fruit shape and size in American cranberry (Vaccinium macrocarpon L.),” PeerJ, vol. 6, p. e5461, Aug. 2018. [Online]. Available: https://peerj.com/articles/5461
  9. A. Das, H. Schneider, J. Burridge, A. K. M. Ascanio, T. Wojciechowski, C. N. Topp, J. P. Lynch, J. S. Weitz, and A. Bucksch, “Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics,” Plant Methods, vol. 11, no. 1, p. 51, Nov. 2015. [Online]. Available: https://doi.org/10.1186/s13007-015-0093-3
  10. B. A. Cook, “ACRONYM: Acronym CReatiON for You and Me,” arXiv:1903.12180 [astro-ph], Mar. 2019, arXiv: 1903.12180. [Online]. Available: http://arxiv.org/abs/1903.12180
  11. R. Hartley and A. Zisserman, Multiple view geometry in computer vision, 2nd ed. Cambridge, UK; New York: Cambridge University Press, 2003.
  12. S. Garrido-Jurado, R. Muñoz-Salinas, F. Madrid-Cuevas, and M. Marín-Jiménez, “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Pattern Recognition, vol. 47, no. 6, pp. 2280–2292, Jun. 2014. [Online]. Available: http://linkinghub.elsevier.com/retrieve/pii/S0031320314000235
  13. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, Nov. 2000, conference Name: IEEE Transactions on Pattern Analysis and Machine Intelligence.
  14. N. Snavely, S. M. Seitz, and R. Szeliski, “Photo Tourism: Exploring Photo Collections in 3D,” in ACM SIGGRAPH 2006 Papers, ser. SIGGRAPH ’06.    New York, NY, USA: ACM, 2006, pp. 835–846, event-place: Boston, Massachusetts. [Online]. Available: http://doi.acm.org/10.1145/1179352.1141964
  15. C. Wu, “Towards Linear-Time Incremental Structure from Motion,” in 2013 International Conference on 3D Vision.    Seattle, WA, USA: IEEE, Jun. 2013, pp. 127–134. [Online]. Available: http://ieeexplore.ieee.org/document/6599068/
  16. A. Tabb, “Data and Code from: Using cameras for precise measurement of two-dimensional plant features: CASS,” Feb. 2020, type: dataset. [Online]. Available: https://doi.org/10.5281/zenodo.3677473