A smartphone application to measure the quality of pest control spraying machines via image analysis


Bruno B. Machado (corresponding author: junio@icmc.usp.br), Gabriel Spadon, Mauro S. Arruda, Wesley N. Goncalves, Andre C. P. L. F. Carvalho, Jose F. Rodrigues-Jr

Computer Science Department, Federal University of Mato Grosso do Sul, Ponta Pora, Brazil
Abstract

The need for higher agricultural productivity has demanded the intensive use of pesticides. However, their correct use depends on assessment methods that can accurately estimate how well the pesticide spraying covered the intended crop region. Some methods have been proposed in the literature, but their high cost and low portability harm their widespread use. This paper proposes and experimentally evaluates a new methodology based on a smartphone application named DropLeaf. Experiments performed with DropLeaf showed that, in addition to its versatility, it can estimate the pesticide spray coverage with high accuracy. DropLeaf is a five-fold image-processing methodology based on: (i) color space conversion; (ii) threshold noise removal; (iii) convolutional operations of dilation and erosion; (iv) detection of contour markers in the water-sensitive card; and (v) identification of droplets via the marker-controlled watershed transformation. We performed successful experiments over two case studies, the first using a set of synthetic cards and the second using a real-world crop. The proposed tool can be broadly used by farmers equipped with conventional mobile phones, improving the use of pesticides with health, environmental, and financial benefits.

Mobile Application, Image Processing, Agriculture

1 Introduction

The world population is estimated at 7 billion people and projected to reach 9.2 billion by 2050, an increase that will demand nearly 70% more food due to changes in dietary habits (more dairy and grains) in underdeveloped countries [1]. To cope with such a challenge, it is mandatory to increase the productivity of existing land, which is achieved by means of less waste along the food chain and by the use of pesticides. Pesticides are chemical preparations for destroying weeds (via herbicides), fungi (via fungicides), or insects (via insecticides) [2]. The use of pesticides is disseminated worldwide, accounting for a 40-billion-dollar annual expenditure [3], with tons of chemicals (roughly 2 kg per hectare [4]) applied to all kinds of crops with the aim of increasing food production. Current trends indicate that a large range of agricultural and horticultural systems will face heavier pressure from pests, leading to a higher demand for pesticides.

In this scenario, it is important that the correct amount of pesticide is sprayed on the crop fields. Too much, and there might be residues in the produced food along with environmental contamination; too little, and there might be regions of the crop left unprotected, reducing productivity. Besides, irregular spray coverage might cause pest and/or weed resistance or behavioral avoidance [5, 6]. In order to evaluate the pulverization, it is necessary to measure the spray coverage, that is, the proportion of the area covered by the droplets of the pesticide formulation (water carrier, active ingredients, and adjuvant).

The problem of measuring the spray coverage reduces to knowing how much pesticide was sprayed on each part of the crop field. The standard way to do that is to distribute oil- or water-sensitive cards (WSC) over the soil; such cards are coated with a bromoethyl dye that turns blue in the presence of water [7]. The problem, then, becomes assessing each card by counting the number of droplets per unit area, by drawing their size distribution, and by estimating the percentage of the card area that was covered; these measures allow one to estimate the volume of sprayed pesticide per unit area of the crop. If done manually, this process is burdensome and imprecise. This is where automated solutions become essential, and they have motivated a number of commercial products, including the Swath Kit [8], a pioneering computer-based system that uses image processing to analyze the water-sensitive cards; the USDA-ARS system [9], a camera-based system that uses samples from the cards to form a pool of sensor data; DropletScan [10], a flatbed-scanner solution built on proprietary hardware; the DepositScan system, made of a laptop computer and a handheld business-card scanner [11]; and the AgroScan system (http://www.agrotec.etc.br/produtos/agroscan/), a batch-based outsourced service that performs analyses over collected cards. All these systems, however, are troublesome to carry throughout the field, requiring the collection, scanning, and post-processing of the cards, a time-consuming and labor-intensive process. An alternative is to use wired or wireless sensors [12], an expensive solution that demands constant maintenance.

Since there is a consensus on the need to achieve a homogeneous spray coverage to gain productivity in agricultural and horticultural systems, there is room for research into innovative means of evaluating the spraying of pesticides. Such means might benefit from the commodity technology found in mobile phones, which carry computing resources powerful enough to perform a wide range of applications. In the form of a phone application (or app, for short), it is possible to conceive a readily available solution, portable to the crop field, to aid farmers and agronomists in the task of measuring the spray coverage and, hence, in the decision-making process concerning where and how to pulverize. This is the aim of the present study, in which we introduce DropLeaf, a mobile application able to estimate the amount of pesticide sprayed on water-sensitive cards. DropLeaf works on regular smartphones, which significantly simplifies the assessment of pesticide application. It uses the phone's camera to capture images of the spray cards, instantly producing estimates of the spray coverage by means of image processing techniques.

The remainder of the paper is structured as follows. Section 2 describes the steps of the proposed approach to measure the quality of pest control spraying. In addition, in this section, we describe the techniques implemented in the mobile application. In Section 3, we show the results achieved by our application. Section 4 reviews major points related to our results. Conclusions come in Section 5.

2 Methodology & application

In this section, we introduce our methodology, named DropLeaf, to estimate the pesticide spray coverage. The aim of the technique is to measure the coverage area of water-sensitive spray cards, so as to aid in the estimation of the crop pesticide coverage, as discussed in Section 1. DropLeaf is based on image processing techniques built into a mobile application that runs on commodity cell phones. The software computes measures from the drops observed on the spray cards, presenting statistics that enable the assessment of the spraying:

  • Coverage Density (CD): given as the percentage of covered area per unit area of the card;

  • Volumetric Median Diameter (VMD): given by the 50th percentile ($D_{0.5}$) of the diameter distribution;

  • Diameter Relative Span (DRS): given by $\mathrm{DRS} = (D_{0.9} - D_{0.1})/D_{0.5}$, where $D_{0.1}$ is the 10th percentile, $D_{0.9}$ is the 90th percentile, and $D_{0.5}$ is the median (the VMD) of the diameter distribution.

The three measures are used to understand how much of the field was covered with pesticide and how well the pesticide was dispersed; the finer the diameters and the higher the coverage area, the better the dispersion.
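To make the three measures concrete, the sketch below computes them from per-drop areas and diameters. The function name and its inputs are illustrative assumptions, not DropLeaf's actual code.

```python
# Illustrative sketch (not DropLeaf's actual code): computing CD, VMD, and
# DRS, assuming diameters in micrometers and areas in square micrometers.
import numpy as np

def spray_measures(drop_areas_um2, drop_diameters_um, card_area_um2):
    cd = 100.0 * np.sum(drop_areas_um2) / card_area_um2  # Coverage Density (%)
    vmd = np.percentile(drop_diameters_um, 50)           # Volumetric Median Diameter
    d10, d90 = np.percentile(drop_diameters_um, [10, 90])
    drs = (d90 - d10) / vmd                              # Diameter Relative Span
    return cd, vmd, drs
```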

In order to calculate those measures, it is necessary to determine the diameter (in micrometers) of each drop observed on a given card. Done manually, this is a laborious task that might take hours per card. Instead, DropLeaf uses an intricate image processing method that saves time and provides superior precision compared to manual inspection and to former techniques.

Figure 1 illustrates the image processing method of DropLeaf, which consists of five steps applied to each spray card image: (I) color space conversion to grayscale; (II) binarization by thresholding (noise removal); (III) dilation and erosion applied to distinct copies of the image; (IV) a complement operation over the dilated and eroded images to produce contour markers; (V) drop identification via marker-controlled watershed. In the following, we explain each step, specifying why it is necessary and how it relates to the next. To illustrate the steps of the method, we provide a running example whose initial spray card image is presented in Figure 1(a).

Figure 1: The image processing method of DropLeaf. It starts by loading an image of a water-sensitive paper. Then, it performs a color-space transformation to obtain a grayscale version of the same image (Step 1). Subsequently, the grayscale image is binarized to isolate the drops and remove noise (Step 2). Next, in two distinct copies of the noise-removed image, it applies the morphological operations of dilation and erosion; the eroded image is inverted to contrast with the dilated one (Step 3). The next step is to compute the difference between the two images, delineating the contours (masks) of the droplets; the resulting image is used to identify the contour markers that delimit the area of each drop (Step 4). Finally, the contours are used to segment the drops into sheds (Step 5), providing the tool with a well-defined set of droplets.

2.1 Grayscale transformation

After the acquisition of an image $I$ via the cellphone camera, Step 1 converts it to a grayscale image $I_g$. This is necessary to ease the discrimination of the card surface from the drops that fell on it. We use the continuous domain $[0,1]$ so that our formalism is able to express any color depth; specifically, we use 32 bits for RGB and 8 bits for grayscale. Color information is not needed, as it would make the computation heavier and more complex. This first step, then, transforms the image into a grayscale representation, see Figure 1(b), according to:

$$ I_g(x,y) = 0.299\,R(x,y) + 0.587\,G(x,y) + 0.114\,B(x,y) \quad (1) $$
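A minimal sketch of this step with OpenCV's Python bindings, whose BGR-to-gray conversion applies exactly these luminance weights (the filename is a placeholder):

```python
import cv2

# Load the captured card photo (placeholder filename) and convert it to
# grayscale; COLOR_BGR2GRAY applies Y = 0.299 R + 0.587 G + 0.114 B.
img = cv2.imread("spray_card.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```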

2.2 Binarization

Here, the grayscale image $I_g$ passes through a threshold-based binarization process (Step 2), a usual step in image segmentation. Since the grayscale image is composed of a single color channel, binarization can be achieved simply by choosing a threshold value $\tau$. Gray values below the threshold (the dark drops) are mapped to 1, and values above it to 0. Since spray cards are designed to stress the contrast between the card and the drops, the threshold can be set as a constant gray level in the 8-bit domain $[0,255]$. This choice removes noise and favors faster processing compared to more elaborate binarization processes, such as those based on clustering or on gray-level distributions. Figure 1(c) depicts the result, an image $I_b$ given by:

$$ I_b(x,y) = \begin{cases} 1 & \text{if } I_g(x,y) \le \tau \\ 0 & \text{otherwise} \end{cases} \quad (2) $$
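A sketch of the binarization; the threshold of 150 on the 8-bit scale is our own assumption for illustration, since the paper's exact constant is not reproduced here. THRESH_BINARY_INV makes the dark drops the foreground:

```python
import cv2

gray = cv2.imread("spray_card.jpg", cv2.IMREAD_GRAYSCALE)
# Pixels darker than the threshold (the drops) become foreground (255);
# the bright card background becomes 0. The value 150 is an assumption.
_, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY_INV)
```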

2.3 Dilation and erosion

At this point, we need to identify the contours of the drops (Step 3), which will delimit their diameters. We use an approach based on the morphological operators of dilation and erosion [13]. We proceed by creating two copies of the binary image. One copy passes through dilation, see Figure 1(d), a morphological operation that probes and expands the shapes found in an image. For dilation to occur, a structuring element $S$ (a square binary matrix) is necessary to specify the extent of the dilation. We used a $3\times3$ matrix so as to dilate the drops by nearly 1 pixel. Note that, at this point, we still do not know the drops; rather, the dilation has the mathematical property of interacting with potential shapes to be segmented, thus allowing for drop identification. After dilation, the shapes that correspond to the drops will be 1 pixel larger all along their perimeters. Formally, we produce an image $I_d$ according to:

$$ I_d = I_b \oplus S \quad (3) $$

After that, the second copy of the binary image passes through erosion, see Figure 1(e), also a morphological operation that, contrary to dilation, contracts the shapes found in the image. Again, we use a $3\times3$ matrix as the structuring element so as to erode the drops by nearly 1 pixel. Formally, we produce an image $I_e$ according to:

$$ I_e = I_b \ominus S \quad (4) $$
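Both operations in a short OpenCV sketch; the 3x3 kernel of ones matches the roughly 1-pixel growth and shrinkage described above, while the filename and threshold remain the placeholder assumptions from the previous sketches:

```python
import cv2
import numpy as np

gray = cv2.imread("spray_card.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY_INV)

kernel = np.ones((3, 3), np.uint8)                  # structuring element S
dilated = cv2.dilate(binary, kernel, iterations=1)  # I_d: drops ~1 px larger
eroded = cv2.erode(binary, kernel, iterations=1)    # I_e: drops ~1 px smaller
```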

2.4 Contour identification

Given the two images produced by dilation and erosion, $I_d$ with drops larger than the original and $I_e$ with drops smaller than the original, the trick then is to identify the contours of the drops by taking the difference between the dilated and the eroded drops. This is achieved by applying a complement operation over the two binary images (Step 4). To do so, first, we invert the eroded image, so that 1's become 0's and vice versa, obtaining image $\overline{I_e}$. Then, it is sufficient to perform the following pixel-by-pixel logical AND:

$$ I_c(x,y) = I_d(x,y) \wedge \overline{I_e}(x,y) \quad (5) $$

The result is the binary image $I_c$ depicted in Figure 1(f), in which only contour pixels have value 1.
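In OpenCV terms, Step 4 amounts to two bitwise operations on the images from Step 3 (recomputed below so the sketch stands alone):

```python
import cv2
import numpy as np

gray = cv2.imread("spray_card.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY_INV)
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(binary, kernel)
eroded = cv2.erode(binary, kernel)

# I_c = I_d AND NOT(I_e): only the thin ring between the dilated and
# eroded versions of each drop survives, i.e., its contour.
contour_img = cv2.bitwise_and(dilated, cv2.bitwise_not(eroded))
```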

2.5 Marker-based watershed segmentation

In the last step (Step 5), with contours properly marked on the image, we proceed to drop identification considering the previously identified contours. To this end, we use marker-based watershed segmentation. Watershed [14] is a technique that treats an image as a topographic relief in which the gray level of each pixel corresponds to its altitude. The transform proceeds by simulating the flooding of the landscape starting at the local minima. This process forms basins that are gradually filled with water. Eventually, the water from different basins meets, indicating the presence of ridges (boundaries); this indicates that a segment was found and delimited. The process ends when the water reaches the highest level of the color-encoding space. The problem with the classical watershed is that it might over-segment the image in case of an excessive number of minima. For better precision, we use the marker-controlled variation of the algorithm [15]. This variation is meant for images whose shapes define proper contours, previously given to the algorithm. Given the contours (markers), the marker-based watershed considers as minima only the pixels within the boundaries of the contours. Watershed is an iterative algorithm computationally represented by a function watershed(Image i, Image[] contours). We use such a function to produce a set of segments (drops) $D$ over the gray-level image $I_g$ while considering the set of contours identified in image $I_c$, as follows:

$$ D = \mathrm{watershed}(I_g,\ \mathrm{contours}(I_c)) \quad (6) $$

where $\mathrm{contours}$ is a function that, given an image, returns a set of sub-images (matrices) corresponding to the contours found in that image; meanwhile, $\mathrm{watershed}$ is a function that, given an input image and a set of sub-images corresponding to contours, produces a set of segments stored in the array of input contours.

We use the product of the watershed function to produce our final output simply by drawing the segments over the original image, as illustrated in Figure 1(g). Notice, however, that this last image of the process is meant only for visualization. The analytical process, the core of the methodology, is computed over the set of segments $D$.
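The sketch below follows the standard OpenCV marker-based watershed recipe; the way the markers are built (connected components of the eroded drops, with the contour band left as "unknown") is our own illustrative choice, not necessarily DropLeaf's exact implementation:

```python
import cv2
import numpy as np

gray = cv2.imread("spray_card.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY_INV)
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(binary, kernel)
eroded = cv2.erode(binary, kernel)

# Seed markers: each eroded drop is one labeled component; shift labels
# so the background gets label 1 instead of 0.
n, markers = cv2.connectedComponents(eroded)
markers = markers + 1
# The contour band between erosion and dilation is "unknown" (label 0);
# the watershed flooding decides where those pixels belong.
markers[cv2.subtract(dilated, eroded) == 255] = 0

color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # watershed needs 3 channels
markers = cv2.watershed(color, markers)         # ridge pixels become -1

# One binary mask per drop (labels 2..n correspond to the drops).
segments = [(markers == k).astype(np.uint8) for k in range(2, n + 1)]
```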

2.6 Diameter processing

After segmentation is concluded, we have a set of segments, each corresponding to a drop of pesticide. The final step is to compute the measures presented at the beginning of this section: coverage density (CD), volumetric median diameter (VMD), and diameter relative span (DRS). Since the segments are computationally represented by an array of binary matrices, we can calculate the area and the diameter of each drop by counting the pixels of each matrix. After counting, it is necessary to convert the diameter given in pixels into a diameter given in micrometers (μm), which, for the $i$-th drop, goes as follows:

$$ d_i^{\mu m} = d_i^{px} \cdot \frac{W_{\mu m}}{W_{px}} \quad (7) $$

where $d_i^{px}$ is the width in pixels of the $i$-th drop; $W_{px}$ is the width of the card in pixels; and $W_{\mu m}$ is the width of the card in micrometers. Notice that we used the card's width, but we could have used its height as well; what matters is that the fraction $W_{\mu m}/W_{px}$ provides a conversion ratio given in μm per pixel, which is not sensitive to the axis; horizontal or vertical, the ratio is the same for a non-distorted image.

Notice that $d_i^{px}$ and $W_{px}$ are obtainable via image processing, after the segmentation method; meanwhile, $W_{\mu m}$ is a constant provided by the user, corresponding to the real-world width of the card. Also, notice that we are considering that the diameter corresponds to the horizontal axis (the width) of the drop; it is possible, however, that the diameter corresponds to the vertical axis, in which case the formulation is entirely analogous. Choosing between the horizontal and the vertical axes might be tricky in case the drop is elliptical rather than circular. We solved this issue by extracting the diameter from the area of the drop, using the formula of the circle area $a = \pi (d/2)^2$. With simple algebra, we conclude that, given the area $a_i$ in pixels of the $i$-th drop, its diameter in pixels is given by the following equation:

$$ d_i^{px} = 2\sqrt{\frac{a_i}{\pi}} \quad (8) $$

Rewriting Equation 7 by means of Equation 8, we get:

$$ d_i^{\mu m} = 2\sqrt{\frac{a_i}{\pi}} \cdot \frac{W_{\mu m}}{W_{px}} \quad (9) $$

Once the diameter is converted into micrometers, it becomes trivial to compute all the measures that support the spray card analysis, as described in the beginning of Section 2.
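A sketch of Equations 8 and 9 in code; the default card width of 76,000 μm (7.6 cm, a common water-sensitive paper format) is an illustrative assumption, since the real value is supplied by the user:

```python
import numpy as np

def drop_diameter_um(area_px, card_width_px, card_width_um=76_000):
    """Eq. 8: diameter from pixel area; Eq. 9: scale pixels to micrometers.
    card_width_um defaults to an assumed 7.6 cm card."""
    d_px = 2.0 * np.sqrt(area_px / np.pi)
    return d_px * (card_width_um / card_width_px)
```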

2.7 Implementation details

The use of mobile devices to perform automatic tasks has increased rapidly [16]. The main reasons are the recent advances in hardware, such as sensors, processors, memory, and cameras. As a result, smartphones have become a new platform for image processing and computer vision applications [17, 18].

Mobile devices are an adequate means to perform tasks in real time, in situ, far from the laboratory. In this context, besides the methodology itself, the contribution of this paper is the development of a mobile application to measure the quality of pesticide spraying on water-sensitive cards. Our segmentation method was implemented for Android devices using the Android Studio Integrated Development Environment (https://developer.android.com/studio/index.html); Java was the programming language. The methods that constitute the process were imported from the OpenCV library (http://opencv.org). The application is fully functional, as depicted in Figure 2.

Figure 2: A preview of our fully functional application.

Figure 3: Control card provided by Hoechst.

3 Experimental results

In this section, we evaluate our methodology in the task of measuring the spray coverage deposition on water-sensitive cards. The goal is for our technique to correctly identify the spray drops both in terms of spraying density (percentage of coverage per unit area) and in terms of drop diameter. Accordingly, the first set of experiments was conducted over a control card provided by the company Hoechst, demonstrating the accuracy under controlled conditions. The second set of experiments was conducted over real water-sensitive cards used on soy crops, demonstrating that the application works even under in situ conditions.

3.1 Control-card experiments

In this set of experiments, we used the card provided by the Agrotechnical Advisory of the German company Hoechst. The card holds synthetic drops with sizes of 50, 100, 250, 500, and 1,000 μm, as shown in Figure 3; this card is used to calibrate equipment and to assess the accuracy of manual and automatic measuring techniques. Since the number of drops and their sizes are known, this first experiment works as a controlled validation of the methodology.

To measure the drops of the control card, we used a smartphone to capture the image of the card. In Table 2, we present the average diameter of the drops, the area covered by the drops, the density given in drops per unit area, the coverage density given as a percentage of the card area, and the volumetric median diameter. We do not present the diameter relative span because, as all the drops are equal, there is no significant span. From the table, it is possible to conclude that the accuracy of the methodology is in accordance with the controlled protocol; that is, the known and measured diameters matched in most cases. Notice that it is not possible to achieve a perfect identification because of printing imperfections and numerical issues that inevitably arise at the micrometer scale. For example, for the 1,000 μm drops, the average measured diameter was 1,007 μm. This first validation was necessary to test the ability of the tool to tell apart card background and drops.

For a comparative perspective, in Table 3, we compare the covered area and the average diameter measured by our tool, by the tool DepositScan, and by a stereoscopic microscope (as reported in the work of Zhu et al. [11]). The results demonstrate that the stereoscopic microscope had the best performance, as expected, since it involves a fine-detail, laborious inspection. DropLeaf presented the best results after the microscope, beating the precision of DepositScan for all drop sizes but 500 μm; for the 1,000 μm drops, the two tools had a similar performance, diverging only marginally. In the experiments, one can notice that the bigger the drop, the smaller the error, which ranged from 41% to less than 1%. For bigger drops, the drop identification is next to perfect; for smaller ones, the error is much bigger; this is because of the size scale. When measuring drops as small as 50 μm, a single extra pixel detected by the camera is enough to produce a big error. This problem was also observed in the work of Zhu et al. [11].

By analyzing the data, we concluded that the error due to the size scale is predictable. Since it varies with the drop size, it is not linear; nevertheless, it is a pattern that can be corrected with the following general equation:

(10)

In the case of our tool, we fitted the two constants of Equation 10 to our measurements. These values shall vary from method to method, as we observed for DepositScan and for the stereoscopic microscope.

3.2 Production-card experiments

In the second set of experiments, we used six cards produced in a real-world crop. The cards were produced after the pulverization of soy crops in Brazil; they were provided by the Brazilian Agricultural Research Corporation (Embrapa). The cards were separated into three groups of two cards, classified as sparse, medium, and dense with respect to the density of drops, as can be verified in Figure 4. These experiments aimed at testing the robustness of the methodology, that is, its ability to identify drops even when they are irregular and/or have touching borders. Table 4 shows the numerical results, including the number of drops, the coverage area, the density, the coverage density, the volumetric median diameter, and the diameter relative span. In this case, the table must be interpreted along with the figure, which presents the drops as identified by our methodology. The first four measures can be inspected visually. The right-hand-side images in Figure 4 (the tool's results stressed with colored drops) demonstrate that the segmentation matches the expectations of a quick visual inspection: the drops on the left are faithfully reproduced on the right. Other features are also noticeable. Density, for instance, rises as we visually inspect Figure 4(a) through Figure 4(f); the corresponding numbers in the table rise similarly. Counting the number of drops requires close attention and a lot of time; for the less dense Figures 4(a) and 4(b), however, it is possible to verify the accuracy of the counting and segmentation provided by the tool.

The last two measures, VMD and DRS, provide parameters to understand the distribution of the drops' diameters. For example, it is possible to see that, being denser, cards (e) and (f) had a smaller median and a larger span of diameters. These measures indicate that the spraying is irregular and needs to be adjusted. Meanwhile, cards (a) and (b) are more regular, but not as dense as necessary, with a lot of blank spots. Cards (c) and (d), in turn, have a more uniform spraying and a more regular coverage.

4 Drop detection issues

This section discusses issues to be considered when developing technologies for spray card inspection. We faced these issues during our research and discuss them as a further contribution to guide other researchers dealing with the same or related problems.

4.1 Coverage factor

In our experience, we noticed that when the spraying gets too dense, not all of the information about the drops can be detected, no matter which measuring technique is used; for instance, the number of drops and their diameter distribution can no longer be tracked. This effect was already pointed out by Fox et al. [19], who claim that a total coverage above 20% of the card causes results to be unreliable, and that coverages close to 70% make the analysis unfeasible.

This is because, with too much spray, the drops fall too close to each other, causing overlaps; visually, it is as if two or more drops became one. Effectively, this is what happens on the crop due to the intermolecular forces present in the water drops, which cause them to merge, forming bigger drops. Hence, caution is needed, no matter which assessment technique is used, whenever the total coverage area surpasses 20%, a circumstance in which the diameter distribution is no longer accurate and one must rely only on the coverage area for decision making. Although the diameter is not available, the large drops that might be detected indicate an excessive amount of pesticide or a malfunctioning spray device.

4.2 Angle of image capture

We also noticed that the image processing methodology used to detect the drops in all the studies presented so far, including ours, works only if the capture angle of the card is 90 degrees; that is, the viewing angle of the camera/scanner must be orthogonal to the spray card surface. This is necessary because the pixels of the image are converted into a real-world dimension to express the diameter of the drops in μm; therefore, the dimensions of the image must be homogeneous with respect to scale. If the capture angle is not 90 degrees, the image is distorted, resulting in different scales in each part of the image. For flatbed scanners, this is straightforward to guarantee; however, for handheld devices (cameras and smartphones), additional care is necessary. In such cases, one might need a special protocol to capture the image, like using a tripod or some sort of apparatus to properly place the capturing device with respect to the spray card. This problem might also be solved by means of an image processing algorithm that removes eventual distortions, in which case additional research and experimentation are necessary, as sketched below.
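As a hint of what such an algorithm could look like (an assumption on our part, not a feature of the tools discussed): if the four corners of the card can be located in the photo, a planar homography rectifies an oblique capture before any measurement.

```python
import cv2
import numpy as np

def rectify_card(img, corners_px, out_w=1520, out_h=520):
    """Warp an obliquely captured card to a fronto-parallel view.
    corners_px: the card's corners in the photo, ordered top-left,
    top-right, bottom-right, bottom-left. The output size is arbitrary,
    as long as it preserves the card's real-world aspect ratio."""
    src = np.float32(corners_px)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, H, (out_w, out_h))
```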

4.3 Minimum dots per inch (dpi)

Our experiments also revealed that there must be a minimum amount of information in the spray card images in order to achieve the desired precision regarding the drops' diameters. This minimum information is expressed by the dots-per-inch (dpi) property of the image capturing process; dpi is a well-known resolution measure that expresses how many pixels are necessary to reproduce the real-world dimension of one linear inch. If not enough pixels are captured per inch of the spray card, it becomes impossible to estimate, or even to detect, the diameter of the smallest drops. This might affect the diameter distribution analysis, hiding problems in the spraying process.

In order to guide our research and development, we calculated and tested the minimum dpi necessary for each desired drop diameter. Table 1 shows the minimum number of pixels needed to express each drop diameter at each dpi value; notice that some cells of the table are empty (filled with a hyphen), indicating that the diameter cannot be computationally expressed at that resolution. Also notice that, for one same diameter, the number of pixels increases with the resolution. Obviously, more information brings more precision, at the cost of more processing power, substantially more storage, and more network bandwidth when transferring images. From the table, it is possible to conclude that 600 dpi is the minimum resolution for robust analyses, since it can represent diameters as small as 50 μm; meanwhile, a resolution of 1,200 dpi, although even more robust, might lead to drawbacks regarding the management of excessively large image files. Notwithstanding, the fact that a resolution is enough to represent a given diameter is no guarantee that drops of that size will be detected; detection also depends on other factors, such as the quality of the lenses and the image processing algorithm.

Table 1 is a guide for developers who wish to computationally analyze spray cards, and also for agronomists deciding which equipment to buy in view of their needs; the short sketch after the table reproduces its arithmetic.

Diameter (μm)    50 dpi    100 dpi    300 dpi    600 dpi    1,200 dpi    2,400 dpi    2,600 dpi
10                  -          -          -          -           -            -            1
50                  -          -          -          1           2            5            5
100                 -          -          1          2           5            9           10
250                 -          1          3          6          12           24           26
500                 1          2          6         12          24           47           51
1,000               2          4         12         24          47           94          102
10,000             20         39        118        236         472          945        1,024
Table 1: Number of pixels needed to represent a given drop diameter (rows) at a given resolution in dpi (columns).
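All the values in Table 1 follow from a single conversion: one inch equals 25,400 μm, so a drop of diameter d μm spans d × dpi / 25,400 pixels. A minimal check of two table cells:

```python
def pixels_for(diameter_um, dpi):
    # 1 inch = 25,400 micrometers
    return diameter_um * dpi / 25_400

print(round(pixels_for(50, 600)))       # 1  (the 600-dpi minimum for 50 um drops)
print(round(pixels_for(1_000, 1_200)))  # 47
```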

5 Conclusions

We introduced DropLeaf, a portable application to measure pesticide coverage by means of the digitization of water-sensitive spray cards. We verified that the precision of DropLeaf is enough to allow mobile phones to substitute more expensive and troublesome methods of quantifying pesticide application in crops. The methodology was instantiated in a tool suitable for the inspection of real-world crops. We tested our tool with two datasets of water-sensitive cards; the experiments demonstrated that DropLeaf accurately tracks drops, being able to measure both the pesticide coverage and the diameter of the drops. Furthermore, our mobile application detects overlapping drops, an important achievement that provides not only better accuracy but also more information.

Acknowledgements

This research was partially supported by the Fundacao de Apoio ao Desenvolvimento do Ensino, Ciencia e Tecnologia do Estado de Mato Grosso do Sul (FUNDECT), by the National Council for Scientific and Technological Development (CNPq), and by the Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP). We are grateful to Dr. Jose Raul, from Embrapa, for his assistance during the acquisition of the cards and their validation.

References

  • [1] Food and Agriculture Organization. Feeding the world in 2050. Technical report, United Nations, 2009. World agricultural summit on food security.
  • [2] Hubert Bon, Joel Huat, Laurent Parrot, Antonio Sinzogan, Thibaud Martin, Eric Malezieux, and Jean-Francois Vayssieres. Pesticide risks from fruit and vegetable pest management by small farmers in sub-Saharan Africa: a review. Agronomy for Sustainable Development, 34(4):723–736, 2014.
  • [3] Jozsef Popp, Karoly Peto, and Janos Nagy. Pesticide productivity and food security: a review. Agronomy for Sustainable Development, 33(1):243–255, 2013.
  • [4] Yongbo Liu, Xubin Pan, and Junsheng Li. A 1961–2010 record of fertilizer use, pesticide application and cereal yields: a review. Agronomy for Sustainable Development, 35(1):83–93, 2015.
  • [5] Michael Renton, Roberto Busi, Paul Neve, David Thornby, and Martin Vila-Aiub. Herbicide resistance modelling: past, present and future. Pest Management Science, 70(9):1394–1404, 2014.
  • [6] Xavier Martini, Natalie Kincy, and Christian Nansen. Quantitative impact assessment of spray coverage and pest behavior on contact pesticide performance. Pest Management Science, 68(11):1471–1477, 2012.
  • [7] D. K. Giles and D. Downey. Quality control verification and mapping for chemical application. Precision Agriculture, 4(1):103–124, 2003.
  • [8] Karl Mierzejewski. Aerial spray technology: possibilities and limitations for control of pear thrips. Technical report, U.S. Department of Agriculture, Forest Service, 1991. Tech. Rep. NE-147.
  • [9] W.C. Hoffman and A.J. Hewitt. Comparison of three imaging systems for water-sensitive papers. Applied Engineering in Agriculture, 21:961–964, 2005.
  • [10] R.E. Wolf. Assessing the ability of DropletScan to analyze spray droplets from a ground operated sprayer. Applied Engineering in Agriculture, 19:525–530, 2003.
  • [11] Heping Zhu, Masoud Salyani, and Robert D. Fox. A portable scanning system for evaluation of spray deposit distribution. Computers and Electronics in Agriculture, 76(1):38–43, 2011.
  • [12] T. G. Crowe, D. Downey, and D. K. Giles. Digital device and technique for sensing distribution of spray deposition. Transactions of the American Society of Agricultural and Biological Engineers, 48(6):2085–2093, 2005.
  • [13] Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing. Prentice Hall, 2007.
  • [14] Luc Vincent and Pierre Soille. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(6):583–598, 1991.
  • [15] Raffaele Gaetano, Giuseppe Masi, Giuseppe Scarpa, and Giovanni Poggi. A marker-controlled watershed segmentation: Edge, mark and fill. In 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 4315–4318. IEEE, 2012.
  • [16] Feng Xia, Ching-Hsien Hsu, Xiaojing Liu, Haifeng Liu, Fangwei Ding, and Wei Zhang. The power of smartphones. Multimedia Systems, 21(1):87–101, 2015.
  • [17] Cristiana Casanova, Annalisa Franco, Alessandra Lumini, and Dario Maio. Smartvisionapp - a framework for computer vision applications on mobile devices. Expert Systems with Applications, 40(15):5884–5894, 2013.
  • [18] Giovanni Maria Farinella, Daniele Ravì, Valeria Tomaselli, Mirko Guarnera, and Sebastiano Battiato. Representing scenes for real-time context classification on mobile devices. Pattern Recognition, 48(4):1086–1100, 2015.
  • [19] R.D. Fox, R.C. Derksen, J.A. Cooper, C.R. Krause, and H.E. Ozkan. Visual and image system measurement of spray deposits using water sensitive paper. Applied Engineering In Agriculture, 19(5):549–552, 2003.