A Modular Capture System for Automatic Weed Imaging & Analysis

This work concerned the development of a suite of tools for the analysis of weed growth in maize fields. It features a multi-modal capture device, and a collection of algorithms primarily focused on the detection of broad-leafed weeds in maize, with localisation of the meristem to within 2mm.
I. J. Hales, M. F. Hansen, L. Broadbent, M. L. Smith, S. Wane and S. Blackmore
University of the West of England, Bristol. {ian.hales, mark.hansen, laurence.broadbent, melvyn.smith}
Harper Adams University College, Newport. {swane, simon.blackmore}
It is estimated that today’s farming methods provide sufficient global crop output to feed 5.5 billion people [6]. Given
that the current world population is over 7 billion and is expected to reach 9 billion by 2050, improving the efficiency of
arable land use is a pressing concern. Clearly, crop yield is affected by competition for resources from weeds growing
amongst crops. Current techniques for weed management often involve the wide-scale spraying of herbicides, which
is both economically and environmentally expensive. By making use of precision farming techniques, such as targeted
weed killing, the cost of crop management can be lowered and the crop yield increased.
To this end, we are developing a toolkit for the real-time identification and analysis of early growth-cycle weed
clusters in crop fields, particularly with respect to localisation of the meristems, thus giving guidance to a targeted weed
control device such as a laser applicator [1]. The proposed system uses a combination of tried-and-tested 2D image
processing methods, as well as 3D surface and depth information to locate and analyse weeds in real-time from a high
frame-rate video stream. The work presented here represents the current state of the system, in which the 2D feature
analysis element is complete and development has begun on the 3D shape and structure components.
Fig. 1 (a–c): To find the location, size and orientation of the crop-rows, a Gaussian blur is applied to the vegetation
segmentation image before thresholding again. Remaining blobs of sufficient intensity are chosen as crop-rows and
ellipses fitted to them, giving the parameters of each row.
Before they can be analysed, the weeds within an image must first be segmented from the background. Several metrics
for vegetation segmentation have been examined in the literature with [2] and [4] offering good reviews. We empirically
determined that the Excess Green Index (ExG) [5], obtained by the following weighting of the R, G and B channels,
provided the best segmentation: ExG = 2G − R − B. This can then be thresholded using an automated method (the
Triangle method) to produce a binary segmentation.
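The segmentation step above can be sketched in a few lines of numpy. The snippet below computes ExG per pixel and then applies an automated histogram threshold; as a stand-in for the paper's thresholding, a simple Otsu criterion is used here, and the test image is synthetic (a green patch on brown "soil") rather than real field data.

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2G - R - B, computed per pixel on a float RGB image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def otsu_threshold(values, bins=256):
    """Automated threshold maximising between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = np.cumsum(hist)              # cumulative class-0 weight
    mu = np.cumsum(hist * centers)   # cumulative class-0 mean mass
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins - 1):
        w0, w1 = w[i], 1.0 - w[i]
        if w0 == 0 or w1 == 0:
            continue
        m0 = mu[i] / w0
        m1 = (mu[-1] - mu[i]) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Synthetic test image: a green "vegetation" square on brown "soil"
img = np.zeros((32, 32, 3))
img[...] = (0.4, 0.3, 0.2)          # soil-coloured background
img[8:24, 8:24] = (0.2, 0.8, 0.1)   # vegetation patch
exg = excess_green(img)
mask = exg > otsu_threshold(exg.ravel())
```

On the synthetic image the vegetation square is cleanly separated from the background; real field images would of course show softer boundaries and intermediate ExG values.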
In many of our images, such as in figure 1a, we observe not only weeds, but also the crops themselves. To ensure these
are not included in the weed-control, we apply a strong Gaussian blur to the segmented binary image then threshold
again. Based on the assumption that crops will appear significantly larger than weed clusters, any blobs which remain
are considered crop rows as illustrated in figure 1b. If the automatic threshold value is low, we assume no crop-row
was present. For each remaining connected-component, we fit an ellipse, using its major-axis-orientation and minor-axis
width to define a crop-row region as per figure 1c. As we currently focus on out-of-row weeds, any vegetation within
these regions is labelled as “crop”.
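The ellipse-fitting step can be sketched from second-order image moments: the blob's pixel-coordinate covariance gives the orientation of the major axis and the extent along the minor axis. This is one standard way to fit an ellipse to a connected component and is assumed here for illustration; the synthetic horizontal bar stands in for a blurred crop-row blob.

```python
import numpy as np

def blob_ellipse(mask):
    """Fit an ellipse to a binary blob via second-order moments.

    Returns (centroid, orientation in radians, major and minor
    half-widths as 2-sigma extents along the principal axes).
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    major_vec = evecs[:, 1]              # direction of largest variance
    angle = np.arctan2(major_vec[1], major_vec[0])
    major, minor = 2.0 * np.sqrt(evals[1]), 2.0 * np.sqrt(evals[0])
    return centroid, angle, major, minor

# A horizontal bar standing in for a blurred crop-row blob
mask = np.zeros((20, 60), dtype=bool)
mask[8:12, 5:55] = True
c, angle, major, minor = blob_ellipse(mask)
```

The recovered orientation and minor-axis width define the crop-row region exactly as described above: vegetation falling inside the oriented strip is labelled "crop".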
Previous work has shown that under controlled conditions it is possible to determine the meristem of weed clusters
of known species using variational models [3]. Our system is intended to be used in a realistic, outdoor environment,
in which shape-models are likely to struggle due to the very high levels of variation between the weeds. Therefore,
rather than relying on a prior model of weed appearance, we examine the imaged structure of the segmented connected-
components in isolation.
As the system is intended to run in real-time at high frame-rates, simple morphological operations with minimal
computational load are used to determine meristem location. For each vegetation component labelled as “crop”, we
extract the ExG patch enclosing it and resize it to 64px wide (the height varies proportionally). The appearance of
grassy weeds is very different to that of rounded-leaf weeds, requiring a differing method to calculate the growth-centre.
Fig. 2 (a–h): Two fast methods have been developed for meristem location. For grassy patches (a–d), the Hough lines in the
ExG image are detected and their intersection point estimated. For patches with rounded leaves (e–h), the thresholded
ExG patch is thinned, then branching-pixels are determined. Those closest to the centroid of each component are labelled
as the meristem.
Therefore, the system must first determine which type of weed is visible within the patch. By performing Canny edge
detection followed by the Hough line transform on the patch, then examining the length and relative orientation of
extracted edges, we can reliably determine whether a patch contains grass or rounded weeds: grassy patches exhibit longer
edges with little directional variation, whilst rounded-leaf patches offer shorter edges with higher orientation variation.
To detect the growth centre of grassy patches, we simply calculate the least-squares point of intersection of all detected
Hough lines. The calculation for the rounded-leaf patches involves first thresholding the ExG patch using the Triangle
method as above, this time calculating the threshold based only on the patch’s intensity histogram. This allows us
to compensate for any soil-colour or small lighting variations that may occur across the image and offers improved
separation of closely growing weeds. The resulting components then undergo morphological thinning giving a set of
pixel-width lines representing the component shape. From these lines we find the branching-pixels; that is, pixels at
which 3 or more lines of different direction meet. It is not unreasonable to assume that the meristem of the rounded-leaf
plants appears near the centre of the patch, and as such we take the branching-pixel closest to the centroid
of each component as our estimate for the meristem or meristems. In our test sequences, we have produced estimates
within 2mm of the hand-labelled location for 78% of grassy patches and 86% of rounded-leaf patches.
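The grassy-patch case reduces to a small linear algebra problem: the least-squares intersection of a set of lines. A minimal sketch, with each Hough line represented as a point and a unit direction (the synthetic lines below are illustrative, not detected from imagery):

```python
import numpy as np

def ls_intersection(points, directions):
    """Least-squares intersection of 2-D lines given as (point, direction).

    Minimises the sum of squared perpendicular distances from x to
    each line by solving the 2x2 normal equations.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Three lines through the point (3, 2) at different orientations
pt = np.array([3.0, 2.0])
dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
pts = [pt - 5 * d for d in dirs]      # arbitrary points on each line
est = ls_intersection(pts, dirs)
```

With noisy real detections the lines will not meet exactly, and the least-squares point provides a robust estimate of the common growth centre.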
The 3D element of the system is currently under development. We hope to improve our estimates of meristem
location and crop-row identification by incorporating additional 3D shape and structure information into the estimation.
Two methods are being used to obtain this information: (1) Photometric Stereo (PS) for surface shape and (2) RGBD
data from a depth camera. The former provides an extremely detailed model of the surface shape of an object by
capturing images of it illuminated from different directions from the same viewpoint. Each additional direction provides
an additional constraint on the parameters of the surface normal for each pixel and its albedo. As few as two lights can
be used in simple cases where the texture and reflectivity are uniform. Such simplistic assumptions clearly do not apply
here and as such a four-light system is used, allowing not only variation in the surface albedo (which requires at least
three light sources), but also improved robustness to shadow artefacts. By examining the 3D shape of the vegetation, it
is hoped that issues such as the overlapping of leaves and identification of stems in cluttered environments will become
much easier to overcome. As monocular PS offers only surface information, the system also utilises a depth-camera to
allow the creation of depth-maps. As the crops are expected to grow significantly taller than the weed patches, this should
allow for a reliable method of crop-row detection as well as providing useful information about the weeds themselves.
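The per-pixel photometric stereo recovery described above can be sketched under the standard Lambertian assumption: each light direction contributes one linear constraint I = albedo (n . l), and with four lights the normal and albedo are recovered by least squares. The light directions and surface values below are synthetic, chosen only to mimic a four-light rig.

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Recover a surface normal and albedo from >= 3 light directions.

    intensities: (k,) observed brightness; lights: (k, 3) unit light
    vectors. Solves the Lambertian model I = albedo * (n . l) in the
    least-squares sense; the solution vector g = albedo * n.
    """
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return normal, albedo

# Synthetic pixel: known normal and albedo, four symmetric lights
n_true = np.array([0.0, 0.0, 1.0])
rho = 0.7
L = np.array([[ 0.5,  0.0, 0.866],
              [-0.5,  0.0, 0.866],
              [ 0.0,  0.5, 0.866],
              [ 0.0, -0.5, 0.866]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
I = rho * (L @ n_true)                # noiseless Lambertian intensities
n_est, rho_est = photometric_stereo(I, L)
```

The fourth light is redundant in this noiseless example, but in practice, as noted above, it allows a shadowed or saturated observation to be discarded while still leaving the three constraints needed to recover both normal and albedo.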
By offering a fast, flexible and automated system for the recognition and localisation of weed clusters, our system will
offer a potential guidance solution for targeted weed-control systems as well as allowing visualisation of weed growth
at a field-wide level.
[1] S. Christensen, H. T. Søgaard, P. Kudsk, M. Nørremark, I. Lund, E. S. Nadimi, and R. Jørgensen. Site-specific weed control technologies. Weed
Research, 49(3):233–241, June 2009.
[2] George E. Meyer and Joo Camargo Neto. Verification of color vegetation indices for automated crop imaging applications. Computers and
Electronics in Agriculture, 63(2):282–293, October 2008.
[3] Julio C. Pastrana and Thomas Rath. Novel image processing approach for solving the overlapping problem in agriculture. Biosystems Engineering,
115(1):106–115, May 2013.
[4] J. Romeo, G. Pajares, M. Montalvo, J. M. Guerrero, M. Guijarro, and J. M. de la Cruz. A new expert system for greenness identification in
agricultural images. Expert Systems with Applications, 40(6):2275–2286, May 2013.
[5] D. M. Woebbecke, G. E. Meyer, K. Von Bargen, and D. A. Mortensen. Color Indices for Weed Identification Under Various Soil, Residue, and
Lighting Conditions. Transactions of the ASAE, 38(1):259–269, 1995.
[6] Stephen L. Young, George E. Meyer, and Wayne E. Woldt. Future Directions for Automated Weed Management in Precision Agriculture. In Stephen L.
Young and Francis J. Pierce, editors, Automation: The Future of Weed Control in Cropping Systems, pages 249–259. Springer Netherlands, 2014.