Conference Paper

Eliciting Challenges and User Needs Associated with Annotation Software for Plant Phenotyping

Authors:
  • Barnard College

... This allowed numerous errors in the encoding and code-fixing studies to be addressed [35], [36]. In addition, questions such as dividing the space, other failures and defects in mapping keypad occurrences, and capturing ambiguity and confusability in keypad operation and encoding algorithms through data-entry error analysis for a specific layout also remained unresolved [37], [38]. This study systematically reviews 24 Scopus-indexed articles on halal hotels, identifying themes of customer behavior, Sharia compliance, attributes, and marketing, and offering insights and future research opportunities in the halal tourism industry [39]. This paper discusses Arabic stemming algorithms, focusing on extracting word roots, comparing methods for accuracy and effectiveness, and analyzing strengths and weaknesses in handling Arabic text [40]. ...
Article
Full-text available
A new encoding approach based on Virtual Keypad Letter Substitutions has shown improved results in text classification. In this study, we focus on a text document classification dataset comprising 2225 documents distributed across five categories: politics, sports, technology, entertainment, and business. We initially trained traditional machine learning models such as Naive Bayes, Logistic Regression, SVM, and Random Forest on the dataset, which provided reasonable accuracy, precision, recall, and F1-score. It is hypothesized that the proposed approach, which uses the Virtual Keypad Letter Substitutions encoding technique, would improve the performance of these models. The encoding method converts the letters in the text data to symbols imprinted on a virtual keypad, adding a layer of abstraction that may better capture semantic and syntactic features of the text. The findings show that the models exhibit substantial improvements in all the metrics under study when trained with encoded data. For example, Naive Bayes trained on the encoded dataset recorded an accuracy of 95.14%, precision of 95.16%, recall of 95.14%, and F1-score of 95.12%, well above its performance on the raw data. The same effect was observed in the other models, Logistic Regression, SVM, and Random Forest, whose accuracies increased by 28.5% to 41.8%. Based on these findings, the authors recommend the Virtual Keypad Letter Substitution encoding algorithm not only as a tool for increasing the accuracy of text classification but also as a data preprocessing step for machine learning in general. This method is expected to be advantageous in situations where text data comprises varied formats or noisy data, as the encoding may help surface the features most relevant for classification. This work provides useful guidance for improving the performance of standard ML models, including C-SVM and Naive Bayes, for document classification, and its findings are promising for various disciplines, including NLP, information retrieval, and document classification, where efficient and accurate text classification is crucial for data-driven decision-making.
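The abstract does not publish the exact substitution table, so the sketch below only illustrates the general idea under an assumed T9-style keypad mapping: every letter is replaced by a keypad symbol before vectorization, and a standard classifier is trained on the encoded text. The mapping, toy documents, and choice of character n-grams are all assumptions, not the authors' setup.

```python
# Hedged sketch of keypad-substitution encoding before classification.
# The T9-style mapping below is hypothetical; the paper's table differs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

KEYPAD = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items() for c in letters}

def encode(text: str) -> str:
    """Replace each letter with its keypad symbol; keep other characters."""
    return "".join(KEYPAD.get(c, c) for c in text.lower())

docs = ["the match ended in a draw", "parliament passed the new bill"]
labels = ["sport", "politics"]

model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                      MultinomialNB())
model.fit([encode(d) for d in docs], labels)
print(model.predict([encode("the team won the cup")]))
```

Because the mapping collapses several letters onto one symbol, character n-grams rather than word tokens are used here to keep enough signal for the classifier; whether this matches the authors' feature extraction is an assumption.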
Article
Full-text available
Convolutional neural networks (CNNs) are a powerful tool for plant image analysis, but challenges remain in making them more accessible to researchers without a machine‐learning background. We present RootPainter, an open‐source graphical user interface based software tool for the rapid training of deep neural networks for use in biological image analysis. We evaluate RootPainter by training models for root length extraction from chicory (Cichorium intybus L.) roots in soil, biopore counting, and root nodule counting. We also compare dense annotations with corrective ones that are added during the training process based on the weaknesses of the current model. Five out of six times the models trained using RootPainter with corrective annotations created within 2 h produced measurements strongly correlating with manual measurements. Model accuracy had a significant correlation with annotation duration, indicating further improvements could be obtained with extended annotation. Our results show that a deep‐learning model can be trained to a high accuracy for the three respective datasets of varying target objects, background, and image quality with < 2 h of annotation time. They indicate that, when using RootPainter, for many datasets it is possible to annotate, train, and complete data processing within 1 d.
Article
Full-text available
Background 3D imaging, such as X-ray CT and MRI, has been widely deployed to study plant root structures. Many computational tools exist to extract coarse-grained features from 3D root images, such as total volume, root number and total root length. However, methods that can accurately and efficiently compute fine-grained root traits, such as root number and geometry at each hierarchy level, are still lacking. These traits would allow biologists to gain deeper insights into the root system architecture. Results We present TopoRoot, a high-throughput computational method that computes fine-grained architectural traits from 3D images of maize root crowns or root systems. These traits include the number, length, thickness, angle, tortuosity, and number of children for the roots at each level of the hierarchy. TopoRoot combines state-of-the-art algorithms in computer graphics, such as topological simplification and geometric skeletonization, with customized heuristics for robustly obtaining the branching structure and hierarchical information. TopoRoot is validated on both CT scans of excavated field-grown root crowns and simulated images of root systems, and in both cases, it was shown to improve the accuracy of traits over existing methods. TopoRoot runs within a few minutes on a desktop workstation for images at the resolution range of 400^3, with minimal need for human intervention in the form of setting three intensity thresholds per image. Conclusions TopoRoot improves the state-of-the-art methods in obtaining more accurate and comprehensive fine-grained traits of maize roots from 3D imaging. The automation and efficiency make TopoRoot suitable for batch processing on large numbers of root images. Our method is thus useful for phenomic studies aimed at finding the genetic basis behind root system architecture and the subsequent development of more productive crops.
Article
Full-text available
We present a new large-scale, three-fold annotated microscopy image dataset, aiming to advance plant cell biology research by exploring different cell microstructures (cell size and shape, cell wall thickness, intercellular space, etc.) in a deep learning (DL) framework. This dataset includes 9,811 unstained and 6,127 stained (safranin-O, toluidine blue-O, and Lugol's iodine) images with three-fold annotation covering physical, morphological, and tissue grading, based on weight, section area, and tissue zone, respectively. In addition, we prepared ground-truth segmentation labels for three different tuber weights. We validated the pertinence of the annotations by performing multi-label cell classification, employing a convolutional neural network (CNN), VGG16, on unstained and stained images. Accuracy reaches up to 0.94, while the F2-score reaches 0.92. Furthermore, the ground-truth labels were verified with a semantic segmentation algorithm using the UNet architecture, which achieves a mean intersection over union of up to 0.70. Overall, the results show that the dataset is effective and could enrich the domain of DL-based microscopy plant cell analysis.
Article
Full-text available
In this paper, multiple instance learning (MIL) algorithms to automatically perform root detection and segmentation in minirhizotron imagery using only image-level labels are proposed. Root and soil characteristics vary from location to location, and thus, supervised machine learning approaches that are trained with local data provide the best ability to identify and segment roots in minirhizotron imagery. However, labeling roots for training data (or otherwise) is an extremely tedious and time-consuming task. This paper aims to address this problem by labeling data at the image level (rather than the individual root or root pixel level) and train algorithms to perform individual root pixel level segmentation using MIL strategies. Three MIL methods (multiple instance adaptive cosine coherence estimator, multiple instance support vector machine, multiple instance learning with randomized trees) were applied to root detection and compared to non-MIL approaches. The results show that MIL methods improve root segmentation in challenging minirhizotron imagery and reduce the labeling burden. In our results, multiple instance support vector machine outperformed other methods. The multiple instance adaptive cosine coherence estimator algorithm was a close second with an added advantage that it learned an interpretable root signature which identified the traits used to distinguish roots from soil and did not require parameter selection.
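As a concrete illustration of the MIL setup (not the authors' code), the sketch below mimics an mi-SVM-style alternation on toy data: each image is a "bag" of patch features carrying only an image-level label, and training alternates between fitting a classifier and re-assigning instance labels inside positive bags. The features, bag sizes, and the choice of a linear SVM are stand-ins for the three MIL methods named above.

```python
# Hedged sketch of multiple instance learning from image-level labels.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Toy bags: 8-dim patch features; bag label 1 means "contains root pixels".
bags = [rng.normal(size=(20, 8)) + (b % 2) * np.array([2] + [0] * 7)
        for b in range(10)]
bag_labels = np.array([b % 2 for b in range(10)])

# Initialise every instance with its bag's label.
X = np.vstack(bags)
y = np.concatenate([np.full(len(b), l) for b, l in zip(bags, bag_labels)])

clf = LinearSVC(dual=False)
for _ in range(5):  # alternate: fit classifier, re-label positive bags
    clf.fit(X, y)
    scores = clf.decision_function(X)
    start = 0
    for b, l in zip(bags, bag_labels):
        s = scores[start:start + len(b)]
        if l == 1:
            inst = (s > 0).astype(int)
            inst[np.argmax(s)] = 1  # keep at least one positive per bag
            y[start:start + len(b)] = inst
        start += len(b)
print("instance-level positives:", int(y.sum()))
```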
Conference Paper
Full-text available
India loses 35% of the annual crop yield due to plant diseases. Early detection of plant diseases remains difficult due to the lack of lab infrastructure and expertise. In this paper, we explore the possibility of computer vision approaches for scalable and early plant disease detection. The lack of availability of sufficiently large-scale non-lab data set remains a major challenge for enabling vision based plant disease detection. Against this background, we present PlantDoc: a dataset for visual plant disease detection. Our dataset contains 2,598 data points in total across 13 plant species and up to 17 classes of diseases, involving approximately 300 human hours of effort in annotating internet scraped images. To show the efficacy of our dataset, we learn 3 models for the task of plant disease classification. Our results show that modelling using our dataset can increase the classification accuracy by up to 31%. We believe that our dataset can help reduce the entry barrier of computer vision techniques in plant disease detection.
Article
Full-text available
Keywords: Hyperspectral; Maturity; Peanut; Seed quality; Spectral un-mixing

Seed maturity in peanut (Arachis hypogaea L.) determines economic return to a producer because of its impact on seed weight (yield), and critically influences seed vigour and other quality characteristics. During seed development, the inner mesocarp layer of the pericarp (hull) transitions in colour from white to black as the seed matures. The maturity assessment process involves the removal of the exocarp of the hull and visually categorizing the mesocarp colours into varying colour classes from immature (white, yellow, orange) to mature (brown and black). This visual colour classification is time consuming because the exocarp must be manually removed. In addition, the visual classification process involves human assessment of colours, which leads to large variability of colour classification from observer to observer. A more objective, digital imaging approach to peanut maturity is needed, optimally without the requirement of removal of the hull's exocarp. This study examined the use of a hyperspectral imaging (HSI) process to determine pod maturity with intact pericarps. The HSI method leveraged spectral differences between mature and immature pods within a classification algorithm to identify the mature and immature pods. Therefore, there is no need to remove the exocarp, nor is there a need for subjective colour assessment in the proposed process. The results showed consistently high classification accuracy using samples from different years and cultivars. In addition, the proposed method was capable of estimating a continuous-valued, pixel-level maturity value for individual peanut pods, providing a valuable tool that can be utilized in seed quality research. This new method solves the issues of labour intensity and subjective error that affect all current methods of peanut maturity determination.
Article
Full-text available
We present a novel form of interactive object segmentation called Click Carving which enables accurate segmentation of objects in images and videos with only a few point clicks. Whereas conventional interactive pipelines take the user's initialization as a starting point, we show the value in the system taking the lead even in initialization. In particular, for a given image or video frame, the system precomputes a ranked list of thousands of possible segmentation hypotheses (also referred to as object region proposals) using appearance and motion cues. Then, the user looks at the top-ranked proposals and clicks on the object boundary to carve away erroneous ones. This process iterates (typically 2–3 times), and each time the system revises the top-ranked proposal set, until the user is satisfied with a resulting segmentation mask. In the case of images, this mask is considered the final object segmentation. In the case of videos, however, the object region proposals rely on motion as well, and the resulting segmentation mask in the first frame is further propagated across the video to obtain a complete spatio-temporal object tube. On six challenging image and video datasets, we provide extensive comparisons with both existing work and simpler alternative methods. In all, the proposed Click Carving approach strikes an excellent balance of accuracy and human effort. It outperforms all similarly fast methods, and is competitive with or better than those requiring 2–12 times the effort.
Article
Full-text available
Tree species classification using hyperspectral imagery is a challenging task due to the high spectral similarity between species and large intra-species variability. This paper proposes a solution using the Multiple Instance Adaptive Cosine Estimator (MI-ACE) algorithm. MI-ACE estimates a discriminative target signature to differentiate between a pair of tree species while accounting for label uncertainty. Multi-class species classification is achieved by training a set of one-vs-one MI-ACE classifiers, one for each pair of tree species, and taking a majority vote over the classification results from all classifiers. Additionally, the performance of MI-ACE does not rely on parameter settings that require tuning, resulting in a method that is easy to use in application. Results are presented using training and testing data provided by a data analysis competition aimed at encouraging the development of methods for extracting ecological information from remote sensing. The experiments use the one-vs-one MI-ACE technique in a hierarchical classification, where a tree crown is first classified to one of the genus classes and then to one of the species classes. The species-level rank-1 classification accuracy is 86.4% and the cross entropy is 0.9395 on the testing data, for which the competition organizer did not release ground truth. On the training data, the rank-1 classification accuracy is 95.62% and the cross entropy is 0.2649. The results show that the presented approach can classify not only the majority species classes but also the rare species classes.
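A minimal sketch of the one-vs-one voting scheme described above is given below; MI-ACE itself is not reimplemented here, and the trivial pairwise detectors (sign of a band difference) are hypothetical stand-ins for the discriminative target signatures it would estimate.

```python
# Hedged sketch of one-vs-one pairwise classification with majority voting.
from collections import Counter
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
classes = ["oak", "pine", "maple"]
# Toy "tree crown" spectra: one informative band per class, 5 crowns each.
X = np.vstack([np.eye(3)[i] + 0.1 * rng.normal(size=(5, 3))
               for i in range(3)])

# One hypothetical pairwise detector per class pair (stand-in for MI-ACE).
pair_models = {(a, b): (lambda x, A=a, B=b: x[:, A] - x[:, B])
               for a, b in combinations(range(3), 2)}

votes = [Counter() for _ in range(len(X))]
for (a, b), model in pair_models.items():
    for i, s in enumerate(model(X)):
        votes[i][classes[a] if s > 0 else classes[b]] += 1

pred = [v.most_common(1)[0][0] for v in votes]  # majority vote per crown
print(pred)
```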
Conference Paper
Full-text available
Despite the availability of software to support Affinity Diagramming (AD), practitioners still largely favor physical sticky-notes. Physical notes are easy to set-up, can be moved around in space and offer flexibility when clustering unstructured data. However, when working with mixed data sources such as surveys, designers often trade off the physicality of notes for analytical power. We propose Affinity Lens, a mobile-based augmented reality (AR) application for Data-Assisted Affinity Diagramming (DAAD). Our application provides just-in-time quantitative insights overlaid on physical notes. Affinity Lens uses several different types of AR overlays (called lenses) to help users find specific notes, cluster information, and summarize insights from clusters. Through a formative study of AD users, we developed design principles for data-assisted AD and an initial collection of lenses. Based on our prototype, we find that Affinity Lens supports easy switching between qualitative and quantitative ‘views’ of data, without surrendering the lightweight benefits of existing AD practice.
Article
Full-text available
Background Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition “in the wild”. Results We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide an insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition “in the wild”. Conclusions The results suggest that recognition of segmented leaves is practically a solved problem, when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs makes them suitable for plant recognition “in the wild”, where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
Article
Full-text available
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
Article
Full-text available
The application of Partial Membership Latent Dirichlet Allocation (PM-LDA) for hyperspectral endmember estimation and spectral unmixing is presented. PM-LDA provides a model for hyperspectral image analysis that accounts for spectral variability and incorporates spatial information through the use of superpixel-based 'documents.' In our application of PM-LDA, we employ the Normal Compositional Model, in which endmembers are represented as Normal distributions to account for spectral variability, and proportion vectors are modeled as random variables governed by a Dirichlet distribution. The use of the Dirichlet distribution enforces positivity and sum-to-one constraints on the proportion values. Algorithm results on real hyperspectral data indicate that PM-LDA produces endmember distributions that represent the ground truth classes and their associated variability.
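The generative structure described above (Dirichlet proportions mixed over Normal endmembers) can be sketched directly; the snippet below only samples from an assumed version of that model with made-up dimensions, and does not implement the paper's inference.

```python
# Hedged sketch of the Dirichlet + Normal-endmember generative structure.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_end = 5, 3

# Normal Compositional Model: each endmember is a Normal distribution.
end_means = rng.uniform(0, 1, size=(n_end, n_bands))
end_cov = 0.01 * np.eye(n_bands)

def sample_pixel(alpha=np.ones(n_end)):
    """Draw proportions from a Dirichlet (positive, sum-to-one),
    draw one realisation per endmember, and mix them linearly."""
    p = rng.dirichlet(alpha)
    E = np.array([rng.multivariate_normal(m, end_cov) for m in end_means])
    return p, p @ E

props, pixel = sample_pixel()
print(props.round(3), pixel.round(3))
```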
Article
Full-text available
We present a novel image analysis tool that allows the semi-automated quantification of complex root system architectures in a range of plant species, grown and imaged in a variety of ways. The automatic component of RootNav takes a top-down approach, utilising the powerful Expectation-Maximisation classification algorithm to examine regions of the input image, calculating the likelihood that given pixels correspond to roots. This information is used as the basis for an optimisation approach to root detection and quantification, which effectively fits a root model to the image data. The resulting user experience is akin to defining routes on a motorist's satellite navigation system: RootNav makes an initial optimised estimate of paths from the seed point to root apices, and the user is able to easily and intuitively refine the results using a visual approach. The proposed method is evaluated on winter wheat images (and demonstrated on Arabidopsis thaliana, Brassica napus and Oryza sativa), and results compared to manual analysis. Four exemplar traits are calculated, and show clear illustrative differences between some of the wheat accessions. RootNav, however, provides the structural information needed to support extraction of a wider variety of biologically relevant measures. A separate Viewer tool is provided to recover a rich set of architectural traits from RootNav's core representation.
Conference Paper
Full-text available
We present a new, interactive tool called Intelligent Scissors which we use for image segmentation and composition. Fully automated segmentation is an unsolved problem, while manual tracing is inaccurate and laboriously unacceptable. However, Intelligent Scissors allow objects within digital images to be extracted quickly and accurately using simple gesture motions with a mouse. When the gestured mouse position comes in proximity to an object edge, a live-wire boundary "snaps" to, and wraps around the object of interest. Live-wire boundary detection formulates discrete dynamic programming (DP) as a two-dimensional graph searching problem. DP provides mathematically optimal boundaries while greatly reducing sensitivity to local noise or other intervening structures. Robustness is further enhanced with on-the-fly training which causes the boundary to adhere to the specific type of edge currently being followed, rather than simply the strongest edge in the neighborhood. Boundary cooling automatically freezes unchanging segments and automates input of additional seed points. Cooling also allows the user to be much more free with the gesture path, thereby increasing the efficiency and finesse with which boundaries can be extracted. Extracted objects can be scaled, rotated, and composited using live-wire masks and spatial frequency equivalencing. Frequency equivalencing is performed by applying a Butterworth filter which matches the lowest frequency spectra to all other image components. Intelligent Scissors allow creation of convincing compositions from existing images while dramatically increasing the speed and precision with which objects can be extracted.
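The live-wire idea can be sketched as a shortest-path problem over a pixel graph whose edge costs are low along strong image edges. The snippet below uses plain Dijkstra as a standard stand-in for the two-dimensional DP graph search the paper formulates; the toy cost map and 4-connectivity are assumptions.

```python
# Hedged sketch of a live-wire boundary as a minimum-cost path search.
import heapq
import numpy as np

def live_wire(cost, seed, target):
    """Minimal-cost 4-connected path from seed to target over a cost map."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    pq = [(0.0, seed)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == target:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], target
    while node != seed:
        path.append(node)
        node = prev[node]
    return [seed] + path[::-1]

# Toy cost map: a low-cost "edge" along row 2 attracts the boundary.
cost = np.ones((5, 5)); cost[2, :] = 0.1
print(live_wire(cost, (2, 0), (2, 4)))
```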
Article
Full-text available
The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.
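The evaluation procedure the challenge standardised centres on average precision over a ranked detection list. The sketch below computes a generic area-under-the-PR-curve AP; the exact VOC protocol has extra details (e.g., the 11-point interpolation used in early years), so treat this as an illustration rather than the official scorer.

```python
# Hedged sketch of VOC-style average precision from ranked detections.
import numpy as np

def average_precision(scores, is_true, n_positives):
    order = np.argsort(-np.asarray(scores))          # sort by confidence
    tp = np.asarray(is_true, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / n_positives
    precision = tp_cum / (tp_cum + fp_cum)
    # Integrate precision over recall (area under the PR curve).
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

# Five detections ranked by confidence; three ground-truth objects.
print(average_precision([0.9, 0.8, 0.7, 0.6, 0.5],
                        [1, 0, 1, 1, 0], n_positives=3))
```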
Article
Full-text available
The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for "border matting" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.
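This iterative graph-cut algorithm (GrabCut) ships with OpenCV as cv2.grabCut, so a usage sketch is easy to give; the image path and the user rectangle below are placeholders.

```python
# Hedged usage sketch of OpenCV's implementation of the algorithm above.
import cv2
import numpy as np

img = cv2.imread("image.png")              # placeholder path (BGR image)
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state buffers
fgd_model = np.zeros((1, 65), np.float64)
rect = (10, 10, 200, 150)                  # user-drawn box around the object

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5,
            cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the segmentation.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
segmented = img * fg[:, :, None].astype(np.uint8)
```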
Article
Advances in remote sensing imagery and machine learning applications unlock the potential for developing algorithms for species classification at the level of individual tree crowns at unprecedented scales. However, most approaches to date focus on site-specific applications and a small number of taxonomic groups. Little is known about how well these approaches generalize across broader geographic areas and ecosystems. Leveraging field surveys and hyperspectral remote sensing data from the National Ecological Observatory Network (NEON), we developed a continental-extent model for tree species classification that can be applied to the network, including a wide range of US terrestrial ecosystems. We compared the performance of a model trained with data from 27 NEON sites to models trained with data from each individual site, evaluating advantages and challenges posed by training species classifiers at the US scale. We evaluated the effect of geographic location, topography, and ecological conditions on the accuracy and precision of species predictions (72 out of 77 species). On average, the general model resulted in good overall classification accuracy (micro-F1 score), with better accuracy than site-specific classifiers (average individual tree level accuracy of 0.77 for the general model and 0.70 for site-specific models). Aggregating species to the genus-level increased accuracy to 0.83. Regions with more species exhibited lower classification accuracy. Predicted species were more likely to be confused with congeneric and co-occurring species and confusion was highest for trees with structural damage and in complex closed-canopy forests. The model produced accurate estimates of uncertainty, correctly identifying trees where confusion was likely. Using only data from NEON, this single integrated classifier can make predictions for 20% of all tree species found in forest ecosystems across the entire US, which make up roughly 90% of the upper canopy of the studied ecosystems. This suggests the potential for integrating information from multiple datasets and locations to develop broad scale general models for species classification from hyperspectral imaging.
Article
Minirhizotron technology is widely used to study root growth and development. Yet, standard approaches for tracing roots in minirhizotron imagery are extremely tedious and time consuming. Machine learning approaches can help to automate this task. However, the lack of sufficient annotated training data is a major limitation for the application of machine learning methods. Transfer learning is a useful technique to help with training when available datasets are limited. In this paper, we investigated the effect of pre-trained features from the massive-scale but irrelevant ImageNet dataset and a moderate-scale but relevant peanut root dataset on switchgrass root imagery segmentation. We compiled two minirhizotron image datasets for this study: one with 17,550 peanut root images and another with 28 switchgrass root images. Both datasets were paired with manually labeled ground truth masks. Deep neural networks based on the U-net architecture were used with different pre-trained features as initialization for automated, precise pixel-wise root segmentation in minirhizotron imagery. We observed that features pre-trained on a closely related and moderately sized dataset such as our peanut dataset were more effective than features pre-trained on the large but unrelated ImageNet dataset. We achieved high quality segmentation on the peanut root dataset, with 99.04% pixel-level accuracy, and overcame errors in human-labeled ground truth masks. By applying transfer learning on the limited switchgrass dataset with features pre-trained on the peanut dataset, we obtained 99% segmentation accuracy in switchgrass imagery using only 21 images for training (fine-tuning). Furthermore, the peanut pre-trained features helped the model converge faster and achieve much more stable performance. We present a demo of plant root segmentation for all models at https://github.com/GatorSense/PlantRootSeg.
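The transfer-learning recipe described above can be sketched generically in PyTorch: initialise a segmentation network's encoder from weights pre-trained on a related dataset, then fine-tune on a handful of annotated images. The tiny architecture, the "peanut_encoder.pt" checkpoint name, and the random tensors below are all illustrative, not the authors' released code.

```python
# Hedged sketch of encoder transfer learning for pixel-wise segmentation.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel root/soil logit

    def forward(self, x):
        return self.head(self.encoder(x))

net = TinySegNet()
# net.encoder.load_state_dict(torch.load("peanut_encoder.pt"))  # pretrained
for p in net.encoder.parameters():       # optionally freeze the encoder
    p.requires_grad = False

opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad])
loss_fn = nn.BCEWithLogitsLoss()

x = torch.rand(2, 3, 64, 64)             # stand-in for 2 annotated images
y = (torch.rand(2, 1, 64, 64) > 0.5).float()
opt.zero_grad()
loss = loss_fn(net(x), y)
loss.backward()
opt.step()
```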
Conference Paper
In this paper, we introduce a simple and standalone manual annotation tool for images, audio and video: the VGG Image Annotator (VIA). This is a lightweight, standalone and offline software package that does not require any installation or setup and runs solely in a web browser. The VIA software allows human annotators to define and describe spatial regions in images or video frames, and temporal segments in audio or video. These manual annotations can be exported to plain text data formats such as JSON and CSV and therefore are amenable to further processing by other software tools. VIA also supports collaborative annotation of a large dataset by a group of human annotators. The BSD open source license of this software allows it to be used in any academic project or commercial application.
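Since VIA's exports are plain JSON/CSV, downstream processing is straightforward. The sketch below parses a VIA-style JSON export; the field names match the VIA2 region format as commonly documented, but treat them as an assumption and check them against your own export.

```python
# Hedged sketch of consuming a VIA-style JSON export downstream.
import json

via = json.loads("""{
  "img1.jpg12345": {
    "filename": "img1.jpg",
    "regions": [
      {"shape_attributes": {"name": "rect",
                            "x": 10, "y": 20, "width": 50, "height": 40},
       "region_attributes": {"label": "leaf"}}
    ]
  }
}""")

for entry in via.values():
    for region in entry["regions"]:
        shape = region["shape_attributes"]      # geometry of the region
        label = region["region_attributes"].get("label", "?")
        print(entry["filename"], label, shape)
```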
Article
The measurement of root growth over time, without destructive excavation from soil, is important to understanding the development of plants, communities and ecosystems. However, analyzing root images from the soil is difficult because the contrast between soil particles and roots often presents challenges to segmenting for root extraction. In this paper, we propose a fully automated method based on convolutional neural networks, called SegRoot, adapted for segmenting roots from complex soil backgrounds. Our method eliminates the need for delicate feature design, which requires significant expert knowledge. The trained SegRoot networks learned morphological features at different abstraction levels directly from root images, so the method was expected to generalize and adapt across different root images. Using images of soybean roots, our benchmark SegRoot achieved high segmentation performance, with a testing Dice score of 0.6441 (where 1 is a perfect score). When compared with human-traced root lengths, an excellent correlation in total root length estimation was achieved, with an R² of 0.9791. We also applied SegRoot to images with an entirely different soil type collected from a forest ecosystem. Even without training on those images, good generalization capability was obtained. Additionally, the impact of network capacity was studied in order to find a cost-effective network suitable for in-field application. We believe this automated segmentation method will revolutionize the measurement of plant roots in soil by dramatically increasing the ability to extract data from minirhizotron images.
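For reference, the Dice score used above to evaluate segmentation masks is easy to compute; the toy masks below are illustrative.

```python
# Hedged sketch of the Dice overlap score for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P & T| / (|P| + |T|) for binary masks; 1.0 is perfect."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1    # predicted root pixels
truth = np.zeros((4, 4)); truth[1:3, 1:4] = 1  # ground-truth root pixels
print(round(dice(pred, truth), 4))             # 2*4 / (4 + 6) = 0.8
```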
Article
Image analysis has become a powerful technique for most plant scientists. In recent years dozens of image analysis tools have been published in plant science journals. These tools cover the full spectrum of plant scales, from single cells to organs and canopies. However, the field of plant image analysis remains in its infancy. It still has to overcome important challenges, such as the lack of robust validation practices or the absence of long-term support. In this Opinion article, I: (i) present the current state of the field, based on data from the plant-image-analysis.org database; (ii) identify the challenges faced by its community; and (iii) propose workable ways of improvement.
Conference Paper
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in understanding an object's precise 2D location. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old, along with per-instance segmentation masks. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
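The per-instance segmentation masks are typically consumed through the pycocotools API distributed with the dataset. The sketch below shows the usual access pattern; the annotation-file path refers to a later (2017) release and is a placeholder.

```python
# Hedged usage sketch of the pycocotools API for per-instance masks.
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")   # placeholder path
cat_ids = coco.getCatIds(catNms=["dog"])
img_ids = coco.getImgIds(catIds=cat_ids)
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0], catIds=cat_ids))
mask = coco.annToMask(anns[0])    # per-instance binary segmentation mask
print(mask.shape, mask.sum())
```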
Article
In this paper we describe a new technique for general purpose interactive segmentation of N-dimensional images. The user marks certain pixels as "object" or "background" to provide hard constraints for segmentation. Additional soft constraints incorporate both boundary and region information. Graph cuts are used to find the globally optimal segmentation of the N-dimensional image. The obtained solution gives the best balance of boundary and region properties among all segmentations satisfying the constraints. The topology of our segmentation is unrestricted and both "object" and "background" segments may consist of several isolated parts. Some experimental results are presented in the context of photo/video editing and medical image segmentation. We also demonstrate an interesting Gestalt example. A fast implementation of our segmentation method is possible via a new max-flow algorithm in (2).
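The seeded graph-cut formulation can be sketched on a tiny one-dimensional "image": region terms connect each pixel to the object/background terminals, neighbouring pixels share a smoothness capacity, hard seeds get infinite-capacity terminal links, and the minimum s-t cut yields the segmentation. The capacities below are made up, and networkx stands in for the paper's custom max-flow solver.

```python
# Hedged sketch of interactive segmentation as a minimum s-t cut.
import networkx as nx

pixels = [0.9, 0.8, 0.4, 0.1]   # toy intensities along a 1-D "image"
G = nx.DiGraph()
for i, v in enumerate(pixels):
    G.add_edge("obj", i, capacity=v)          # region term to source
    G.add_edge(i, "bkg", capacity=1.0 - v)    # region term to sink
for i in range(len(pixels) - 1):              # boundary (smoothness) term
    G.add_edge(i, i + 1, capacity=0.5)
    G.add_edge(i + 1, i, capacity=0.5)

# Hard constraints: user-marked seeds get infinite-capacity links.
G["obj"][0]["capacity"] = float("inf")        # pixel 0 marked "object"
G[3]["bkg"]["capacity"] = float("inf")        # pixel 3 marked "background"

cut_value, (obj_side, _) = nx.minimum_cut(G, "obj", "bkg")
print(sorted(n for n in obj_side if isinstance(n, int)))  # object pixels
```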
Article
Applications and limitations of the minirhizotron technique (non-destructive) in relation to two frequently used destructive methods (soil coring and ingrowth cores) are discussed. Sequential coring provides data on standing crop, but it is difficult to obtain data on root biomass production. Ingrowth cores can provide a quick estimate of relative fine-root growth when root growth is rapid. One limitation of the ingrowth core is that no information on the time of ingrowth and mortality is obtained. The minirhizotron method, in contrast to the destructive methods, permits simultaneous calculation of fine-root length production, mortality and turnover. The same fine-root segment in the same soil space can be monitored for its lifetime, and stored in a database for processing. The methodological difficulties of separating excavated fine roots into living and dead vitality classes are avoided, since it is possible to judge the successive ageing of individual roots directly from the images. It is concluded that the minirhizotron technique is capable of quantifying root dynamics (root-length production, mortality and longevity) and fine-root decomposition. Additionally, by combining soil core data (biomass, root length and nutrient content) and minirhizotron data (length production and mortality), biomass production and nutrient input into the soil via root mortality and decomposition can be estimated.
Article
Research in object detection and recognition in cluttered scenes requires large image collections with ground truth labels. The labels should provide information about the object classes present in each image, as well as their shape and locations, and possibly other attributes such as pose. Such data is useful for testing, as well as for supervised learning. This project provides a web-based annotation tool that makes it easy to annotate images, and to instantly share such annotations with the community. This tool, plus an initial set of 10,000 images (3,000 of which have been labeled), can be found at http://www.csail.mit.edu/~brussell/research/LabelMe/intro.html
Article
Minirhizotrons provide detailed information on the production, life history and mortality of fine roots. However, manual processing of minirhizotron images is time-consuming, limiting the number and size of experiments that can reasonably be analysed. Previously, an algorithm was developed to automatically detect and measure individual roots in minirhizotron images. Here, species-specific root classifiers were developed to discriminate detected roots from bright background artifacts. Classifiers were developed from training images of peach (Prunus persica), freeman maple (Acer x freemanii) and sweetbay magnolia (Magnolia virginiana) using the Adaboost algorithm. True- and false-positive rates for classifiers were estimated using receiver operating characteristic curves. Classifiers gave true positive rates of 89-94% and false positive rates of 3-7% when applied to nontraining images of the species for which they were developed. The application of a classifier trained on one species to images from another species resulted in little or no reduction in accuracy. These results suggest that a single root classifier can be used to distinguish roots from background objects across multiple minirhizotron experiments. By incorporating root detection and discrimination algorithms into an open-source minirhizotron image analysis application, many analysis tasks that are currently performed by hand can be automated.
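The evaluation recipe described above (boosted classifiers judged by true/false-positive rates on ROC curves) can be sketched with off-the-shelf tools; scikit-learn's AdaBoost stands in for the paper's implementation, and the toy "root vs. bright artifact" features are made up.

```python
# Hedged sketch of AdaBoost classification with an ROC operating point.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 200
# Feature 0 separates roots (label 1) from background artifacts (label 0).
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)
X[:, 0] += 2.0 * y

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
scores = clf.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, scores)

# Pick the operating point closest to a 5% false-positive rate.
i = int(np.argmin(np.abs(fpr - 0.05)))
print(f"TPR {tpr[i]:.2f} at FPR {fpr[i]:.2f} (threshold {thresholds[i]:.2f})")
```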
Image processing with ImageJ
  • Michael D Abràmoff
  • Paulo J Magalhães
  • Sunanda J Ram
GLO-Roots: an imaging platform enabling multidimensional characterization of soil-grown root systems
  • Rubén Rellán-Álvarez
  • Guillaume Lobet
  • Heike Lindner
  • Pierre-Luc Pradier
  • Jose Sebastian
  • Muh-Ching Yee
  • Yu Geng
  • Charlotte Trontin
  • Therese LaRue
  • Amanda Schrager-Lavelle
Evaluation of Postharvest Senescence of Broccoli via Hyperspectral Imaging
  • Xiaolei Guo
  • Yogesh K Ahlawat
  • Tie Liu
  • Alina Zare
The Cityscapes Dataset
  • Marius Cordts
  • Mohamed Omran
  • Sebastian Ramos
  • Timo Scharwächter
  • Markus Enzweiler
  • Rodrigo Benenson
  • Uwe Franke
  • Stefan Roth
  • Bernt Schiele
Remarkable similarity in timing of absorptive fine-root production across 11 diverse temperate tree species in a common garden
  • Jennifer M Withington
  • Marc Goebel
  • Bartosz Bułaj
  • Jacek Oleksyn
  • Peter B Reich
  • David M Eissenstat
Computer vision annotation tool: a universal approach to data annotation
  • Boris Sekachev
  • Nikita Manovich
  • Andrey Zhavoronkov
PRMI: A Dataset of Minirhizotron Images for Diverse Plant Root Studies (2021)
  • Weihuang Xu
  • Guohao Yu
  • Yiming Cui
  • Romain Gloaguen
  • Alina Zare
  • Jason Bonnette
  • Joel Reyes-Cabrera
  • Ashish B Rajurkar
  • Diane Rowland
  • Roser Matamala