On the use of the overlapping area matrix for image segmentation evaluation: A survey and new performance measures

University of the Balearic Islands, Department of Mathematics and Computer Science, Palma de Mallorca, Spain
Pattern Recognition Letters (Impact Factor: 1.27). 01/2006; DOI: 10.1016/j.patrec.2006.05.002
Source: DBLP

ABSTRACT The development of common and reasonable criteria for evaluating and comparing the performance of segmentation algorithms has long been a concern for researchers in the area. As discussed in the paper, some of the proposed measures are not adequate for general images (i.e. images of any sort of scene, without any assumption about the features of the scene objects or the illumination distribution) because they assume a certain distribution of pixel gray-level or colour values in the interior of the regions. This paper reviews performance measures that do not make such an assumption and proposes a set of new performance measures along the same lines, called the percentage of correctly grouped pixels (CG), the percentage of over-segmentation (OS) and the percentage of under-segmentation (US). Apart from accounting for misclassified pixels, the proposed set of measures is intended to quantify the fragmentation of reference regions into output regions and vice versa. A comparison with similar measures is provided at the end of the paper.
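The paper's exact formulas for CG, OS and US are not reproduced in this listing, but the underlying bookkeeping is the overlapping area matrix: a table counting, for each pair of a reference region and an output region, how many pixels they share. A minimal sketch of that matrix and of how fragmentation (a reference region split across several output regions) shows up in it, on toy 1-D label arrays (the variable names are illustrative, not the paper's notation):

```python
from collections import Counter

def overlap_matrix(reference, output):
    """Pixels shared by each (reference region, output region) pair."""
    counts = Counter(zip(reference, output))
    return {(r, o): counts.get((r, o), 0)
            for r in set(reference) for o in set(output)}

# Toy 1-D "images": the reference has two regions; the output
# splits reference region 1 into output regions 1 and 3.
reference = [1, 1, 1, 1, 2, 2, 2, 2]
output    = [1, 1, 3, 3, 2, 2, 2, 2]
C = overlap_matrix(reference, output)

# Reference region 1 overlaps two output regions: over-segmentation.
fragments = sum(1 for o in set(output) if C[(1, o)] > 0)
```

Under-segmentation is read off the same matrix by scanning columns instead of rows: an output region overlapping several reference regions merges objects that should be distinct.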

    ABSTRACT: Crop identification on specific parcels and the assessment of soil management practices are important for agro-ecological studies, greenhouse gas modeling, and agrarian policy development. Traditional pixel-based analysis of remotely sensed data results in inaccurate identification of some crops due to pixel heterogeneity, mixed pixels, spectral similarity, and crop pattern variability. These problems can be overcome using object-based image analysis (OBIA) techniques, which incorporate new spectral, textural and hierarchical features after segmentation of imagery. We combined OBIA and decision tree (DT) algorithms to develop a methodology, named Object-based Crop Identification and Mapping (OCIM), for a multi-seasonal assessment of a large number of crop types and field status. In our approach, we explored several vegetation indices (VIs) and textural features derived from visible, near-infrared and short-wave infrared (SWIR) bands of ASTER satellite scenes collected during three distinct growing-season periods (mid-spring, early-summer and late-summer). OCIM was developed for 13 major crops cultivated in the agricultural area of Yolo County in California, USA. The model design was built for four different scenarios (combinations of three or two periods) using two independent training and validation datasets, and the best DTs resulted in an error rate of 9% for the three-period model and between 12 and 16% for the two-period models. Next, the selected DT was used for the thematic classification of the entire cropland area, and the mapping was then evaluated by applying the confusion matrix method to the independent testing dataset, which reported 79% overall accuracy. OCIM detected intra-class variations in most crops, attributed to variability from local crop calendars, tree-orchard structures and land management operations.
Spectral variables (based on VIs) contributed around 90% to the models, although textural variables were necessary to discriminate among most of the permanent crop fields (orchards, vineyard, alfalfa and meadow). Features extracted from late-summer imagery contributed around 60% to classification model development, whereas mid-spring and early-summer imagery contributed around 30% and 10%, respectively. The Normalized Difference Vegetation Index (NDVI) was used to identify the main groups of crops based on the presence and vigor of green vegetation within the fields, contributing around 50% to the models. In addition, other VIs based on SWIR bands were also crucial to crop identification because of their potential to detect field properties like moisture, vegetation vigor, non-photosynthetic vegetation and bare soil. The OCIM method was built using interpretable rules based on physical properties of the crops studied, and it was successful for object-based feature selection and crop identification. Research Highlights: ► Decision tree modeling is suitable to identify crops at different field conditions. ► Consideration of intra-class variations is required to improve classifications. ► Textural features improve discrimination among heterogeneous permanent crops. ► Information from NIR and SWIR bands is needed for detailed crop identification. ► Crop identification requires the study of field status in distinct growing seasons.
    Remote Sensing of Environment 06/2011; 115(6):1301-1316. · 5.10 Impact Factor
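The NDVI central to the study above is a standard band ratio: vigorous green vegetation reflects strongly in the near-infrared and absorbs red light, so (NIR − Red)/(NIR + Red) approaches 1 over dense canopy and stays near 0 over bare soil. A minimal per-pixel sketch (the reflectance values below are illustrative, not taken from the paper):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectances."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

# Dense green vegetation: strong NIR reflection, strong red absorption.
vigorous = ndvi(0.50, 0.08)
# Bare soil: NIR and red reflectances are similar, so NDVI is near zero.
bare_soil = ndvi(0.30, 0.25)
```

In an object-based workflow like OCIM, such an index would be averaged over each segmented field object rather than evaluated per pixel, which is what makes the features robust to mixed pixels.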
    ABSTRACT: This paper presents a clustering-based color segmentation method that focuses on a desired object. Since classical methods suffer from a lack of robustness, salient colors appearing in the object are used to tune the algorithm intuitively. These salient colors are extracted by a psychovisual scheme followed by a peak-finding step. Results on various test sequences, covering a representative set of real outdoor videos, show an improvement over a simple implementation of the same K-means-oriented segmentation algorithm with an ad hoc parameter-setting strategy, and over the well-known mean-shift algorithm.
    EURASIP Journal on Image and Video Processing 01/2008; · 0.57 Impact Factor
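The psychovisual salient-color extraction in the paper above is not detailed in this listing; the core idea it feeds, however, is seeding K-means in color space with known object colors instead of random centers. A minimal pure-Python sketch of that seeding scheme (pixel values and seed colors are made up for illustration):

```python
def kmeans_colors(pixels, seeds, iters=10):
    """Plain k-means on RGB triples, initialised from given seed colours."""
    centers = [list(c) for c in seeds]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assignment step: nearest centre by squared Euclidean distance.
        for i, p in enumerate(pixels):
            labels[i] = min(range(len(centers)),
                            key=lambda k: sum((a - b) ** 2
                                              for a, b in zip(p, centers[k])))
        # Update step: move each centre to the mean of its members.
        for k in range(len(centers)):
            members = [pixels[i] for i in range(len(pixels)) if labels[i] == k]
            if members:
                centers[k] = [sum(ch) / len(members) for ch in zip(*members)]
    return labels, centers

# Two reddish and two bluish pixels; salient colours act as initial centres.
pixels = [(250, 10, 10), (240, 20, 15), (10, 10, 245), (5, 25, 250)]
seeds = [(255, 0, 0), (0, 0, 255)]
labels, centers = kmeans_colors(pixels, seeds)
```

Seeding with salient colors removes the sensitivity to random initialization that the paper identifies as a weakness of the ad hoc K-means baseline.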
    ABSTRACT: There are close relationships between three popular approaches to image thresholding, namely Ridler and Calvard’s iterative-selection (IS) method, Kittler and Illingworth’s minimum-error-thresholding (MET) method and Otsu’s method. The relationships can be briefly described as: the IS method is an iterative version of Otsu’s method; Otsu’s method can be regarded as a special case of the MET method. The purpose of this correspondence is to provide a comprehensive clarification, some practical implications and further discussions of these relationships.
    Pattern Recognition Letters 04/2012; 33(6):793–797. · 1.27 Impact Factor
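The IS/Otsu relationship described above is easy to see in code: Ridler and Calvard's iterative selection repeatedly resets the threshold to the average of the two class means until it stabilizes, and that fixed point coincides with Otsu's optimum for well-separated bimodal data. A minimal sketch on a toy gray-level list (values chosen for illustration):

```python
def iterative_selection(gray, t0=128):
    """Ridler-Calvard IS: fixed point of the two-class mean average."""
    t = t0
    while True:
        low = [g for g in gray if g <= t]
        high = [g for g in gray if g > t]
        if not low or not high:          # degenerate split: keep current t
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < 0.5:         # converged to the fixed point
            return new_t
        t = new_t

# Bimodal toy data: dark background near 40, bright object near 200.
gray = [38, 40, 42, 44, 198, 200, 202, 204]
t = iterative_selection(gray)
```

The MET view adds class-variance terms to the criterion, which is why Otsu's method (equal, implicit variances) falls out as its special case.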