Despite numerous recent efforts, 3D object retrieval based on partial shape queries remains a challenging problem that is far from solved. The problem can be stated as follows: given a partial view of a shape as the query, retrieve all partially similar 3D models from a repository. The objective of this track is to evaluate the performance of partial 3D object retrieval methods on partial shape queries of various scan qualities and degrees of partiality. This retrieval problem frequently arises in cultural heritage applications, in which partial scans of objects are used to query a dataset of geometrically distinct classes.
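The abstract above does not fix a particular matching technique; purely for illustration, a minimal sketch of the retrieval setting could look as follows, assuming a hypothetical descriptor has already been computed for the partial query and for every repository model (the descriptor itself and the Euclidean distance are assumptions, not part of the original text):

```python
import numpy as np

def retrieve(query_descriptor, repository_descriptors):
    """Rank repository models by ascending distance to the query descriptor.

    query_descriptor: 1-D feature vector of the partial query.
    repository_descriptors: 2-D array, one row per repository model.
    Returns repository indices, most similar model first.
    """
    distances = np.linalg.norm(repository_descriptors - query_descriptor, axis=1)
    return np.argsort(distances)

# Toy example with three repository models and a 2-D descriptor space.
repo = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.5, 0.5]])
query = np.array([0.9, 0.1])
ranking = retrieve(query, repo)  # model 0 is closest, then 2, then 1
```

Real systems differ in how the descriptor is built and how partiality is handled, but the output is always such a ranked list, which is what the evaluation platforms below consume.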
Performance evaluation is one of the main research topics in information retrieval. Evaluation metrics are used to quantify various performance aspects of a retrieval method. These metrics help to identify the optimal method for a specific retrieval challenge and also allow its parameters to be fine-tuned so as to achieve robust operation for a given set of requirements. In this work, we present RETRIEVAL, a Web-based integrated platform for information retrieval performance evaluation. It offers a number of metrics popular within the scientific community, composing an efficient framework for performance evaluation. We discuss the functionality of RETRIEVAL, covering important aspects such as the data input approaches, the user-level parameterisation of the performance metrics, the evaluation scenarios, the interactive plots, and the performance reports repository that offers both archiving and download functionalities.
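The abstract does not enumerate the specific metrics; as a hedged sketch, two scalar measures commonly reported in shape-retrieval evaluation (nearest neighbour, and the tier measures) can be computed from a ranked list of class labels like this (the function names and the label-list input format are illustrative assumptions):

```python
def nearest_neighbour(ranked_labels, query_label):
    # 1 if the top-ranked result shares the query's class, else 0.
    return int(ranked_labels[0] == query_label)

def tier(ranked_labels, query_label, k):
    # Fraction of the query's class retrieved within the first k results.
    relevant_in_top_k = sum(1 for lbl in ranked_labels[:k] if lbl == query_label)
    class_size = sum(1 for lbl in ranked_labels if lbl == query_label)
    return relevant_in_top_k / class_size

# Toy ranked list for a query of class 'A' (class size 3):
ranked = ['A', 'B', 'A', 'A', 'C', 'B']
nn = nearest_neighbour(ranked, 'A')       # 1
ft = tier(ranked, 'A', 3)                 # first tier: 2/3
st = tier(ranked, 'A', 6)                 # second tier: 3/3 = 1.0
```

First tier uses k equal to the class size, second tier uses twice that; both reward methods that concentrate relevant models near the top of the ranking.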
The continuous evolution of 3D computer graphics and the progress of 3D digitization systems have resulted in a steady increase in the available 3D content. The widespread use of 3D objects in diverse domains has helped establish 3D object retrieval as an active research field. Objectively evaluating the performance of retrieval methodologies requires objective benchmarking schemes. In this work, we provide a comprehensive overview of state-of-the-art evaluation methodologies, covering not only the performance measures but also the corresponding benchmark datasets. Meaningful benchmark datasets are discussed, and a detailed list of publicly available 3D model repositories is given, organised in terms of application domain, content magnitude and data type.
Performance benchmarking is an absolute necessity when attempting to objectively quantify the performance of content-based retrieval methods. For many years, a number of plot-based and scalar-based measures, in combination with benchmark datasets, have been used to provide objective results. In this work, we present the first version of an integrated on-line content-based retrieval evaluation tool, named RETRIEVAL 3D, which can be used to quantify the performance of a retrieval method. The current version of the system offers a set of popular performance measures that can be accessed through a dynamic visualisation environment. The user is able to upload retrieval results using different input data structures (e.g. binary ranked lists, floating-point ranked lists, dissimilarity matrices and groundtruth data) already encountered in the literature, including the SHREC competition series. Moreover, the system provides evaluation mechanisms for benchmark datasets well known within the retrieval research community. It offers parameterisation of the performance measures, enabling the user to assess specific aspects of the evaluated retrieval method. Archiving and downloading of performance reports are among the system's user-oriented functionalities.
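The exact file formats the system accepts are not specified in the abstract; as a minimal sketch of how two of the mentioned input structures relate, a dissimilarity matrix plus groundtruth class labels can be turned into per-query binary ranked lists, from which a plot-independent scalar such as average precision follows (all function names and the evaluation protocol details, e.g. excluding the query from its own ranking, are assumptions for illustration):

```python
import numpy as np

def binary_ranked_lists(dissimilarity, labels):
    """For each query (matrix row), sort the other models by ascending
    dissimilarity and mark each as relevant (1) or non-relevant (0)
    according to whether it shares the query's groundtruth class."""
    n = len(labels)
    lists = []
    for q in range(n):
        order = [i for i in np.argsort(dissimilarity[q]) if i != q]
        lists.append([int(labels[i] == labels[q]) for i in order])
    return lists

def average_precision(binary_list):
    # Mean of precision values at each rank where a relevant model appears.
    hits, precisions = 0, []
    for rank, rel in enumerate(binary_list, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

# Toy example: models 0 and 1 share class 'A', model 2 is class 'B'.
D = np.array([[0.0, 0.2, 0.9],
              [0.2, 0.0, 0.8],
              [0.9, 0.8, 0.0]])
lists = binary_ranked_lists(D, ['A', 'A', 'B'])
ap_q0 = average_precision(lists[0])  # relevant model ranked first: AP = 1.0
```

Averaging the per-query values over all queries yields mean average precision, one of the scalar measures typically reported alongside precision-recall plots.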