Article

Localization of furniture parts by integrating range and intensity data robust against depths with low signal-to-noise ratio

... In prior work, a sensor environment consisting of stereo cameras, magnetic field trackers and datagloves was developed. The sensor environment was extended with two robot heads, each with pan-tilt-unit, stereo and RGB-D (depth) camera, see Figure 1. 3. In order to detect and localize objects in the learning environment, the 2D-vision library IVT [4] and 2D/3D object localization [74] were incorporated. Additionally, tactile sensors to measure forces applied with the fingertips were developed, which will be described in Section 4.1. ...
... In [4] the focus is on indoor mobile robotics, and a TOF camera is used to track parts of an object (e.g., the legs of a chair) over subsequent range images using a particle filter: the volumetric representation of parts is made with bounding boxes. Similar approaches have been proposed in [38], describing an approach to recognize furniture based on local features, as well as in [39], integrating grey scale values and depth information for furniture recognition. The authors of [40] propose once again to represent objects through their component parts, but focus on the problem of generating novel views when it is not possible to store in the memory a complete set of views for every object. ...
Article
Full-text available
A new approach enabling a mobile robot to recognize and classify furniture-like objects composed of assembled parts using a Microsoft Kinect is presented. Starting from considerations about the structure of furniture-like objects, i.e., objects which can play a role in the course of a mobile robot mission, the 3D point cloud returned by the Kinect is first segmented into a set of “almost convex” clusters. Objects are then represented by means of a graph expressing mutual relationships between such clusters. Off-line, snapshots of the same object taken from different positions are processed and merged, in order to produce multiple-view models that are used to populate a database. On-line, as soon as a new object is observed, a run-time window of subsequent snapshots is used to search for a correspondence in the database. Experiments validating the approach with a set of objects (i.e., chairs, tables, but also other robots) are reported and discussed in detail.
Article
The ability of fast and automatic volume measurement of merchandise is of paramount importance in logistics. In this paper, we address the problem of volume estimation of goods stacked on pallets and transported in pallet trucks. Practical requirements of this industrial application are that the load of the moving pallet truck has to be measured in real-time, and that the measurement system should be non-invasive and non-contact, as well as robust and accurate. The main contribution of this paper is the design of simple, flexible, fast and robust algorithms for volume estimation. A significant feature of these algorithms is that they can be used in industrial environments and that they perform properly even when they use the information provided by different range devices working simultaneously. In addition, we propose a novel perception system for volume measurement consisting of a heterogeneous set of range sensors based on different technologies, such as time of flight and structured light, working simultaneously. Another key point of our proposal is the investigation of the performance of these sensors in terms of precision and accuracy under a diverse set of conditions. We also analyse their interferences and performance when they operate at the same time. Then, the analysis of this study is used to determine the final configuration of the cameras for the perception system. Real experiments prove the performance and reliability of the approach and demonstrate its validity for the industrial application considered.
Article
Full-text available
Range sensors for assisted backup and parking have potential for saving human lives and for facilitating parking in challenging situations. However, important features such as curbs and ramps are difficult to detect using ultrasonic or microwave sensors. TOF imaging range sensors may be used successfully for this purpose. In this paper we present a study concerning the use of the Canesta TOF camera for recognition of curbs and ramps. Our approach is based on the detection of individual planar patches using CC-RANSAC, a modified version of the classic RANSAC robust regression algorithm. Whereas RANSAC uses the whole set of inliers to evaluate the fitness of a candidate plane, CC-RANSAC only considers the largest connected components of inliers. We provide experimental evidence that CC-RANSAC provides a more accurate estimation of the dominant plane than RANSAC with a smaller number of iterations.
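The difference between RANSAC and CC-RANSAC described above can be sketched in a few lines: the only change is the fitness function, which scores a candidate plane by its largest 4-connected inlier patch instead of the total inlier count. The sketch below is illustrative, not the authors' implementation; the function names, the organized-grid assumption and the thresholds are my own.

```python
import numpy as np

def largest_connected_component(mask):
    """Size of the largest 4-connected True region in a boolean grid (flood fill)."""
    visited = np.zeros_like(mask, dtype=bool)
    best = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                stack, size = [(i, j)], 0
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                best = max(best, size)
    return best

def cc_ransac_plane(points, grid_shape, n_iter=200, tol=0.02, rng=None):
    """Fit a plane to an organized point cloud (points: H*W x 3, row-major grid).
    Fitness of a candidate plane = size of the largest connected inlier patch,
    not the total inlier count as in classic RANSAC."""
    rng = np.random.default_rng(rng)
    best_plane, best_score = None, -1
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        score = largest_connected_component(inliers.reshape(grid_shape))
        if score > best_score:
            best_score, best_plane = score, (n, d)
    return best_plane, best_score
```

Restricting the score to one connected patch is what keeps a plane hypothesis that grazes two separate surfaces (e.g. road plus curb top) from outscoring the true dominant plane.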
Article
Full-text available
We discuss the joint calibration of novel 3D range cameras based on the time-of-flight principle with the Photonic Mixing Device (PMD) and standard 2D CCD cameras. Due to the small field-of-view (fov) and low pixel resolution, PMD-cameras are difficult to calibrate with traditional calibration methods. In addition, the 3D range data contains systematic errors that need to be compensated. Therefore, a calibration method is developed that can estimate full intrinsic calibration of the PMD-camera including optical lens distortions and systematic range errors, and is able to calibrate the external orientation together with multiple 2D cameras that are rigidly coupled to the PMD-camera. The calibration approach is based on a planar checkerboard pattern as calibration reference, viewed from multiple angles and distances. By combining the PMD-camera with standard CCD-cameras the internal camera parameters can be estimated more precisely and the limitations of the small fov can be overcome. Furthermore we use the additional cameras to calibrate the systematic depth measurement error of the PMD-camera. We show that the correlation between rotation and translation estimation is significantly reduced with our method.
Conference Paper
Full-text available
Current research in service robotics is more and more aimed at applications in real home environments. In such a context, the ability to track and understand human movements is very important for a robot, for human-robot interaction as well as other purposes, e.g. proactive behavior; gestures and motions are an important channel of information about the human's intentions. Before actual motion tracking can take place, it is necessary to initialize the tracking system with a hypothesis about the position and pose of the person who shall be tracked. For collaboration with humans in an unknown environment, the system should perform this step automatically. Therefore, we propose an approach to initialize a usable model of a human standing in front of the system by determining the position and height of a human from its silhouette with a cascade of simple metrics, e.g. compactness and position of the neck.
Conference Paper
Full-text available
This paper presents a new method for extracting object edges from range images obtained by a 3D range imaging sensor, the SwissRanger SR-3000. In the range-image preprocessing stage, the method enhances object edges by using surface normal information, and it employs the Hough Transform to detect straight-line features in the Normal-Enhanced Range Image (NERI). Due to the noise in the sensor's range data, a NERI contains corrupted object surfaces that may result in unwanted edges and greatly encumber the extraction of linear features. To alleviate this problem, a Singular Value Decomposition (SVD) filter is developed to smooth object surfaces. The efficacy of the edge extraction method is validated by experiments in various environments.
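As a rough illustration of why normal information enhances edges: a crease that is nearly invisible in raw depth values produces a sharp discontinuity in the per-pixel normals. The sketch below is a minimal stand-in for that preprocessing idea, not the paper's NERI pipeline; it assumes unit pixel spacing and an orthographic depth grid.

```python
import numpy as np

def range_image_normals(depth):
    """Per-pixel surface normals of a depth grid from its row/column gradients.
    Contrast in the resulting normal map highlights creases and object edges
    that are weak in the raw depth values."""
    gy, gx = np.gradient(depth.astype(float))          # d(depth)/drow, d(depth)/dcol
    n = np.dstack([-gx, -gy, np.ones_like(depth, dtype=float)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)  # unit normals
```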
Article
Full-text available
This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
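The core of the detector described above, marking edges at maxima in gradient magnitude of a Gaussian-smoothed image, can be sketched as follows. This is a simplified single-scale illustration without hysteresis thresholding or feature synthesis; all names and the threshold are my own.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing via 1D convolutions along rows and columns."""
    k = gaussian_kernel(sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)

def canny_like_edges(img, sigma=1.0, thresh=0.1):
    """Mark edges at local maxima of the gradient magnitude of a
    Gaussian-smoothed image (the core of the optimal step-edge detector)."""
    g = smooth(img.astype(float), sigma)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    edges = np.zeros_like(mag, dtype=bool)
    # non-maximum suppression along the quantized gradient direction
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            if mag[i, j] < thresh:
                continue
            if abs(gx[i, j]) >= abs(gy[i, j]):   # mostly horizontal gradient
                nb = (mag[i, j - 1], mag[i, j + 1])
            else:                                # mostly vertical gradient
                nb = (mag[i - 1, j], mag[i + 1, j])
            edges[i, j] = mag[i, j] >= max(nb)
    return edges
```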
Conference Paper
Full-text available
A fast but nevertheless accurate approach for surface extraction from noisy 3D point clouds is presented. It consists of two parts, namely a plane fitting and a polygonalization step. Both exploit the sequential nature of 3D data acquisition on mobile robots in form of range images. For the plane fitting, this is used to revise the standard mathematical formulation to an incremental version, which allows a linear computation. For the polygonalization, the neighborhood relation in range images is exploited. Experiments are presented using a time-of-flight range camera in form of a Swissranger SR-3000. Results include lab scenes as well as data from two runs of the rescue robot league at the RoboCup German Open 2007 with 1,414 and 2,343 sensor snapshots, respectively. The 36·10<sup>6</sup> and 59·10<sup>6</sup> points from the two point clouds are reduced to about 14·10<sup>3</sup> and 23·10<sup>3</sup> planes, respectively, with only about 0.2 sec of total computation time per snapshot while the robot moves along. Uncertainty analysis of the computed plane parameters is presented as well.
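The incremental reformulation mentioned above can be illustrated with running sums: adding a point is O(1), and the least-squares plane over all points seen so far falls out of an eigen-decomposition of the accumulated second-moment matrix. This is a sketch under my own naming, not the authors' code.

```python
import numpy as np

class IncrementalPlaneFit:
    """Total-least-squares plane fit maintained incrementally: only the
    running sums of the points and of their outer products are stored, so
    adding a point is O(1) and the fit never has to be recomputed from
    scratch."""
    def __init__(self):
        self.n = 0
        self.s = np.zeros(3)        # sum of points
        self.ss = np.zeros((3, 3))  # sum of outer products

    def add(self, p):
        p = np.asarray(p, dtype=float)
        self.n += 1
        self.s += p
        self.ss += np.outer(p, p)

    def plane(self):
        """Return (normal, d) of the best-fit plane n.x + d = 0."""
        mean = self.s / self.n
        cov = self.ss / self.n - np.outer(mean, mean)
        w, v = np.linalg.eigh(cov)
        normal = v[:, 0]            # eigenvector of the smallest eigenvalue
        return normal, -normal @ mean
```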
Conference Paper
Full-text available
Supervision of long-lasting extensive botanic experiments is a promising robotic application that some recent technological advances have made feasible. Plant modelling for this application has strong demands, particularly in what concerns 3D information gathering and speed. This paper shows that Time-of-Flight (ToF) cameras achieve a good compromise between both demands, providing a suitable complement to color vision. A new method is proposed to segment plant images into their composite surface patches by combining hierarchical color segmentation with quadratic surface fitting using ToF depth data. Experimentation shows that the interpolated depth maps derived from the obtained surfaces fit well the original scenes. Moreover, candidate leaves to be approached by a measuring instrument are ranked, and then robot-mounted cameras move closer to them to validate their suitability to being sampled. Some ambiguities arising from leaves overlap or occlusions are cleared up in this way. The work is a proof-of-concept that dense color data combined with sparse depth as provided by a ToF camera yields a good enough 3D approximation for automated plant measuring at the high throughput imposed by the application.
Conference Paper
Full-text available
This paper proposes a decision making and control supervision system for a multi-modal service robot. With partially observable Markov decision processes (POMDPs) utilized for scenario-level decision making, the robot is able to deal with uncertainty in both observation and environment dynamics and can balance multiple, conflicting goals. By using a flexible task sequencing system for fine-grained robot component coordination, complex sub-activities, beyond the scope of current POMDP solutions, can be performed. The sequencer bridges the gap of abstraction between abstract POMDP models and the physical world concerning actions, and in the other direction multi-modal perception is filtered while preserving measurement uncertainty and model soundness. A realistic scenario for an autonomous, anthropomorphic service robot, including the modalities of mobility, multi-modal human-robot interaction and object grasping, has been performed robustly by the system for several hours. The proposed filter-POMDP reasoner is compared with classic POMDP as well as MDP decision making and a baseline finite state machine controller on the physical service robot, and the experiments exhibit the characteristics of the different algorithms. Service robots are meant to act autonomously and robustly in real world environments. Yet, observations of the physical world by robots are limited and noisy, thus the environment is partially observable. Also, the course of events in the real world is never completely deterministic but stochastic. Both aspects of uncertainty need to be regarded by decision making of an autonomous service robot. In general, decision making of a multi-modal robot uses perceptions of multiple sensors together with background knowledge to choose one of the available actions which will contribute most likely to mission success. The chosen action is performed by coordinating available actuators.
This paper introduces a decision making and supervision system considering uncertainty, which utilizes partially observable Markov decision processes (POMDPs) for symbolic, scenario-level decisions. A main focus is bridging the gap of abstraction between symbolic POMDPs and multi-modal, real world perception as well as multiple actuators. Sensor information, including uncertainty, is filtered and fused into belief states. Abstract POMDP decisions are executed by processing sequential task programs to execute more complex and deterministic sub-tasks. The presented approach is evaluated on a physical, autonomous, anthropomorphic service robot within a realistic waiter cup-serving scenario.
Conference Paper
Full-text available
This paper presents an incremental object part detection algorithm using a particle filter. The method infers object parts from 3D data acquired with a range camera. The range information is quantized and enhanced by local structure to partially cope with considerable measurement noise and distortion. The augmented voxel representation allows the adaptation of known track-before-detect algorithms to infer multiple object parts in a range image sequence even when each single observation does not contain enough information to do the detection. The appropriateness of the method is successfully demonstrated by two experiments for chair legs.
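The track-before-detect idea above, accumulating weak per-frame evidence in a filter instead of thresholding each frame, can be illustrated with a minimal bootstrap particle filter in one dimension. The paper operates on voxelized 3D range data; the scalar state, the noise levels and the function names below are purely illustrative.

```python
import numpy as np

def track_before_detect(frames, n_particles=2000, motion_std=0.05,
                        meas_std=1.0, rng=None):
    """Bootstrap particle filter in the spirit of track-before-detect:
    no single frame is thresholded; per-frame likelihoods are accumulated
    in the particle weights until the estimate concentrates.
    Each frame is one noisy scalar observation of the part position."""
    rng = np.random.default_rng(rng)
    particles = rng.uniform(0.0, 10.0, n_particles)            # prior over positions
    for z in frames:
        particles += rng.normal(0.0, motion_std, n_particles)  # diffusion step
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)   # per-frame likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)        # resampling
        particles = particles[idx]
    return particles.mean(), particles.std()
```

With a measurement noise as large as the prior spread, no individual frame localizes the part; only the accumulated posterior does, which is the point of the track-before-detect formulation.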
Conference Paper
Full-text available
In Programming by Demonstration, a flexible representation of manipulation motions is necessary to learn and generalize from human demonstrations. In contrast to subsymbolic representations of trajectories, e.g. based on a Gaussian Mixture Model, a partially symbolic representation of manipulation strategies based on a temporal satisfaction problem with domain constraints is developed. By using constrained motion planning and a geometric constraint representation, generalization to different robot systems and new environments is achieved. In order to plan learned manipulation strategies, the RRT-based algorithm by Stilman et al. is extended to consider that multiple sets of constraints are possible during the extension of the search tree.
Conference Paper
Full-text available
In this paper we address the topic of feature extraction in 3D point cloud data for object recognition and pose identification. We present a novel interest keypoint extraction method that operates on range images generated from arbitrary 3D point clouds, which explicitly considers the borders of the objects identified by transitions from foreground to background. We furthermore present a feature descriptor that takes the same information into account. We have implemented our approach and present rigorous experiments in which we analyze the individual components with respect to their repeatability and matching capabilities and evaluate the usefulness for point feature based object detection methods.
Conference Paper
Full-text available
Our research is focused on the development of robust machine vision algorithms for pattern recognition. We want to give robotic systems the ability to understand more about the external real world. In this paper, we describe a method for detecting ellipses in real world images using the randomized Hough transform with result clustering. A preprocessing phase is used in which real world images are transformed (noise reduction, greyscale transform, edge detection and final binarization) in order to be processed by the actual ellipse detector. The ellipse detector filters out false ellipses that may interfere with the final results. Because usually several "virtual" ellipses are detected for one "real" ellipse, a data clustering scheme is used: the clustering method classifies all detected "virtual" ellipses into their corresponding "real" ellipses. The post-processing phase is similar to vector quantization (VQ) and also determines the actual number of classes, which is unknown a priori.
Chapter
When learning abstract probabilistic decision making models for multi-modal service robots from human demonstrations, alternative courses of events may be missed by human teachers during demonstrations. We present an active model space exploration approach with generalization of observed action effect knowledge leading to interactive requests of new demonstrations to verify generalizations. At first, the robot observes several user demonstrations of interacting humans, including dialog, object poses and human body movement. Discretization and analysis then lead to a symbolic-causal model of a demonstrated task in the form of a preliminary Partially observable Markov decision process. Based on the transition model generated from demonstrations, new hypotheses of unobserved action effects, generalized transitions, can be derived along with a generalization confidence estimate. To validate generalized transitions which have a strong impact on a decision policy, a request generator proposes further demonstrations to human teachers, used in turn to implicitly verify hypotheses. The system has been evaluated on a multi-modal service robot with realistic tasks, including furniture manipulation and execution-time interacting humans.
Article
Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.
Article
We are interested in PDEs (Partial Differential Equations) for smoothing multi-valued images in an anisotropic manner. Starting from a review of existing anisotropic regularization schemes based on diffusion PDEs, we point out the pros and cons of the different equations proposed in the literature. Then, we introduce a new tensor-driven PDE, regularizing images while taking the curvatures of specific integral curves into account. We show that this constraint is particularly well suited to the preservation of thin structures in an image restoration process. A direct link is made between our proposed equation and a continuous formulation of the LICs (Line Integral Convolutions) of Cabral and Leedom (1993). It leads to the design of a very fast and stable algorithm that implements our regularization method by successive integrations of pixel values along curved integral lines. Besides, the scheme numerically performs with sub-pixel accuracy and thus preserves thin image structures better than classical finite-difference discretizations. Finally, we illustrate the efficiency of our generic curvature-preserving approach, in terms of speed and visual quality, with different comparisons and various applications requiring image smoothing: color image denoising, inpainting and image resizing by nonlinear interpolation.
Conference Paper
This paper presents a robust real-time algorithm to automatically detect and accurately locate ellipse objects in digital images. The algorithm consists of three steps. First, the edge pixels are extracted using the Canny edge detection algorithm and a noise removal process is run to remove the non-ellipse edge points. In the second step, a direct least-square-fitting algorithm is used to calculate the ellipse parameters from each cluster of pixels. In the third step, a robust criterion is developed to identify valid ellipses. The algorithm is implemented in Visual C++ and tested on a laptop powered by an Intel Centrino Duo CPU at 1.8 GHz. The preliminary experiment shows the algorithm's speed is 212 ms/image on average for an image size of 640×480.
Article
This paper introduces an integrated local surface descriptor for surface representation and 3D object recognition. A local surface descriptor is characterized by its centroid, its local surface type and a 2D histogram. The 2D histogram shows the frequency of occurrence of shape index values vs. the angles between the normal of reference feature point and that of its neighbors. Instead of calculating local surface descriptors for all the 3D surface points, they are calculated only for feature points that are in areas with large shape variation. In order to speed up the retrieval of surface descriptors and to deal with a large set of objects, the local surface patches of models are indexed into a hash table. Given a set of test local surface patches, votes are cast for models containing similar surface descriptors. Based on potential corresponding local surface patches candidate models are hypothesized. Verification is performed by running the Iterative Closest Point (ICP) algorithm to align models with the test data for the most likely models occurring in a scene. Experimental results with real range data are presented to demonstrate and compare the effectiveness and efficiency of the proposed approach with the spin image and the spherical spin image representations.
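The shape index used in the 2D histogram above is Koenderink's curvature-based measure; a minimal version, together with the histogram of shape index vs. normal angle, is sketched below. The bin counts and value ranges are my assumptions, not the paper's parameters.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink's shape index in [-1, 1] from principal curvatures:
    -1 spherical cup, 0 saddle, +1 spherical cap."""
    k1, k2 = max(k1, k2), min(k1, k2)      # enforce k1 >= k2
    if k1 == k2:                           # umbilic point
        return 0.0 if k1 == 0 else (1.0 if k1 > 0 else -1.0)
    return (2.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2))

def local_surface_histogram(shape_indices, normal_angles, bins=(8, 8)):
    """2D histogram of shape index vs. angle between the reference normal
    and each neighbor's normal, normalized to sum to one."""
    h, _, _ = np.histogram2d(shape_indices, normal_angles, bins=bins,
                             range=[[-1.0, 1.0], [0.0, np.pi]])
    return h / h.sum()
```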
Article
This article investigates the problem of acquiring 3D object maps of indoor household environments, in particular kitchens. The objects modeled in these maps include cupboards, tables, drawers and shelves, which are of particular importance for a household robotic assistant. Our mapping approach is based on PCD (point cloud data) representations. Sophisticated interpretation methods operating on these representations eliminate noise and resample the data without deleting the important details, and interpret the improved point clouds in terms of rectangular planes and 3D geometric shapes. We detail the steps of our mapping approach and explain the key techniques that make it work. The novel techniques include statistical analysis, persistent histogram features estimation that allows for a consistent registration, resampling with additional robust fitting techniques, and segmentation of the environment into meaningful regions.
Article
The Hough transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. The initial work showed how to detect both analytic curves(1,2) and non-analytic curves,(3) but these methods were restricted to binary edge images. This work was generalized to the detection of some analytic curves in grey level images, specifically lines,(4) circles(5) and parabolas.(6) The line detection case is the best known of these and has been ingeniously exploited in several applications.(7,8,9) We show how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space. Such a mapping can be exploited to detect instances of that particular shape in an image. Furthermore, variations in the shape such as rotations, scale changes or figure ground reversals correspond to straightforward transformations of this mapping. However, the most remarkable property is that such mappings can be composed to build mappings for complex shapes from the mappings of simpler component shapes. This makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.
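The image-space-to-parameter-space mapping described here is usually stored as an "R-table": displacement vectors from boundary points to a reference point, indexed by quantized gradient direction. A minimal translation-only voting sketch (names and bin count are mine):

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary_pts, gradient_dirs, reference, n_bins=36):
    """R-table of the Generalized Hough Transform: displacement vectors
    (boundary point -> reference point) indexed by quantized gradient direction."""
    table = defaultdict(list)
    for p, theta in zip(boundary_pts, gradient_dirs):
        b = int(((theta % (2 * np.pi)) / (2 * np.pi)) * n_bins) % n_bins
        table[b].append((reference[0] - p[0], reference[1] - p[1]))
    return table

def ght_accumulate(edge_pts, edge_dirs, table, shape, n_bins=36):
    """Vote for reference-point locations of shape instances in an image."""
    acc = np.zeros(shape, dtype=int)
    for p, theta in zip(edge_pts, edge_dirs):
        b = int(((theta % (2 * np.pi)) / (2 * np.pi)) * n_bins) % n_bins
        for dx, dy in table.get(b, ()):
            x, y = p[0] + dx, p[1] + dy
            if 0 <= x < shape[0] and 0 <= y < shape[1]:
                acc[x, y] += 1
    return acc
```

Rotation and scale variants would add two more accumulator dimensions by transforming the stored displacements, which is exactly the "straightforward transformation of the mapping" the abstract refers to.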
Conference Paper
This paper describes a highly successful application of MRFs to the problem of generating high-resolution range images. A new generation of range sensors combines the capture of low-resolution range images with the acquisition of registered high-resolution camera images. The MRF in this paper exploits the fact that discontinuities in range and coloring tend to co-align. This enables it to generate high-resolution, low-noise range images by integrating regular camera images into the range data. We show that by using such an MRF, we can substantially improve over existing range imaging technology.
Conference Paper
This paper describes a fast plane extraction algorithm for 3D range data. Taking advantage of the point neighborhood structure in data acquired from 3D sensors like range cameras, laser range finders and Microsoft Kinect, it divides the plane-segment extraction task into three steps. The first step is a 2D line segment extraction from raw sensor data, interpreted as 2D data, followed by a line segment based connected component search. The final step finds planes based on connected segment component sets. The first step inspects 2D subspaces only, leading to a line segment representation of the 3D scan. Connected components of segments represent candidate sets of coplanar segments. Line segment representation and connected components vastly reduce the search space for the plane-extraction step. A region growing algorithm is utilized to find coplanar segments and their optimal (least square error) plane approximation. Region growing contains a fast plane update technique in its core, which combines sets of co-planar segments to form planar elements. Experiments are performed on real world data from different sensors.
Conference Paper
In Programming by Demonstration (PbD), one of the key problems for autonomous learning is to automatically extract the relevant features of a manipulation task, which has a significant impact on the generalization capabilities. In this paper, task features are encoded as constraints of a learned planning model. In order to extract the relevant constraints, the human teacher demonstrates a set of tests, e.g. a scene with different objects, and the robot tries to execute the planning model on each test using constrained motion planning. Based on statistics about which constraints failed during the planning process, multiple hypotheses about a maximal subset of constraints that allows a solution to be found in all tests are refined in parallel using an evolutionary algorithm. The algorithm was tested in 7 experiments and on two robot systems.
Book
Fuzzy sets were introduced by Zadeh [9] in 1965 to represent/manipulate data and information possessing nonstatistical uncertainties. Fuzzy sets serve as a means of representing and manipulating data that are not precise, but rather fuzzy.
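A fuzzy set is defined by a membership function that grades membership continuously in [0, 1] rather than as a crisp yes/no; a standard triangular example is sketched below. The shape and parameters are generic illustrations, not taken from the book.

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], rising
    linearly to 1 at the peak b, then falling linearly back to 0."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge
```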
Article
Hough has proposed an interesting and computationally efficient procedure for detecting lines in pictures. This paper points out that the use of angle-radius rather than slope-intercept parameters simplifies the computation further. It also shows how the method can be used for more general curve fitting, and gives alternative interpretations that explain the source of its efficiency.
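The angle-radius form referred to here writes a line as rho = x·cos(theta) + y·sin(theta), so every parameter cell is bounded, unlike the slope-intercept form where vertical lines have infinite slope. A minimal accumulator sketch (the discretization choices are mine):

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_max=None, n_rho=None):
    """Angle-radius Hough transform: each point (x, y) votes along the
    sinusoid rho = x*cos(theta) + y*sin(theta); collinear points intersect
    in one accumulator cell."""
    pts = np.asarray(points, dtype=float)
    if rho_max is None:
        rho_max = np.hypot(pts[:, 0], pts[:, 1]).max() + 1.0
    n_rho = n_rho or 2 * int(rho_max)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)      # one vote per theta
        r_idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), r_idx] += 1
    return acc, thetas, rho_max
```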
Article
A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. The authors describe the application of RANSAC to the Location Determination Problem (LDP): given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions. Implementation details and computational examples are also presented.
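RANSAC's minimal-sample-and-consensus loop is compact; a sketch for 2D line fitting follows. The LDP application in the paper is more involved; the names, iteration count and threshold here are illustrative only.

```python
import numpy as np

def ransac_line(points, n_iter=100, tol=0.1, rng=None):
    """Classic RANSAC: repeatedly fit a line to a minimal sample (2 points)
    and keep the hypothesis with the most inliers, so gross outliers never
    contaminate the consensus fit."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p0, p1 = pts[rng.choice(len(pts), 2, replace=False)]
        d = p1 - p0
        norm = np.linalg.norm(d)
        if norm < 1e-12:
            continue                                   # degenerate sample
        n = np.array([-d[1], d[0]]) / norm             # unit normal of the line
        inliers = np.abs((pts - p0) @ n) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set by total least squares
    in_pts = pts[best_inliers]
    mean = in_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(in_pts - mean)
    return mean, vt[0], best_inliers
```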
Article
The problem of object detection and recognition is a notoriously difficult one, and one that has been the focus of much work in the computer vision and robotics communities. Most work has concentrated on systems that operate purely on visual inputs (i.e., images) and largely ignores other sensor modalities. However, despite the great progress made down this track, the goal of high accuracy object detection for robotic platforms in cluttered real-world environments remains elusive. Instead of relying on information from the image alone, we present a method that exploits the multiple sensor modalities available on a robotic platform. In particular, our method augments a 2-d object detector with 3-d information from a depth sensor to produce a “multi-modal object detector.” We demonstrate our method on a working robotic system and evaluate its performance on a number of common household/office objects.
Conference Paper
The main advantage of using the Hough Transform to detect ellipses is its robustness against missing data points. However, the storage and computational requirements of the Hough Transform preclude practical applications. Although there are many modifications to the Hough Transform, these modifications still demand significant storage requirement. In this paper, we present a novel ellipse detection algorithm which retains the original advantages of the Hough Transform while minimizing the storage and computation complexity. More specifically, we use an accumulator that is only one dimensional. As such, our algorithm is more effective in terms of storage requirement. In addition, our algorithm can be easily parallelized to achieve good execution time. Experimental results on both synthetic and real images demonstrate the robustness and effectiveness of our algorithm in which both complete and incomplete ellipses can be extracted.
Conference Paper
A method for the recognition of parallelograms in a binary image is presented. The peaks of the Hough transform are used to generate a hypothesis about the existence of a parallelogram. Geometric relations that hold between the lengths of the edges and the heights of the parallelogram are used for the hypothesis testing. Results are shown to indicate how the new method deals with noise and the existence of other objects in the image.
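The geometric consistency test can be illustrated as follows: for a parallelogram with edge lengths l1, l2 meeting at angle phi, the heights must satisfy h1 = l2*sin(phi) and h2 = l1*sin(phi). The interface and tolerance below are assumptions for illustration, not the paper's exact formulation; the edge lengths would come from Hough peak magnitudes and the heights from the rho gap of each parallel line pair:

```python
import math

def verify_parallelogram(l1, l2, phi, h1, h2, tol=0.05):
    """Hypothesis test: accept the parallelogram only if both measured
    heights agree (within a relative tolerance) with the heights implied
    by the edge lengths and the angle between the two edge directions."""
    s = math.sin(phi)
    ok1 = abs(h1 - l2 * s) <= tol * l2 * s
    ok2 = abs(h2 - l1 * s) <= tol * l1 * s
    return ok1 and ok2
```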
Article
We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes
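The spin-image mapping itself is compact: each surface point is projected into the (alpha, beta) cylindrical coordinates of an oriented basis point and binned into a 2-D histogram. A minimal sketch, with illustrative bin size and image width:

```python
import math

def spin_image(p, n, points, bin_size=0.5, width=4):
    """Accumulate a small spin image for the oriented point (p, n):
    beta is the signed height of each point along the normal n, alpha
    its radial distance from the normal's axis. beta is signed, so its
    bin index is offset by 'width'."""
    img = [[0] * width for _ in range(2 * width)]
    for x in points:
        dx = [x[i] - p[i] for i in range(3)]
        beta = sum(dx[i] * n[i] for i in range(3))   # height along the normal
        r2 = sum(c * c for c in dx) - beta * beta
        alpha = math.sqrt(max(r2, 0.0))              # radial distance
        i = int(beta / bin_size) + width
        j = int(alpha / bin_size)
        if 0 <= i < 2 * width and 0 <= j < width:    # clip to the image support
            img[i][j] += 1
    return img
```

Matching then reduces to correlating such histograms between a scene point and the stored model points.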
Article
In this paper, we introduce a new method for ellipse detection. The method takes advantage of the major axis of an ellipse to find the ellipse parameters quickly and efficiently. It needs only a one-dimensional accumulator array to accumulate the length information for the minor axis of the ellipse. The method does not require evaluation of the tangents or curvatures of the edge contours, which are generally very sensitive to noise. No complicated mathematical computation is involved in the implementation, and the required storage space is much smaller than that of current methods. Experiments with both synthetic and real images indicate the effectiveness of the proposed method.
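The one-dimensional accumulator can be sketched as follows: a candidate pair of edge points is taken as the major-axis endpoints, and every remaining point votes for a half-minor-axis length derived from the triangle it forms with the centre. Function name and bin width are illustrative:

```python
import math
from collections import Counter

def estimate_minor_axis(p1, p2, points, bin_width=0.1):
    """Treat p1, p2 as the major-axis endpoints (centre = midpoint,
    half-length a). Each other point at distance d from the centre and
    f from p2 gives cos(tau) by the law of cosines, which determines a
    candidate half-minor-axis b; votes go into a 1-D accumulator."""
    x0, y0 = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    a = math.dist(p1, p2) / 2
    votes = Counter()
    for p in points:
        d = math.dist(p, (x0, y0))
        if d == 0 or d >= a:                 # point cannot lie on this ellipse
            continue
        f = math.dist(p, p2)
        cos2 = ((a * a + d * d - f * f) / (2 * a * d)) ** 2
        denom = a * a - d * d * cos2
        if denom <= 0:
            continue
        b2 = (a * a * d * d * (1 - cos2)) / denom
        votes[round(math.sqrt(b2) / bin_width)] += 1
    (best_bin, _), = votes.most_common(1)    # accumulator peak
    return best_bin * bin_width
```

A full detector would iterate over candidate point pairs and accept a pair only when its accumulator peak gathers enough votes.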
I. Schiller, C. Beder, R. Koch, Calibration of a PMD camera using a planar calibration object together with a multi-camera setup, in: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2008.
P. Azad, T. Gockel, R. Dillmann, Computer Vision: Principles and Practice, Elektor Electronics Publishing, 2008.
S. Gould, P. Baumstarck, M. Quigley, A. Ng, D. Koller, Integrating visual and range data for robotic object detection, in: ECCV Workshop on Multi-Camera and Multi-Modal Sensor Fusion Algorithms and Applications, 2008.
S.R. Schmidt-Rohr, et al., Project Albert Service Experiments (PASE) Stages I-III, Technical Report, Karlsruhe Institute of Technology (KIT), IFA, 2012.
S.R. Schmidt-Rohr, S. Knoop, M. Lösch, R. Dillmann, Bridging the gap of abstraction for probabilistic decision making on a multi-modal service robot, Robotics: Science and Systems IV (2008).