Arne Suppé's research while affiliated with Carnegie Mellon University and other places

Publications (16)

Conference Paper
With recent advances in robotics technologies and autonomous systems, the idea of human-robot teams is gaining ever-increasing attention. In this context, our research focuses on developing an intelligent robot that can autonomously perform non-trivial, but specific tasks conveyed through natural language. Toward this goal, a consortium of research...
Technical Report
Full-text available
This report highlights the capabilities demonstrated during the US Army Research Laboratory Robotics Collaborative Technology Alliance Capstone Experiment that took place during October 2014. The report succinctly presents the activities of the event and provides references for further reading on the specifics of those activities. Four capabilities...
Conference Paper
Full-text available
Robot perception is generally viewed as the interpretation of data from various types of sensors such as cameras. In this paper, we study indirect perception where a robot can perceive new information by making inferences from non-visual observations of human teammates. As a proof-of-concept study, we specifically focus on a door detection problem...
Article
Robots are increasingly becoming key players in human-robot teams. To become effective teammates, robots must possess profound understanding of an environment, be able to reason about the desired commands and goals within a specific context, and be able to communicate with human teammates in a clear and natural way. To address these challenges, we...
Conference Paper
Full-text available
Robots are increasingly becoming key players in human-robot teams. To become effective teammates, robots must possess profound understanding of an environment, be able to reason about the desired commands and goals within a specific context, and be able to communicate with human teammates in a clear and natural way. To address these challenges, we...
Conference Paper
We describe an architecture to provide online semantic labeling capabilities to field robots operating in urban environments. At the core of our system is the stacked hierarchical classifier developed by Munoz et al., which classifies regions in monocular color images using models derived from hand labeled training data. The classifier is trained t...
Conference Paper
In robotics research, perception is one of the most challenging tasks. In contrast to existing approaches that rely only on computer vision, we propose an alternative method for improving perception by learning from human teammates. To evaluate, we apply this idea to a door detection problem. A set of preliminary experiments has been completed usin...
Article
The detection and tracking of moving objects is an essential task in robotics. The CMU-RI Navlab group has developed such a system that uses a laser scanner as its primary sensor. We will describe our algorithm and its use in several applications. Our system worked successfully on indoor and outdoor platforms and with several different kinds and co...
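A common first step in laser-based detection-and-tracking pipelines like the one described is to segment each scan at range discontinuities, so that each segment can be treated as a candidate object. A minimal illustrative sketch follows; the jump threshold is an assumed parameter, not a value from the Navlab system:

```python
def segment_scan(ranges, jump=0.5):
    """Split a 1D sequence of laser range readings into segments
    wherever consecutive readings differ by more than `jump` metres.

    Returns a list of (start, end) index pairs (end exclusive).
    """
    segments = []
    start = 0
    for i in range(1, len(ranges)):
        # A large jump in range suggests a boundary between objects.
        if abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append((start, i))
            start = i
    segments.append((start, len(ranges)))
    return segments

# Example: a nearby object (ranges near 2 m) in front of a wall (near 5 m)
print(segment_scan([5.0, 5.1, 5.05, 2.0, 2.1]))  # [(0, 3), (3, 5)]
```

In practice the threshold is often made a function of range, since the spacing between adjacent beams grows with distance; a fixed threshold is used here only to keep the sketch short.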
Conference Paper
Full-text available
Despite regulations specifying parking spots that support wheelchair vans, it is not uncommon for end users to encounter problems with clearance for van ramps. Even if a driver elects to park in the far reaches of a parking lot as a precautionary measure, there is no guarantee that the spot next to their van will be empty when they return. Likewise...
Article
Full-text available
Providing drivers with comprehensive assistance systems has long been a goal for the automotive industry. The challenge is on many fronts, from building sensors, analyzing sensor data, automated understanding of traffic situations and appropriate interaction with the driver. These issues are discussed with the example of a collision warning system...
Conference Paper
Full-text available
This paper describes the development activities leading up to field testing of the transit integrated collision warning system, with special attention to the side component. Two buses, one each in California and Pennsylvania, have been outfitted with sensors, cameras, computers, and driver-vehicle interfaces in order to detect threats and generate...
Conference Paper
Detection and tracking of moving objects (DATMO) in crowded urban areas from a ground vehicle at high speeds is difficult because of a wide variety of targets and uncertain pose estimation from odometry and GPS/DGPS. In this paper we present a solution of the simultaneous localization and mapping (SLAM) with DATMO problem to accomplish this task us...
Conference Paper
Full-text available
The Navlab group at Carnegie Mellon University has a long history of development of automated vehicles and intelligent systems for driver assistance. The earlier work of the group concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, to look all around the vehicle for safe driving. T...
Article
Intelligent vehicles are beginning to appear on the market, but so far their sensing and warning functions only work on the open road. Functions such as runoff-road warning or adaptive cruise control are designed for the uncluttered environments of open highways. We are working on the much more difficult problem of sensing and driver interfaces for...

Citations

... Using deep learning, recognition can bridge the gap between perception and intelligence. For example, perception missions can collect lots of data, but much of it is discarded to simplify interpretation by higher-level tasks [20], e.g. a 3D object can become a point in space. In Arne's paper [20], a deep learning framework was used to replace these interfaces with learned ones. ...
... Biologically inspired multi-legged robots hold promise for robustly traversing cluttered terrain, with the inherent advantage of legs in overcoming large obstacles (Raibert, 2008) and the higher stability offered by multiple legs (Full et al., 2006; Ting et al., 1994). By exploiting the natural dynamics of leg-ground interaction (Childers et al., 2016; De and Koditschek, 2018; Garcia Bermudez et al., 2012; Li et al., 2009; Qian and Koditschek, 2020; Raibert, 2008; Saranli et al., 2001; Spagna et al., 2007) and body-terrain interaction (Li et al., 2015), multi-legged robots have achieved improved locomotor performance and capabilities in complex terrain compared to those using quasi-static planning and control (although still far from robust (Arslan and Saranli, 2012)). For example, a recent study empirically discovered that an ellipsoidal body shape helps traverse cluttered obstacles by inducing body rolling through gaps narrower than the body width (Li et al., 2015). ...
... The arrival rate measures how often an agent reaches its goal; it is a primary metric in general robot navigation [167] and has also been used in social navigation [231]. In general, some assumptions are added when measuring the arrival rate. ...
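The arrival-rate metric mentioned in this passage is simply the fraction of trials in which the agent reaches its goal. A minimal sketch, where the boolean-per-trial representation is an illustrative assumption:

```python
def arrival_rate(trials):
    """Fraction of navigation trials in which the agent reached its goal.

    trials: list of booleans, True if the goal was reached in that trial.
    Returns 0.0 for an empty trial list.
    """
    if not trials:
        return 0.0
    return sum(trials) / len(trials)

# Example: 7 successful runs out of 10
print(arrival_rate([True] * 7 + [False] * 3))  # 0.7
```

The assumptions the passage alludes to (e.g. how close the agent must get to the goal, and within what time limit, for a trial to count as a success) live in how each boolean is decided, not in this computation.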
... In our prior work, we have demonstrated approaches to reduce the time and effort required to assign labels to visual data and adapt visual classifiers to novel environments (Wigness et al. 2016; Wigness and Rogers III 2017). This paper describes the combination of these techniques with a method for learning robot control parameters through inverse optimal control (IOC). ...
... They typically have to figure out where they are, navigate through foreign areas, and ask people for directions to their destination. State-of-the-art artificial-intelligence and human-robot-interaction research enables robots to solve such natural-language-driven navigation tasks [3,5,15,16,19,21,22,30]. However, these systems suffer from certain limitations, such as requiring a specific syntactic structure for natural-language commands or requiring a map of the environment. ...
... An earlier work on the idea of using human teammates in robot perception was briefly introduced in [16]. The temporal update method used, however, was too domain-specific to represent general dependence relationships among variables, and thus lacked the flexibility to generalize. ...
... The importance of curb/step detection for autonomous mobility is well established in literature, with a diverse set of sensors (such as in [12]) being used for the purpose. In [13], an active range-finder employing a laser and camera system is described that uses Jump-Markov modeling to track distances to the ground plane and detect steps. ...
... They only look at targets inside the regions of interest and use the detections to keep a safe distance from the obstacles. Finally, Mertz et al. [172] proposed a detection and tracking framework relying on multiple lidar sources, which can be 2D or 3D. Using a combination of segments and features in the point cloud, they associated the measurements to obstacles and kept an updated list of current obstacles. ...
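The associate-and-update step this passage describes can be illustrated with greedy nearest-neighbour data association between existing obstacle tracks and new detections. This is a toy sketch, not the cited framework; the gating distance is an assumed parameter:

```python
import math

def associate(tracks, detections, gate=1.5):
    """Greedy nearest-neighbour association of 2D detections to tracks.

    tracks, detections: lists of (x, y) centroids.
    Returns (matches, unmatched), where matches maps a track index to
    the detection index assigned to it, and unmatched lists detection
    indices with no track within the gating distance (candidates for
    starting new tracks).
    """
    matches = {}
    used = set()
    for ti, t in enumerate(tracks):
        best, best_d = None, gate
        for di, d in enumerate(detections):
            if di in used:
                continue
            dist = math.hypot(t[0] - d[0], t[1] - d[1])
            if dist < best_d:
                best, best_d = di, dist
        if best is not None:
            matches[ti] = best
            used.add(best)
    unmatched = [di for di in range(len(detections)) if di not in used]
    return matches, unmatched

# One detection updates an existing track; the other is a new obstacle.
print(associate([(0, 0), (5, 5)], [(0.1, 0.1), (9, 9)]))
```

Real trackers typically replace the greedy loop with globally optimal assignment and replace Euclidean distance with a motion-model-aware gate, but the associate/update/spawn structure is the same.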
... Scanning LADARs have traditionally been used for mapping and analyzing stationary objects (see [2], [3], [4], [5], [6], and [7]). Recently, there have been a number of extensions to detect moving targets using techniques such as global segmentation [8], cluster similarity [9], feature detection [10], [11], [12], [13], [14], [15], [16], and model fitting [17]. Of the techniques that explicitly seek to detect vehicles, most rely on finding straight-edge features in the data and infer vehicle positions from one or more of these edges (see [11], [12], [13], [14], and [15]). ...
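The straight-edge extraction this passage refers to can be approximated by fitting a line to a cluster of 2D scan points and reading off its orientation, e.g. via the principal axis of the point scatter. A toy sketch under that assumption; the cited systems use more elaborate feature detectors:

```python
import math

def fit_edge(points):
    """Fit a straight-edge feature to 2D scan points.

    points: list of (x, y) returns from one segmented cluster.
    Returns (centroid, heading) where heading is the orientation in
    radians of the principal axis of the point scatter.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # Second moments of the centred points (scatter matrix entries).
    sxx = sum((p[0] - cx) ** 2 for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    # Orientation of the principal axis of a 2x2 scatter matrix.
    heading = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), heading

# Points along a 45-degree edge yield a heading of pi/4.
print(fit_edge([(0, 0), (1, 1), (2, 2)]))
```

A vehicle's side then appears as one such edge (or two edges meeting at a corner), from which a pose hypothesis can be inferred.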
... To date, research on parking systems, a type of advanced driver assistance system (ADAS), spans varying degrees of parking assistance, including attentional cueing [1], multimodal displays [2]-[5], parking spot identification [6], and fully autonomous parking [7]-[9]. Automated parking is a partially automated safety feature that is necessary for moving towards higher levels of vehicle automation in the future (i.e., Level 4 or Level 5 automation). ...