Arne Suppé's research while affiliated with Carnegie Mellon University and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (16)
With recent advances in robotics technologies and autonomous systems, the idea of human-robot teams is gaining ever-increasing attention. In this context, our research focuses on developing an intelligent robot that can autonomously perform non-trivial, but specific tasks conveyed through natural language. Toward this goal, a consortium of research...
This report highlights the capabilities demonstrated during the US Army Research Laboratory Robotics Collaborative Technology Alliance Capstone Experiment that took place during October 2014. The report succinctly presents the activities of the event and provides references for further reading on the specifics of those activities. Four capabilities...
Robot perception is generally viewed as the interpretation of data from various types of sensors such as cameras. In this paper, we study indirect perception where a robot can perceive new information by making inferences from non-visual observations of human teammates. As a proof-of-concept study, we specifically focus on a door detection problem...
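To illustrate the inference idea behind indirect perception, here is a minimal sketch, not the authors' model: a binary door state is updated from non-visual observations of teammates via Bayes' rule. The observation names and likelihood values are invented for illustration only.

```python
# Hypothetical sketch of indirect perception: infer a binary door state
# ("open"/"closed") from non-visual observations of human teammates,
# using a simple Bayesian update. Likelihood values are illustrative only.

def update_door_belief(p_open, observation, likelihoods):
    """Return P(door open | observation) from prior p_open via Bayes' rule."""
    p_obs_given_open = likelihoods[observation]["open"]
    p_obs_given_closed = likelihoods[observation]["closed"]
    numerator = p_obs_given_open * p_open
    denominator = numerator + p_obs_given_closed * (1.0 - p_open)
    return numerator / denominator

# Illustrative likelihoods: P(observation | door state).
LIKELIHOODS = {
    "teammate_entered_room": {"open": 0.9, "closed": 0.05},
    "teammate_knocked":      {"open": 0.1, "closed": 0.7},
}

belief = 0.5  # uninformative prior over the door being open
for obs in ["teammate_knocked", "teammate_entered_room"]:
    belief = update_door_belief(belief, obs, LIKELIHOODS)
    print(f"after {obs}: P(open) = {belief:.2f}")
```

Seeing a teammate walk through the doorway raises the posterior that the door is open, even though the robot never observes the door directly; a real system would use a richer temporal model.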
Robots are increasingly becoming key players in human-robot teams. To become effective teammates, robots must possess profound understanding of an environment, be able to reason about the desired commands and goals within a specific context, and be able to communicate with human teammates in a clear and natural way. To address these challenges, we...
We describe an architecture to provide online semantic labeling capabilities to field robots operating in urban environments. At the core of our system is the stacked hierarchical classifier developed by Munoz et al., which classifies regions in monocular color images using models derived from hand labeled training data. The classifier is trained t...
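The stacked hierarchical classifier of Munoz et al. is considerably more involved than this; the sketch below only shows the generic stacking idea, where a second-stage classifier consumes the first stage's per-region label probabilities alongside the original features. The features, labels, and dimensions are synthetic placeholders.

```python
# Generic two-stage "stacked" region classifier sketch (not the Munoz et al.
# hierarchical classifier): stage 2 sees stage 1's predicted label
# probabilities in addition to the raw region features. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_regions, n_features, n_classes = 200, 8, 3
X = rng.normal(size=(n_regions, n_features))       # per-region features
y = rng.integers(0, n_classes, size=n_regions)     # hand-labeled classes

stage1 = LogisticRegression(max_iter=1000).fit(X, y)
proba1 = stage1.predict_proba(X)                   # stage-1 beliefs per region

# Stage 2 re-classifies each region using its features plus stage-1 beliefs
# (a real system would also append the beliefs of neighboring regions).
X_stacked = np.hstack([X, proba1])
stage2 = LogisticRegression(max_iter=1000).fit(X_stacked, y)
labels = stage2.predict(X_stacked)
print("stage-2 training accuracy:", (labels == y).mean())
```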
In robotics research, perception is one of the most challenging tasks. In contrast to existing approaches that rely only on computer vision, we propose an alternative method for improving perception by learning from human teammates. To evaluate, we apply this idea to a door detection problem. A set of preliminary experiments has been completed usin...
The detection and tracking of moving objects is an essential task in robotics. The CMU-RI Navlab group has developed such a system that uses a laser scanner as its primary sensor. We will describe our algorithm and its use in several applications. Our system worked successfully on indoor and outdoor platforms and with several different kinds and co...
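As a toy illustration of two building blocks such a DATMO system needs, and not the CMU Navlab implementation itself, the sketch below segments a 2D scan at range discontinuities and associates cluster centroids with existing tracks by nearest neighbor. The thresholds and sample points are made up.

```python
# Toy sketch of two DATMO building blocks (not the CMU Navlab system):
# (1) segment an ordered 2D laser scan into clusters at gaps larger than a
#     threshold, (2) greedily associate cluster centroids with tracks from
#     the previous frame by nearest neighbor. Values are illustrative.
import numpy as np

def segment_scan(points, gap=0.5):
    """Split an (N, 2) array of ordered scan points wherever consecutive
    points are farther apart than `gap`; return a list of clusters."""
    clusters, current = [], [points[0]]
    for prev, cur in zip(points[:-1], points[1:]):
        if np.linalg.norm(cur - prev) > gap:
            clusters.append(np.array(current))
            current = []
        current.append(cur)
    clusters.append(np.array(current))
    return clusters

def associate(centroids, tracks, max_dist=1.0):
    """Greedy nearest-neighbor association of centroids to existing tracks."""
    matches = {}
    for i, c in enumerate(centroids):
        dists = [np.linalg.norm(c - t) for t in tracks]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] < max_dist:
            matches[i] = j            # centroid i continues track j
    return matches

scan = np.array([[1.0, 0.0], [1.05, 0.1], [3.0, 2.0], [3.1, 2.05]])
centroids = [c.mean(axis=0) for c in segment_scan(scan)]
print(associate(centroids, tracks=[np.array([1.0, 0.05])]))
```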
Despite regulations specifying parking spots that support wheelchair vans, it is not uncommon for end users to encounter problems with clearance for van ramps. Even if a driver elects to park in the far reaches of a parking lot as a precautionary measure, there is no guarantee that the spot next to their van will be empty when they return. Likewise...
Providing drivers with comprehensive assistance systems has long been a goal for the automotive industry. The challenge spans many fronts, from building sensors and analyzing sensor data to automated understanding of traffic situations and appropriate interaction with the driver. These issues are discussed with the example of a collision warning system...
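One common warning criterion, shown below purely as a minimal illustration and not necessarily the logic used in this system, is time-to-collision: the range to a target divided by the closing speed, with a warning issued when it falls below a threshold. The threshold and sensor values are placeholders.

```python
# Minimal time-to-collision (TTC) warning check, a common collision-warning
# criterion. Thresholds and inputs are illustrative; the system described
# above combines many more cues than this.

def time_to_collision(range_m, closing_speed_mps):
    """Return TTC in seconds, or infinity if the target is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def should_warn(range_m, closing_speed_mps, ttc_threshold_s=2.5):
    return time_to_collision(range_m, closing_speed_mps) < ttc_threshold_s

print(should_warn(range_m=12.0, closing_speed_mps=6.0))   # TTC = 2.0 s -> True
print(should_warn(range_m=40.0, closing_speed_mps=5.0))   # TTC = 8.0 s -> False
```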
This paper describes the development activities leading up to field testing of the transit integrated collision warning system, with special attention to the side component. Two buses, one each in California and Pennsylvania, have been outfitted with sensors, cameras, computers, and driver-vehicle interfaces in order to detect threats and generate...
Detection and tracking of moving objects (DATMO) in crowded urban areas from a ground vehicle at high speeds is difficult because of a wide variety of targets and uncertain pose estimation from odometry and GPS/DGPS. In this paper we present a solution to the simultaneous localization and mapping (SLAM) with DATMO problem to accomplish this task us...
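The SLAM-with-DATMO formulation in the paper is far richer than this, but as a small companion sketch, the snippet below shows the standard constant-velocity Kalman filter that trackers commonly maintain per moving object. The time step and noise covariances are made-up values.

```python
# Sketch of a per-object constant-velocity Kalman filter, the kind of track
# model DATMO systems commonly maintain for each moving object. Noise values
# are made up; this is not the paper's SLAM-with-DATMO formulation.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)            # process noise (illustrative)
R = 0.05 * np.eye(2)            # measurement noise (illustrative)

def kf_step(x, P, z):
    """One predict/update cycle for a position measurement z = [x, y]."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 0.0]), np.array([1.1, 0.02]), np.array([1.2, 0.05])]:
    x, P = kf_step(x, P, z)
print("estimated state [x, y, vx, vy]:", np.round(x, 3))
```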
The Navlab group at Carnegie Mellon University has a long history of development of automated vehicles and intelligent systems for driver assistance. The earlier work of the group concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, to look all around the vehicle for safe driving. T...
Intelligent vehicles are beginning to appear on the market, but so far their sensing and warning functions only work on the open road. Functions such as runoff-road warning or adaptive cruise control are designed for the uncluttered environments of open highways. We are working on the much more difficult problem of sensing and driver interfaces for...
Citations
... Using deep learning, recognition can bridge the gap between perception and intelligence. For example, perception missions can collect large amounts of data, but much of it is discarded to simplify interpretation by higher-level tasks [20], e.g., a 3D object can become a point in space. Arne's paper [20] used a deep learning framework to replace these interfaces with learned interfaces. ...
... Biologically inspired multi-legged robots hold the promise of robustly traversing cluttered terrain, with the inherent advantage of legs in overcoming large obstacles (Raibert, 2008) and the higher stability offered by multiple legs (Full et al., 2006; Ting et al., 1994). By exploiting the natural dynamics of leg-ground interaction (Childers et al., 2016; De and Koditschek, 2018; Garcia Bermudez et al., 2012; Li et al., 2009; Qian and Koditschek, 2020; Raibert, 2008; Saranli et al., 2001; Spagna et al., 2007) and body-terrain interaction (Li et al., 2015), multi-legged robots have achieved improved locomotor performance and capabilities in complex terrain compared to those using quasi-static planning and control (although still far from robust (Arslan and Saranli, 2012)). For example, a recent study empirically discovered that an ellipsoidal body shape helps traverse cluttered obstacles by inducing body rolling through gaps narrower than the body width (Li et al., 2015). ...
... The arrival rate measures how often an agent reaches its goal; it is a primary metric in general robot navigation [167] and has also been used in social navigation [231]. In general, some assumptions are added to measure the arrival rate. ...
... In our prior work, we have demonstrated approaches to reduce the time and effort required to assign labels to visual data and adapt visual classifiers to novel environments (Wigness et al. 2016; Wigness and Rogers III 2017). This paper describes the combination of these techniques with a method for learning robot control parameters through inverse optimal control (IOC). ...
... They typically have to figure out where they are, navigate through foreign areas, and ask people for directions to their destination. State-of-the-art artificial-intelligence and human-robot-interaction research enables robots to solve such natural-language-driven navigation tasks [3, 5, 15, 16, 19, 21, 22, 30]. However, these systems suffer from certain limitations, such as requiring a specific syntactic structure for natural-language commands or requiring a map of the environment. ...
Reference: The Amazing Race™: Robot Edition
... The idea of using human teammates in robot perception was briefly introduced in earlier work [16]. The temporal update method used, however, was too domain-specific to represent general dependence relationships among variables, and thus lacked the flexibility to generalize. ...
... The importance of curb/step detection for autonomous mobility is well established in the literature, with a diverse set of sensors (such as in [12]) being used for the purpose. In [13], an active range-finder employing a laser and camera system is described, which uses Jump-Markov modeling to track distances to the ground plane and detect steps. ...
... They only look at targets inside the regions of interest and use the detections to keep a safe distance from the obstacles. Finally, Mertz et al. [172] proposed a detection and tracking framework relying on multiple lidar sources that can be 2D or 3D. Using a combination of segments and features in the point cloud, they associate the measurements with obstacles and maintain an updated list of current obstacles. ...
... Scanning LADARs have traditionally been used for mapping and analyzing stationary objects (see [2], [3], [4], [5], [6], and [7]). Recently, there have been a number of extensions to detect moving targets using techniques such as global segmentation [8], cluster similarity [9], feature detection [10], [11], [12], [13], [14], [15], [16], and model fitting [17]. Of the techniques that explicitly seek to detect vehicles, most rely on finding straight-edge features in the data and infer vehicle positions from one or more of these edges (see [11], [12], [13], [14], and [15]). ...
... To date, research on parking systems, a type of advanced driver assistance system (ADAS), spans varying degrees of parking assistance, including attentional cueing [1], multimodal displays [2]-[5], parking spot identification [6], and fully autonomous parking [7]-[9]. Automated parking is a partially automated safety feature that is necessary for moving toward higher levels of vehicle automation in the future (i.e., Level 4 or Level 5 automation). ...
Reference: Trust in Automated Parking Systems