Chris Burbridge’s research while affiliated with University of Birmingham and other places


Publications (15)


Figure 1: Two of the STRANDS MetraLabs SCITOS A5s in their application environments. On the left is the robot Bob at G4S's Challenge House in Tewkesbury, UK. On the right is the robot Henry in the reception of Haus der Barmherzigkeit, Vienna.  
Figure 3: A plot of the tasks performed by the robot during the 2015 security deployment.  
Figure 5: The map of the deployment area in Challenge House, Tewkesbury, with the topological map superimposed. Also displayed are the locations where the robot successfully recovered from a navigation-related failure: red marks locations where the bumper was triggered and the robot asked humans for help; green marks non-bumper failures where it also asked humans for help; yellow marks recoveries performed autonomously by reversing along the previous path; blue marks recoveries performed by simply sleeping and then retrying.
Figure 7: The results of the robot selecting interaction times and locations using FreMEn models learnt during the 2015 care deployment.
Figure 8: Top: The manually-created semantic map from the 2015 security deployment at Challenge House, Tewkesbury. Bottom: Example human trajectories from the 2015 security deployment at Challenge House, Tewkesbury. These are trajectories with length close to the average trajectory length of 2.44 m. Also pictured are the manually annotated room regions we used for task planning.
The STRANDS Project: Long-Term Autonomy in Everyday Environments
  • Article
  • Full-text available

April 2016 · 497 Reads · 160 Citations · IEEE Robotics & Automation Magazine

Nick Hawes · Chris Burbridge · [...]

Thanks to the efforts of our community, autonomous robots are becoming capable of ever more complex and impressive feats. There is also an increasing demand for, perhaps even an expectation of, autonomous capabilities from end-users. However, much research into autonomous robots rarely makes it past the stage of a demonstration or experimental system in a controlled environment. If we don't confront the challenges presented by the complexity and dynamics of real end-user environments, we run the risk of our research becoming irrelevant to, or ignored by, the industries that will ultimately drive its uptake. In the STRANDS project we are tackling this challenge head-on. We are creating novel autonomous systems, integrating state-of-the-art research in artificial intelligence and robotics into robust mobile service robots, and deploying these systems for long-term installations in security and care environments. To date, over four deployments, our robots have been operational for a combined duration of 2545 hours (a little over 106 days), covering 116 km while autonomously performing end-user-defined tasks. In this article we present an overview of the motivation and approach of the STRANDS project, describe the technology we use to enable long, robust autonomous runs in challenging environments, and describe how our robots are able to use these long runs to improve their own performance through various forms of learning.
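Figure 5 above summarises the navigation recovery strategies used during the deployments. As a rough illustration only, here is a minimal Python sketch of such a fallback chain; the robot methods (reverse_along_previous_path, retry_navigation, ask_human_for_help) are hypothetical names for this sketch, not STRANDS APIs.

```python
import time

def attempt_recovery(robot, bumper_triggered):
    """Illustrative fallback chain mirroring Figure 5: reverse along the
    previous path, sleep and retry, and finally ask a human for help."""
    if bumper_triggered:
        return robot.ask_human_for_help()    # bumper hits always need a person
    if robot.reverse_along_previous_path():  # autonomous recovery, first choice
        return True
    time.sleep(10)                           # wait for the obstruction to clear
    if robot.retry_navigation():
        return True
    return robot.ask_human_for_help()        # last resort: request assistance
```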


The STRANDS Project: Long-Term Autonomy in Everyday Environments

April 2016 · 3 Reads

Thanks to the efforts of the robotics and autonomous systems community, robots are becoming ever more capable. There is also an increasing demand from end-users for autonomous service robots that can operate in real environments for extended periods. In the STRANDS project we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots, and deploying these systems for long-term installations in security and care environments. Over four deployments, our robots have been operational for a combined duration of 104 days, autonomously performing end-user-defined tasks and covering 116 km in the process. In this article we describe the approach we have used to enable long-term autonomous operation in everyday environments, and how our robots are able to use their long run times to improve their own performance.
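Figure 7 above refers to FreMEn (Frequency Map Enhancement) models learnt during the care deployment. As a minimal sketch of the underlying idea, modelling the probability of a binary state as a static mean plus its strongest periodic components, here is an illustrative Python version; the function names and the two-component truncation are our assumptions, not project code.

```python
import numpy as np

def fremen_fit(times, states, candidate_periods, n_components=2):
    """Fit a FreMEn-style model: a static mean plus the strongest
    periodic components of a binary state observed at `times`."""
    times, states = np.asarray(times, float), np.asarray(states, float)
    mean = states.mean()
    residual = states - mean
    comps = []
    for period in candidate_periods:
        omega = 2 * np.pi / period
        gamma = np.mean(residual * np.exp(-1j * omega * times))  # Fourier coeff.
        comps.append((abs(gamma), period, gamma))
    comps.sort(key=lambda c: c[0], reverse=True)  # strongest periods first
    return mean, comps[:n_components]

def fremen_predict(t, mean, comps):
    """Predicted probability of the state at time t, clipped to [0, 1]."""
    p = mean + sum(2 * (g * np.exp(1j * 2 * np.pi / period * t)).real
                   for _, period, g in comps)
    return float(np.clip(p, 0.0, 1.0))
```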


A Comparison of Qualitative and Metric Spatial Relation Models for Scene Understanding

February 2015 · 5 Reads · 12 Citations · Proceedings of the AAAI Conference on Artificial Intelligence

Object recognition systems can be unreliable when run in isolation, depending only on image-based features, but their performance can be improved by taking scene context into account. In this paper, we present techniques to model and infer object labels in real scenes based on a variety of spatial relations (geometric features which capture how objects co-occur) and compare their efficacy in the context of augmenting perception-based object classification in real-world table-top scenes. We utilise a long-term dataset of office table-tops to qualitatively compare the performances of these techniques. On this dataset, we show that more intricate techniques have superior performance but do not generalise well from small training data. We also show that techniques using coarser information perform crudely, but sufficiently well in standalone scenarios, and generalise well from small training data. We conclude the paper by expanding on the insights gained through these comparisons and commenting on a few fundamental topics with respect to long-term autonomous robots.
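To make the qualitative/metric distinction concrete, here is an illustrative Python sketch contrasting a metric feature (distance and bearing between object centroids) with a coarse qualitative relation derived from the same geometry; the thresholds and relation names are our own assumptions, not the feature set used in the paper.

```python
import math

def metric_relation(a, b):
    """Metric features between two (x, y) centroids: distance and bearing."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def qualitative_relation(a, b, near_threshold=0.3):
    """Coarse qualitative relation derived from the same geometry."""
    dist, angle = metric_relation(a, b)
    if abs(angle) < math.pi / 4:
        direction = "right-of"
    elif abs(angle) > 3 * math.pi / 4:
        direction = "left-of"
    elif angle > 0:
        direction = "behind"
    else:
        direction = "in-front-of"
    return ("near" if dist < near_threshold else "far"), direction
```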


Figure 2: Column-1 (L to R) shows the robot ROSIE (SCITOS G5 platform with additional 3D sensors) that is being used for deploying and testing our current systems. Column-2 shows the table of the same person with changes in arrangement between the morning and evening on the same day. Column-3 shows the table configurations of a different person 12 days apart (Thippur et al. 2014). 
A Comparison of Qualitative and Metric Spatial Relation Models for Scene Understanding

February 2015 · 82 Reads · 11 Citations

Object recognition systems can be unreliable when run in isolation, depending only on image-based features, but their performance can be improved by taking scene context into account. In this paper, we present techniques to model and infer object labels in real scenes based on a variety of spatial relations (geometric features which capture how objects co-occur) and compare their efficacy in the context of augmenting perception-based object classification in real-world table-top scenes. We utilise a long-term dataset of office tabletops to qualitatively compare the performances of these techniques. On this dataset, we show that more intricate techniques have superior performance but do not generalise well from small training data. We also show that techniques using coarser information perform crudely, but sufficiently well in standalone scenarios, and generalise well from small training data. We conclude the paper by expanding on the insights gained through these comparisons and commenting on a few fundamental topics with respect to long-term autonomous robots.


Fig. 1. Robot perceives a scene of an office desk. 
Fig. 2. Segmented clusters on an office desk.
Fig. 3. Localised robot perceives objects on office desk. 
Fig. 4. Individual fold results for the LOOF experiment (with classification) 
Fig. 5. Object bounding box sizes. Top: 3783 human-segmented objects from a large dataset. Bottom: 303 BUP-segmented objects from our data. 
Combining top-down spatial reasoning and bottom-up object class recognition for scene understanding

September 2014 · 217 Reads · 39 Citations

Many robot perception systems are built to consider only intrinsic object features when recognising the class of an object. By integrating both top-down spatial relational reasoning and bottom-up object class recognition, the overall performance of a perception system can be improved. In this paper we present a unified framework that combines a 3D object class recognition system with learned, spatial models of object relations. In robot experiments we show that our combined approach improves the classification results on real-world office desks compared to pure bottom-up perception. Hence, by using spatial knowledge during object class recognition, perception becomes more efficient and robust, and robots can understand scenes more effectively.
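As a toy illustration of the combination the abstract describes, the sketch below fuses bottom-up classifier likelihoods with a top-down spatial-context prior by simple multiplication and renormalisation; the label set, scores, and naive-Bayes-style fusion are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def combine_scores(bottom_up, context, labels):
    """Fuse per-object classifier likelihoods with a spatial-context
    prior by multiplying and renormalising."""
    posterior = {}
    for obj in bottom_up:
        scores = np.array([bottom_up[obj][c] * context[obj][c] for c in labels])
        scores /= scores.sum()
        posterior[obj] = dict(zip(labels, scores))
    return posterior

labels = ["monitor", "keyboard", "mug"]
bottom_up = {"obj1": {"monitor": 0.5, "keyboard": 0.3, "mug": 0.2}}
context   = {"obj1": {"monitor": 0.2, "keyboard": 0.7, "mug": 0.1}}
print(combine_scores(bottom_up, context, labels))  # "keyboard" now wins
```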


An Approach for Efficient Planning of Robotic Manipulation Tasks

June 2013 · 10 Reads · 9 Citations · Proceedings of the International Conference on Automated Planning and Scheduling

Robot manipulation is a challenging task for planning as it involves a mixture of symbolic planning and geometric planning. We would like to express goals and many action effects symbolically, for example specifying a goal such as "for all x, if x is a cup, then x should be on the tray", but to accomplish this we may need to plan the geometry of fitting all the cups on the tray and how to grasp, move and release the cups to achieve that geometry. In the ideal case, this could be accomplished by a fully hybrid planner that alternates between geometric and symbolic reasoning to generate a solution. However, in practice this is very complex, and the full power of this approach may only be required for a small subset of problems. Instead, we plan completely symbolically, and then attempt to generate a geometric plan by translating the symbolic predicates into geometric relationships. We then execute this plan in simulation and, if it fails, we backtrack, first in geometric space and then, if necessary, in symbolic space. We show that this approach, while not complete, solves a number of challenging manipulation problems, and demonstrate it running on a robotic platform.
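A minimal sketch of the control loop this abstract describes might look as follows; the grounder and simulator objects are hypothetical stand-ins for the symbolic-to-geometric translation and simulated-execution steps, not the paper's implementation.

```python
def plan_and_execute(symbolic_plan, grounder, simulator, max_groundings=5):
    """Try successive geometric groundings of each symbolic step in
    simulation; backtrack to a new grounding when execution fails."""
    for step in symbolic_plan:
        for grounding in grounder.groundings(step, limit=max_groundings):
            if simulator.execute(grounding):
                break          # this geometric choice worked, move on
        else:
            return False       # geometric backtracking exhausted:
                               # caller must replan symbolically
    return True
```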


Manipulation planning using learned symbolic state abstractions

January 2013 · 33 Reads · 26 Citations · Robotics and Autonomous Systems

We present an approach for planning robotic manipulation tasks that uses a learned mapping between geometric states and logical predicates. Because manipulation planning requires both task-level and geometric reasoning, it needs such a mapping to convert between the two. Consider a robot tasked with putting several cups on a tray. The robot needs to find positions for all the objects, and may need to nest one cup inside another to get them all on the tray. This requires translating back and forth between symbolic states that the planner uses, such as stacked(cup1,cup2), and geometric states representing the positions and poses of the objects. We learn the mapping from labelled examples and, importantly, learn a representation that can be used in both the forward (from geometric to symbolic) and reverse directions. This enables us to build symbolic representations of scenes the robot observes, but also to translate a desired symbolic state from a plan into a geometric state that the robot can achieve through manipulation. We also show how such a mapping can be used for efficient manipulation planning: the planner first plans symbolically, then applies the mapping to generate geometric positions that are then sent to a path planner.
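For illustration, the forward direction of such a mapping (geometric state to symbolic predicate) can be cast as a standard classification problem. The sketch below, with made-up features and training poses, is only a toy version of the idea, not the representation learned in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classify whether "stacked(a, b)" holds from simple geometric features
# of two object poses. Features and training data are illustrative only.
def features(pose_a, pose_b):
    (xa, ya, za), (xb, yb, zb) = pose_a, pose_b
    return [xa - xb, ya - yb, za - zb]

X = np.array([features((0.0, 0, 0.10), (0, 0, 0)),   # a sits on b: stacked
              features((0.5, 0, 0.00), (0, 0, 0))])  # side by side: not
y = np.array([1, 0])
clf = LogisticRegression().fit(X, y)
print(clf.predict([features((0.01, 0, 0.09), (0, 0, 0))]))  # likely [1]
```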


Fig. 1. Behaviour of a node's habituation for different values of the habituation constant τ  
Fig. 2. Schematic of the expandable bag-of-words (further details in [28])
Biologically inspired intrinsically motivated learning for service robots based on novelty detection and habituation

December 2012 · 55 Reads · 2 Citations

The effective operation of service robots relies on developmental programs that allow the robot to expand its knowledge about its dynamic operating environment. Motivation theories from neuroscience and neuropsychology study the underlying mechanisms that drive the engagement of biological creatures in certain activities, such as learning. This research uses a physical Willow Garage PR2 robot equipped with a cumulative learning mechanism driven by the intrinsic motivation of novelty detection, based on computational models of biological habituation. It cumulatively learns the 360° appearance of novel real-world objects by picking them up. This paper discusses the theoretical motivation for, and background on, intrinsic motivation as novelty detection, and presents the results and conclusions from the experimental study.
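Figure 1 shows habituation curves for different values of the habituation constant τ. A common computational model of this behaviour (assumed here; the paper's exact equations may differ) is Stanley's first-order habituation dynamics, sketched below.

```python
import numpy as np

def habituate(stimulus, tau=5.0, alpha=1.05, y0=1.0, dt=0.1):
    """Stanley-style habituation: tau * dy/dt = alpha*(y0 - y) - S.
    Repeated stimulation drives the response y down; novelty (or the
    stimulus ending) lets it recover towards y0."""
    y = y0
    trace = []
    for s in stimulus:
        y += dt / tau * (alpha * (y0 - y) - s)
        trace.append(y)
    return np.array(trace)

# Response decays under a sustained stimulus, then recovers when it stops.
response = habituate([1.0] * 50 + [0.0] * 50)
```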


Learning the Geometric Meaning of Symbolic Abstractions for Manipulation Planning

August 2012 · 4 Reads · 3 Citations · Lecture Notes in Computer Science

We present an approach for learning a mapping between geometric states and logical predicates. This mapping is a necessary part of any robotic system that requires task-level reasoning and path planning. Consider a robot tasked with putting a number of cups on a tray. To achieve the goal the robot needs to find positions for all the objects, and may need to stack one cup inside another to get them all on the tray. This requires translating back and forth between symbolic states that the planner uses, such as “stacked(cup1,cup2)”, and geometric states representing the positions and poses of the objects. The mapping we learn in this paper achieves this translation. We learn it from labelled examples and, significantly, learn a representation that can be used in both the forward (from geometric to symbolic) and reverse directions. This enables us to build symbolic representations of scenes the robot observes, and also to translate a desired symbolic state from a plan into a geometric state that the robot can actually achieve through manipulation. We also show how the approach can be used to generate significantly different geometric solutions to support backtracking. We evaluate the work both in simulation and on a robot arm.
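The reverse direction, turning a desired predicate into a concrete geometric state, can be illustrated with rejection sampling, which also naturally produces the "significantly different geometric solutions" useful for backtracking. The region format and clearance test below are our assumptions, not the learned representation from the paper.

```python
import random

def sample_pose_for_on(tray_region, placed, clearance=0.05, tries=200):
    """Propose an (x, y) on the tray that keeps clearance from
    already-placed objects. Each call can yield a geometrically
    different solution, which is what backtracking needs.
    Assumed region format: ((xmin, xmax), (ymin, ymax))."""
    (xmin, xmax), (ymin, ymax) = tray_region
    for _ in range(tries):
        x, y = random.uniform(xmin, xmax), random.uniform(ymin, ymax)
        if all((x - px) ** 2 + (y - py) ** 2 >= clearance ** 2
               for px, py in placed):
            return x, y
    return None  # no free placement found: signal symbolic backtracking
```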


Online unsupervised cumulative learning for life-long robot operation

December 2011 · 26 Reads · 8 Citations

The effective life-long operation of service robots and assistive companions depends on the robust ability of the system to learn cumulatively and in an unsupervised manner. A cumulative learning robot needs particular capabilities: it must detect new perceptions, learn online and without supervision, and expand its representations when required. Bag-of-Words is a generic and compact representation of visual perceptions which has commonly and successfully been used in object recognition problems. However, in its original form it is unable to operate online or expand its vocabulary when required. This paper describes a novel method for cumulative, unsupervised learning of objects by visual inspection, using an online, expand-when-required Bag-of-Words representation. We present a set of experiments with a real-world robot which cumulatively learns a series of objects. The results show that the system is able to learn cumulatively and correctly recall the objects it was trained on.
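A minimal sketch of an expand-when-required Bag-of-Words in the spirit of this abstract follows; the nearest-word threshold rule is an assumption, and the paper's actual method (Figure 2, further detailed in its reference [28]) may differ.

```python
import numpy as np

class ExpandingBoW:
    """Minimal expand-when-required visual vocabulary: a descriptor joins
    its nearest word if close enough, otherwise it becomes a new word."""
    def __init__(self, threshold):
        self.words = []          # vocabulary of prototype descriptors
        self.threshold = threshold

    def assign(self, descriptor):
        descriptor = np.asarray(descriptor, dtype=float)
        if self.words:
            dists = [np.linalg.norm(descriptor - w) for w in self.words]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                return best      # matched an existing word
        self.words.append(descriptor)
        return len(self.words) - 1  # vocabulary expanded with a new word
```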


Citations (13)


... Broadly speaking, spatial relations can be represented qualitatively (e.g., A contains B) or quantitatively (e.g., the angle between A and B is θ) (Thippur et al., 2015). Following Borrmann and Rank (2010), Qualitative Spatial Relations (QSR) can be further characterised as (i) metric, i.e., based on the metric distance between objects, (ii) topological, i.e., describing the neighbourhood of objects, and (iii) directional, i.e., relative to the direction of different axes in a reference coordinate system. ...

Reference:

A. Chiatti, Electronic Thesis (deposited)
A Comparison of Qualitative and Metric Spatial Relation Models for Scene Understanding
  • Citing Article
  • February 2015

Proceedings of the AAAI Conference on Artificial Intelligence

... Research on the design of mobile robots with an integrated manipulator arm has been carried out for grasping objects in semi-structured environments (Štibinger et al., 2021) and in indoor environments (Haviland et al., 2022), and on the design of object grasping with an integrated digital camera guidance system (Chen et al., 2019). In addition, pick-and-place control design can be applied in warehouses for automated storage and retrieval (Bogue, 2016), and in household and office settings (Hawes et al., 2017; Triebel et al., 2016). ...

The STRANDS Project: Long-Term Autonomy in Everyday Environments

IEEE Robotics & Automation Magazine

... An indirect search uses spatial relations to predict, from the known poses of objects, those of searched objects. An indirect search can be classified according to the type ([53]) of relations used to predict the poses. Refs. ...

A Comparison of Qualitative and Metric Spatial Relation Models for Scene Understanding

... keyboards can be found in front of monitors) and more specific models (i.e. the keyboard in Room 133 is often to the right of the monitor). To create such models we can draw on existing work which quantifies qualitative knowledge, making it appropriate for our approach (e.g. spatial models [17], [18] or conceptual knowledge [2], [19]), or learns metric object location predictors from experience [16], [20]. ...

Learning the Geometric Meaning of Symbolic Abstractions for Manipulation Planning
  • Citing Conference Paper
  • August 2012

Lecture Notes in Computer Science

... The aim of spatial classification is to infer the labels of spatial objects that have strong correlation with the location in the real world. This is an important technology in spatial data mining, which facilitates numerous geospatial applications such as spatial analysis [6], spatial cognition [42] and spatial reasoning [19]. Traditional classification methods focus on explicit and independent items (e.g., image data), while spatial classification task has to consider the dependent relations between spatial objects. ...

Combining top-down spatial reasoning and bottom-up object class recognition for scene understanding

... The formal planning languages such as STRIPS [3] or its successor PDDL [4] are widely used to specify the domain, including a set of predicates to represent the environment states and operators with defined preconditions and effects. To generate plans and achieve its goals by symbolic planners in the real world, an agent must instantiate a PDDL planning problem with the objects and their states in the environment, typically assumed a priori fixed [5], [6], [7]. ...

An Approach for Efficient Planning of Robotic Manipulation Tasks
  • Citing Article
  • June 2013

Proceedings of the International Conference on Automated Planning and Scheduling

... Autonomy refers to the degree to which a service robot can sense and perform tasks without direct human intervention . The autonomy of robots mainly depends on their control capacity, interaction accessibility and situation awareness (Riano et al., 2011) to independently sense and cope with changes in the environment (Jia et al., 2021). Service robots are usually equipped with autonomous control systems and communication capabilities to navigate the workplace and understand consumer needs (Tung and Au, 2018). ...

A Study of Enhanced Robot Autonomy in Telepresence
  • Citing Article

... According to motion camouflage theory, the shadower follows a trajectory such that it appears stationary from the shadowee's point of view [2]. Thus at each time t ≥ t₀, the shadower must follow the Camouflage Constraint Line (CCL), which joins the instantaneous position of the shadowee with a fixed focal point. ...

Motion camouflage for unicycle robots using optimal control

... In previous work of ours we have focused on perceptual learning [1], [2] and on skill composition [3]. In this paper we focus on learning affordances, the link between perceptual learning and skill learning, as shown in the diagram in Figure 1. ...

Biologically inspired intrinsically motivated learning for service robots based on novelty detection and habituation

... Often, geometric constraints are also represented with symbols in order to have preliminary feasibility checks at the symbolic level. For instance, [12] translates geometric poses into symbols for manipulation planning problems. The symbols are translated to be compliant with the Planning Domain Definition Language (PDDL, [13]), widely used by the planning community. ...

Manipulation planning using learned symbolic state abstractions
  • Citing Article
  • January 2013

Robotics and Autonomous Systems