Robert Babuska

Delft University of Technology | TU · Cognitive Robotics

Prof. Dr.

About

492
Publications
174,977
Reads
21,064
Citations
Introduction
Robert Babuska is a full professor at the Department of Cognitive Robotics, Faculty of 3mE, Delft University of Technology. His research interests include reinforcement learning, nonlinear identification and state estimation, model-based and adaptive control, and dynamic multi-agent systems. He has been involved in applications of these techniques in robotics, mechatronics, and aerospace.
Additional affiliations
July 2017 - June 2020
Delft University of Technology
Position
  • Professor
May 2003 - June 2017
Delft University of Technology
Position
  • Head of Department
January 2000 - present
University of Coimbra

Publications

Publications (492)
Preprint
Full-text available
Planning methods struggle with computational intractability in solving task-level problems in large-scale environments. This work explores leveraging the commonsense knowledge encoded in LLMs to empower planning techniques to deal with these complex scenarios. We achieve this by efficiently using LLMs to prune irrelevant components from the plannin...
Preprint
Full-text available
Sim2real, that is, the transfer of learned control policies from simulation to real world, is an area of growing interest in robotics due to its potential to efficiently handle complex tasks. The sim2real approach faces challenges due to mismatches between simulation and reality. These discrepancies arise from inaccuracies in modeling physical phen...
Preprint
Full-text available
To efficiently deploy robotic systems in society, mobile robots need to autonomously and safely move through complex environments. Nonlinear model predictive control (MPC) methods provide a natural way to find a dynamically feasible trajectory through the environment without colliding with nearby obstacles. However, the limited computation power av...
Article
Full-text available
Advancements in accelerated physics simulations have greatly reduced training times for reinforcement learning policies, yet the conventional step-by-step agent-simulator interaction undermines simulation accuracy. In the real world, interactions are asynchronous, with sensing, acting and processing happening simultaneously. Failing to capture this...
Article
Full-text available
Many real-world systems can be naturally described by mathematical formulas. The task of automatically constructing formulas to fit observed data is called symbolic regression. Evolutionary methods such as genetic programming have been commonly used to solve symbolic regression tasks, but they have significant drawbacks, such as high computational...
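The entry above describes symbolic regression: automatically searching for analytic formulas that fit observed data. Purely as a generic illustration of the task itself (not the method proposed in the paper), the sketch below fits a toy dataset by randomly sampling small expression trees built from `add`, `mul`, the variable `x`, and a few constants; the target function and all parameters are invented for this example:

```python
import random

def random_expr(rng, depth=0):
    """Grow a small random expression tree over the variable x and a few constants."""
    if depth >= 2 or rng.random() < 0.3:
        # Leaf: either the variable x or a constant
        if rng.random() < 0.7:
            return ("x",)
        return ("const", rng.choice([1.0, 2.0, 3.0]))
    op = rng.choice(["add", "mul"])
    return (op, random_expr(rng, depth + 1), random_expr(rng, depth + 1))

def evaluate(node, x):
    """Recursively evaluate an expression tree at a given x."""
    kind = node[0]
    if kind == "x":
        return x
    if kind == "const":
        return node[1]
    a, b = evaluate(node[1], x), evaluate(node[2], x)
    return a + b if kind == "add" else a * b

def fit(xs, ys, trials=20000, seed=0):
    """Random search: keep the sampled expression with the lowest squared error."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(trials):
        expr = random_expr(rng)
        err = sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best, best_err = expr, err
    return best, best_err

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x + 1.0 for x in xs]  # illustrative target: x^2 + 1
expr, err = fit(xs, ys)
```

Real symbolic regression systems replace the random search with genetic programming or, as in the line of work above, other optimization strategies; the search space and fitness measure are the same.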
Article
Sim2real, that is, the transfer of learned control policies from simulation to the real world, is an area of growing interest in robotics because of its potential to efficiently handle complex tasks. The sim2real approach faces challenges because of mismatches between simulation and reality. These discrepancies arise from inaccuracies in modeling p...
Article
Full-text available
As large language models (LLMs) permeate more and more applications, an assessment of their associated security risks becomes increasingly necessary. The potential for exploitation by malicious actors, ranging from disinformation to data breaches and reputation damage, is substantial. This paper addresses a gap in current research by specifically f...
Article
This paper introduces a dataset for training and evaluating methods for 6D pose estimation of hand-held tools in task demonstrations captured by a standard RGB camera. Despite the significant progress of 6D pose estimation methods, their performance is usually limited for heavily occluded objects, which is a common case in imitation learning, where...
Preprint
Many real-world systems can be described by mathematical formulas that are human-comprehensible, easy to analyze and can be helpful in explaining the system's behaviour. Symbolic regression is a method that generates nonlinear models from data in the form of analytic expressions. Historically, symbolic regression has been predominantly realized usi...
Article
Full-text available
Many real-world systems can be described by mathematical models that are human-comprehensible, easy to analyze and can be helpful in explaining the system’s behavior. Symbolic regression is a method that can automatically generate such models from data. Historically, symbolic regression has been predominantly realized by genetic programming, a meth...
Article
Full-text available
Sensing the shape of continuum soft robots without obstructing their movements and modifying their natural softness requires innovative solutions. This letter proposes to use magnetic sensors fully integrated into the robot to achieve proprioception. Magnetic sensors are compact, sensitive, and easy to integrate into a soft robot. We also propose a...
Chapter
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The current state of the art is based on Neural Radiance Field (NeRF), and while...
Preprint
Full-text available
This paper introduces a dataset for training and evaluating methods for 6D pose estimation of hand-held tools in task demonstrations captured by a standard RGB camera. Despite the significant progress of 6D pose estimation methods, their performance is usually limited for heavily occluded objects, which is a common case in imitation learning where...
Preprint
Full-text available
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The current state of the art is based on Neural Radiance Fields (NeRFs), and whi...
Preprint
Full-text available
Search missions require motion planning and navigation methods for information gathering that continuously replan based on new observations of the robot's surroundings. Current methods for information gathering, such as Monte Carlo Tree Search, are capable of reasoning over long horizons, but they are computationally expensive. An alternative for f...
Preprint
Full-text available
Existing Deep Learning (DL) frameworks typically do not provide ready-to-use solutions for robotics, where very specific learning, reasoning, and embodiment problems exist. Their relatively steep learning curve and the different methodologies employed by DL compared to traditional approaches, along with the high complexity of DL models, which often...
Article
Presents information on the RAS 2021 IROS conference.
Article
Full-text available
Reinforcement learning (RL) agents can learn to control a nonlinear system without using a model of the system. However, having a model brings benefits, mainly in terms of a reduced number of unsuccessful trials before achieving acceptable control performance. Several modelling approaches have been used in the RL domain, such as neural networks, lo...
Preprint
Full-text available
This Supporting information includes interactive plots, videos, and data captured while performing evaluation and validation experiments for our paper.
Conference Paper
Full-text available
This paper presents DeepKoCo, a novel model based agent that learns a latent Koopman representation from images. This representation allows DeepKoCo to plan efficiently using linear control methods, such as linear model predictive control. Compared to traditional agents, DeepKoCo learns task relevant dynamics, thanks to the use of a tailored lossy...
Article
Full-text available
Adapting to uncertainties is essential yet challenging for robots while conducting assembly tasks in real‐world scenarios. Reinforcement learning (RL) methods provide a promising solution for these cases. However, training robots with RL can be a data‐extensive, time‐consuming, and potentially unsafe process. In contrast, classical control strategi...
Article
Deep neural networks designed for vision tasks are often prone to failure when they encounter environmental conditions not covered by the training data. Efficient fusion strategies for multi-sensor configurations can enhance the robustness of the detection algorithms by exploiting redundancy from different sensor streams. In this paper, we propose...
Article
Virtually all dynamic system control methods benefit from the availability of an accurate mathematical model of the system. This also includes methods like reinforcement learning, which can be vastly sped up and made safer by using a dynamic system model. However, obtaining a sufficient amount of informative data for constructing dynamic models can be difficult. Consequently...
Chapter
We consider the problem of estimating an object’s pose in the absence of visual feedback after contact with robotic fingers during grasping has been made. Information about the object’s pose facilitates precise placement of the object after a successful grasp. If the grasp fails, then knowing the pose of the object after the grasping attempt is mad...
Article
Visual navigation is essential for many applications in robotics, from manipulation, through mobile robotics to automated driving. Deep reinforcement learning (DRL) provides an elegant map-free approach integrating image processing, localization, and planning in one module, which can be trained and therefore optimized for a given environment. Howev...
Preprint
Full-text available
Landing a quadrotor on an inclined surface is a challenging manoeuvre. The final state of any inclined landing trajectory is not an equilibrium, which precludes the use of most conventional control methods. We propose a deep reinforcement learning approach to design an autonomous landing controller for inclined surfaces. Using the proximal policy o...
Preprint
Full-text available
We propose a geometry-based grasping method for vine tomatoes. It relies on a computer-vision pipeline to identify the required geometric features of the tomatoes and of the truss stem. The grasping method then uses a geometric model of the robotic hand and the truss to determine a suitable grasping location on the stem. This approach allows for gr...
Preprint
Full-text available
Deep neural networks designed for vision tasks are often prone to failure when they encounter environmental conditions not covered by the training data. Efficient fusion strategies for multi-sensor configurations can enhance the robustness of the detection algorithms by exploiting redundancy from different sensor streams. In this paper, we propose...
Article
Autonomous mobile robots are becoming increasingly important in many industrial and domestic environments. Dealing with unforeseen situations is a difficult problem that must be tackled to achieve long-term robot autonomy. In vision-based localization and navigation methods, one of the major issues is the scene dynamics. The autonomous operation of...
Article
Full-text available
Continual model learning for nonlinear dynamic systems, such as autonomous robots, presents several challenges. First, it tends to be computationally expensive as the amount of data collected by the robot quickly grows in time. Second, the model accuracy is impaired when data from repetitive motions prevail in the training set and outweigh scarcer...
Article
Full-text available
Reinforcement learning algorithms can solve dynamic decision-making and optimal control problems. With continuous-valued state and input variables, reinforcement learning algorithms must rely on function approximators to represent the value function and policy mappings. Commonly used numerical approximators, such as neural networks or basis functio...
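The abstract above refers to representing the value function with numerical approximators such as neural networks or basis functions. A minimal, generic sketch of that idea (not taken from the paper): TD(0) value estimation on a toy 1-D chain, with the value function represented as a weighted sum of Gaussian radial basis functions. The states, rewards, and parameters are all invented for illustration:

```python
import math

# Hypothetical 1-D state space [0, 1]; Gaussian RBF centers and width are illustrative.
CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]
WIDTH = 0.15

def features(s):
    """Gaussian RBF feature vector for state s."""
    return [math.exp(-((s - c) / WIDTH) ** 2) for c in CENTERS]

def value(weights, s):
    """Approximate value: weighted sum of the basis functions."""
    return sum(w * phi for w, phi in zip(weights, features(s)))

def td_update(weights, s, r, s_next, gamma=0.9, alpha=0.1):
    """One TD(0) step: move V(s) toward the bootstrapped target r + gamma * V(s')."""
    delta = r + gamma * value(weights, s_next) - value(weights, s)
    return [w + alpha * delta * phi for w, phi in zip(weights, features(s))]

w = [0.0] * len(CENTERS)
# Toy deterministic chain with constant reward 1; sweep the visited states repeatedly.
for _ in range(500):
    for s in [0.0, 0.25, 0.5, 0.75]:
        w = td_update(w, s, 1.0, s + 0.25)
```

The point of the example is only the structure: continuous states never index a table; instead, each update adjusts a small weight vector through the feature mapping, which is what makes policies and value functions representable over continuous state spaces.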
Preprint
Full-text available
This paper presents DeepKoCo, a novel model based agent that learns a latent Koopman representation from images. This representation allows DeepKoCo to plan efficiently using linear control methods, such as linear model predictive control. Compared to traditional agents, DeepKoCo learns task relevant dynamics, thanks to the use of a tailored lossy...
Preprint
Visual navigation is essential for many applications in robotics, from manipulation, through mobile robotics to automated driving. Deep reinforcement learning (DRL) provides an elegant map-free approach integrating image processing, localization, and planning in one module, which can be trained and therefore optimized for a given environment. Howev...
Conference Paper
Full-text available
The ability to search for objects is a precondition for various robotic tasks. In this paper, we address the problem of finding objects in partially known indoor environments. Using the knowledge of the floor plan and the mapped objects, we consider object-object and object-room co-occurrences as prior information for identifying promising location...
Article
Full-text available
Relying on static representations of the environment limits the use of mapping methods in most real-world tasks. Real-world environments are dynamic and undergo changes that need to be handled through map adaptation. In this work, an object-based pose graph is proposed to solve the problem of mapping in indoor dynamic environments. In contrast to s...
Article
Developing mathematical models of dynamic systems is central to many disciplines of engineering and science. Models facilitate simulations, analysis of the system’s behavior, decision making and design of automatic control algorithms. Even inherently model-free control techniques such as reinforcement learning (RL) have been shown to benefit from t...
Preprint
In symbolic regression, the search for analytic models is typically driven purely by the prediction error observed on the training data samples. However, when the data samples do not sufficiently cover the input space, the prediction error does not provide sufficient guidance toward desired models. Standard symbolic regression techniques then yield...
Article
Deep reinforcement learning makes it possible to train control policies that map high-dimensional observations to actions. These methods typically use gradient-based optimization techniques to enable relatively efficient learning, but are notoriously sensitive to hyperparameter choices and do not have good convergence properties. Gradient-free opti...
Preprint
Deep reinforcement learning (RL) has been successfully applied to a variety of game-like environments. However, the application of deep RL to visual navigation with realistic environments is a challenging task. We propose a novel learning architecture capable of navigating an agent, e.g. a mobile robot, to a target given by an image. To achieve thi...
Conference Paper
Reinforcement Learning (RL) algorithms can be used to optimally solve dynamic decision-making and control problems. With continuous-valued state and input variables, RL algorithms must rely on function approximators to represent the value function and policy mappings. Commonly used numerical approximators, such as neural networks or basis function...
Article
The twisted and coiled polymer muscle (TCPM) has two major benefits: low weight and low cost. Therefore, this new type of actuator is increasingly used in robotic applications where these benefits are relevant. Closed-loop control of these muscles, however, requires additional sensors that add weight and cost, negating the muscles' intrinsic benefi...
Preprint
Full text of the preprint available at https://arxiv.org/abs/1903.11483 | Reinforcement learning (RL) is a widely used approach for controlling systems with unknown or time-varying dynamics. Even though RL does not require a model of the system, it is known to be faster and safer when using models learned online. We propose to employ symbolic regre...
Preprint
Full-text available
Full text of the preprint available at https://arxiv.org/abs/1903.09688 | Reinforcement learning algorithms can be used to optimally solve dynamic decision-making and control problems. With continuous-valued state and input variables, reinforcement learning algorithms must rely on function approximators to represent the value function and policy ma...
Article
Smart robotics will be a core feature while migrating from Industry 3.0 (i.e., mass manufacturing) to Industry 4.0 (i.e., customized or social manufacturing). A key characteristic of a smart system is its ability to learn. For smart manufacturing, this means incorporating learning capabilities into the current fixed, repetitive, task-oriented indust...
Preprint
Full-text available
In this paper, we consider the problem of learning object manipulation tasks from human demonstration using RGB or RGB-D cameras. We highlight the key challenges in capturing sufficiently good data with no tracking devices - starting from sensor selection and accurate 6DoF pose estimation to natural language processing. In particular, we focus on t...
Article
Approximate Reinforcement Learning (RL) is a method to solve sequential decision-making and dynamic control problems in an optimal way. This paper addresses RL for continuous state spaces, in which the control policy is derived using an approximate value function (V-function). The standard approach to derive a policy through the V-function is analogous t...
Article
Control systems designed via learning methods, aiming at quasi-optimal solutions, typically lack stability and performance guarantees. We propose a method to construct a near-optimal control law by means of model-based reinforcement learning and subsequently verifying the reachability and safety of the closed-loop control system through an automati...
Article
Full-text available
Experience replay is a technique that allows off-policy reinforcement-learning methods to reuse past experiences. The stability and speed of convergence of reinforcement learning, as well as the eventual performance of the learned policy, are strongly dependent on the experiences being replayed. Which experiences are replayed depends on two importa...
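As background for the entry above: experience replay stores past transitions in a bounded buffer and samples mini-batches from it for off-policy updates. Below is a minimal generic sketch of such a buffer (not the replay-selection strategies studied in the paper); the capacity and the transition format are illustrative:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer of (state, action, reward, next_state) tuples."""

    def __init__(self, capacity, seed=None):
        self.buffer = deque(maxlen=capacity)  # oldest transition evicted when full
        self.rng = random.Random(seed)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling without replacement breaks temporal correlation
        return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=3, seed=0)
for t in range(5):
    buf.push((t, 0, 1.0, t + 1))
# With capacity 3, only the last three transitions remain in the buffer.
```

Which transitions are kept (here: the most recent) and which are sampled (here: uniformly) are exactly the two choices the abstract identifies as driving stability and convergence speed.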
Article
Numerous prognostic methods have been developed, aiming at predicting future system reliability with the highest possible accuracy. It is striking that the relation with the subsequent maintenance optimization process is generally overlooked, while it is important in practice. Additionally, almost all existing methods are based on a single degradat...
Article
This paper addresses the problem of deriving a policy from the value function in the context of critic-only reinforcement learning (RL) in continuous state and action spaces. With continuous-valued states, RL algorithms have to rely on a numerical approximator to represent the value function. Numerical approximation due to its nature virtually alwa...
Article
Full-text available
Learning-based approaches are suitable for the control of systems with unknown dynamics. However, learning from scratch involves many trials with exploratory actions until a good control policy is discovered. Real robots usually cannot withstand the exploratory actions and suffer damage. This problem can be circumvented by combining learning with mod...
Article
Most deep reinforcement learning techniques are unsuitable for robotics, as they require too much interaction time to learn useful, general control policies. This problem can be largely attributed to the fact that a state representation needs to be learned as a part of learning control policies, which can only be done through fitting expected retur...
Article
A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems where multi-dimensional action spaces are involved. When using this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to proposing this methodology, three specific multi...
Conference Paper
Genetic programming (GP) is a technique widely used in a range of symbolic regression problems, in particular when there is no prior knowledge about the symbolic function sought. In this paper, we present a GP extension introducing a new concept of local transformed variables, based on a locally applied affine transformation of the original variabl...
Conference Paper
In this paper, decentralized reinforcement learning is applied to a control problem with a multidimensional action space. We propose a decentralized reinforcement learning architecture for a mobile robot, where the individual components of the commanded velocity vector are learned in parallel by separate agents. We empirically demonstrate that the...
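The entry above describes learning the individual components of a commanded velocity vector in parallel by separate agents. A toy, generic sketch of that decomposition (not the authors' architecture): two independent tabular agents, each choosing one component of a 2-D action and updating from the shared scalar reward. The stateless task and all parameters are invented for illustration:

```python
import random

class QAgent:
    """Tabular agent learning one component of the joint action vector."""

    def __init__(self, n_actions, alpha=0.1, eps=0.1, seed=0):
        self.q = [0.0] * n_actions
        self.alpha, self.eps = alpha, eps
        self.rng = random.Random(seed)

    def act(self):
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.q))          # explore
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, action, reward):
        # Each agent updates its own table from the shared scalar reward
        self.q[action] += self.alpha * (reward - self.q[action])

def reward(a_lin, a_ang):
    # Hypothetical stateless task: the best joint action is (2, 0)
    return -abs(a_lin - 2) - abs(a_ang - 0)

lin, ang = QAgent(3, seed=1), QAgent(3, seed=2)
for _ in range(2000):
    a0, a1 = lin.act(), ang.act()
    r = reward(a0, a1)
    lin.update(a0, r)
    ang.update(a1, r)

# Greedy joint action after learning
best = (max(range(3), key=lin.q.__getitem__), max(range(3), key=ang.q.__getitem__))
```

Because each agent sees only its own action dimension, the joint action space never has to be enumerated, which is the computational advantage the decentralized architecture targets; the price is that each agent's reward signal is noisy with respect to the other agents' exploration.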
Article
Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper addresses the problem of finding a smooth policy based on...
Article
In this paper, a method for the identification of distributed-parameter systems is proposed, based on finite-difference discretization on a grid in space and time. The method is suitable for the case when the partial differential equation describing the system is not known. The sensor locations are given and fixed, but not all grid points contain s...
Article
In this paper, a novel adaptive Takagi-Sugeno (TS) fuzzy observer-based controller is proposed. The closed-loop stability and the boundedness of all the signals are proven by Lyapunov stability analysis. The proposed controller is applied to a flexible-transmission experimental setup. The performance for constant payload in the presence of noisy me...
Conference Paper
Full-text available
Reinforcement learning techniques enable robots to deal with their own dynamics and with unknown environments without using explicit models or preprogrammed behaviors. However, reinforcement learning relies on intrinsically risky exploration, which is often damaging for physical systems. In the case of the bipedal walking robot Leo, which is studie...
Article
Full-text available
Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in not only a considerable impact on train delays and maintenance costs, but also on safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats that...
Article
Even though various frameworks exist for reasoning under uncertainty, a realistic fault diagnosis task does not fit into any of them in a straightforward way. For each framework, only part of the available data and knowledge is in the desired format. Moreover, additional criteria, like clarity of inference and computational efficiency, require trad...
Article
Interdependencies among system components and the existence of multiple operating modes present a challenge for fault diagnosis of Heating, Ventilation, and Air Conditioning (HVAC) systems. Reliable and timely diagnosis can only be ensured when it is performed in all operating modes, and at the system level, rather than at the level of the individu...
Article
Full-text available
Model-free reinforcement learning and nonlinear model predictive control are two different approaches for controlling a dynamic system in an optimal way according to a prescribed cost function. Reinforcement learning acquires a control policy through exploratory interaction with the system, while nonlinear model predictive control exploits an expli...