Digital Twin Enabled Runtime Verification for
Autonomous Mobile Robots under Uncertainty
Joakim Schack Betzer, Jalil Boudjadar, Mirgita Frasheri, Prasad Talasila
Department of Electrical and Computer Engineering
Aarhus University
Denmark
Abstract—As autonomous robots increasingly navigate com-
plex and unpredictable environments, ensuring their reliable
behavior under uncertainty becomes a critical challenge. This
paper introduces a digital twin-based runtime verification approach for an autonomous mobile robot, to mitigate the impact of uncertainty in the deployment environment. The safety and
performance properties are specified and synthesized as runtime
monitors using TeSSLa. The integration of the executable digital
twin, via the MQTT protocol, enables continuous monitoring
and validation of the robot’s behavior in real-time. We explore
the sources of uncertainties, including sensor noise and environ-
ment variations, and analyze their impact on the robot safety
and performance. Equipped with high computation resources,
the cloud-located digital twin serves as a watch-dog model to
estimate the actual state, check the consistency of the robot’s
actuations and intervene to override such actuations if a safety or
performance property is about to be violated. The experimental analysis demonstrates the high efficiency of the proposed approach in ensuring reliable and robust autonomous robot behavior in uncertain environments, and in keeping the actual and expected speeds closely aligned: the difference is reduced by up to 41% compared to the default robot navigation control.
Index Terms—Runtime verification, Uncertainty, Digital Twins,
Autonomous Robots, State Monitoring, Simulation.
I. INTRODUCTION
Autonomous robots represent a transformative technology that has revolutionized various industries and applications in our daily life, such as access to hazardous and unsafe environments [24], [1], [19], [32], [51]. Such robots can perform complex
tasks with precision and reliability, reducing human interven-
tion and minimizing the risk of errors. An autonomous robot relies on an integrated control system to deliver the expected functionality: it samples the deployment environment via sensors, analyzes the data to estimate the environment state, and computes and executes optimal actuations according to the actual state and the robot mission [30]. Actuation computation is
programmed at design stage for a range of known states and
configurations, where the robot will be able to autonomously
operate and maintain the mission requirements (functionality,
performance, safety) [35]. However, changes in the deploy-
ment environment can lead to uncertain conditions (to be
considered as unknown states for the robot). Uncertainty arises
from numerous sources, including sensor noise, environment
variability and incomplete or inaccurate information about the
robot’s surroundings [9], [45]. This uncertainty reduces the robot’s ability to recognize its environment state accurately, thus leading to suboptimal actuations or unsafe behavior [7].
Self-adaptivity has been introduced to enable autonomous
robots to operate effectively and safely in dynamic and un-
certain environments [22], [51]. It relies on adaptive control strategies to learn and reason about uncertainty [6], but it faces several barriers in robotic applications: standardization requirements, under which changes to the robot functionality and performance via self-adaptivity may require a new approval; the lack of data covering uncertainty cases; and the high complexity and computation cost of running self-adaptive algorithms [6], [47].
Runtime verification amounts to having an observer (soft-
ware or model) that monitors and validates, in real-time, the conformity of a system's behavior with respect to a set of functional and non-functional properties written in a temporal logic or as a state machine [27]. However, executing runtime
monitors on the robot is challenging given that robots are
usually equipped with limited computation resources [5]. One
way of tackling the challenge of runtime verification and
mitigation of uncertainty for autonomous robots is via the use
of digital twins (DTs) [8], [44].
DTs have an emerging role in robotics: as high-fidelity executable models of physical systems, they operate in synchronization with the physical system's data, actuations and environment [15]. Thus, performing runtime verification
for an autonomous robot on a cloud-located DT enables
on-the-fly monitoring, analysis, mitigation and validation of
the robot actuations with respect to actual state, mission
requirements and environment uncertainty [23].
This paper proposes a digital twin-based runtime monitoring
and verification of an autonomous mobile robot's behavior under
different uncertainty sources, enabling simulation, real-time
monitoring, validation and runtime correction of the robot
state with respect to safety and performance requirements.
Validation and efficiency analysis are conducted using real-
world experiments.
The rest of the paper is structured as follows: Section II de-
scribes the robot and uncertainty sources we consider. Section
III discusses relevant related work. The architecture, behavior and
operation of the proposed digital twin are specified in Section
IV. Section V presents the proposed runtime monitors and
verification. Analysis and results are provided in Section VI.
Finally, Section VII concludes the paper.
II. TURTLEBOT ROBOT AND UNCERTAINTY
This section describes the functionality of Turtlebot robots
and the uncertainties we consider in this paper.
A. Turtlebot3 Burger Robots
Turtlebot3 Burger (T3B) is a highly versatile autonomous mobile robot developed by ROBOTIS in collaboration with Open Robotics. It is an open-source platform that runs on the
Robot Operating System (ROS), providing a robust framework
for developing and testing robotic applications. It is a two-
wheeled differential drive robot empowered with a set of
capabilities to deliver a range of missions autonomously (such
as reaching a destination, obstacle avoidance, localizing tags, etc.) in variable environments, and can be precisely controlled in velocity, torque and position [41].
T3B is battery-powered and relies on a Light Detection and Ranging (Lidar) sensor to sense the environment; a control system deployed on a Raspberry Pi to analyze the data and compute runtime actuations; and two wheel motors to drive the robot. Using the Lidar, T3B reads a 360-degree scan of its surroundings $(l_1, \ldots, l_{360})$, measuring the distance at each sampling angle to identify obstacles and build a map of the robot environment [46].
A robot actuation is given in terms of the expected driving speed, composed of linear ($e_l$) and angular ($e_a$) velocity. The expected driving speed is the wheel rotation speed produced by the motors following an actuation command from the control system. T3B turns left and right by trading off the linear and angular velocities: the higher the angular velocity, the sharper the turn.
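To make this trade-off concrete, the following minimal Python sketch shows standard differential-drive kinematics; the wheel radius and separation are nominal T3B values we assume for illustration, not data taken from this paper.

# Minimal differential-drive kinematics sketch (illustration only).
# Assumed nominal T3B geometry: wheel radius ~0.033 m, separation ~0.160 m.
WHEEL_RADIUS = 0.033      # m (assumed)
WHEEL_SEPARATION = 0.160  # m (assumed)

def wheel_speeds(linear, angular):
    """Map an actuation (el, ea) to left/right wheel angular velocities (rad/s).

    A higher angular velocity increases the left/right asymmetry,
    which is what produces a sharper turn.
    """
    v_left = linear - angular * WHEEL_SEPARATION / 2.0
    v_right = linear + angular * WHEEL_SEPARATION / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS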
Furthermore, we define the actual speed $(a_l, a_a)$ to be the speed at which T3B is physically moving at a specific time point. In fact, the actual and expected speeds can differ due to a variety of factors, such as the floor friction level and ground density conditions (e.g., mud, sand) [10].
Maintaining alignment between the expected and actual robot states is of high interest to fulfill the desired performance criteria and safety requirements [32], [50]. For example, when the robot is stuck in low-density ground (such as mud or sand), the expected speed can greatly exceed the actual speed, which may drain the robot battery and violate the mission delivery time [18]. The complexity of the problem can become intractable, especially when navigating through challenging scenarios such as dynamic environments [1].
Besides sophisticated control algorithms [12], combining runtime monitoring and mitigation [17], [36] is one of the techniques proposed in the literature to cope with inaccurate state estimation for autonomous mobile robots and with divergence between the expected and actual performance [50].
B. Uncertainty Sources
Autonomous robots increasingly navigate complex and unpredictable environments, where operating conditions may vary beyond the configurations such robots are engineered for [28]. These dynamic contexts are often accompanied by the pervasive presence of uncertainty, which poses many challenges to ensuring reliable behavior and high performance of autonomous robots [4].
The uncertainty sources we consider in this paper are the
following:
• Faulty Lidar readings due to noise, dust (or any airborne particulates) or other obscurant media in the deployment environment, leading to erroneous data and an incomplete or inconsistent image of the robot environment [9].
• Dynamic environment conditions related to variable floor friction and density, due to the presence of mud, sand or highly lubricated surfaces, which degrade the friction level and the locomotion efficiency of the robot maneuvers [45].
Uncertainty impacts the robot's decision making: the environment state cannot be recognized well enough for proper actuations to be defined efficiently. Moreover, it is far from possible to examine, at design stage, all sources and manifestations of uncertainty and their impact on the robot [19]. Thus, runtime mechanisms are needed to complement the rule-based control of autonomous robots so as to capture, understand and mitigate uncertainties [51].
To capture the uncertainty related to faulty Lidar readings, we propose thorough data analysis and proper correction actions at runtime. Since those operations are computationally expensive to execute on T3B, we develop runtime monitors, deployed as cloud-located digital twins, to investigate the Lidar data consistency. Moreover, we augment T3B with Simultaneous Localization and Mapping (SLAM) [43] and couple it with digital twins to achieve runtime monitoring and mitigation of the uncertainty related to dynamic floor friction and density. In fact, SLAM makes it possible to estimate the actual robot speed, which is highly dependent on the floor friction, density and collisions. The runtime monitors make it possible to impose constraints on the robot behavior to mitigate the uncertainty impact and maintain high performance at runtime, e.g., the actual speed must not differ from the expected speed by more than a certain threshold.
III. RELATED WORK
Autonomous robots are expected to reduce human efforts
in a variety of domains, from replacing heavy manual labor
(e.g. in agriculture), to being deployed in remote and harsh
environments dangerous to humans (e.g. de-mining missions,
search and rescue). While there are examples of such robots already available on the market (the reader is pointed to the Robotti field robot [16]), their capabilities are still quite limited in the face of unforeseen circumstances. This is due to real-world environments being unstructured, partially or fully unknown, and possibly uncontrollable. To be able to fulfil their
missions successfully and safely, autonomous robots have to
adapt to unforeseen issues and events [29], [2].
Different factors can be a source of uncertainty: faulty
components, noisy sensors and actuators, malicious attacks,
uncertainties in the environment itself, accrued model er-
rors [51]. Designing good models is not a trivial task due
to the complexity of the involved software and hardware
components, environment, and the resource and computational
limitations of the embedded devices typically used in robotics.
Additionally, as revealed by a recent survey, planning often seems decoupled from sensors such as lidars and cameras, which, if incorporated properly, could help deal with uncertainty [32].
Reinforcement learning has sparked interest in the robotics
community [39], for manipulators, trajectory tracking, nav-
igation and path planning [49]. While great success was
shown in simulation, applicability in the real world remains
challenging [11], [40]. This is a result of the limited number
of uncertainty samples that can be used for learning, high
training costs and uncertain models [2], [49]. In addition, it is
necessary to provide real-time inference, while simultaneously
dealing with large or unknown delays in sensors, actuators, and
rewards [11].
DT technologies have garnered considerable traction in recent
years, as they promise to optimise the development and
deployment of robots, and cyber-physical systems (CPSs) in
general, enabling services like runtime monitoring, adaptation,
and system reconfiguration to adapt to changes in the CPSs
themselves as well as their environment [13]. The DT could
be run in the cloud, thus alleviating problems regarding
resource and computational limitations, and extending the
capabilities of its physical counterpart. Dobaj et al. take a
DevOps approach in order to support adaptations and the
verification thereof at the CPS level [8]. Specifically, run-time
verification is performed to ensure that a service B, which is
to replace an existing service A, provides reasonable output
before it can affect the CPS. Allamaa et al. have proposed an
approach to adapt controllers created in simulation to a real
system (e.g. an ECU) [3], by effectively transferring control
parameters estimated in simulation to the real system while
taking into account noise and edge cases. In other scenarios,
such as those including Cyber-physical Production Systems
(CPPSs), the DT can be adopted to monitor and generate
new strategies to optimize the production process when new
orders are issued [26]. The results from the DT deliberation are thereafter verified and validated to decide which actuation strategy to apply to the system. Rivera et al. propose
a DT-based reference architecture for the development of
what they call smart CPSs (SCPSs) [37], where an efficient
actuation is derived from the multiple outcomes of multi-DT
systems for CPSs. Additionally, they adopt the concept of
viability zones, where reference signals used in the DT are updated as their divergence from the corresponding real signals increases.
IV. DIGITAL TWIN SPECIFICATION
A DT has been widely defined as a digital representation
of a physical object, that through data exchange, reflects
the evolution of the physical twin (PT) over time and in
turn influences the future behaviour of the PT [20]. This
definition highlights the bidirectional data exchange over a
communication infrastructure and the linked evolution of DT
vis-a-vis the PT. In addition, the need for multi-domain, multi-scale modeling approaches has become apparent in the creation and evolution of DTs [34], [48]. Simulators are required to execute the models. A pair consisting of a model and a simulator gives rise to behaviors, which simulate one aspect (a part, in common terms) of the existing or desirable behavior of the PT [31]. It is often desirable to have a catalogue of suitable
models, simulators and behaviors to be selected and used in
the DT. Safety is a must in autonomous systems and monitors
play significant role in maintaining it [13], [29]. Both monitors
and their safety properties are general and thus belong to a
catalogue of DT assets.
In addition, data, whether live or historic, significantly impacts the performance of the DT in fulfilling its obligations to the PT. The data itself is often collected and reused, and can come either from the DT or from its environment. Such data is helpful in planning the optimal behavior for PTs like autonomous robots. Thus, the data itself can be put into a catalogue of assets from which many DTs can be built [42].
Given that PTs like the Turtlebot support many protocols (MQTT, RabbitMQ, HTTP, etc.), it is often desirable to have implementations of the data interfaces in the catalogue itself in the form of linked data. Moreover, Turtlebot robots have a well-defined Application Programming Interface (API) over which remote software, including the DT, can interact with PTs. The software packages providing access to these APIs are often needed but are not integral to the DT itself. In addition, it is often necessary to operate at an abstract level, e.g., goals versus concrete instructions. The operations provide a way to convert DT-level goals to concrete PT-level instructions.
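As a hedged illustration of this conversion (the goal structure and the use of a /cmd_vel Twist command are our assumptions, not a specification of the operations layer described here), a DT-level speed goal could be lowered to a concrete ROS velocity command as follows:

# Sketch: lowering a DT-level goal to a concrete PT-level instruction.
# The goal format and the /cmd_vel Twist command are illustrative assumptions.
import rospy
from geometry_msgs.msg import Twist

def execute_speed_goal(target_linear, target_angular):
    """Translate an abstract speed goal into a concrete velocity command."""
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rospy.sleep(1.0)  # give subscribers a moment to connect
    cmd = Twist()
    cmd.linear.x = target_linear    # expected linear velocity (el)
    cmd.angular.z = target_angular  # expected angular velocity (ea)
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("dt_operations")
    execute_speed_goal(0.1, 0.0)  # drive straight at 0.1 m/s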
Figure 1a shows an architecture of the DT we propose.
The catalogue of DT assets from which the DTs are created
is highlighted. The DTs created are templates (classes) from
which many DT instances (objects) can be created. In this
paper, the discussion focuses on one DT instance for T3B, and this instance is referred to simply as the DT. Figure 1a
also shows the sequence of operations taking place between
T3B and its DT. One cycle of operations is enumerated in
this figure. The data is collected from both T3B and its
environment and communicated to DT. The collected data
contains the relevant state of the PT: in the present case, the expected linear and angular velocities, the actual speed, the Lidar data and the current actuation. The data is sent over the MQTT
broker to two end points, namely DT (step-1a) and data
storage (step-1b). Both the MQTT broker and data storage
constitute the data part of the DT. The data is received by the
operations interface of the DT (using the Telegraf Connector
to be explained in Section V.C) and becomes a part of DT
state (step-1c). It is pertinent to note that the data (and thereby the state) received from the PT is only a subset of the DT state, which also contains state derived from behaviors. The changes in the DT state trigger both monitors and behaviors (step-2). The monitors check for compliance with safety properties
while behaviors simulate models with the existing state as
input. The monitors can potentially influence simulations by
communicating the safety compliance status of the PT (step-
3a). After conclusions of (potentially multiple) simulations,
the behaviors change the DT state and set new goals for
PT (steps-3b and 4). The operations then convert these goals
to concrete directives and send them to PT (step-5).

Fig. 1: Digital Twin architecture and monitor integration. (a) DT architecture; the sequence of interactions leading to monitoring results is labeled with numbers. (b) Different possibilities for integrating monitors into the DT.
Digital Twin platforms supporting and executing many DTs
often provide a service layer on top of DTs. The service
layer is fed by DTs based on their internal states. The users can interact with the service layer to perform monitoring, visualisation, analysis, global planning and decision making tasks [14], [33]. It is also possible to place monitors as a common service on DT platforms, in which case multiple DTs can use them. The communication of other parts
of the DT with monitors can follow either push or pull patterns.
The pull communication pattern indicates the active checking
and copying of the DT state by the monitor. This pattern is
most appropriate for monitors integrated into the DT. The push
communication pattern indicates the activation of monitors by
sending updated state. In this case, the monitor does not need
access to DT state and thus this communication pattern is most
appropriate for monitors placed in common services (step-3c).
There are multiple patterns in the integration of monitors
into DTs. These patterns are illustrated in Figure 1b.
a) Synthesizer: The run-time monitoring tool uses monitor properties to synthesise the monitor, which is then used inside the DT. Just-in-time synthesis is advantageous over the reuse of pre-built monitor executables. This paper implements a synthesized monitor.
b) Private Service: The monitor is reusable and is in-
tegrated as a service into the DT. This service is exclusively
used by one DT. Given the private scope of the monitor, the
implementation and integration are less complex.
c) Public Service: The monitor is placed as a common
service and is reusable across many DTs. However, the monitor must have the ability to serve many DTs simultaneously.
A common thread across all three integration patterns is
the externalisation of monitor properties. These must not be
baked into the monitors themselves. It is also advantageous to be able to make dynamic changes to the monitored properties of autonomous robotic systems, but supporting such flexibility in monitoring tools is not trivial.
V. DIGITAL TWIN-BASED RUNTIME VERIFICATION
FRAMEWORK
The proposed runtime monitors amount to observing the robot and environment state, which could be partial due to uncertainties, and validating the robot's runtime actuations with respect to a set of safety and performance properties.
Figure 2 depicts the overall workflow of the proposed DT-
enabled runtime verification for T3B robots under uncertainty.
In fact, the robot senses its environment (Sense), analyzes the data in correlation with the actual speed and computes the corresponding actuation command, without executing it yet (Analyze). The state formed by the sensor data, the computed actuation and both the actual and expected velocities is sent to the DT via the MQTT protocol, to enable runtime monitoring and verification of the consistency of the actuation command with respect to the different properties. Upon receiving the validation outcome (Validate), the robot executes the actuation command if it is approved by the DT runtime monitors (Execute). Otherwise, the robot samples the environment again to estimate a new state and compute new actuations.
A. Safety and Validation Properties
Safety is an essential consideration both at design time and during runtime, as a malfunction (safety violation) could result in equipment-related or environmental damage.

Fig. 2: Proposed framework for DT-enabled runtime verification of T3B.
The safety property we consider aims to prevent robot collisions by maintaining a sufficient braking distance to any obstacle. The braking distance depends on the actual speed and must remain smaller than the distance to any given obstacle, to a degree that is considered acceptable in a given scenario. As such, the runtime actuation of T3B must account for the actual speed and the distance to obstacles in the robot trajectory. Formally, such a safety property is specified as follows:
$$P1: \forall i.\; B_{dist}(s_i) \leq L_{dist}(s_i)$$

where $s_i$ is a runtime state of T3B, $L_{dist}(s_i) = \min(s_i.l_{330}, \ldots, s_i.l_{30})$ is the distance to the obstacles located in the current heading angle of T3B, and $B_{dist}(s_i)$ is the distance needed at state $s_i$ to bring the robot to a full stop. In fact, the braking distance depends on the actual speed and is computed from the linear and angular velocities [38].
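The exact braking-distance computation from [38] is not reproduced here; as a hedged illustration, a constant-deceleration approximation over the actual linear speed would give:

% Assumption: constant maximum deceleration a_max (not the formula of [38]);
% v_i denotes the actual linear speed at state s_i.
$$B_{dist}(s_i) \approx \frac{v_i^{2}}{2\, a_{\max}}$$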
Due to the environment uncertainty, the likelihood that the actual speed (terrain-traversal speed) deviates from the expected speed (wheel-spinning speed) increases. This gap becomes evident when the robot operates in terrains with variable friction and density. To prevent drastic divergence, we impose a tolerance property limiting how far the actual speed can deviate from the expected speed at runtime:
$$P2: \forall i.\; |s_i.e - s_i.a| \leq \delta$$

where $s_i.e$ and $s_i.a$ are the expected and actual speeds respectively, and $\delta$ is the maximum tolerance. Integrating this runtime property ensures that the robot can receive appropriate commands to correct any discrepancy between the actual and expected speeds.
The Lidar can occasionally produce inaccurate and flawed
readings, and imposing a runtime monitor to ensure that the
robot does not make misguided navigation decisions due to
such faulty readings is of high significance. We specify a
validation property for the Lidar readings as follows:
$$P3: \forall i, j.\; (s_i.l_j - s_i.l_{j+1}) \leq \gamma \;\wedge\; (s_i.l_j - s_i.l_{j-1}) \leq \gamma$$

This property makes it possible to detect erroneous readings, where the value of a Lidar angle $l_j$ of a given state $s_i$ deviates drastically from both adjacent angles. Such cases are identified as non-obstacles, given that an obstacle cannot be captured by a single angle only; they are thus either a dust particle or a faulty reading.
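A direct Python rendering of this check could look as follows. This is a sketch: the value of $\gamma$ and the circular wrap-around at the scan boundaries are our assumptions, and it follows the prose reading that a reading is flagged only when it deviates from both neighbours.

# Sketch of the P3 consistency check over one Lidar scan (l_1..l_360).
# gamma and the wrap-around at the scan boundaries are assumptions.
def p3_holds(scan, gamma=0.5):
    """Return False if some angle deviates from BOTH neighbours by more than gamma."""
    n = len(scan)  # 360 for T3B
    for j in range(n):
        prev_dev = scan[j] - scan[(j - 1) % n]
        next_dev = scan[j] - scan[(j + 1) % n]
        if prev_dev > gamma and next_dev > gamma:
            return False  # isolated spike: dust particle or faulty reading
    return True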
B. Synthesis of Runtime Monitors
To impose the satisfaction of the aforementioned properties
at runtime, we synthesize a runtime monitor for each property
using the TeSSLa tool [25]. In fact, TeSSLa is a framework that enables instrumentation and streaming of program executions using observers for runtime verification purposes. Endowed with back-end synthesizers [21] and a friendly instrumentation language, TeSSLa synthesizes monitors from behavior and property specifications.
Fig. 3: TeSSLa monitor specification for property P2.

Figure 3 depicts the TeSSLa specification of the runtime monitor implementing the tolerance property P2. The specification takes two input streams, expectedSpeed and actualSpeed, computed from $e_l$, $e_a$, $a_l$ and $a_a$. The input streams are used to calculate the speed difference diff (which can be negative), which forms a baseline to compute the appropriate actuations. For example, diff can be added to expectedSpeed to bring the actual speed closer to the expected speed. The output of this TeSSLa specification is a Boolean value indicating whether the expected speed resulting from the execution of the current actuation would violate the property, in addition to an action stream representing the adjusted expected speed, which can then be used for further processing or monitoring.
The runtime monitors synthesized using TeSSLa ensure that the T3B behavior adheres to the specified properties P1, P2 and P3. For example, the runtime monitor in Figure 3 ensures that, at every state, the absolute difference between the actual and expected speeds does not exceed the tolerance $\delta$. To illustrate the data-flow synthesis and to test the monitor specification, we utilize TeSSLa's playground. Using actual state data, we create a trace file serving as a set of input events to the monitor, and observe the output of the monitor specification, as depicted in Listing 1.
Listing 1: Input Trace to the P2 monitor.

0: actualSpeed = 0
0: expectedSpeed = 1
2: actualSpeed = 5
2: expectedSpeed = 1
4: actualSpeed = 2
4: expectedSpeed = 5
6: actualSpeed = 3
6: expectedSpeed = 3
8: actualSpeed = 1
8: expectedSpeed = 4
The trace file is a sequence of timestamped events that represent the values of the input streams at specific points in time. Processing the events from the trace file at runtime produces the output streams shown in Figure 4. TeSSLa provides the output of processing the events against the monitor specification, effectively testing and validating the robot behavior.

Fig. 4: TeSSLa output stream visualization.
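To make the trace processing tangible, the following plain-Python replay applies the P2 check to the trace of Listing 1; the tolerance $\delta = 2$ is an assumed value for illustration, not the one used in the experiments.

# Replay of the Listing 1 trace through a plain-Python P2 check (sketch).
DELTA = 2  # assumed tolerance, for illustration only

trace = [
    (0, 0, 1),  # (time, actualSpeed, expectedSpeed)
    (2, 5, 1),
    (4, 2, 5),
    (6, 3, 3),
    (8, 1, 4),
]

for t, actual, expected in trace:
    diff = expected - actual  # may be negative, as in the TeSSLa spec
    violated = abs(diff) > DELTA
    print(f"t={t}: diff={diff:+d} -> {'VIOLATION' if violated else 'ok'}")
# Prints violations at t=2, t=4 and t=8.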
The T3B controller thereafter combines the validation outputs from the different runtime monitors with a simple control logic to decide the runtime actuations. However, in this paper we consider the case where the DT runtime monitors only validate whether the actuations newly computed by the robot itself will maintain the different properties on the actual state.
C. Integration of Runtime Monitors into DT
The integration amounts to mediating the outcomes of the runtime monitors to the control module, establishing communication between the DT and the physical T3B, and developing an orchestrator that invokes the execution of the runtime monitors. We use the MQTT protocol to transfer data and the validation outcomes between T3B and the DT. Furthermore, we utilize the
Telegraf open-source agent to collect and analyze events from
the MQTT streams. In fact, Telegraf provides the functionality to create pipelines for listening to events from the robot, parsing the data through a TeSSLa specification file, and sending the TeSSLa output stream back to the robot.
Listing 2: MQTT listener

[[inputs.mqtt_consumer]]
  servers = ["tcp://test.mosquitto.org:1883"]
  topics = ["tessla"]
  data_format = "json"
  json_string_fields = ["actualSpeed"]

[[outputs.mqtt]]
  servers = ["tcp://test.mosquitto.org:1883"]
  topic = "action"
  data_format = "json"
As illustrated in Listing 2, we specify the topic tessla that Telegraf will be listening to, the format of the data, and which JSON string field to look at. We also specify the topic where Telegraf will output the data. The input data is processed and forwarded to specific outputs, which in this case is TeSSLa, for temporal monitoring and analysis. To this end, the TeSSLa Telegraf Connector takes a TeSSLa specification and uses the TeSSLa compiler to convert it into a Rust project. The result is a Rust program that can interact with Telegraf, enabling seamless integration and real-time processing of data streams based on the specified TeSSLa properties. Furthermore, we include the Telegraf.tessla file in the TeSSLa specification as follows:
def @TelegrafIn(id: String, tags: String, field: String)
def @TelegrafOut(name: String)
The @TelegrafIn annotation marks the input streams and @TelegrafOut marks the output streams, matching what we specified in Listing 2. These are then tied to the events in the specification file for the input streams, and to the output for the output streams.
The last configuration step of the TeSSLa Telegraf Connec-
tor is to create communication between the generated Rust
project and Telegraf. The Rust program connects to Telegraf
via UDP, and based on the incoming data points from Telegraf,
the input functions of the TeSSLa monitor are called.
The output streams from the TeSSLa monitor are translated into the InfluxDB line format and sent to Telegraf via UDP, thus providing a complete solution for real-time monitoring, analysis, and management of complex data streams.
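As a rough illustration of that last exchange (a sketch only: the measurement name, field names and UDP port are our assumptions, not the connector's actual schema), a monitor output in InfluxDB line protocol can be pushed to a Telegraf UDP socket listener as follows:

# Sketch: emitting a monitor output to Telegraf as InfluxDB line protocol
# over UDP. The measurement/field names and the port are assumptions.
import socket
import time

TELEGRAF_UDP = ("127.0.0.1", 8094)  # assumed Telegraf socket listener address

def send_monitor_output(action, valid):
    # InfluxDB line protocol: <measurement> <field_set> <timestamp_ns>
    ts_ns = time.time_ns()
    line = f"tessla_monitor action={action},valid={str(valid).lower()} {ts_ns}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("utf-8"), TELEGRAF_UDP)

send_monitor_output(0.05, True)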
VI. EXPERIMENT AND VALIDATION
To validate the proposed DT-enabled runtime monitoring
and verification for T3B under uncertainty, we implemented
the DT, runtime monitors, the MQTT protocol and Telegraf
interface. We conducted different experiments to demonstrate
the efficiency of using runtime monitors. Furthermore, we
collected actual (time-stamped) robot data to provide the
possibility to run different simulations in the DT without the
need to synchronize with the physical robot.
A. Experiment setup
The experiment setup involves a T3B running in an uncontrolled environment and publishing its data through ROS topics. The data is then processed through an MQTT broker,
which acts as an intermediary between the ROS system and
the DT in both directions. From T3B to the DT, the broker facilitates communication by subscribing to the ROS topics and publishing their data to specific MQTT topics, which the DT then subscribes to. Similarly, the DT publishes data back to a different MQTT topic, which the broker subscribes to and republishes to ROS topics that T3B subscribes to. Routing the communication between T3B and the DT through MQTT ensures a decoupled architecture, where both ends can operate independently. Additionally, this architecture simplifies the creation of a mock version of the physical robot.
By publishing data in the same format as T3B to the ROS topics, the MQTT broker gathers the data and forwards it to the DT, seamlessly integrating the mock data into the DT.

Fig. 5: Code snippet of the script to simulate stored data.

Fig. 6: Code snippet of the DT calling the tolerance monitor using the data.
Figure 5 presents a code snippet demonstrating how to publish mock data from a CSV file to a ROS topic. As long as the data is structured in the Lidar data format, a custom ROS message comprising two arrays (one with 360 float values and the other with 7 string values), it is straightforward to simulate the physical robot in the DT. Figure 6 depicts how the arrays from the Lidar data are used by the DT.
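As a concrete rendering of such a replay script, the sketch below gives one plausible shape; the topic name, the publish rate, the CSV layout and the hypothetical LidarData message type are assumptions based on the description above, and the actual code in Figure 5 may differ.

#!/usr/bin/env python
# Sketch of a mock-data replay script in the spirit of Figure 5 (assumptions:
# the /lidar_data topic name, the 5 Hz rate, the CSV layout, and the
# hypothetical custom message turtlebot_dt/LidarData holding 360 floats
# plus 7 strings, as described in the text).
import csv

import rospy
from turtlebot_dt.msg import LidarData  # hypothetical custom message

def replay(csv_path):
    rospy.init_node("mock_t3b_publisher")
    pub = rospy.Publisher("/lidar_data", LidarData, queue_size=10)
    rate = rospy.Rate(5)  # assumed publish rate
    with open(csv_path) as f:
        for row in csv.reader(f):
            msg = LidarData()
            msg.ranges = [float(v) for v in row[:360]]  # 360 Lidar distances
            msg.state = row[360:367]                    # 7 string state fields
            pub.publish(msg)
            rate.sleep()

if __name__ == "__main__":
    replay("stored_run.csv")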
In this experiment, we focus only on the tolerance runtime monitor (synthesized for P2), which is why only the actual speed and linear speed variables are defined. Furthermore, we enable the runtime monitor to propose corrective actions to the speed actuation, in addition to validation. The DT calls the optimizeActualSpeed function, depicted in Figure 7, which is an implementation of the corresponding TeSSLa monitor.

Fig. 7: Code snippet of the tolerance monitor implemented in Python.
The speed difference is scaled by a proportional gain of 0.5, ensuring that the corrective action taken to adjust the robot's speed is proportional to the difference. Larger errors thus result in stronger corrective actions, improving T3B's ability to reach the desired speed more accurately. Furthermore, we check that the proposed speed actuation does not exceed 0.22 m/s, as this is the maximum speed of T3B. Finally, we also return a boolean value, which is used to visualize which data is augmented and to check when we need to publish an action for the robot to execute.
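As a concrete rendering of this logic, the sketch below reconstructs the behavior described in this subsection (a proportional gain of 0.5, the 0.22 m/s cap, and a boolean flag marking augmented data); the tolerance value and the variable names are our assumptions, and the actual code in Figure 7 may differ.

# Sketch of the tolerance-monitor correction described in the text.
# The tolerance value and variable names are assumptions.
MAX_SPEED = 0.22  # T3B maximum linear speed (m/s)
KP = 0.5          # proportional gain described in the text
DELTA = 0.01      # assumed tolerance threshold (m/s)

def optimize_actual_speed(expected, actual):
    """Return (new expected-speed actuation, whether a correction was issued)."""
    diff = expected - actual
    if abs(diff) <= DELTA:
        return expected, False        # within tolerance: keep the actuation
    corrected = expected + KP * diff  # correction proportional to the error
    corrected = max(0.0, min(corrected, MAX_SPEED))  # clamp to T3B's range
    return corrected, True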
B. Use of SLAM methods
Before presenting the results of the experiment on runtime
monitors, it is essential to discuss the importance of choosing
the appropriate SLAM methods. In this experiment, the actual
speed is derived using SLAM’s ability to publish the current
position of the robot. However, there is a problem when using the generic gmapping SLAM method, as it relies on the wheel encoders of the robot: any movement of the wheels is reflected in the SLAM-reported position, even if the physical robot itself has not actually moved. While this might not be an issue in ideal conditions, our experiment involves running the robot through rough terrain to examine the runtime monitor's ability to correct the difference between actual and expected speed that can occur in such conditions. Therefore, for this experiment we use the hector SLAM method.
The advantage of using hector SLAM is evident in Figure 8. Unlike gmapping, which closely follows the wheel encoder data and can falsely indicate movement even when the robot is stationary (due to wheel slippage), hector SLAM relies solely on Lidar sensor data. This makes it more accurate in rough terrains, although it may still be susceptible to noise.
C. Experiment Results
The T3B robot is run on a yoga mat with various objects placed underneath it, providing a bumpy, low-density terrain. The robot is programmed to test different linear speeds, ranging from 0.015 to 0.1 m/s. This helps visualize when the robot struggles to overcome various challenges in the terrain.
Fig. 8: Plots showing a stationary T3B publishing its actual
and expected speed using SLAM gmapping (top figure) and
SLAM hector (bottom figure).
Figure 9 shows the actual and expected speeds when using the default robot navigation. As one can see, the robot achieves (almost) the expected speed, with a small delay, during the first 100 seconds. It then meets the first small hill in the terrain and remains stuck until the expected speed reaches 0.1 m/s again at around 250 seconds, which frees the robot. Lastly, the robot gets stuck again and never recovers before the execution is halted, making the difference between the actual and expected speeds drastic.
Figure 10 depicts the speeds for the same experiment, where the robot is augmented with the DT runtime monitors.

Fig. 9: Plot of actual and expected speed using default robot navigation.

Fig. 10: Plot of the actual and expected speeds when the robot is augmented with the DT runtime monitor for speed tolerance.

Again, we observe that the robot drives smoothly until around 100 seconds, when it meets the same small hill affecting the actual speed. The plot also shows where the runtime monitor augments the expected speed of the robot to overcome these challenges, showcased by the orange data points. One data point experiences this at around 125 seconds, ultimately getting the robot over the small hill and achieving an actual speed close to the expected speed. The rest of the plot shows the robot encountering two additional challenges affecting the actual speed, with the runtime monitor aiding the robot in overcoming these obstacles. It can be observed that the actual and expected speeds follow each other, albeit with a small delay, thanks to the effective intervention of the runtime monitor.
The comparison between the two figures clearly demon-
strates the benefits of using the DT runtime monitor. With the
default navigation, the robot struggles to maintain the expected
speed over uneven terrain. In contrast, the DT runtime monitor
significantly improves the robot’s ability to handle challenging
terrain by dynamically adjusting its speed.
To quantify the performance improvement, we calculate the Mean Squared Error (MSE) between the actual and expected speeds, as shown in Table I.

TABLE I: MSE for Default and Augmented Robot Navigation

Navigation Mode               MSE
Default Robot Navigation      0.0017
Augmented Robot Navigation    0.0010
The numerical difference between 0.0017 and 0.0010 might seem minor, but the relative improvement of approximately 41% ($(0.0017 - 0.0010)/0.0017 \approx 0.41$) indicates a significant enhancement in the robot's performance. The augmented navigation system demonstrates better speed-tracking accuracy, increased robustness, and more reliable operation, particularly in challenging conditions.
VII. CONCLUSION
This paper proposed a digital twin-empowered runtime verification framework for an autonomous mobile robot, the Turtlebot3 Burger, operating in uncertain environments. The uncertainty comes from the terrain conditions (density, elevation, etc.) and faulty sensor data (due to noise, dust, occlusion, etc.).
The robot behavior constraints and safety properties are synthesized as runtime monitors in TeSSLa, implemented in Python and integrated into the (cloud-located) digital twin platform we designed for T3B robots. The synchronization of the executable digital twin with the robot, via the MQTT protocol and the Telegraf agent, enables continuous monitoring and validation of the robot's behavior in real-time.
We have conducted different experiments to analyze the
efficiency and time accuracy of the proposed runtime monitors
using a physical robot and real-world scenarios. The experimental results demonstrate the high effectiveness of the proposed runtime monitoring and verification in ensuring the reliability and robustness of the autonomous robot behavior in uncertain environments.
As future work, we plan to augment the functionality
of the runtime monitors to incorporate computation and co-
ordination of the proper control actuations, and enhance the
synchronization efficiency to reduce the time delays between
the robot and its digital twin.
REFERENCES
[1] A. Abbadi and R. Matousek. Hybrid rule-based motion planner for
mobile robot in cluttered workspace: A combination of rrt and cell
decomposition approaches. Soft Computing, 22(6):1815–1831, 2018.
[2] S. M. Ahmadi and M. M. Fateh. Robust control of electrically driven
robots using adaptive uncertainty estimation. Computers & Electrical
Engineering, 56:674–687, 2016.
[3] J. P. Allamaa, P. Patrinos, H. Van der Auweraer, and T. D. Son. Sim2real
for autonomous vehicle control using executable digital twin. IFAC-
PapersOnLine, 55(24):385–391, 2022.
[4] O. Andersson. Learning to make safe real-time decisions under un-
certainty for autonomous robots. Linköping Studies in Science and
Technology. Dissertations, 2020.
[5] E. Bartocci, R. Grosu, A. Karmarkar, S. A. Smolka, S. D. Stoller,
E. Zadok, and J. Seyster. Adaptive runtime verification. In Runtime
Verification: Third International Conference, RV 2012, Istanbul, Turkey,
September 25-28, 2012, Revised Selected Papers 3, pages 168–182,
2013.
[6] F. Cuevas, O. Castillo, and P. Cortes-Antonio. Towards an adaptive
control strategy based on type-2 fuzzy logic for autonomous mobile
robots. In 2019 IEEE International Conference on Fuzzy Systems
(FUZZ-IEEE), pages 1–6, 2019.
[7] J. G. N. De Carvalho Filho, E. Á. N. Carvalho, L. Molina, and
E. O. Freire. The impact of parametric uncertainties on mobile robots
velocities and pose estimation. IEEE Access, 7:69070–69086, 2019.
[8] J. Dobaj, A. Riel, T. Krug, M. Seidl, G. Macher, and M. Egretzberger.
Towards digital twin-enabled devops for cps providing architecture-
based service adaptation & verification at runtime. In Proceedings of
the 17th Symposium on Software Engineering for Adaptive and Self-
Managing Systems, pages 132–143, 2022.
[9] M. Dreissig, D. Scheuble, F. Piewak, and J. Boedecker. Survey on lidar perception in adverse weather conditions. May 2023.
[10] B. Dugarjav, S.-G. Lee, D. Kim, J. H. Kim, and N. Y. Chong. Scan
matching online cell decomposition for coverage path planning in an
unknown environment. International Journal of Precision Engineering
and Manufacturing, 14:1551–1558, 2013.
[11] G. Dulac-Arnold, D. Mankowitz, and T. Hester. Challenges of real-world
reinforcement learning. arXiv preprint arXiv:1904.12901, 2019.
[12] L. Erickson and S. LaValle. A simple, but np-hard, motion planning
problem. In Proceedings of the AAAI Conference on Artificial Intelli-
gence, volume 27, pages 1388–1393, 2013.
[13] H. Feng, C. Gomes, C. Thule, K. Lausdahl, A. Iosifidis, and P. G. Larsen.
Introduction to digital twin engineering. In 2021 Annual Modeling and
Simulation Conference (ANNSIM), pages 1–12. IEEE, 2021.
[14] E. Ferko, A. Bucaioni, and M. Behnam. Architecting digital twins. IEEE
Access, 10:50335–50350, 2022.
[15] J. S. Fitzgerald, P. G. Larsen, T. Margaria, J. Woodcock, and C. Gomes.
Engineering of digital twins for cyber-physical systems. In Leverag-
ing Applications of Formal Methods, Verification and Validation. 11th
International Symposium, ISoLA, 2022.
[16] F. Foldager, O. Balling, M. Boel, C. Gamble, P. G. Larsen, and O. Green.
Design space exploration in the development of agricultural robots.
In Book of Abstracts of the European Conference on Agricultural
Engineering: AgEng2018, pages 60–61. Wageningen University, 2018.
[17] C. D. Franco and N. Bezzo. Interpretable run-time monitoring and
replanning for safe autonomous systems operations. IEEE Robotics and
Automation Letters, 5(2):2427–2434, 2020.
[18] P. Gia Luan and N. T. Thinh. Real-time hybrid navigation system-
based path planning and obstacle avoidance for mobile robots. Applied
Sciences, 10(10), 2020.
[19] L. González-Rodríguez and A. Plasencia-Salgueiro. Uncertainty-Aware
Autonomous Mobile Robot Navigation with Deep Reinforcement Learn-
ing, pages 225–257. Springer International Publishing, Cham, 2021.
[20] M. Grieves and J. Vickers. Digital Twin: Mitigating Unpredictable,
Undesirable Emergent Behavior in Complex Systems. In F.-J. Kahlen,
S. Flumerfelt, and A. Alves, editors, Transdisciplinary Perspectives
on Complex Systems, pages 85–113. Springer International Publishing
Switzerland, August 2017.
[21] K. Havelund and G. Roşu. Synthesizing monitors for safety properties.
In J.-P. Katoen and P. Stevens, editors, Tools and Algorithms for the
Construction and Analysis of Systems, 2002.
[22] C. Hernández, J. Bermejo-Alonso, and R. Sanz. A self-adaptation
framework based on functional knowledge for augmented autonomy in
robots. Integrated Computer-Aided Engineering, 25(2):157–172, 2018.
[23] T. Hoebert, W. Lepuschitz, E. List, and M. Merdan. Cloud-based digital
twin for industrial robotics. In Industrial Applications of Holonic and
Multi-Agent Systems: 9th International Conference, 2019.
[24] M. Javaid, A. Haleem, R. P. Singh, and R. Suman. Substantial capa-
bilities of robotics in enhancing industry 4.0 implementation. Cognitive
Robotics, 1:58–75, 2021.
[25] H. Kallwies, M. Leucker, M. Schmitz, A. Schulz, D. Thoma, and
A. Weiss. TeSSLa – an ecosystem for runtime verification. In T. Dang
and V. Stolz, editors, Runtime Verification, 2022.
[26] S. Kang, I. Chun, and H.-S. Kim. Design and implementation of runtime
verification framework for cyber-physical production systems. Journal
of Engineering, 2019(1):2875236, 2019.
[27] K. J. Kristoffersen, C. Pedersen, and H. R. Andersen. Runtime
verification of timed ltl using disjunctive normalized equation systems.
Electronic Notes in Theoretical Computer Science, 89(2):210–225, 2003.
[28] N. Laxman, C. H. Koo, and P. Liggesmeyer. U-map: A reference map
for safe handling of runtime uncertainties. In M. Zeller and K. Höfig,
editors, Model-Based Safety and Assessment, 2020.
[29] J. Lee, H. Dallali, M. Jin, D. G. Caldwell, and N. G. Tsagarakis. Robust
and adaptive dynamic controller for fully-actuated robots in operational
space under uncertainties. Autonomous Robots, 43:1023–1040, 2019.
[30] F. L. Lewis and S. S. Ge. Autonomous mobile robots: sensing, control,
decision making and applications. CRC Press, 2018.
[31] M. Liu, S. Fang, H. Dong, and C. Xu. Review of digital twin
about concepts, technologies, and industrial applications. Journal of
manufacturing systems, 58:346–361, 2021.
[32] A. Loganathan and N. S. Ahmad. A systematic review on recent
advances in autonomous mobile robot navigation. Engineering Science
and Technology, an International Journal, 40:101343, 2023.
[33] G. Lumer-Klabbers, J. O. Hausted, J. L. Kvistgaard, H. D. Macedo,
M. Frasheri, and P. G. Larsen. Towards a digital twin framework for
autonomous robots. In SESS: The 5th IEEE International Workshop on
Software Engineering for Smart Systems. COMPSAC 2021, IEEE, July
2021.
[34] D. McKee. Platform stack architectural framework: An introductory
guide. A Digital Twin Consortium White Paper. Digital Twin Consor-
tium, 2023.
[35] M. A. Niloy, A. Shama, R. K. Chakrabortty, M. J. Ryan, F. R. Badal,
Z. Tasneem, M. H. Ahamed, S. I. Moyeen, S. K. Das, M. F. Ali, et al.
Critical design and control issues of indoor autonomous mobile robots:
A review. IEEE Access, 9:35338–35370, 2021.
[36] A. Petrovska, M. Neuss, I. Gerostathopoulos, and A. Pretschner. Run-
time reasoning from uncertain observations with subjective logic in
multi-agent self-adaptive cyber-physical systems. In 2021 International
Symposium on Software Engineering for Adaptive and Self-Managing
Systems (SEAMS), 2021.
[37] L. F. Rivera, M. Jiménez, G. Tamura, N. M. Villegas, and H. A. Müller.
Designing run-time evolution for dependable and resilient cyber-physical
systems using digital twins. Journal of Integrated Design and Process
Science, 25(2):48–79, 2021.
[38] A. Salimi Lafmejani, H. Farivarnejad, and S. Berman. Adaptation of
gradient-based navigation control for holonomic robots to nonholonomic
robots. IEEE Robotics and Automation Letters, 6, November 2020.
[39] B. Singh, R. Kumar, and V. P. Singh. Reinforcement learning in robotic
applications: a comprehensive survey. Artificial Intelligence Review,
55(2):945–990, 2022.
[40] M. Smyrnakis, H. Qu, D. Bauso, and S. M. Veres. Multi-model adaptive
learning for robots under uncertainty. In ICAART (1), pages 50–61, 2020.
[41] W. A. Syaqur, A. S. Ali Yeon, A. H. Abdullah, K. Kamarudin, R. Vis-
vanathan, A. H. Ismail, S. M. Mamduh, and A. Zakaria. Mobile robot
based simultaneous localization and mapping in UniMAP's unknown
environment. In 2018 International Conference on Computational
Approach in Smart Systems Design and Applications (ICASSDA), 2018.
[42] P. Talasila, C. Gomes, P. H. Mikkelsen, S. G. Arboleda, E. Kamburjan,
and P. G. Larsen. Digital twin as a service (dtaas): A platform for digital
twin developers and users. In 2023 IEEE Smart World Congress (SWC),
2023.
[43] H. Temeltas and D. Kayak. Slam for robot navigation. IEEE Aerospace
and Electronic Systems Magazine, 23(12):16–19, 2008.
[44] A. Temperekidis, N. Kekatos, P. Katsaros, W. He, S. Bensalem,
H. AbdElSabour, M. AbdElSalam, and A. Salem. Towards a digital
twin architecture with formal analysis capabilities for learning-enabled
autonomous systems. In International Conference on Modelling and
Simulation for Autonomous Systems, pages 163–181, 2022.
[45] H. T. Tramsen, L. Heepe, J. Homchanthanakul, F. Wörgötter, S. N. Gorb,
and P. Manoonpong. Getting grip in changing environments: the effect
of friction anisotropy inversion on robot locomotion. Applied Physics
A, 127, 2021.
[46] M. Weigl, B. Siemiątkowska, K. A. Sikorski, and A. Borkowski. Grid-
based mapping for autonomous mobile robot. Robotics and Autonomous
Systems, 11(1):13–21, 1993.
[47] K. Yang, X. Tang, J. Li, H. Wang, G. Zhong, J. Chen, and D. Cao.
Uncertainties in onboard algorithms for autonomous vehicles: Chal-
lenges, mitigation, and perspectives. IEEE Transactions on Intelligent
Transportation Systems, 2023.
[48] V. Zambrano, J. Mueller-Roemer, M. Sandberg, P. Talasila, D. Zanin,
P. G. Larsen, E. Loeschner, W. Thronicke, D. Pietraroia, G. Landolfi,
A. Fontana, M. Laspalas, J. Antony, V. Poser, T. Kiss, S. Bergweiler, S. P.
Serna, S. Izquierdo, I. Viejo, A. Juan, F. Serrano, and A. Stork. Industrial
digitalization in the industry 4.0 era: Classification, reuse and authoring
of digital models on digital twin platforms. Array, page 100176, 2022.
[49] T. Zhang and H. Mo. Reinforcement learning for robot research:
A comprehensive review and open issues. International Journal of
Advanced Robotic Systems, 18(3):17298814211007305, 2021.
[50] J. Zhong, C. Ling, A. Cangelosi, A. Lotfi, and X. Liu. On the gap
between domestic robotic applications and computational intelligence.
Electronics, 10(7):793, 2021.
[51] Q. Zhu, W. Li, H. Kim, Y. Xiang, K. Wardega, Z. Wang, Y. Wang,
H. Liang, C. Huang, J. Fan, et al. Know the unknowns: Addressing
disturbances and uncertainties in autonomous systems. In Proceedings
of the 39th International Conference on Computer-Aided Design, pages
1–9, 2020.