Table 4

Model Size and Model Checking Times for Different Temporal Granularities.
Source publication
The Care-O-bot is an autonomous robotic assistant that can
support people in domestic and other environments. The behaviour of the
robot can be defined by a set of high-level control rules. The adoption
and further development of such robotic assistants are inhibited by the
absence of assurances about their safety. In previous work, formal models
of...
Context in source publication
Context 1
... that CRutoN accepts parameters to allow modulation of the temporal granularity of the model, and associates a fixed length of time (duration) in seconds with every state in the formal model. Table 4 shows the effect of temporal granularity on the size of the models, and the time taken to perform model checking. The results correspond to the initial set of 31 behaviours. ...
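To see why coarsening the temporal granularity shrinks the model so dramatically, a rough back-of-the-envelope estimate helps. The sketch below is not part of CRutoN and the durations are invented; it only assumes that each timed element of a control rule needs roughly ceil(duration / granularity) discrete states when every state represents a fixed number of seconds, so the counts multiply across independent timers.

```python
import math
from functools import reduce

def steps_per_timer(duration_s, granularity_s):
    """Discrete states needed to count off one timed duration when every
    state in the model represents granularity_s seconds of real time."""
    return max(1, math.ceil(duration_s / granularity_s))

def rough_state_estimate(durations_s, granularity_s):
    """Crude upper bound on timer-related states: timers are treated as
    independent counters, so their step counts multiply."""
    return reduce(lambda acc, d: acc * steps_per_timer(d, granularity_s),
                  durations_s, 1)

# Hypothetical durations (in seconds) of timed preconditions in control rules.
durations = [30, 60, 120, 300]

for g in (1, 5, 10, 30, 60):   # candidate granularities, in seconds
    print(f"granularity {g:>2} s -> ~{rough_state_estimate(durations, g):,} timer states")
```

Under these assumptions a one-second granularity already yields tens of millions of timer states, while a one-minute granularity yields only a handful, which is consistent with the trend the table reports.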
Similar publications
For assistive robots and virtual agents to achieve ubiquity, machines will need to anticipate the needs of their human counterparts. The field of Learning from Demonstration (LfD) has sought to enable machines to infer predictive models of human behavior for autonomous robot control. However, humans exhibit heterogeneity in decision-making, which t...
Citations
... Model checking (Clarke et al., 1999; Fisher, 2011), a formal method used for V&V, is exhaustive over the state space of a model but requires abstraction of the full system (e.g., the high-level control algorithms, the low-level control and mechanical behavior, and the code that runs on the robot) into a finite number of states. For this reason, formal verification can be applied to the analysis of high-level decision-making engines for safety and liveness purposes, exemplified by our previous work in HRI scenarios (Bordini et al., 2009; Dixon et al., 2014; Gainer et al., 2017; Webster et al., 2015). Reasoning and high-level control algorithms have been verified through formal verification and model checking for other kinds of autonomous robots, such as ground robots (Mitsch et al., 2017), unmanned aircraft (Webster et al., 2013), and multi-robot swarm systems (Konur et al., 2012). ...
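To make "exhaustive over the state space of a model" concrete, the following sketch hand-rolls an explicit-state safety check over a toy finite abstraction of a robot controller. Everything here (state names, labels, and the safety property) is invented for illustration and does not correspond to any of the cited tools or models; real model checkers handle far larger state spaces and richer temporal logics.

```python
from collections import deque

# Toy finite-state abstraction of a high-level controller. States carry
# atomic propositions; transitions are nondeterministic.
transitions = {
    "idle":       ["moving", "idle"],
    "moving":     ["near_human", "idle"],
    "near_human": ["stopped", "moving"],
    "stopped":    ["idle"],
}
labels = {
    "idle": set(),
    "moving": {"motion"},
    "near_human": {"motion", "close"},
    "stopped": {"close"},
}

def check_safety(initial, is_bad):
    """Exhaustively explore the reachable state space breadth-first and
    return a counterexample path if some reachable state violates safety."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if is_bad(labels[path[-1]]):
            return path                      # counterexample trace
        for nxt in transitions[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                              # safety holds on all reachable states

# Safety property: the robot is never in motion while close to the human.
cex = check_safety("idle", lambda props: {"motion", "close"} <= props)
if cex:
    print("counterexample:", " -> ".join(cex))
else:
    print("safe")
```

On this toy abstraction the check returns the trace idle -> moving -> near_human, i.e. a counterexample showing that the model permits motion while close to the human; this is the kind of trace a model checker reports when a safety property fails.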
We present an approach for the verification and validation (V&V) of robot assistants in the context of human–robot interactions, to demonstrate their trustworthiness through corroborative evidence of their safety and functional correctness. Key challenges include the complex and unpredictable nature of the real world in which assistant and service robots operate, the limitations on available V&V techniques when used individually, and the consequent lack of confidence in the V&V results. Our approach, called corroborative V&V, addresses these challenges by combining several different V&V techniques; in this paper we use formal verification (model checking), simulation-based testing, and user validation in experiments with a real robot. This combination of approaches allows V&V of the human–robot interaction task at different levels of modeling detail and thoroughness of exploration, thus overcoming the individual limitations of each technique. We demonstrate our approach through a handover task, the most critical part of a complex cooperative manufacturing scenario, for which we propose safety and liveness requirements to verify and validate. Should the resulting V&V evidence present discrepancies, an iterative process between the different V&V techniques takes place until corroboration between the V&V techniques is gained from refining and improving the assets (i.e., system and requirement models) to represent the human–robot interaction task in a more truthful manner. Therefore, corroborative V&V affords a systematic approach to “meta-V&V,” in which different V&V techniques can be used to corroborate and check one another, increasing the level of certainty in the results of V&V.
... Some previous work has focused on a robot's decision making, ignoring its environment [110,120]. Others assume that the environment is static and known, prior to the robot's deployment [76,128,174], which is often neither possible nor feasible [88]. For example, the environment may contain both fixed and mobile objects whose future behavior is unknown [31], or the robot's goal may be to map the environment, so the layout is unknown. ...
... This covers both safety and the public perception of safety; notions of usability and reliability; and a perception that the robot will not do anything unexpected, unsafe, or unfriendly [60]. This lack of both trust and safety assurances can hamper adoption of robotic systems in wider society [76], even where they could be extremely useful and potentially improve the quality of human life. ...
... The work in [173,176] captures air safety rules and assumptions using LTL (extended to capture BDI agent beliefs), which are used to verify that an autonomous pilotless aircraft follows the rules of the air in the same way as a pilot. Similarly, [60] and [76] present an approach for automatically building PTL models of the safety rules and environment of a robotic domestic assistant. These models are checked against a probabilistic description of the robot's environment (a sensor-equipped house in the United Kingdom) to ensure that the rules are sufficient to keep a human safe. ...
Autonomous robotic systems are complex, hybrid, and often safety critical; this makes their formal specification and verification uniquely challenging. Though commonly used, testing and simulation alone are insufficient to ensure the correctness of, or provide sufficient evidence for the certification of, autonomous robotics. Formal methods for autonomous robotics have received some attention in the literature, but no resource provides a current overview. This article systematically surveys the state of the art in formal specification and verification for autonomous robotics. Specifically, it identifies and categorizes the challenges posed by, the formalisms aimed at, and the formal approaches for the specification and verification of autonomous robotics.
... Some techniques focus solely on the robotic control software, ignoring the environment [102,95]. Others assume that the environment is static and known, prior to the robot's deployment [111,153,63], which is often neither possible nor feasible [73]. For example, the environment may contain both fixed and mobile objects whose future behaviour is unknown [26], or the robot's goal may be to map the environment, so the layout is not known. ...
... This covers both safety and the public perception of safety; notions of usability and reliability; and a perception that the robot will not do anything unexpected, unpleasant or unfriendly [51]. This lack of both trust and safety assurances can hamper adoption of robotic systems in wider society [63], even where they could be extremely useful. ...
... The work in [51,63] presents the automatic modelling of the safety rules and environment of a robotic domestic assistant (a Care-O-Bot) in Probabilistic Temporal Logic (PTL). The Care-O-Bot's environment, a sensor-equipped UK domestic house, is modelled; the human isn't explicitly captured, but the house's sensors are able to change arbitrarily. ...
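The kind of query run against such a probabilistic environment model can be sketched by hand on a toy discrete-time Markov chain. The scenario, states, and probabilities below are invented and are not drawn from [51,63]; in practice a probabilistic model checker would evaluate an equivalent PCTL property (roughly, P=? [ F<=k bad ]) over the full house model.

```python
import numpy as np

# Toy discrete-time Markov chain standing in for one sensed aspect of the
# house (a fridge door). States and probabilities are invented; they are
# not taken from the cited Care-O-Bot models.
states = ["closed", "open", "open_too_long"]
P = np.array([
    [0.9, 0.1, 0.0],   # closed: usually stays closed
    [0.6, 0.3, 0.1],   # open: often closed again, sometimes left open too long
    [0.0, 0.0, 1.0],   # open_too_long: absorbing (a reminder rule should fire)
])

def prob_reach(target, horizon):
    """Probability of reaching `target` within `horizon` steps from each
    state, computed by iterating the transition matrix (a bounded
    reachability query in the style of PCTL)."""
    t = states.index(target)
    v = np.zeros(len(states))
    v[t] = 1.0
    for _ in range(horizon):
        v = P @ v
        v[t] = 1.0               # once reached, stays reached
    return dict(zip(states, v))

# With a one-second granularity, 60 steps correspond to one minute of real time.
print(prob_reach("open_too_long", horizon=60))
```

The result gives, for each starting state, the probability that the undesirable situation is reached within the time bound; checking that the robot's rules keep this probability acceptably low (or respond to it) is the essence of verifying rules against a probabilistic environment description.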
Robotic systems are complex and critical: they are inherently hybrid, combining both hardware and software; they typically exhibit both cyber-physical attributes and autonomous capabilities; and they are required to be at least safe and often ethical. While for many engineered systems testing, either through real deployment or via simulation, is deemed sufficient, the uniquely challenging elements of robotic systems, together with the crucial dependence on sophisticated software control and decision-making, require a stronger form of verification. The increasing deployment of robotic systems in safety-critical scenarios exacerbates this still further and leads us towards the use of formal methods to ensure the correctness of, and provide sufficient evidence for the certification of, robotic systems. There have been many approaches that have used some variety of formal specification or formal verification in autonomous robotics, but there is no resource that collates this activity into one place. This paper systematically surveys the state of the art in specification formalisms and tools for verifying robotic systems. Specifically, it describes the challenges arising from autonomy and software architectures, avoiding low-level hardware control, and subsequently identifies approaches for the specification and verification of robotic systems, while avoiding more general approaches.
Assistive robotic systems are quickly becoming a core technology for the service sector as they are understood to be capable of supporting people in need of assistance in a wide variety of tasks. This step poses a number of ethical and technological questions. The research community is wondering how service robotics can be a step forward in human care and aid, and how robotics applications can be realized in order to put the human role at the forefront. Therefore, there is a growing demand for frameworks supporting robotic application designers in a “human-aware” development process. This paper presents a model-driven framework for analyzing and developing human–robot interactive scenarios in non-industrial settings with significant sources of uncertainty. The framework’s core is a formal model of the agents at play (the humans and the robot) and the robot’s mission, which is then put through verification to estimate the probability of completing the mission. The model captures non-trivial features related to human behavior, specifically the unpredictability of human choices and physiological aspects tied to their state of health. To foster the framework’s accessibility, we present a verification tool-agnostic Domain-Specific Language that allows designers lacking expertise in formal modeling to configure the interactive scenarios in a user-friendly manner. We compare the formal analysis outputs with results obtained by deploying benchmark scenarios in the physical environment with a real mobile robot to assess whether the formal model adheres to reality and whether the verification results are accurate. The entire development pipeline is then tested on several scenarios from the healthcare setting to assess its flexibility and effectiveness in the application design process.
Guaranteeing safety is crucial for autonomous robotic agents. Formal methods such as model checking show great potential to provide guarantees on agent and multi-agent systems. However, as robotic agents often work in open, dynamic and unstructured environments, achieving high-fidelity environment models is non-trivial. Most verification approaches for agents focus on checking the internal reasoning logic without considering operating environments, or focus on a specific type of environment, such as grid-based or graph-based environments. In this paper we propose a framework to model and verify the decision making of autonomous robotic agents against assumptions on environments. The framework focuses on making a clear separation between agent modeling and environment modeling, as well as providing a formalism to specify the agent’s decision making and assumptions about the environment. As the first demonstration of this ongoing research, we provide an example of using the framework to verify an autonomous UAV agent performing pylon inspection.