
Autonomy and Verification Group

About the lab

The Autonomy and Verification Laboratory focusses on research into autonomous systems and their verification. Applications include unmanned aircraft, robotics and distributed sensor systems.
Website: https://autonomy-and-verification.github.io
Twitter: https://twitter.com/AandVNetwork

Featured projects (1)

Project
This project has three goals concerning the use of intelligent agents in the autonomous-vehicle scenario:
  • implement models of autonomous vehicles in which intelligent agents control the high-level functions used in the decision-making process (e.g. obstacle avoidance, autonomous control); we are also interested in formally verifying the agents' decision making using Model Checking for Agent Programming Languages (MCAPL);
  • formalise the rules of the road for road junctions using temporal logic, embed these rules into the intelligent agents that model the behaviour of autonomous vehicles, and formally verify the agents' behaviour to check whether an autonomous vehicle behaves according to the urban traffic rules (an illustrative formalisation is sketched below, after this list);
  • endow the intelligent agents with mechanisms for handling ethical decisions in autonomous vehicles, so that one can verify different ethical frameworks and how they work in the autonomous-vehicle scenario together with the urban traffic rules.
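As a rough illustration of the second goal, a junction rule such as "do not enter a junction until it is clear" could be written as a linear temporal logic formula. The atomic propositions below are placeholders chosen for this sketch, not the project's actual formalisation:

    \Box \big( \mathit{at\_junction} \land \lnot \mathit{junction\_clear} \;\rightarrow\; (\lnot \mathit{enter\_junction} \;\mathcal{U}\; \mathit{junction\_clear}) \big)

That is, whenever the vehicle is at a junction that is not clear, it must not enter the junction until the junction becomes clear. Model checking can then establish whether the agent's decision making satisfies such properties.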

Featured research (10)

Usually, the design of an Autonomous Vehicle (AV) does not take traffic rules into account, so adopting these rules brings challenges, e.g., how to come up with a Digital Highway Code which captures the proper behaviour of an AV against the traffic rules while minimising changes to the existing Highway Code? Here, we formally model and implement three Road Junction rules (from the UK Highway Code). We use timed automata to model the system and the MCAPL (Model Checking Agent Programming Languages) framework to implement an agent and its environment. We also assess the behaviour of our agent according to the Road Junction rules using a double-level Model Checking technique, i.e., UPPAAL at the design level and AJPF (Agent Java PathFinder) at the development level. We have formally verified 30 properties (18 with UPPAAL and 12 with AJPF); these properties describe the agent's behaviour against the three Road Junction rules in a simulated traffic scenario, including artefacts like traffic signs and road users. In addition, our approach aims to extract the best from the double-level verification, i.e., using time constraints in UPPAAL timed automata to determine thresholds for the AV's actions and tracing the agent's behaviour using MCAPL, so that one can tell when and how a given Road Junction rule was selected by the agent. This work provides a proof-of-concept for the formal verification of AV behaviour with respect to traffic rules.
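To give a flavour of the design-level verification, the queries below (written as TCTL formulas, corresponding to UPPAAL's "A[] ... imply ..." query syntax) are illustrative only; the process, location, clock and constant names are hypothetical and are not the paper's verified properties:

    A\Box\, \lnot \mathit{deadlock}
    A\Box\, \big( \mathit{Vehicle.WaitingAtJunction} \Rightarrow t \le T_{\max} \big)

The first query checks that the composed timed automata never deadlock; the second states that whenever the vehicle is waiting at the junction its clock t has not exceeded the threshold T_max, which is how time constraints in the timed automata can determine thresholds for the AV's actions.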
The complexity and flexibility of autonomous robotic systems necessitate a range of distinct verification tools. This presents new challenges not only for design verification but also for assurance approaches. Combining the distinct formal verification tools while maintaining sufficient formal coherence to provide compelling assurance evidence is difficult, and is often abandoned in favour of less formal approaches. In this paper we demonstrate, through a case study, how a variety of distinct formal techniques can be brought together in order to develop a justifiable assurance case. We use the AdvoCATE assurance case tool to guide our analyses and to integrate the artefacts from the formal methods that we use, namely FRET, CoCoSim and Event-B. While we present our methodology as applied to a specific Inspection Rover case study, we believe that this combination provides benefits in maintaining coherent formal links across development and assurance processes for a wide range of autonomous robotic systems.
Modern AI systems have come into widespread use in almost all sectors, with a strong impact on our society. However, the very methods on which they rely, based on Machine Learning techniques for processing data to predict outcomes and to make decisions, are opaque, prone to bias and may produce wrong answers. Objective functions optimised in learning systems are not guaranteed to align with the values that motivated their definition. Properties such as transparency, verifiability, explainability, security, technical robustness and safety are key to building operational governance frameworks, so as to make AI systems justifiably trustworthy and to align their development and use with human rights and values.
Recently, robotic applications have seen widespread use across industry, often tackling safety-critical scenarios where software reliability is paramount. These scenarios often have unpredictable environments and, therefore, it is crucial to be able to provide assurances about the system at runtime. In this paper, we introduce ROSMonitoring, a framework to support Runtime Verification (RV) of robotic applications developed using the Robot Operating System (ROS). The main advantages of ROSMonitoring compared to the state of the art are its portability across multiple ROS distributions and its agnosticism w.r.t. the specification formalism. We describe the architecture behind ROSMonitoring and show how it can be used in a traditional ROS example. To better evaluate our approach, we apply it to a practical example using a simulation of the Mars Curiosity rover. Finally, we report the results of some experiments to check how well our framework scales.
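The sketch below illustrates the general idea of runtime verification of a ROS topic using plain rospy (ROS 1). It is not ROSMonitoring's actual API or configuration; the topic name, message type and speed limit are hypothetical and chosen only to show what checking a simple property at runtime looks like:

    # Minimal runtime-monitoring sketch using plain rospy (ROS 1).
    # Not ROSMonitoring's API: only an illustration of observing a topic
    # at runtime and flagging violations of a simple property.
    import rospy
    from std_msgs.msg import Float32

    SPEED_LIMIT = 1.5  # hypothetical bound (m/s) the rover must respect

    def check_speed(msg):
        # Property: the reported speed never exceeds SPEED_LIMIT.
        if msg.data > SPEED_LIMIT:
            rospy.logwarn("Property violated: speed %.2f > %.2f", msg.data, SPEED_LIMIT)

    if __name__ == "__main__":
        rospy.init_node("speed_monitor")
        rospy.Subscriber("/rover/speed", Float32, check_speed)
        rospy.spin()  # keep the monitor running alongside the application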
Long-term autonomy requires autonomous systems to adapt when their capabilities no longer perform as expected. To achieve this, a system must first be capable of detecting such changes. In this position paper, we describe a system architecture for BDI autonomous agents capable of adapting to changes in a dynamic environment, and we outline the required research. Specifically, we describe an agent-maintained self-model, with accompanying theories of durative actions and of learning new action descriptions in BDI systems.
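For readers unfamiliar with the BDI (Belief-Desire-Intention) model that this architecture builds on, the sketch below shows a minimal, generic BDI deliberation loop in Python. It is not the architecture proposed in the paper and omits the self-model and learning components; all names are illustrative:

    # Minimal, generic BDI-style deliberation loop (illustrative only).
    # beliefs: what the agent currently holds true about the world
    # desires: goals the agent would like to achieve
    # plan_library: maps each goal to a sequence of actions believed to achieve it
    def bdi_loop(beliefs, desires, plan_library, perceive, act):
        while desires:
            beliefs.update(perceive())          # belief revision from new percepts
            goal = desires.pop(0)               # deliberation: commit to one goal
            plan = plan_library.get(goal, [])   # means-end reasoning: select a plan
            for action in plan:                 # the adopted plan becomes an intention
                act(action)
                beliefs.update(perceive())      # re-check the world after each step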

Lab head

Michael Fisher
Department
  • Department of Computer Science
About Michael Fisher
  • Professor and Royal Academy of Engineering Chair in Emerging Technologies at the University of Manchester. See: https://web.cs.manchester.ac.uk/~michael

Members (10)

Willem Visser
  • Stellenbosch University
Clare Dixon
  • University of Liverpool
Louise Abigail Dennis
  • The University of Manchester
Benjamin Hirsch
  • Khalifa University
Jonathan Maxwell Aitken
  • The University of Sheffield
Rafael Cauê Cardoso
  • University of Aberdeen
Marie Farrell
  • National University of Ireland, Maynooth
Matt Webster
  • University of Liverpool
Georgios Kourtis
  • Not confirmed yet
Matt Webster
  • Not confirmed yet

Alumni (2)

Maryam Kamali
  • University of Liverpool
Elisa Cucco
  • University of Liverpool