Autonomous systems are technologies that interact with an environment to gain knowledge, build awareness, learn, adapt and make decisions, with little or no human control. They include automated decision-making software and ‘smart’ devices, as well as
self-driving cars, drones, and healthcare and surgical robots. These systems are already used in many sectors of society. Given their increasing use, it is important to ensure that they are designed, built and deployed in a way that can be fully relied upon.

Our goal is to develop a unifying framework that will integrate rigorous verification techniques for autonomous systems. Our framework will support verification techniques that are heterogeneous and adaptive, operating at different scales and levels of abstraction: from requirements and planning, through coding and control algorithms, to actual hardware and robotic implementation.

This multi-disciplinary node brings together expertise in AI, robotics, human-computer interaction, systems and software engineering, and testing in order to realise our vision in collaboration with the Trustworthy Autonomous Systems Hub and other Nodes.

Our vision is that in about ten years we will have advanced the science and engineering techniques essential for the verification of a wide class of autonomous systems and their components. The purpose of this proposal is to provide leadership in verifiability to make this happen.

A successful outcome will be the availability of verified systems,
components and algorithms in a UK repository: the Verified Autonomy Store. The repository is a unique concept for autonomous systems, inspired by app stores in other domains.