Verification of Autonomous Systems

The overarching aim of the Verification of Autonomous Systems group is to develop novel computational methods and tools for providing safety guarantees for a wide range of autonomous systems, including autonomous vehicles, robotic systems, and swarm systems.

We are particularly active in the following topics:

  • Scalable methods and tools for the verification of neural networks, including CNNs and RNNs.
  • Parameterised model checking methods for the verification of swarm systems.
  • AI-based specification languages and logic-based verification methods for reasoning about agent-based systems.
  • Safe reinforcement learning for agent-based systems.
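As a flavour of the first topic, the sketch below shows interval bound propagation (IBP), one standard technique for neural-network verification: an input box is pushed through each layer to obtain guaranteed output bounds. The two-layer ReLU network and its weights are purely hypothetical, not taken from the group's tools.

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    # Propagate the box [lo, hi] through x -> W @ x + b.
    # Split W by sign so each output bound pairs with the right input endpoint.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval bounds to interval bounds directly.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# A tiny two-layer network with made-up weights (illustrative only).
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.2])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.1])

# Sound output bounds for every input in the box [-0.1, 0.1]^2.
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = ibp_linear(lo, hi, W1, b1)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_linear(lo, hi, W2, b2)
print(lo, hi)  # lo ≈ [0.1], hi ≈ [0.3]
```

If a safety property requires the output to stay above 0, the computed lower bound of 0.1 certifies it for the whole input box at once, with no sampling. Scalable verifiers refine this basic idea with tighter relaxations.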

Our work is guided by a passion for Artificial Intelligence and the belief that AI should be safe and secure for society to use.

We have a history of developing and maintaining state-of-the-art open-source toolkits for Safe AI, and of international collaboration with both academia and industry.

We presently benefit from strong links with the DARPA Assured Autonomy program and the Centre for Doctoral Training in Safe and Trusted AI.

News

08 June 2023

VAS Group has a paper on verification-friendly networks accepted at IJCNN23

11 May 2023

VAS Group has a paper on robust explanations accepted at AAMAS23

02 May 2023

VAS Group has a paper on verification of agents accepted at AAMAS23

12 January 2023

VAS Group holds Mini-Workshop with Boeing

29 December 2022

Meet the team member: Benedikt Brückner

23 November 2022

VAS members have four papers accepted at AAAI 2023


Next Scheduled Seminar

There are currently no seminars planned.
