Verification of Autonomous Systems

There is a growing trend towards autonomy in present and forthcoming computing applications, ranging from web services to autonomous vehicles. Many of these applications are based on the concept of an autonomous agent.

The group focuses on developing methods for verifying that autonomous multi-agent systems meet their specifications. Specifically, it develops efficient model checking techniques and tools for multi-agent systems specified in agent-based logics. The research draws on areas such as modal logic, multi-agent systems, and model checking.

More recently, systems based on neural networks have become increasingly important. The group is now also investigating techniques for verifying such systems. This research is partly funded by DARPA’s Assured Autonomy program.

A number of the group’s members are also involved in the Safe & Trusted AI Centre for Doctoral Training, which is jointly run with King’s College London.

We are always looking for passionate new PhD students, postdocs, and Master’s students to join the team (more info)!

News

20 September 2020

Meet the new team member: Dr Yang Zheng

29 July 2020

VAS members have a paper accepted at KR 2020

21 July 2020

VAS group participates in VNN-COMP

12 July 2020

Results of the ARCH-COMP20 are out!

11 May 2020

VAS members give talks at AAMAS 2020

19 April 2020

VAS members have two papers accepted at IJCAI 2020

... see all News

Next Scheduled Seminar

There are currently no seminars planned.

... see other seminars