There is a growing trend towards autonomy in present and forthcoming computing applications, including web services and autonomous vehicles. Many of these applications are built around the concept of an autonomous agent.
The group focuses on developing methods for verifying that autonomous multi-agent systems meet their specifications. Specifically, the group is concerned with developing efficient model checking techniques and tools to verify multi-agent systems specified in agent-based logics. The research draws on areas such as modal logic, multi-agent systems, and model checking.
More recently, systems based on neural networks have become increasingly important. The group is now also investigating techniques for verifying such systems. This research is partly funded by DARPA’s Assured Autonomy program.
A number of the group’s members are also involved in the Safe & Trusted AI Centre for Doctoral Training, run jointly with King’s College London.
We are always looking for passionate new PhD students, postdocs, and Master’s students to join the team (more info)!
Strong Mixed-Integer Programming Formulations for Trained Neural Networks
2nd December 2019, 12pm
Room 217, Huxley Building
Francesco Leofante joins the group
28 October 2019: Alex presents at ATVA 2019
01 October 2019: Three new PhD students join the group!
01 March 2019: Safe & Trusted AI CDT
23 January 2019: VAS members have two papers accepted at AAMAS 2019