There is a growing trend towards autonomy in present and forthcoming computing applications, including web services and autonomous vehicles. Many of these applications are based on the concept of an autonomous agent.
The group develops methods for verifying that autonomous multi-agent systems meet their specifications. Specifically, the group is concerned with efficient model checking techniques and tools to verify multi-agent systems specified in agent-based logics. The research draws on areas such as modal logic, multi-agent systems, and model checking.
More recently, systems based on neural networks have become increasingly important. The group is now also investigating techniques for verifying such systems.
We are always looking for passionate new PhD students, Postdocs, and Master's students to join the team (more info)!
Building Trust in AI for Safety-Critical Systems
24 July 2018, 11am
Room 217, Huxley Building
VAS members have paper accepted at KR 2018
01 July 2018
Panagiotis Kouvaros re-joins the group as a postdoctoral researcher
22 June 2018
VAS member wins international student award
16 April 2018
VAS members have three papers accepted at IJCAI 2018
10 April 2018
Alessio Lomuscio awarded RAEng fellowship