Assured Autonomy (2018 - 2022)

Introduction

Intelligent autonomous systems are increasingly being applied across domains and industries, owing to advances in modelling, sensing, algorithm design and other technologies. However, formal safety assurances for these systems have not kept pace with these advances: autonomous systems rely heavily on machine learning components, which lack the mathematical frameworks needed to provide guarantees of correctness. Without such assurances, trust in the safety and operation of learning-enabled cyber-physical systems (LE-CPSs) will remain limited, hampering their broad deployment.

This project aims to address this challenge by developing new mathematical approaches and tools for the verification of the data-driven ML components of these systems, so as to enhance their autonomy and to assure that they achieve an acceptable level of safety.

Motivation

Cyber-Physical Systems (CPSs) with learning components are increasingly being deployed in safety-critical applications. However, formal verification methodologies offer only limited support for the safety analysis of such systems. The project aims to fill this gap by developing new algorithms and software toolkits for the verification of learning-enabled CPSs (LE-CPSs).

Objectives and current results

To enable the development of verification methods, the project aims to define semantic models and formal specification languages for LE-CPSs. Our efforts are focussed on two main objectives:

  • the analysis of neural networks in isolation and
  • the analysis of interactions between several learning-enabled components and the environment.

Our group has contributed several techniques for verifying neural networks based on, e.g., Mixed-Integer Linear Programming (MILP) and Symbolic Interval Propagation. Our approaches provide state-of-the-art solutions for the verification of both feed-forward and recurrent neural networks; a simplified sketch of the interval-propagation idea is given below.
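
As a simplified illustration, the sketch below computes sound (though loose) bounds on every neuron, layer by layer, using plain interval arithmetic. The two-layer network and the robustness query are hypothetical toy examples; the symbolic variants used in the publications below compute considerably tighter bounds than this plain interval version.

    import numpy as np

    def interval_affine(W, b, lo, hi):
        """Propagate the box [lo, hi] through the affine map x -> Wx + b."""
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def interval_forward(layers, lo, hi):
        """Propagate bounds through affine layers with ReLUs in between."""
        for i, (W, b) in enumerate(layers):
            lo, hi = interval_affine(W, b, lo, hi)
            if i < len(layers) - 1:                # ReLU on hidden layers only
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        return lo, hi

    # Toy local-robustness query: does output 0 dominate output 1 for all
    # inputs in an L-infinity ball of radius eps around x0?
    layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
              (np.array([[1.0, -1.0], [-1.0, 1.0]]), np.zeros(2))]
    x0, eps = np.array([1.0, 0.5]), 0.1
    lo, hi = interval_forward(layers, x0 - eps, x0 + eps)
    # Sound but incomplete: a positive worst-case margin proves robustness,
    # a negative one is inconclusive.
    print("verified" if lo[0] - hi[1] > 0 else "unknown")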

We have extended the techniques above to LE-CPSs operating in closed loop with an environment. The resulting models are amenable to formal verification, and we have proposed several techniques to verify them against specifications expressed in rich logics such as LTL, CTL or ATL; a simplified closed-loop sketch is given below.
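
To illustrate the closed-loop setting, the sketch below (reusing interval_affine, interval_forward and the toy network from the previous sketch) unrolls a neural agent in feedback with hypothetical linear dynamics and checks a bounded-horizon safety invariant, in the spirit of an LTL G-property, by interval arithmetic. The techniques in the publications below instead use exact encodings and full temporal-logic specifications.

    # Hypothetical environment dynamics x' = Ax + Ba.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0, 0.0], [0.1, 0.0]])

    def bounded_safety(layers, lo, hi, horizon, limit):
        """Check the invariant |state| <= limit along every bounded trace."""
        for _ in range(horizon):
            a_lo, a_hi = interval_forward(layers, lo, hi)             # action bounds
            x_lo, x_hi = interval_affine(A, np.zeros(2), lo, hi)      # A x term
            u_lo, u_hi = interval_affine(B, np.zeros(2), a_lo, a_hi)  # B a term
            lo, hi = x_lo + u_lo, x_hi + u_hi
            if np.any(hi > limit) or np.any(lo < -limit):
                return "unknown"   # the over-approximation leaves the safe set
        return "verified"

    # Check the invariant for five steps from the initial box around x0.
    print(bounded_safety(layers, x0 - eps, x0 + eps, horizon=5, limit=10.0))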

More details on our current research can be found in the Software and Publications sections below.

Software

The following tools have been developed within the project:

  • VENUS (VErification of NeUral Systems), a state-of-the-art sound and complete verification toolkit for ReLU-based feed-forward neural networks. It can be used to check reachability and local adversarial robustness properties. VENUS implements a MILP-based verification method and leverages dependency relations between ReLU nodes to prune the search tree explored during branch-and-bound. It additionally implements methods based on symbolic interval propagation and input domain splitting; a sketch of the MILP encoding underlying such methods is given after this list. [paper] [download]
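
The sketch below shows the standard big-M MILP encoding of a single ReLU node, the basic building block of MILP-based verification. The pre-activation bounds are assumed known (e.g. from bound propagation); PuLP with its bundled CBC solver is used purely for illustration, and VENUS's dependency analysis and branch-and-bound machinery are omitted.

    import pulp

    l, u = -1.0, 2.0   # assumed pre-activation bounds, with l < 0 < u

    prob = pulp.LpProblem("relu_encoding", pulp.LpMaximize)
    x = pulp.LpVariable("x", lowBound=l, upBound=u)   # pre-activation
    y = pulp.LpVariable("y", lowBound=0.0)            # post-activation
    d = pulp.LpVariable("d", cat="Binary")            # ReLU phase indicator

    # Exact encoding of y = max(0, x) given l <= x <= u:
    prob += y >= x                  # y lies above the identity,
    prob += y <= x - l * (1 - d)    # is tight when d = 1 (active phase),
    prob += y <= u * d              # and is forced to 0 when d = 0.

    # In a real query the objective encodes the property under analysis
    # (e.g. an adversarial margin); here we simply maximise y.
    prob += 1 * y
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.LpStatus[prob.status], pulp.value(y))  # Optimal 2.0

Because the encoding is exact, verification built on it is sound and complete; the dependency relations exploited by VENUS reduce the number of ReLU phases the solver must branch on.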

Publications

P. Henriksen, F. Leofante, A. Lomuscio.
Repairing Misclassifications in Neural Networks Using Limited Data.
Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing (SAC22). Virtual conference.

J. Lan, Y. Zheng, A. Lomuscio.
Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations.
Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI22). Virtual conference.

M. Akintunde, E. Botoeva, P. Kouvaros, A. Lomuscio.
Formal Verification of Neural Agents in Non-deterministic Environments.
Journal of Autonomous Agents and Multi-Agent Systems, Vol. 36(1). Springer.

P. Kouvaros, T. Kyono, F. Leofante, A. Lomuscio, D. Margineantu, D. Osipychev, Y. Zheng.
Formal Analysis of Neural Network-based Systems in the Aircraft Domain.
Proceedings of the 24th International Symposium on Formal Methods (FM21). Virtual conference.

B. Batten, P. Kouvaros, A. Lomuscio, Y. Zheng.
Efficient Neural Network Verification via Layer-based Semidefinite Relaxations and Linear Cuts.
Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI21). Montreal, Canada.

P. Kouvaros, A. Lomuscio.
Towards Scalable Complete Verification of ReLU Neural Networks via Dependency-based Branching.
Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI21). Montreal, Canada.

P. Henriksen, A. Lomuscio.
Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search.
Proceedings of the 24th European Conference on Artificial Intelligence (ECAI20). Santiago de Compostela, Spain.

M. Akintunde, E. Botoeva, P. Kouvaros, A. Lomuscio.
Formal Verification of Neural Agents in Non-Deterministic Environments.
Proceedings of the 19th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS20). Auckland, New Zealand. IFAAMAS Press.

M. Akintunde, E. Botoeva, P. Kouvaros, A. Lomuscio.
Verifying Strategic Abilities of Neural Multi-agent Systems.
Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning (KR20). Virtual conference. AAAI Press.

E. Botoeva, P. Kouvaros, J. Kronqvist, A. Lomuscio, R. Misener.
Efficient Verification of Neural Networks via Dependency Analysis.
Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI20). New York City, New York. AAAI Press.

M. Akintunde, A. Kevorchian, A. Lomuscio, E. Pirovano.
Verification of RNN-Based Neural Agent-Environment Systems.
Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI19). Honolulu, Hawaii. AAAI Press.

M. Akintunde, A. Lomuscio, L. Maganti, E. Pirovano.
Reachability Analysis for Neural Agent-Environment Systems.
Proceedings of the 16th International Conference on Principles of Knowledge Representation and Reasoning (KR18). Tempe, Arizona. AAAI Press.

Team members

Principal investigator
Research associates
Collaborators
Previous collaborators

This research is funded by DARPA under the Assured Autonomy program.