Assured Autonomy (started 2018, duration 4 years)

Motivation

Cyber-Physical Systems (CPSs) with learning components are increasingly being deployed in safety-critical applications. However, formal verification methodologies offer only limited support for the safety analysis of such systems. The project aims to fill this gap by developing new algorithms and software toolkits for the verification of learning-enabled CPSs (LE-CPSs).

Objectives and current results

To enable the development of verification methods, the project aims to define semantic models and formal specification languages for LE-CPSs. Our efforts are focussed on two main objectives:

  • the analysis of neural networks in isolation and
  • the analysis of interactions between several learning-enabled components and the environment.

Our group has contributed several techniques for verifying neural networks, based on, e.g., Mixed-Integer Linear Programming (MILP) and Symbolic Interval Propagation. Our approaches provide state-of-the-art solutions for the verification of both feed-forward and recurrent neural networks.
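To give a flavour of the interval-based techniques, the sketch below implements naive interval bound propagation through a ReLU network: it pushes an input box through the layers and returns sound, if coarse, bounds on the output. The symbolic variant used in our tools propagates linear expressions rather than concrete intervals and is substantially tighter; all names in the sketch are illustrative.

    import numpy as np

    def interval_bounds(weights, biases, lb, ub):
        """Propagate the input box [lb, ub] through a ReLU network,
        returning sound elementwise bounds on its output."""
        last = len(weights) - 1
        for i, (W, b) in enumerate(zip(weights, biases)):
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            # An affine map attains its extremes by pairing positive weights
            # with like bounds and negative weights with opposite bounds.
            new_lb = W_pos @ lb + W_neg @ ub + b
            new_ub = W_pos @ ub + W_neg @ lb + b
            if i < last:  # ReLU is applied on hidden layers only
                new_lb, new_ub = np.maximum(new_lb, 0.0), np.maximum(new_ub, 0.0)
            lb, ub = new_lb, new_ub
        return lb, ub

    # Tiny example: one hidden layer, input box [-1, 1]^2.
    W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
    W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
    print(interval_bounds([W1, W2], [b1, b2], np.array([-1.0, -1.0]), np.array([1.0, 1.0])))

Over-approximations of this kind are cheap but can be loose; the MILP-based methods are complete and decide properties exactly, at a higher computational cost.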

We have extended the techniques above to account for LE-CPSs operating in an environment. The resulting closed-loop models are amenable to formal verification, and we have proposed several techniques to verify them against specifications expressed in rich logics such as LTL, CTL and ATL.
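To make the closed-loop setting concrete, the sketch below unrolls a toy agent-environment loop with interval bounds on the state and checks a bounded safety invariant, i.e. a finite-horizon fragment of an LTL formula of the form G(safe). The 1-D dynamics, the clipped linear controller standing in for a neural agent, and all names are hypothetical and far simpler than the models we verify in practice.

    # Hypothetical closed loop: state s, agent action a = clip(-0.5*s, -1, 1)
    # (a stand-in for a neural controller), environment s' = 0.9*s + 0.1*a.

    def act_bounds(lo, hi):
        # a = -0.5*s is decreasing in s, then clipped to [-1, 1]
        return max(-0.5 * hi, -1.0), min(-0.5 * lo, 1.0)

    def step_env(lo, hi, a_lo, a_hi):
        # both coefficients are positive, so like bounds pair up directly
        return 0.9 * lo + 0.1 * a_lo, 0.9 * hi + 0.1 * a_hi

    def always_safe(lo, hi, bound=2.0, horizon=20):
        """Check the bounded invariant |s| <= bound for `horizon` steps,
        returning the verdict and the step reached."""
        for t in range(horizon):
            if lo < -bound or hi > bound:
                return False, t
            a_lo, a_hi = act_bounds(lo, hi)
            lo, hi = step_env(lo, hi, a_lo, a_hi)
        return True, horizon

    print(always_safe(-1.0, 1.0))  # (True, 20): the reachable box stays safe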

More details on our current research can be found in the Software and Publications sections below.

Software

The following tools have been developed within the project:

  • VENUS (VErification of NeUral Systems), a state-of-the-art sound and complete verification toolkit for ReLU-based feed-forward neural networks. It can be used to check reachability and local adversarial robustness properties. VENUS implements a MILP-based verification method (a minimal sketch of the encoding is given below) and leverages dependency relations between the ReLU nodes to prune the branch-and-bound search tree. It additionally implements methods based on symbolic interval propagation and input domain splitting. [paper] [download]
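As an indication of how the MILP approach works, the following is a minimal sketch of the standard big-M encoding of a single ReLU node y = max(x, 0) with pre-activation bounds l <= x <= u, written here with the open-source PuLP library. VENUS's actual encoding, solver set-up and dependency analysis are considerably more involved; all names below are illustrative.

    from pulp import LpProblem, LpVariable, LpMaximize, LpBinary

    l, u = -1.0, 2.0  # pre-activation bounds, e.g. from interval propagation
    prob = LpProblem("relu_node", LpMaximize)
    x = LpVariable("x", lowBound=l, upBound=u)  # pre-activation
    y = LpVariable("y", lowBound=0.0)           # post-activation, y = max(x, 0)
    d = LpVariable("d", cat=LpBinary)           # phase: d = 1 iff the node is active

    prob += y                     # objective: maximise the node's output
    prob += y >= x                # y cannot undercut x
    prob += y <= x - l * (1 - d)  # d = 1 forces y <= x, hence y = x
    prob += y <= u * d            # d = 0 forces y = 0
    prob.solve()
    print(y.value())              # 2.0, attained at x = u with d = 1

In a full encoding, one such group of constraints is introduced per unstable ReLU node and branch-and-bound branches over the binary phase variables; pruning phase combinations that cannot co-occur is what makes the dependency analysis effective.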

Publications

P. Henriksen, A. Lomuscio.
Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search.
Proceedings of the 24th European Conference on Artificial Intelligence (ECAI20). Santiago de Compostela, Spain. To Appear.

M. Akintunde, E. Botoeva, P. Kouvaros, A. Lomuscio.
Formal Verification of Neural Agents in Non-Deterministic Environments.
Proceedings of the 19th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS20). Auckland, New Zealand. IFAAMAS Press.

E. Botoeva, P. Kouvaros, J. Kronqvist, A. Lomuscio, R. Misener.
Efficient Verification of Neural Networks via Dependency Analysis.
Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI20). New York City, New York. AAAI Press. To Appear.

M. Akintunde, A. Kevorchian, A. Lomuscio, E. Pirovano.
Verification of RNN-Based Neural Agent-Environment Systems.
Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI19). Honolulu, Hawaii. AAAI Press.

M. Akintunde, A. Lomuscio, L. Maganti, E. Pirovano.
Reachability Analysis for Neural Agent-Environment Systems.
Proceedings of the 16th International Conference on Principles of Knowledge Representation and Reasoning (KR18). Tempe, Arizona. AAAI Press.

Team members

Principal investigator
Research associates
Collaborators

Funding

This research is funded by DARPA under the Assured Autonomy program.