There are currently no seminars planned.
25 November 2022:
Mykel Kochenderfer
Automated Decision Making for Safety Critical Applications
Building robust decision making systems for autonomous systems is challenging. Decisions must be made based on imperfect information about the environment and with uncertainty about how the environment will evolve. In addition, these systems must carefully balance safety with other considerations, such as operational efficiency. Typically, the space of edge cases is vast, placing a large burden on human designers to anticipate problem scenarios and develop ways to resolve them. This talk discusses major challenges associated with ensuring computational tractability and establishing trust that our systems will behave correctly when deployed in the real world. We will outline some methodologies for addressing these challenges and point to some research applications that can serve as inspiration for building safer systems.
Mykel Kochenderfer is an Associate Professor of Aeronautics and Astronautics at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. Prior to joining the faculty in 2013, he was at MIT Lincoln Laboratory where he worked on aircraft collision avoidance, leading to the creation of the ACAS X international standard for manned and unmanned aircraft. He received his Ph.D. from the University of Edinburgh in 2006 and B.S. and M.S. degrees in computer science from Stanford University in 2003. Prof. Kochenderfer is a co-director of the Center for AI Safety. He is an associate editor of the Journal of Artificial Intelligence Research and the Journal of Aerospace Information Systems. He is an author of the textbooks Decision Making Under Uncertainty: Theory and Application (MIT Press, 2015), Algorithms for Optimization (MIT Press, 2019), and Algorithms for Decision Making (MIT Press, 2022).
02 March 2020:
Michael Akintunde
Formal Verification of Neural Agents in Non-deterministic Environments
We introduce a model for agent-environment systems where the agents are implemented via feed-forward ReLU neural networks and the environment is non-deterministic. We study the verification problem of such systems against CTL properties. We introduce a bounded fragment of CTL and show its usefulness in identifying shallow bugs in the system. We present a novel parallel algorithm for MILP-based verification of agent-environment systems, present an implementation, and report the experimental results obtained against a variant of the VerticalCAS use-case.
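To give a flavour of the bounded analysis described above, the sketch below checks whether any execution of a closed-loop agent-environment system can reach an unsafe state within a fixed number of steps, enumerating the environment's non-deterministic choices exhaustively. The toy policy, transition relation, safety predicate and depth bound are all invented for illustration; the talk's actual method works on ReLU-network agents via MILP rather than on an enumerable state space.

```python
# A minimal sketch, not the talk's MILP-based algorithm: bounded reachability of an
# unsafe state for a closed-loop agent-environment system with a non-deterministic
# environment. The policy, transition relation, safety predicate and bound K are
# all illustrative placeholders.

K = 5                                        # depth bound ("shallow bug" search)

def agent_action(state):
    # Stand-in for the trained ReLU network controller.
    return 1 if state < 0 else -1

def env_successors(state, action):
    # Non-determinism: the environment may respond in several ways.
    return {state + action, state + 2 * action}

def unsafe(state):
    return abs(state) >= 4                   # illustrative safety predicate

def exists_unsafe_within(initial_states, k=K):
    """True iff some execution of length <= k reaches an unsafe state."""
    frontier = set(initial_states)
    for _ in range(k):
        if any(unsafe(s) for s in frontier):
            return True
        frontier = {s2 for s in frontier
                    for s2 in env_successors(s, agent_action(s))}
    return any(unsafe(s) for s in frontier)

print(exists_unsafe_within({-1, 0, 1}))
```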
17 February 2020:
Dr Elena Botoeva
Verification of Neural Networks: Specifying Global Robustness using Generative Models
The success of neural networks across most machine learning tasks and the persistence of adversarial examples have made the verification of such models an important quest. Several techniques have been successfully developed to verify robustness, and are now able to evaluate neural networks with thousands of nodes. The main weakness of this approach is in the specification: robustness is asserted on a validation set consisting of a finite set of examples, i.e. locally. We propose a notion of global robustness based on generative models, which asserts the robustness on a very large and representative set of examples. We show how this can be used for verifying neural networks. We experimentally explore the merits of this approach, and show how it can be used to construct realistic adversarial examples.
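As a rough illustration of the idea (not the verification procedure from the talk), the sketch below samples inputs from a generative model and empirically tests the classifier's robustness around each generated point. The `generator`, `classifier`, latent dimension, perturbation budget and sample counts are all assumed placeholders, and this is falsification by sampling rather than verification.

```python
# A rough empirical sketch, assuming placeholder `generator` and `classifier`
# callables that map tensors to tensors. It samples from the generative model and
# tests local robustness around each generated input; all constants are
# illustrative assumptions.

import torch

LATENT_DIM, EPS, N_SAMPLES, N_PERTURBATIONS = 16, 0.05, 100, 20

def empirically_robust(generator, classifier, z):
    x = generator(z)                                    # a realistic synthetic input
    label = classifier(x).argmax(dim=-1)
    for _ in range(N_PERTURBATIONS):
        delta = torch.empty_like(x).uniform_(-EPS, EPS)
        if classifier(x + delta).argmax(dim=-1) != label:
            return False                                # adversarial example found
    return True

def estimate_global_robustness(generator, classifier):
    zs = torch.randn(N_SAMPLES, LATENT_DIM)
    robust = sum(empirically_robust(generator, classifier, z.unsqueeze(0)) for z in zs)
    return robust / N_SAMPLES                           # fraction of robust samples
```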
03 February 2020:
Henrik Åslund
Verifiably Safe Off-Model Reinforcement Learning
The desire to use reinforcement learning in safety-critical settings has inspired a recent interest in formal methods for learning algorithms. Existing formal methods for learning and optimization primarily consider the problem of constrained learning or constrained optimization. Given a single correct model and associated safety constraint, these approaches guarantee efficient learning while provably avoiding behaviors outside the safety constraint. Acting well given an accurate environmental model is an important prerequisite for safe learning, but is ultimately insufficient for systems that operate in complex heterogeneous environments. This paper introduces verification-preserving model updates, the first approach toward obtaining formal safety guarantees for reinforcement learning in settings where multiple possible environmental models must be taken into account. Through a combination of inductive learning from data and deductive proving, with design-time model updates and runtime model falsification, we provide a first approach toward obtaining formal safety proofs for autonomous systems acting in heterogeneous environments.
27 January 2020:
Yi-Ling Liu
Adversarial Training for Image Recognition
We present a novel black-box adversarial training algorithm to defend against state-of-the-art attack methods in machine learning. In order to search for an adversarial attack, the algorithm analyses small regions around the input that are likely to make significant contributions to the generation of adversarial samples. Unlike some of the literature in the area, the proposed method does not require access to the internal layers of the model and is therefore applicable to domains such as security. We report the experimental results obtained on models of different sizes built for the MNIST and CIFAR10 datasets. The results suggest that known attacks on the resulting models are less transferable than attacks on models trained with state-of-the-art attack algorithms.
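The sketch below gives a flavour of a black-box, region-based search of this kind; it is not the authors' algorithm. The model is queried only through its output scores, and the patch size, perturbation budget, iteration count and the assumption of a single-channel image in [0, 1] are all illustrative.

```python
# A minimal sketch of a black-box, region-based adversarial search (illustrative,
# not the authors' method). `predict` returns class scores only, so no access to
# internal layers or gradients is needed. Assumes a single-channel image in [0, 1].

import numpy as np

def region_attack(predict, image, true_label, eps=0.1, patch=4, iters=200, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    adv = image.copy()
    for _ in range(iters):
        # Pick a small region that may influence the prediction (here: at random).
        y, x = rng.integers(0, h - patch), rng.integers(0, w - patch)
        candidate = adv.copy()
        candidate[y:y + patch, x:x + patch] += rng.uniform(-eps, eps, (patch, patch))
        candidate = np.clip(candidate, image - eps, image + eps).clip(0.0, 1.0)
        scores = predict(candidate)
        if scores.argmax() != true_label:
            return candidate                      # misclassified: attack succeeded
        if scores[true_label] < predict(adv)[true_label]:
            adv = candidate                       # keep perturbations that help
    return None                                   # no adversarial example found
```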
20 January 2020:
Francesco Leofante
AI Planning meets Production Logistics
Planning for logistics requires algorithms and tools that can handle the challenges such scenarios pose. On the one hand, expressive languages are needed to build faithful models; on the other hand, efficient solving techniques that can support these languages need to be devised. The existence of this trade-off, however, does not mean progress is not possible! Using the RoboCup Logistics League as a testbed, I will focus on production logistics and show how recent advances in satisfiability checking can be used to solve some interesting problems in the field.
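As a toy example of phrasing a production-logistics question as a satisfiability/optimization problem, the sketch below schedules three hypothetical production steps with Z3's optimizer. The steps, durations, precedence constraints and single-machine setup are invented for illustration and are not the RoboCup Logistics League encoding discussed in the talk.

```python
# A toy sketch: scheduling hypothetical production steps with Z3 (illustrative
# durations and precedence; not the encoding from the talk).

from z3 import Int, Optimize, Or, sat

durations = {"cap": 3, "ring": 5, "deliver": 2}          # hypothetical steps
start = {s: Int(f"start_{s}") for s in durations}
end = {s: start[s] + durations[s] for s in durations}

opt = Optimize()
opt.add([start[s] >= 0 for s in durations])
opt.add(end["cap"] <= start["ring"], end["ring"] <= start["deliver"])  # precedence
# All steps share one machine, so no two steps may overlap in time.
steps = list(durations)
for i in range(len(steps)):
    for j in range(i + 1, len(steps)):
        a, b = steps[i], steps[j]
        opt.add(Or(end[a] <= start[b], end[b] <= start[a]))

makespan = Int("makespan")
opt.add([end[s] <= makespan for s in durations])
opt.minimize(makespan)
if opt.check() == sat:
    print(opt.model())                                   # optimal start times
```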
13 January 2020:
Patrick Henriksen
Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search
We propose a novel verification method for high-dimensional feed-forward neural networks governed by ReLU, Sigmoid and Tanh activation functions. We show that the method is sound and complete for ReLU networks and sound for networks with Sigmoid and Tanh activations. The technique extends symbolic interval propagation by using gradient descent to locate counter-examples from spurious solutions generated by the associated LP problems. The approach includes a novel adaptive splitting strategy intended to refine the nodes with the greatest impact on the output of the network. The resulting implementation, called VERINET, generally achieved speed-ups of an order of magnitude for safe cases and of three orders of magnitude for unsafe cases on MNIST models. We also show that VERINET can verify networks of over 50,000 ReLU nodes trained on the CIFAR-10 dataset, a network significantly larger than networks previously verified via complete methods.
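For intuition, the sketch below implements plain interval bound propagation, a simpler (and looser) relative of the symbolic interval propagation used in VERINET. The weights and the input box are illustrative, and this is not the VERINET implementation.

```python
# A minimal sketch of interval bound propagation through a ReLU network,
# a simpler cousin of symbolic interval propagation (illustrative weights/input).

import numpy as np

def linear_bounds(lower, upper, W, b):
    # For y = W x + b, positive weights carry lower bounds to lower bounds,
    # negative weights swap them.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lower + W_neg @ upper + b, W_pos @ upper + W_neg @ lower + b

def relu_bounds(lower, upper):
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

def propagate(layers, lower, upper):
    """layers: list of (W, b). Returns an output box guaranteed to contain
    every output reachable from the input box [lower, upper]."""
    for W, b in layers[:-1]:
        lower, upper = relu_bounds(*linear_bounds(lower, upper, W, b))
    return linear_bounds(lower, upper, *layers[-1])

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 2)), np.zeros(8)), (rng.normal(size=(2, 8)), np.zeros(2))]
lo, hi = propagate(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print(lo, hi)   # if the unsafe outputs lie outside this box, the property holds
```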
19 December 2019:
Dr Dragos Margineantu (Boeing)
Robust Machine Learning and Scalable Interactive Decision Systems
Deployable decision systems require much more than end-to-end learned models. This talk will focus on two of the questions that open up for the machine learning scientist in the process of architecting and implementing modern decision systems: [A] how to design robust systems and [B] how to build interactive decision systems.
Research questions on machine learning robustness fall into two broad categories: (i) model correctness analysis, verification and validation, and (ii) bounding the risk of predictions and actions on observations that the learner is not qualified to handle. The first part of the talk will focus on the latter set of questions: how do we learn models that ‘know when they don’t know’, what are the formalisms that we need, and what is practically doable, both for supervised learning and for reinforcement learning. First we will explore methods for learning self-competence models in addition to the predictions. Next, we will discuss self-competence estimation approaches for reinforcement learning and decision making. Finally, we will explore multi-faceted learning and related techniques that can be applied in decision systems with the goal of increased robustness.
The second part of the talk will focus on architecting scalable interactive systems. Here, we will explore how Inverse Reinforcement Learning methods can be applied in a user-in-the-loop setting for the detection of anomalous actions and for intent recognition.
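As a deliberately simple stand-in for the self-competence models discussed in the first part of the talk, the sketch below wraps a classifier so that it abstains when its confidence falls below a threshold. Using the softmax confidence and a fixed threshold are illustrative assumptions, not the speaker's method.

```python
# A minimal sketch of the "know when you don't know" idea: abstain when an
# auxiliary competence estimate is low. Here the estimate is just the softmax
# confidence, a simple stand-in for a learned self-competence model.

import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def predict_or_abstain(model, x, threshold=0.9):
    """Return (label, confidence), or (None, confidence) if the model abstains."""
    probs = softmax(model(x))
    label, confidence = int(probs.argmax()), float(probs.max())
    return (label, confidence) if confidence >= threshold else (None, confidence)
```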
02 December 2019:
Dr Elena Botoeva
Strong Mixed-Integer Programming Formulations for Trained Neural Networks
We present an ideal mixed-integer programming (MIP) formulation for a rectified linear unit (ReLU) appearing in a trained neural network. Our formulation requires a single binary variable and no additional continuous variables beyond the input and output variables of the ReLU. We contrast it with an ideal “extended” formulation with a linear number of additional continuous variables, derived through standard techniques. An apparent drawback of our formulation is that it requires an exponential number of inequality constraints, but we provide a routine to separate the inequalities in linear time. We also prove that these exponentially-many constraints are facet-defining under mild conditions. Finally, we study network verification problems and observe that dynamically separating from the exponential inequalities 1) is much more computationally efficient and scalable than the extended formulation, 2) decreases the solve time of a state-of-the-art MIP solver by a factor of 7 on smaller instances, and 3) nearly matches the dual bounds of a state-of-the-art MIP solver on harder instances, after just a few rounds of separation and in orders of magnitude less time.
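For context, the sketch below encodes a single ReLU with the standard big-M formulation, i.e. the baseline that the ideal formulation presented in this talk improves upon. The weights, bias and pre-activation bounds are illustrative; a real encoding would derive the bounds from the network and its input domain.

```python
# A minimal sketch of the standard big-M MIP encoding of a single ReLU
# y = max(0, w.x + b), the baseline formulation (not the ideal formulation from
# the talk). Weights, bias and bounds [L, U] are illustrative.

import pulp

w, b = [1.0, -2.0], 0.5
L, U = -10.0, 10.0                               # valid bounds on the pre-activation

prob = pulp.LpProblem("relu_big_m", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(2)]
y = pulp.LpVariable("y", lowBound=0)
z = pulp.LpVariable("z", cat="Binary")           # z = 1 iff the ReLU is active

prob += 1.0 * y                                  # objective: maximise the ReLU output

pre = pulp.lpSum(wi * xi for wi, xi in zip(w, x)) + b
prob += y >= pre                                 # y >= w.x + b
prob += y <= pre - L * (1 - z)                   # when z = 1: y <= w.x + b
prob += y <= U * z                               # when z = 0: y <= 0, so y = 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y))                             # expect 3.5, attained at x = (1, -1)
```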
18 November 2019:
Alexander Manta
Semantic Labelling and Learning for Parity Game Solving in LTL Synthesis
The paper proposes “semantic labelling” as a novel ingredient for solving games in the context of LTL synthesis. It exploits recent advances in the automata-based approach, yielding more information for each state of the generated parity game than the game graph can capture. Utilizing this extra information, even in simple heuristics, can help improve the standard strategy iteration approach, sometimes yielding a winning strategy directly without any further computation, and also makes strategy iteration yield smaller solutions. Furthermore, equipping Q-learning with semantic information makes it competitive with strategy iteration. The experimental results show that even the simplest heuristics can achieve significant improvements, demonstrating the utility of semantic labelling for more advanced learning approaches, both for the initialization and for the improvement of strategies.
21 October 2019:
Dr Panagiotis Kouvaros
Verification of ReLU-based Neural Networks
10 December 2018:
Markus N. Rabe
Deep Learning for Automated Reasoning
To date, automated reasoning and theorem proving have mostly relied on combinatorial search algorithms and handwritten heuristics. But the success of deep learning in various application areas prompts us to revisit those paradigms. In particular, DeepMind’s AlphaGo and its successor AlphaZero, which mastered board games such as Go and chess, have demonstrated that deep reinforcement learning can be fused with combinatorial search algorithms with great success. This talk will discuss two projects on leveraging deep learning for automated reasoning and theorem proving. The first part concerns learning better heuristics for a solver for quantified Boolean formulas, and the second part will give an outlook on the ongoing effort at Google on learning how to prove theorems in higher-order logic. I will focus on the challenges and tradeoffs that arise from combining the two worlds.
24 July 2018:
Prof. Mykel Kochenderfer
Building Trust in AI for Safety-Critical Systems
Starting in the 1970s, decades of effort went into building human-designed rules for providing automatic maneuver guidance to pilots to avoid mid-air collisions. The resulting system was later mandated worldwide on all large aircraft and significantly improved the safety of the airspace. Recent work has investigated the feasibility of using computational techniques to help derive optimized decision logic that better handles various sources of uncertainty and balances competing system objectives. This approach has resulted in a system called Airborne Collision Avoidance System (ACAS) X that significantly reduces the risk of mid-air collision while also reducing the alert rate, and it is in the process of becoming the next international standard. Using ACAS X as a case study, this talk will discuss lessons learned about building trust in advanced decision support systems. This talk will also outline research challenges in bringing greater levels of automation into safety-critical systems.
09 July 2018:
Xiaowei Huang
Approximate Verification of Deep Neural Networks with Provable Guarantees
Deep neural networks (DNNs) have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. In this talk, I will report on our recent work towards the verification of feed-forward multi-layer DNNs. We enable exhaustive search of the region around an input by employing discretisation, reduce the verification problem to the finding of an optimal solution in a two-player game, and propagate the analysis layer by layer. By utilising the fact that DNNs are Lipschitz continuous, we show that our approach can achieve provable guarantees. We implemented our approach in a software tool and conducted experiments on several DNNs trained on datasets such as MNIST, CIFAR10, and ImageNet, and on a DNN for the Nexar traffic light challenge.
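The sketch below illustrates the discretisation argument in its simplest form: if the network output is Lipschitz continuous, evaluating it on a sufficiently fine grid yields a sound bound over the whole input region. The function, Lipschitz constant and region are placeholders, and this is not the tool described in the talk.

```python
# A minimal sketch of the discretisation idea: for an L-Lipschitz function f and a
# grid of spacing delta covering the input region, the grid maximum plus
# L * delta * sqrt(d) / 2 upper-bounds f over the whole region.

import itertools
import numpy as np

def certified_max(f, lipschitz, lows, highs, delta):
    axes = [np.arange(lo, hi + delta, delta) for lo, hi in zip(lows, highs)]
    grid_max = max(f(np.array(p)) for p in itertools.product(*axes))
    radius = delta * np.sqrt(len(lows)) / 2          # distance to nearest grid point
    return grid_max + lipschitz * radius             # sound upper bound on max f

f = lambda x: float(np.sin(3 * x[0]) + 0.5 * x[1])   # stand-in for a network output
bound = certified_max(f, lipschitz=3.5, lows=[0, 0], highs=[1, 1], delta=0.05)
print(bound)                                          # e.g. bound on a class-score gap
```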