Secure AI Assistants (started 2020, duration 4 years)

Introduction and Motivation

AI assistants are being integrated into everyday life at an unprecedented pace, from the personal assistants running on our smartphones and in our homes, to enterprise assistants for increased productivity in the workplace, to health assistants. In the UK alone, 7 million users interact with AI assistants every day, and 13 million do so on a weekly basis. How secure AI assistants (AIS) are is therefore a crucial issue, as they make extensive use of AI and learn continually. AIS are also complex systems in which different AI models interact with each other, with the various stakeholders, and with the wider ecosystem in which the assistants are embedded. The resulting threats range from adversarial settings, where malicious actors exploit vulnerabilities that arise from the use of AI models to make AIS behave insecurely, to accidental ones, where negligent actors introduce security issues or use AIS insecurely. Beyond these technical complexities, users of AIS are known to have highly incomplete mental models of how the assistants work and do not know how to protect themselves.

Overview and Objectives

SAIS (Secure AI assistantS) is a cross-disciplinary collaboration between the Departments of Informatics and Digital Humanities and The Policy Institute at King’s College London, and the Department of Computing at Imperial College London, working with non-academic partners:

  • Microsoft
  • Humley
  • Hospify
  • Mycroft
  • Policy and regulation experts
  • The general public, including non-technical users

SAIS will provide an understanding of attacks on AIS, considering the whole AIS ecosystem, the AI models used within it, and all the stakeholders involved, with a particular focus on the feasibility and severity of potential attacks from a strategic threat-and-risk perspective. Based on this understanding, SAIS will propose methods to specify, verify and monitor the security behaviour of AIS using model-based AI techniques, which are known to provide richer foundations than data-driven ones for explaining the behaviour of AI-based systems. This will result in a multifaceted approach, including:

  1. Novel specification and verification techniques for AIS, such as methods to verify the machine learning models used by AIS
  2. Novel methods, based on normative systems and data provenance, to dynamically reason about the expected behaviour of AIS, so that any degradation or deviation from that behaviour can be audited and detected (see the illustrative sketch after this list)
  3. Co-created security explanations, following a techno-cultural method, to increase users’ literacy about AIS security in a way they can comprehend
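
As a rough, purely illustrative sketch of what the norm-based monitoring envisaged in point 2 could look like, consider the following Python fragment. Every name in it (Event, Norm, Monitor and the example consent rule) is hypothetical and not part of SAIS; it simply assumes that an assistant's actions arrive as events carrying provenance metadata and are checked against declared expected-behaviour rules.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    """A single observed action of an AI assistant, e.g. a skill invocation."""
    actor: str    # component or skill that acted
    action: str   # what it did, e.g. "read_contacts"
    context: dict = field(default_factory=dict)  # provenance metadata (data source, consent, ...)

@dataclass
class Norm:
    """An expected-behaviour rule: events matched by `prohibited` should never occur."""
    name: str
    prohibited: Callable[[Event], bool]

class Monitor:
    """Checks a stream of events against declared norms and reports any violations."""
    def __init__(self, norms: List[Norm]):
        self.norms = norms

    def check(self, event: Event) -> List[str]:
        return [n.name for n in self.norms if n.prohibited(event)]

# Hypothetical norm: a skill must not read contacts without recorded user consent.
no_unconsented_reads = Norm(
    name="no-contact-access-without-consent",
    prohibited=lambda e: e.action == "read_contacts" and not e.context.get("consent", False),
)

monitor = Monitor([no_unconsented_reads])
violations = monitor.check(
    Event(actor="third_party_skill", action="read_contacts", context={"consent": False})
)
print(violations)  # -> ['no-contact-access-without-consent']

In this toy setting each norm is just a predicate over individual events; a real deployment would need richer norm languages, provenance tracking across models, and verification of the monitor itself, which is the kind of machinery SAIS sets out to develop.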

For more information, visit the SAIS website.

Funding

SAIS is funded by the Engineering and Physical Sciences Research Council (ref. EP/T026723/1).