AI Privacy and Security

We are surrounded by artificial intelligence. AI is in our phones, our computers, our homes, and our cities, in an increasingly interconnected reality. We use AI to save time and to make more accurate, automated decisions in applications ranging from healthcare and finance to policing and hiring. While this has brought remarkable advances, research findings and repeated news headlines have shown that AI may not be secure and may cause privacy violations. If we are to trust AI systems, we need to ensure that they treat us, and our data, fairly and safely.

We are developing techniques for secure and privacy-respecting AI, taking a human-centred approach so that anyone, regardless of their knowledge of AI, can feel safe and in control when using it (Zhan et al., 2025; Sun et al., 2025). We focus mainly on security and privacy in systems that use or embed AI, from voice- and text-based AI assistants (such as ChatGPT, Gemini, Alexa, and Siri) to other kinds of autonomous systems and automated decision-making systems.

Related Projects
  • Secure AI Assistants (EPSRC) - SAIS
  • National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (UKRI) - REPHRAIN
  • Evaluating third-party smart home assistant developers (ICO) - link

References

2025

  1. Xiao Zhan, Juan-Carlos Carrillo, William Seymour, and Jose Such. Malicious LLM-Based Conversational AI Makes Users Reveal Personal Information. In USENIX Security Symposium (SEC), 2025.
  2. Guangzhi Sun, Xiao Zhan, Shutong Feng, Philip C. Woodland, and Jose Such. CASE-Bench: Context-Aware Safety Evaluation Benchmark for Large Language Models. In Proceedings of the International Conference on Machine Learning (ICML), 2025.