AI for Cyber Defence (AICD) Research Centre – Machine Learning for Security and Privacy

Closing Date: 29/09/2023

Funding for innovative research that transforms the science of computer security and privacy through the application of autonomous decision-making techniques.

Founded in 2015 as a registered charity, the Alan Turing Institute is the national institute for data science and artificial intelligence (AI), and has its headquarters at the British Library in London. The AI for Cyber Defence (AICD) Research Centre is a department of the Alan Turing Institute and seeks to ensure the security and privacy of computer networks and systems through fundamental and applied advances in intelligent agents.

The AI for Cyber Defence (AICD) Research Centre – Machine Learning for Security and Privacy initiative recognises that the increasingly AI-enhanced landscape of cyber threats calls for correspondingly sophisticated and scalable defensive capabilities. AICD is seeking to transform the science of computer security and privacy through the application of autonomous decision-making techniques including, but not limited to, deep reinforcement learning (DRL) and foundation, transformer and language models. To this end, novel research proposals are invited in the areas of:

  • Fully autonomous cyber operations and network defence – Funding is available for proposals that focus on groundbreaking defence methods (eg utilising DRL, transformer-based approaches and other autonomous techniques) and on the creation of environments for rigorous training and evaluation of these approaches. Proposals may also address open challenges and propose new foundations for refining existing methods.
  • Generalisability techniques for autonomous agents – The dynamic nature of the cyber threat environment necessitates agents that can adapt. This includes continual learning, scaling to arbitrary environment sizes, robustness to fluctuating parameters and support for arbitrary numbers of agents. Proposals in this area should delineate techniques and strategies for achieving generalisability.
  • Systems security attacks and defences – To strengthen defences, the adversary must be understood. Proposals should focus on discovering innovative adversarial techniques and models, including, but not limited to, vulnerability discovery, automated red teaming and other offensive security strategies.
  • Robustness of autonomous agents for cyber use cases – Given the privileged role of autonomous agents, their robustness in the face of adversarial attacks is imperative. Research is sought that interrogates the resilience of DRL agents under adversarial attack and devises strategies to enhance their robustness.
  • Wild card – Other projects that do not fit into any of the above categories may be considered if they help to defend networks autonomously at scale.

Selected proposals will benefit from close collaboration with the AICD core team, gaining access to a wealth of expertise and resources. Successful researchers (or research teams) will be integrated into a national network, which may offer opportunities for extended or further funding based on the impact of their results. Ultimately, this research will contribute to protecting the UK, its people and the places they inhabit by providing tools for resilient cyber infrastructure at scale.

Funding body: Alan Turing Institute
Maximum value: Discretionary
Reference ID: S25581
Category: Science and Technology
Fund or call: Fund