UK AI Security Institute Awards £4M Across 20 Systemic Safety Projects
March 2, 2026 · 2 min read
Claire Cummings
The UK's AI Security Institute — formerly the AI Safety Institute before its February 2025 rebrand — has awarded 20 seed grants of up to £200,000 each under its new Systemic AI Safety Grants Programme. The awards total approximately £4 million and mark the government's first dedicated funding stream for research on how AI deployment affects critical societal systems. Future rounds with larger awards are planned.
Research That Goes Beyond Model Behavior
Unlike most AI safety funding, which focuses on the capabilities and alignment of individual models, the AISI programme targets systemic risks — what happens when AI is embedded across healthcare systems, energy grids, financial markets, and labor markets simultaneously.
Funded projects span a deliberately broad scope: AI-generated misinformation, critical infrastructure protection, user interactions with AI models, risk governance protocols, and sector-specific integration challenges. The programme specifically sought proposals that combine academic, industry, and civil society expertise, and welcomed international collaborators alongside UK-based lead investigators.
"Better understanding systemic safety will help inform priority interventions that governments and others could invest in, to address critical risks before they become severe harms," the Institute stated in its programme announcement.
What This Signals for AI Safety Funding
The seed grants form Phase 1 of what AISI has framed as a multi-round initiative. The initial awards of up to £200,000 fund 12-month projects designed to establish a baseline understanding of risks and identify promising mitigation strategies. Phase 2, with substantially larger awards, will build on the most promising findings.
For AI safety researchers and organizations outside the UK, the programme's openness to international partners is notable. While lead investigators must be UK-based, the collaborative structure creates pathways for cross-border research teams — particularly relevant as the US, EU, and UK take divergent approaches to AI governance.
Researchers interested in AI safety funding — from government programmes to foundation grants — can track emerging opportunities through Granted, which indexes AI-related grants across federal, international, and private sources.
