
UK AI Security Institute Awards £27M to 60 Alignment Research Projects

March 6, 2026 · 2 min read

Claire Cummings

The UK AI Security Institute has announced its first round of Alignment Project grants, awarding £27 million to 60 research projects across eight countries, with individual grants ranging from £50,000 to £1 million. A second funding round is expected to open this summer.

An Unusual Coalition of Funders

The Alignment Project breaks the typical government-grant mold by pooling resources from competing AI companies and multiple national governments into a single funding stream. OpenAI committed $7.5 million (approximately £5.6 million), with additional backing from Microsoft, Amazon Web Services, and Anthropic. The Canadian AI Safety Institute and Australian AI Safety Institute have joined as government partners.

For researchers, this structure means access to larger awards with less geographic restriction than typical national programs. The first-round projects span theoretical and empirical work on scalable oversight, interpretability, adversarial robustness, and techniques for verifying that AI systems behave as intended.

Why Labs Are Paying to Study Their Own Weaknesses

The cross-industry participation signals something beyond corporate responsibility theater. OpenAI alone invested $7.5 million in research that could expose shortcomings in its own models. The reasoning: no single lab can solve alignment internally, and externally validated safety techniques benefit the entire industry by building public trust and informing regulation.

This model — government as coordinator, industry as co-funder, independent researchers as executors — could become a template for future AI safety funding mechanisms globally.

How to Position for Round Two

With the second round expected to open in summer 2026, researchers should begin preparing proposals now. The first round prioritized projects that could produce concrete, measurable advances in alignment techniques rather than purely theoretical contributions. Strong applicants combined technical depth with clear experimental plans and realistic timelines.

Researchers from any eligible country can apply. Those with existing safety research that needs dedicated compute and funding to scale should be well-positioned. The growing landscape of AI safety funding — from government programs to industry coalitions — is tracked on the Granted blog.
