Applications currently closed. Expected to reopen summer 2026.
The UK AI Security Institute (AISI) Alignment Project provides grants of up to £1 million (approximately $1.3 million) per project for research on preventing advanced AI systems from exhibiting dangerous behavior. The program specifically supports work on robust alignment, oversight, and monitoring techniques for advanced AI systems.
This is one of AISI's flagship research funding programs, complementing its Challenge Fund and Systemic Safety Grants. The Alignment Project targets fundamental research questions about ensuring AI systems reliably follow human intentions and values, with a particular focus on scalable oversight methods and techniques that remain effective as AI capabilities increase.
Based on current listing details, eligibility is open to UK and international academic institutions and non-profit organisations. Research must focus on alignment, oversight, and monitoring of advanced AI systems to prevent dangerous behavior. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates up to £1,000,000 per project. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
This program operates on rolling deadlines or periodic funding windows rather than a single fixed close date. Build your timeline backwards from the window you intend to target, leaving time for registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10% to 30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
The AISI Challenge Fund is a grant programme from the UK AI Security Institute focused on supporting research that directly addresses risks from advanced AI systems. Distinct from AISI's Systemic Safety programme (which targets broader societal deployment risks) and the Alignment Project (which focuses on preventing dangerous model behavior), the Challenge Fund supports a wider range of AI safety and security research proposals. The fund has been designed with a multi-stage review process and provides applicants with detailed application packs, clarification question documents, and cost guidance to support high-quality proposals. The programme is open to researchers at eligible UK and international academic institutions and non-profit organisations.

The Challenge Fund launched on March 5, 2025, and the first round of applications has completed its review cycle. Future rounds are expected as AISI continues to expand its external research funding portfolio.

The UK AI Security Institute was established following the 2023 AI Safety Summit at Bletchley Park and has rapidly grown into one of the world's leading government-funded AI safety research organisations, with a mandate to evaluate frontier AI models, conduct safety research, and fund external research that advances AI safety.
The Systemic AI Safety Grants programme is a joint initiative of the UK AI Security Institute (AISI), the Engineering and Physical Sciences Research Council (EPSRC), and Innovate UK, designed to safeguard societal systems during the rapid advancement and adoption of AI technologies. Unlike AISI's Alignment Project, which focuses on preventing dangerous behavior in individual AI models, the Systemic Safety programme addresses broader societal risks from widespread AI deployment across critical sectors.

The programme supports research across multiple priority areas: evaluations of dangerous capabilities and safeguards; studies on user interactions with AI models; risk governance through protocols and safety cases; sector-specific AI integration risks in education, healthcare, and finance; AI agent interactions and critical infrastructure vulnerability; AI-generated misinformation; and labour market impacts.

The first phase selected 20 projects, which are currently underway. With a total fund of £8.5 million, the programme's initial £4 million allocation will expand as the scheme progresses. Successful applicants receive ongoing support, computing resources where needed, and access to a community of AI and sector-specific domain experts. The programme particularly values projects bringing together academic, industry, and civil society expertise.