The UK AI Security Institute (AISI) Challenge Fund awards funding to researchers addressing pressing questions in AI safety and security. The fund covers research areas including AI safeguards, control mechanisms, alignment techniques, and societal resilience to AI risks. The AISI, formerly the UK AI Safety Institute, is the UK government's dedicated body for evaluating and ensuring the safety of advanced AI systems.
The Challenge Fund is designed to build a broader research ecosystem around AI safety by engaging academic institutions and non-profit organizations in the UK and internationally. While the most recent application round has closed, the program is part of AISI's ongoing research portfolio and is expected to continue with future funding rounds.
The program complements the larger AISI Alignment Project (up to £1M per project) and the Systemic Safety Grants, forming a comprehensive suite of AI safety research funding from the UK government.
Based on current listing details, eligibility includes researchers at eligible UK and international academic institutions and non-profit organisations. Specific eligibility details are published in each round's Application Pack. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates funding of up to £200,000 per project (approximately $260,000 USD). Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
This program uses rolling deadlines or periodic funding windows rather than a single fixed date. Build your timeline backwards from the relevant window to cover registrations, approvals, attachments, and final submission checks.
Grant success rates typically range from 10-30%, varying by funder and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
The AISI Challenge Fund is a grant programme from the UK AI Security Institute focused on supporting research that directly addresses risks from advanced AI systems. Distinct from AISI's Systemic Safety programme (which targets broader societal deployment risks) and the Alignment Project (which focuses on preventing dangerous model behavior), the Challenge Fund supports a wider range of AI safety and security research proposals.

The fund has been designed with a multi-stage review process and provides applicants with detailed application packs, clarification question documents, and cost guidance to support high-quality proposals. The programme is open to researchers at eligible UK and international academic institutions and non-profit organizations. The Challenge Fund launched on March 5, 2025, and the first round of applications has completed its review cycle. Future rounds are expected as AISI continues to expand its external research funding portfolio.

The UK AI Security Institute was established following the 2023 AI Safety Summit at Bletchley Park and has rapidly grown into one of the world's leading government-funded AI safety research organizations, with a mandate to evaluate frontier AI models, conduct safety research, and fund external research that advances AI safety.
The Systemic AI Safety Grants programme is a joint initiative of the UK AI Security Institute (AISI), the Engineering and Physical Sciences Research Council (EPSRC), and Innovate UK, designed to safeguard societal systems during the rapid advancement and adoption of AI technologies. Unlike AISI's Alignment Project, which focuses on preventing dangerous behavior in individual AI models, the Systemic Safety programme addresses broader societal risks from widespread AI deployment across critical sectors.

The programme supports research across multiple priority areas: evaluations of dangerous capabilities and safeguards; studies on user interactions with AI models; risk governance through protocols and safety cases; sector-specific AI integration risks in education, healthcare, and finance; AI agent interactions and critical infrastructure vulnerability; AI-generated misinformation; and labour market impacts.

The first phase selected 20 projects, which are currently underway. With a total fund of £8.5 million, the programme's initial £4 million allocation will expand as the scheme progresses. Successful applicants receive ongoing support, computing resources where needed, and access to a community of AI and sector-specific domain experts. The programme particularly values projects that bring together academic, industry, and civil society expertise.
The UK AI Security Institute Alignment Project provides substantial funding of up to £1 million per project to support research aimed at preventing advanced AI systems from behaving dangerously. The program focuses specifically on developing robust alignment techniques, oversight mechanisms, and monitoring tools for advanced AI systems. This is a flagship research funding program from the AISI (formerly the UK AI Safety Institute), which serves as the UK government's primary body for evaluating the safety of frontier AI models. The Alignment Project represents one of the largest dedicated AI alignment research funding programs from a national government.

While the current application round has closed, the program is part of AISI's ongoing research portfolio and is expected to continue with future funding rounds. The program complements the smaller AISI Challenge Fund (up to £200K) and the Systemic Safety Grants, forming a comprehensive suite of government-backed AI safety research funding.