This grant may no longer be accepting applications.
The description indicates applications may be closed. Check the funder's website to confirm availability before applying.
The AISI AI Safety Challenge Fund is a grant from the AI Security Institute (AISI) that funds research advancing the science of AI security and addressing the most pressing challenges in AI safety. AISI provides funding to researchers and organizations tackling frontier AI safety questions, supporting work that helps ensure advanced AI systems are secure, robust, and aligned with human values.
Eligible applicants include academic researchers, independent research organizations, and security-focused nonprofits with proposals addressing critical gaps in AI safety science. The fund prioritizes high-impact research with the potential to shape AI safety practices and policy.
The following details are extracted from the official opportunity page/RFP to help you evaluate fit faster.
Grants | The AI Security Institute (AISI): Funding to advance the science of AI security and tackle the field's most pressing questions. Advancing AI safety and security through funding and collaborations.
Widespread AI adoption requires research breakthroughs across AI safety and security. We collaborate with leading researchers and institutions to identify promising projects and rapidly get them the funding they need. To date, we have run three large-scale grants programmes, including the >£15 million Alignment Fund.
You can see our priority research areas from past programmes below. Our programmes are not currently accepting applications.
The Alignment Project supports research to prevent advanced AI systems from behaving dangerously, whether intentionally or accidentally. It awards up to £1 million per project, enabling new theoretical and empirical work on developing robust alignment, oversight, and monitoring techniques. Applications are now closed.
The Challenge Fund supports research to understand and tackle potential risks from advanced AI. It awards up to £200,000 per project, accelerating innovation across fields like safeguards, control, alignment, and societal resilience. Applications are now closed.
Our Systemic AI Safety Grants Programme aims to increase societal resilience to widespread AI deployment across areas like healthcare, energy grids, and financial markets. We have selected the first 20 projects, which are now underway.
Based on current listing details, eligibility includes researchers at eligible UK and international academic institutions and non-profit organizations. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates up to £200,000 per project. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current listing indicates rolling deadlines or periodic funding windows. Build your timeline backwards from the relevant deadline to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes. AI tools like Granted can help you research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
The AISI Challenge Fund is a grant programme from the UK AI Security Institute focused on supporting research that directly addresses risks from advanced AI systems. Distinct from AISI's Systemic Safety programme (which targets broader societal deployment risks) and the Alignment Project (which focuses on preventing dangerous model behavior), the Challenge Fund supports a wider range of AI safety and security research proposals. The fund has been designed with a multi-stage review process and provides applicants with detailed application packs, clarification question documents, and cost guidance to support high-quality proposals. The programme is open to researchers at eligible UK and international academic institutions and non-profit organizations. The Challenge Fund launched on March 5, 2025, and the first round of applications has completed its review cycle. Future rounds are expected as AISI continues to expand its external research funding portfolio. The UK AI Security Institute was established following the 2023 AI Safety Summit at Bletchley Park and has rapidly grown into one of the world's leading government-funded AI safety research organizations, with a mandate to evaluate frontier AI models, conduct safety research, and fund external research that advances AI safety.
The Systemic AI Safety Grants programme is a joint initiative of the UK AI Security Institute (AISI), the Engineering and Physical Sciences Research Council (EPSRC), and Innovate UK, designed to safeguard societal systems during the rapid advancement and adoption of AI technologies. Unlike AISI's Alignment Project, which focuses on preventing dangerous behavior in individual AI models, the Systemic Safety programme addresses broader societal risks from widespread AI deployment across critical sectors. The programme supports research across multiple priority areas: evaluations of dangerous capabilities and safeguards, studies on user interactions with AI models, risk governance through protocols and safety cases, sector-specific AI integration risks in education, healthcare, and finance, AI agent interactions and critical infrastructure vulnerability, AI-generated misinformation, and labour market impacts. The first phase selected 20 projects, which are currently underway. The programme has a total fund of £8.5 million; its initial £4 million allocation will expand as the scheme progresses. Successful applicants receive ongoing support, computing resources where needed, and access to a community of AI and sector-specific domain experts. The programme particularly values projects bringing together academic, industry, and civil society expertise.
The UK AI Security Institute (AISI) Alignment Project provides grants of up to £1 million (approximately $1.3 million) per project for research on preventing advanced AI systems from exhibiting dangerous behavior. The program specifically supports work on robust alignment, oversight, and monitoring techniques for advanced AI systems. This is one of AISI's flagship research funding programs, complementing its Challenge Fund and Systemic Safety Grants. The Alignment Project targets fundamental research questions about ensuring AI systems reliably follow human intentions and values, with a particular focus on scalable oversight methods and techniques that remain effective as AI capabilities increase.