This listing may be outdated. Verify details at the official source before applying.
AI Safety Regranting Program is sponsored by Manifund. Supports researchers, startups, and developers worldwide working on AI safety and reducing existential risk from advanced AI systems.
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
These organizations offer financial support to organizations and individuals working on AI safety.

Largest funder in x-risk reduction. Most funding is done via proactive research, but there are frequent requests for proposals in certain areas. Previously called Open Philanthropy.
Best for mid- to large-scale projects.

Survival and Flourishing Fund (SFF): Funds organizations working on humanity's long-term survival and flourishing. Speculation Grants are rolling; the full S-Process runs annually and is currently open, closing 22 April 2026.

An overview of the funding situation: an analysis of the main funding sources in AI safety over time, last updated early 2025.
Science of Trustworthy AI: Schmidt Sciences program making grants of up to $5M for technical research that improves our ability to understand, predict, and control risks from frontier AI systems.

Long-Term Future Fund (LTFF): Large grantmaker funding individuals and small projects aiming to positively influence the long-term trajectory of civilization. Relatively straightforward application process.
Regranting platform where individual regranters have funding pools to direct towards publicly-listed projects.

AI Alignment Foundation (AIAF): Funding neglected approaches to AI alignment through grants and fiscal sponsorship to independent researchers, educational programs, and engineers developing mechanistic interpretability tools.
Foresight's 'AI for Science & Safety Nodes' program offers funding, a community hub, and local compute in either San Francisco or Berlin.

AI Safety Tactical Opportunities Fund (AISTOF): Pooled multi-donor fund structured to move quickly and capture emerging opportunities, including in governance, technical alignment, and evaluations.
12–14-week program run by Fifty Years, helping scientists and engineers build startups working on AI safety.

Omidyar Network Tech Journalism Fund: Provides $5k–25k project funding for journalists covering topics related to how technology and society intersect.

AI Risk Mitigation (ARM) Fund: Aiming to reduce catastrophic risks from advanced AI through grants towards technical research, policy, and training programs for new researchers.
Effective Altruism Infrastructure Fund (EAIF): Aiming to increase the impact of effective altruism projects (including AI safety) by increasing their access to talent, capital, and knowledge.

Future of Life Institute (FLI) Digital Media Accelerator: Supporting digital content from creators raising awareness and understanding about ongoing AI developments and issues.
Funding initiative with $25M in annual giving, backing people and ideas with the funding, strategic guidance, and networks they need to steer transformative AI toward beneficial outcomes.

Future of Life Foundation (FLF): Incubator helping start new organizations that steer transformative technology towards benefiting life and away from extreme large-scale risks.

Devises and executes bespoke giving strategies for major donors.
Grants focus on reducing the risks posed by AI, nuclear war, and engineered pandemics.

UK AI Security Institute (AISI): UK government organization running large-scale grant programs funding AI safety research. Collaborates with researchers and institutions to identify promising projects to fund.
Advanced Research + Invention Agency (ARIA): UK government R&D funding agency aiming to unlock scientific and technological breakthroughs that benefit everyone. Similar to DARPA in the US.

Helping entrepreneurs direct their charitable giving where it will do the most good by researching the world’s biggest problems and identifying the most effective solutions.
Tarbell AI Reporting Grants: Grants of $1k–20k to support journalism on AI and its impacts. Mainly focuses on written journalism, but also funds other formats.

Incubating early-stage AI safety research organizations. The program involves co-founder matching, mentorship, and seed funding, culminating in an in-person building phase.

Identifying leaders from business, policy, and academia, and helping them take on ambitious projects in AI safety.

12-week program in San Francisco, USA, for pre-seed companies aiming to make an impact with AI safety and assurance technologies.
Future of Life Institute (FLI): Makes grants for studying the risks from powerful technologies and developing strategies for reducing them – through RFPs, contests, collaborations, and fellowships.

Cooperative AI Foundation (CAIF): Charity foundation backed by a large philanthropic commitment, supporting research into improving the cooperative intelligence of advanced AI.
Initiative run by the Frontier Model Forum (i.e. frontier AI companies) to accelerate and expand the field of AI safety.

Private foundation funding AI governance: policymaker education on capabilities, policy design across a range of future scenarios, and civil society coordination on AI risk.
Center on Long-Term Risk (CLR) Fund: Supports projects and individuals aiming to address worst-case suffering risks from the development and deployment of advanced AI systems.

CSET Foundational Research Grants: Supports the exploration of foundational technical topics that relate to the potential national security implications of AI over the long term.
Network of donors funding charitable projects that work one level removed from direct impact, often cross-cutting between cause areas.

Helping philanthropists find, fund, and scale the most promising people and solutions to the world’s most pressing problems – including AI safety.
Vista Institute for AI Policy: Sponsors students and recent graduates to undertake independent research with mentor guidance, or to serve as research assistants for law professors and other AI policy experts.

Empowering innovators and scientists to increase human agency by creating the next generation of responsible AI. Providing support, resources, and open-source software.
Global Technology Risk (GTR) Foundation: Berlin-based foundation of tech investor Jan Beckers, funding research in LLM auditing/evaluations, interpretability, and global AI governance.

Saving Humanity from Homo Sapiens (SHfHS): Small organization with a long history of finding people doing impactful work to prevent human-created existential risks and financially supporting them.
Swiss VC focused on reducing suffering risks, including those posed by catastrophic AI misuse and AI conflict. Makes both grants to nonprofits and investments in for-profit companies.

Run by the Mercatus Center, this program seeks to support entrepreneurs and brilliant minds with highly scalable, “zero to one” ideas for meaningfully improving society.
Evergreen VC and fund by Jaan Tallinn backing tech that supports humanity's long-term survival. Profits from the VC go towards funding AI safety nonprofits.

$100M fund by Menlo Ventures and Anthropic backing AI startups from seed to Series A, including those working on "trust and safety tooling".
VC firm investing in ethical founders developing transformative technologies that have the potential to impact humanity on a meaningful scale.

Private foundation providing grants to projects working to make the future with advanced AI go well. Also invests in for-profits in order to redeploy capital to AI safety nonprofits.
Foundation run by Dustin Moskovitz and Cari Tuna providing funding in a range of cause areas, including AI safety.

Program from Schmidt Sciences supporting exceptional people working on key opportunities and hard problems that are critical to get right for society to benefit from AI.

Nonlinear AI Safety Advocacy Grants: Grant program providing funding to those raising awareness about AI risks or advocating for a pause in AI development.
Early-stage venture fund supporting startups developing tools to enhance AI safety. Provides both financial investment and mentorship.

VC firm investing in AI safety startups. Run by exited founders and backed by Reid Hoffman, Eric Ries, and Geoff Ralston.

VC investing in startups shaping the future of AI and humanity, committed to delivering financial returns while building a resilient and beneficial future.

Funding initiative housed at TED, helping entrepreneurs shape impactful ideas into viable multi-year plans and launching them to the world alongside visionary philanthropists.
Private foundation of Elastic co-founder Steven Schuurman, providing €5M/year to various initiatives including AI safety.

Def/acc at Entrepreneur First: Incubation program focusing on "defensive" tech – the idea that the most powerful solution to technological risk is often more technology.
VC aiming to empower founders building a radically better world with safe AI systems by investing in ambitious teams with defensible strategies that can scale to post-AGI.

New VC fund by Igor Babuschkin (xAI co-founder). Backs AI safety research and startups building agentic systems.
Jed McCaleb's fund, making grants to organizations and projects in various cause areas including AI safety.

Crowdsourced charity evaluator and one-stop shop for AI safety funding applications. No longer active.
BlueDot AGI Strategy Fund: $5–50k grants for individuals building high-impact AI safety projects. Applicants must have completed their AGI Strategy course.

The EU funded important projects to research AI safety, and Tendery.ai offered free support in applying.

Funded projects that have a chance of substantially changing humanity's future trajectory for the better. Disbursed $8 million in the 2023 round but has been inactive since.
Economics of AI Fellowship: Fellowship run by Stripe for graduate students and early-career researchers interested in pursuing foundational academic research around the economics of AI.

Decentralized bounty platform offering prize money for delivering on specific AI safety projects.

Platform taking grant applications and presenting them to a donor circle, who then reached out if they were interested.
Last round was in 2024.
We're a global team of volunteers and professionals from various disciplines who believe AI poses a grave risk of extinction to humanity. Maintained by AI safety community-builders at AISafety.com. (ɔ) 2026 · This site is released under a CC BY-SA license.
Based on current listing details, eligibility includes: Researchers, startups, and developers worldwide. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates $5,000–$100,000. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
This opportunity has rolling deadlines or periodic funding windows rather than a single fixed date. Build your timeline backwards from your target submission date to cover registrations, approvals, attachments, and final submission checks.
Past winners and funding trends for this program
Long-Term Future Fund (LTFF) is sponsored by Manifund, acting as a regranting platform for various funders. It funds projects and individuals working to positively influence the long-term trajectory of civilization, with a focus on global catastrophic risks from advanced artificial intelligence and pandemics.
Manifund is an innovative regranting and community funding platform that connects AI safety researchers and projects with funders. The platform enables both regranting (where designated regrantors allocate funds to promising projects) and impact certificates (where project creators list their work for community funding). AI safety and existential risk reduction are primary focus areas. Projects can range from small independent research efforts to larger organizational initiatives. The platform provides transparency through public project listings and funding decisions, enabling rapid deployment of funds to promising AI safety work that might not fit traditional grant timelines.