FLI AI Safety Grants is sponsored by the Future of Life Institute. The program supports research on technical AI safety, AI governance, and policy, funding projects at major universities and research organizations worldwide.
Technical PhD Fellowships - Future of Life Institute

The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. Deadline: November 21, 2025.

The fellowship provides: tuition and fees for 5 years of the PhD, with extension funding possible; a $40,000 annual stipend at universities in the US, UK, and Canada; a $10,000 fund that can be used for research-related expenses such as travel and computing; and invitations to virtual and in-person events where fellows can interact with other researchers in the field. Applicants who are short-listed for the Fellowship will be reimbursed for this year's application fees for up to 5 PhD programs.

See below for the definition of 'AI Existential Safety research' and additional eligibility criteria. Questions about the fellowship or application process not answered on this page should be directed to grants@futureoflife.org.

The Vitalik Buterin Fellowships in AI Existential Safety are run in partnership with the Beneficial AI Foundation (BAIF). FLI offers Buterin Fellowships in pursuit of a vibrant AI existential safety research community free from financial conflicts of interest.
Anyone awarded a fellowship will need to confirm the following: "I am aware of FLI’s assessment that moving from a Buterin Fellowship to working (even on a safety team) for a company that is a) racing to build AGI/ASI, and b) not pushing for strong binding AI regulation is a net negative for humanity.
I therefore agree that, if I accept a Buterin Fellowship and take a job at any such company (including Anthropic, Google DeepMind, Meta, OpenAI, or xAI) within 2 years of completing my Buterin Fellowship, I will donate half of my gross compensation each month to a charity mutually agreeable to me and FLI, including half of any stock options or bonuses.”
People that have been awarded grants within this grant program include researchers at Carnegie Mellon University and the Massachusetts Institute of Technology, among them Stephen Casper, Cynthia Chen, and Usman Anwar. Selected publications by grantees include:

Casper, Stephen, et al. "Explore, Establish, Exploit: Red Teaming Language Models from Scratch." arXiv preprint arXiv:2306.09442 (2023).
Shah, Rusheb, et al. "Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation." arXiv preprint arXiv:2311.03348 (2023).
Casper, Stephen, et al. "Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback." arXiv preprint arXiv:2307.15217 (2023).
Ji, Jiaming, et al. "AI Alignment: A Comprehensive Survey." arXiv preprint arXiv:2310.19852 (2023).
Pandey, Le, et al. "SocialHarmBench: Revealing LLM Vulnerabilities to Socially Harmful Requests." DisinfoCon 2025 (Oral).
Simko, Sachan, et al. "Improving Large Language Model Safety with Contrastive Representation Learning." EMNLP 2025 Main (Poster).
Piedrahita, Strauss, et al. "Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models." NLP4Democracy Workshop @ COLM (Poster).
Piedrahita, Yang, et al. "Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games." COLM 2025 Main (Poster).
Pandey, Simko, et al. "Accidental Vulnerability: Factors in Fine-Tuning that Shift Model Safeguards." SoLaR Workshop @ COLM 2025 (Poster).
Samway, Mihalcea, et al. "When Do Language Models Endorse Limitations on Universal Human Rights Principles?" SoLaR Workshop @ COLM 2025 (Oral).
Yadav, Liu, et al. "Revealing Hidden Mechanisms of Cross-Country Content Moderation with Natural Language Processing." ACL Findings 2025 (Poster).
Hong, Dian Zhou, et al. "The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction." ACL Findings 2025 (Poster).
Piatti, Jin, et al. "Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents." NeurIPS 2024 (Poster).
Ortu, Jin, et al. "Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals." ACL 2024 Main (Poster).
Jin, Zhijing, Sydney Levine, et al. "When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment." NeurIPS 2022 (Oral).
" When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment " NeurIPS 2022 (Oral) AI Existential Safety Research Definition FLI defines AI existential safety research as: Research that analyzes the most probable ways in which AI technology could cause an existential catastrophe (that is: a catastrophe that permanently and drastically curtailshumanity’s potential, such as by causing human extinction), and which types of research could minimize existential risk (the risk of such catastrophes).
Examples include: Outlining a set of technical problems and arguments that their solutions would reduce existential risk from AI, or arguing that existing such sets are misguided. Concretely specifying properties of AI systems that significantly increase or decrease their probability of causing an existential catastrophe, and providing ways to measure such properties.
Technical research which could, if successful, assist humanity in reducing the existential risk posed by highly impactful AI technology to extremely low levels. Examples include: Research on interpretability and verification of machine learning systems, to the extent that it facilitates analysis of whether the future behavior of the system in a potentially different distribution of situations could cause existential catastrophes.
Research on ensuring that AI systems have objectives that do not incentivize existentially risky behavior, such as deceiving human overseers or amassing large amounts of resources. Research on developing formalisms that help analyze advanced AI systems, to the extent that this analysis is relevant for predicting and mitigating existential catastrophes such systems could cause.
Research on mitigating cybersecurity threats to the integrity of advanced AItechnology. Solving problems identified as important by research as described in point 1, or developing benchmarks to make it easier for the AI community to work on such problems.
The following are examples of research directions that do not automatically count as AIexistential safety research, unless they are carried out as part of a coherent plan for generalizing and applying them to minimize existential risk: The mitigation of non-existential catastrophes, e.g. ensuring that autonomous vehicles avoid collisions, or that recidivism prediction systems do not discriminate based on race.
We believe this kind of work is valuable; it is simply outside the scope of this fellowship. Increasing the general competence of AI systems, e.g. improving generative modelling,or creating agents that can optimize objectives in partially observable environments. The purpose of the fellowship is to fund talented students throughout their PhDs to work on AI existential safety research.
To be eligible, applicants should either be graduate students or be applying to PhD programs. Funding is conditional on being accepted to a PhD program, working on AI existential safety research, and having an advisor who can confirm to us that they will support the student’s work on AI existential safety research. If a student has multiple advisors, these confirmations would be required from all advisors.
There is an exception to this last requirement for first-year graduate students, where all that is required is an “existence proof”. For example, in departments requiring rotations during the first year of a PhD, funding is contingent on only one of the professors making this confirmation. If a student changes advisor, this confirmation is required from the new advisor for the fellowship to continue.
An application from a current graduate student must address in the Research Statement how this fellowship would enable their AI existential safety research, either by letting them continue such research when no other funding is currently available, or by allowing them to switch into this area.
Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field. Continued funding is contingent on continued eligibility, demonstrated by submitting a brief (~1 page) progress report due each summer. There are no geographic limitations on applicants or host universities.
We welcome applicants from a diverse range of backgrounds, and we particularly encourage applications from women and underrepresented minorities. Applicants will submit a curriculum vitae, a research statement, and the names and email addresses of up to three referees, who will be sent a link where they can submit letters of recommendation and answer a brief questionnaire about the applicant.
Applicants are encouraged but not required to submit their GRE scores using our DI code: 3234. The research statement, which can be up to 3 pages long (not including references), should outline applicants’ current plans for doing AI existential safety research during their PhD.
It should include the applicant’s reason for interest in AI existential safety, a technical specification of the proposed research, and a discussion of why it would reduce the existential risk of advanced AI technologies. For current PhD students, it should also detail why no existing funding arrangements allow work on AI existential safety research. The deadline for application is November 21, 2025 at 11:59 pm ET.
After an initial round of deliberation, those applicants who make the short-list will then go through an interview process before fellows are finalized. Offers will be made no later than the end of March 2026.
Other Future of Life Institute funding opportunities:
US-China AI Governance PhD Fellowships (Deadline: November 21, 2025)
Technical Postdoctoral Fellowships
Request for Proposals on religious projects tackling the challenges posed by the AGI race (Deadline: 2 February 2026, 23:59 EST)
Multistakeholder Engagement for Safe and Prosperous AI (Deadline: 4 February 2025, 23:59 EST)
Based on current listing details, eligibility includes academic researchers, research organizations, and nonprofit organizations. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates awards of $50,000 to $1,000,000. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current target dates follow rolling deadlines or periodic funding windows. Build your timeline backwards from the applicable date to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10% to 30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
AI tools like Granted can help you research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
The Future of Life Institute's Digital Media Accelerator supports digital content creators who raise awareness and understanding of ongoing AI developments, risks, and safety challenges. The program funds creators across platforms including YouTube, TikTok, blogs, podcasts, and newsletters who can explain complex AI issues, such as AGI implications, control problems, misaligned AI goals, Big Tech power concentration, AI extinction risks, labor impacts of advanced AI, and progress toward AGI, in ways that diverse audiences can understand and relate to. Funding helps creators produce content, grow their channels, and spread AI safety awareness to new audiences, and selected creators also receive access to FLI's research network and subject matter experts. FLI particularly encourages creators who already have an audience interested in AI safety, have compelling ideas for reaching new audiences on AI risks, and plan to incorporate AI safety into their regular content output on an ongoing basis. Applications are accepted on a rolling basis with no fixed deadline. The program fills a gap in AI safety communication by supporting accessible, creator-driven content that reaches audiences beyond traditional academic and policy circles, as part of FLI's broader mission to steer transformative technologies away from extreme risks and toward benefiting life.
The Fire Science Innovations through Research and Education (FIRE) program is sponsored by the National Science Foundation (NSF). This program invites innovative multidisciplinary and multisector investigations focused on convergent research and education activities in wildland fire. It supports research that can inform risk management and response, adaptation, and resilience across infrastructures, communities, cultures, and natural environments. Relevant topics include developing novel materials and methods for retrofitting existing buildings and remediating buildings following wildfire and smoke events.
The UKRI Policy Fellowships 2025, funded by the Economic and Social Research Council, offer 18-month placements for academics to co-design research with UK government and What Works Network host organizations. Awards range from £180,000 to £280,000 and support three fellowship tracks: core policy fellows, Natural Hazards and Resilience policy fellows, and What Works Innovation fellows. Applicants must hold a PhD or equivalent research experience, be based at a UKRI-eligible UK organization, and possess relevant subject matter or methodological expertise. Government-hosted positions target early to mid-career academics, while What Works fellowships welcome all career stages. Fellows work directly with policymakers to bridge academic research and policy development on pressing national and global challenges. The application deadline is July 15, 2025.