This listing may be outdated. Verify details at the official source before applying.
AI Safety Fellows program is sponsored by Anthropic. This four-month fellowship provides early-career researchers with a stipend, significant compute credits, and mentorship to conduct research in AI safety areas such as scalable oversight, adversarial robustness and AI control, model organisms of misalignment, and mechanistic interpretability…
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
Anthropic Fellows Program for AI safety research: applications open for May & July 2026. "This is an exceptional opportunity to join AI safety research, collaborating with leading researchers on one of the world's most pressing problems."
— Jan Leike
The Anthropic Fellows program provides funding and Anthropic mentorship for engineers and researchers to investigate some of Anthropic’s highest priority AI safety research questions. In our first cohort, over 80% of fellows produced papers, including on agentic misalignment, subliminal learning, rapid response to new ASL3 jailbreaks, and open-source circuits.
Over 40% of the fellows subsequently joined Anthropic full-time. We’re now opening applications for our next two cohorts, beginning in May and July 2026. This year, we plan to work with more fellows across a wider range of safety research areas—including scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, and model welfare.
Below, we share more about what the program looks like in practice, and how interested candidates can apply. Fellows work for four months on empirical research questions aligned with Anthropic’s overall research priorities, with the aim of producing public outputs, like a paper. Anthropic mentors ‘pitch’ their project ideas to fellows, who choose and shape their project in close collaboration with their mentors.
Here are a few examples from previous cohorts: Fellows have worked on mitigating risks from AI systems being misused for cyberattacks—exploring both how LLMs might enable adversaries to automate attacks that currently require skilled human operators, and how to rapidly defend against novel jailbreaks. Our fellows developed agents that identified $4.6M USD in blockchain smart contract vulnerabilities and discovered two novel zero-day vulnerabilities, demonstrating that profitable autonomous exploitation is now technically feasible. A year prior, an Anthropic fellow developed a method for rapid response to new ASL3 jailbreaks: techniques that block entire classes of high-risk jailbreaks after observing only a handful of attacks.
This work was a key component of Anthropic’s ASL3 deployment safeguards. The mission of our interpretability research is to advance our understanding of the internal workings of large language models to enable more targeted interventions and safety measures. Fellows have introduced a new method to trace the thoughts of a large language model—and open-sourced it.
Their approach was to generate attribution graphs, which (partially) reveal the steps a model took internally to decide on a particular output. The public release of this research has enabled researchers to trace circuits on supported models, visualize & annotate graphs, and test hypotheses by modifying feature values and observing model output changes.
To prepare for future risks, we create controlled demonstrations of potential misalignment (“model organisms”) that improve our empirical understanding of how alignment failures might arise. Fellows explored agentic misalignment by stress-testing 16 frontier models in simulated corporate environments where models could autonomously send emails and access sensitive information.
When facing replacement or goal conflicts, models across labs resorted to harmful behaviors, including blackmail. In another project, fellows studied subliminal learning, a phenomenon where models transmit behavioral traits through semantically unrelated data: a "teacher" model that loves owls generates number sequences, and a "student" trained on those sequences inherits the owl preference.
This effect also transmits misalignment, persists despite rigorous filtering, and only occurs when teacher and student share the same base model.
For a full list of fellows’ projects across research areas, please see our Alignment Science Blog. For a fuller list of research areas we’re interested in, please see: Introducing the Anthropic Fellows Program for AI Safety Research, and Recommendations for Technical AI Safety Research Directions.
Fellows will receive a weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Over 40% of fellows in our first cohort subsequently joined Anthropic to work full-time on AI safety, and we have supported many more to work full-time on safety at other organizations. Our next cohorts will begin in May and July 2026, and last for four months.
We care much more about your ability to execute on research than your credentials. Strong candidates typically have:
Technical fundamentals: You can code well in Python. You can take ambiguous problems and make concrete progress. You think clearly about hard technical questions.
Motivation for the work: You're excited about reducing catastrophic risks from advanced AI systems. You want to transition into empirical AI safety research.
Ability to learn and ship: You can pick up new skills quickly, debug when things break, and drive projects forward even with uncertainty.
You don't need a PhD, prior ML experience, or published papers. We've had successful fellows from physics, mathematics, computer science, cybersecurity, and other quantitative backgrounds.
For more details about the application process, and to apply, see here.
Based on current listing details, eligibility includes: early-career researchers with work authorization in the U.S., UK, or Canada. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates a weekly stipend of $3,850 and approximately $15,000 per month in compute credits. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
This opportunity uses rolling deadlines or periodic funding windows. Build your timeline backwards from the relevant date to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
Manning Foundation Grant Program is sponsored by The Manning Foundation (Wells Fargo Philanthropic Services). Primarily supports research in the prevention of blindness and the cure of diseases of the human eye, including dry eye. The foundation encourages basic research and development of new concepts and technology related to these areas.
Youth Joy and Wellness Fund is sponsored by Philanthropic Ventures Foundation (funded by The Hellman Foundation and The Walter and Evelyn Haas Fund). This fund supports innovative and creative approaches to youth mental health and wellness in Oakland, CA, addressing the mental health crisis caused by challenges such as the pandemic, community violence, poverty, and systemic racism. It aims to support non-traditional services and systems for youth mental health and wellness.
Research on Circular Economy, Smart Manufacturing, and Energy-Efficient Microelectronics is sponsored by U.S. Department of Energy (DOE) Advanced Materials & Manufacturing Technologies Office (AMMTO). This funding opportunity supports innovative technology R&D across the manufacturing sector with a focus on circular economy, smart manufacturing, and energy-efficient microelectronics. While the stated deadline for full applications has passed, AMMTO frequently issues similar solicitations, and this highlights a relevant area of interest for the DOE.
NIST Small Business Innovation Research (SBIR) Phase II Program - Quantum Information Science is sponsored by National Institute of Standards and Technology (NIST). This program allocates funding to small businesses for prototyping innovative technologies in areas including quantum information science, artificial intelligence, and semiconductors. These Phase II awards follow successful Phase I feasibility studies.
Federal grant opportunities have dropped 33%. Private foundation giving is up 5-7%. The math does not work — and the organizations that understand why will be the ones that survive.
The Ford Foundation committed $60M in democracy grants within 100 days of new leadership. What it means for nonprofits working on civic engagement, voting rights, and election integrity.
The MacArthur Foundation's $100 million democracy commitment joins a growing wave of philanthropic mobilization for civic infrastructure. What it means for nonprofits seeking democracy, governance, and civic engagement funding.