Responsible AI research has matured from an academic niche into a major federal and foundation funding priority. NSF invests over $100 million annually in responsible AI through its Responsible AI (RAI) program, AI Institutes focused on trustworthy AI, and the Fairness in Artificial Intelligence program. The National AI Research Institutes portfolio includes centers specifically addressing AI ethics, fairness, and societal impact.
Private foundations have become significant funders: the MacArthur Foundation, Ford Foundation, and Open Philanthropy each support AI governance research. The Patrick J. McGovern Foundation funds equitable AI deployment, and the Hewlett Foundation invests in AI accountability and transparency. NIST's AI Risk Management Framework has created new funding opportunities for organizations developing tools to assess and mitigate AI risks.
Competitive proposals address specific technical challenges — bias detection and mitigation, algorithmic auditing, privacy-preserving AI, explainable AI, and AI safety — while connecting to real-world deployment contexts. Interdisciplinary teams combining computer science, social science, law, and domain expertise are strongly favored.
NSF Responsible AI
Dedicated program funding research on AI fairness, transparency, accountability, and societal impact. Individual awards $150K-$1.5M.
Browse grants →
NSF AI Institutes (Trustworthy AI)
Multi-million-dollar research institutes focused on trustworthy, fair, and transparent AI systems.
Browse grants →
NIST AI Risk Management
Grants and contracts for tools, standards, and evaluation methods implementing the NIST AI Risk Management Framework.
MacArthur/Ford AI Governance
Foundation grants for AI governance research, policy analysis, civil society capacity building, and community-centered AI accountability mechanisms.
The Horizon Europe RAISE Networks of Excellence for AI in Science - Agriculture and Environmental Pollution (HORIZON-RAISE-2026-01-02) funds one large-scale network of excellence applying AI to agricultural sciences and environmental pollution research. This is one of the first pilot topics under the Resource for AI Science in Europe (RAISE) initiative, part of the broader €100 million AI in Science programme in the Horizon Europe 2026-27 work programme. The project will establish collaborative networks deploying trustworthy AI applications for challenges in food systems, agricultural productivity, and environmental contamination monitoring and remediation.
The Survival and Flourishing Fund (SFF) supports organizations working on long-term survival and flourishing of sentient life, with a strong focus on AI safety, AI governance, biosecurity, and institutional resilience. Founded by Jaan Tallinn (co-founder of Skype), SFF has distributed approximately $152 million since 2019, with $34.9 million in 2025 alone. The 2026 round is estimated at $20-40 million. SFF uses the S-Process evaluation method involving multiple independent assessors and offers three funding mechanisms: Speculation Grants (rolling basis, smaller amounts for quick-turnaround projects), full S-Process grants (annual competitive round), and Initiative Committee funding. Individual grants range from $10,000 to $4,000,000. Assessors may fund proposals across AI safety, pandemic preparedness, governance, and other existential risk areas.
The Gates Foundation AI to Accelerate Charitable Giving Grand Challenge funds innovative AI solutions that transform philanthropic giving for global health and development. The program asks: How might AI support donors to give more and give sooner? Three challenge areas are targeted: (1) Donor Discovery — AI tools connecting donors to causes and understanding impact potential; (2) Conversion — solutions reducing barriers from donation intent to action; and (3) Infrastructure — foundational systems ensuring philanthropic data visibility to AI systems. Projects should focus on global health and development priorities including health improvement and poverty reduction in low- and middle-income countries. This is a distinct program from the Gates Foundation AI Fellows Program, administered through the Grand Challenges portal. Applicants must demonstrate responsible AI practices addressing bias, privacy, and data governance.
The Wellcome Trust Generative AI for Anxiety, Depression and Psychosis program funds fundamental research on using generative AI to improve the measurement or treatment of three major mental health conditions: anxiety disorders, depression, and psychotic disorders. Awards of up to £3 million support research teams of 2-8 members with expertise spanning mental health research, generative AI, AI ethics, and lived experience of mental health conditions. The program supports two main research directions: (1) creating or improving generative models specifically for mental health measurement and intervention, and (2) developing evidence on how humans, AI systems, and clinicians can collaborate safely and efficiently in mental health contexts. Notably, this program funds fundamental research only and explicitly excludes real-world deployment of AI tools. The program includes an accelerator stage where teams refine proposals before submitting full funded research applications. This represents one of the largest dedicated investments in AI for mental health research globally, reflecting Wellcome's strategic priority in understanding and treating mental health conditions.
The Mozilla Foundation Democracy x AI Incubator funds technology projects that strengthen democratic institutions and civic participation through responsible AI. This cohort supports 10 projects at $50,000 each for 12 months, with top performers eligible for Tier II funding of $250,000. Projects must address one of three categories: (1) better information systems including verification tools, diverse information sources, and algorithmic transparency; (2) institutional transparency and accountability mechanisms; or (3) civic space protection and expansion including organizing tools, privacy technologies, and surveillance resistance. The incubator provides mentorship, peer learning, and connections to Mozilla's network alongside financial support. Applications require working technology with demonstrated traction, a committed team capable of 12-month execution, and at least partial open-source commitment or a clear roadmap to open source. This is distinct from other Mozilla programs and specifically targets the intersection of AI and democratic resilience.
The CRA Trustworthy AI Research Fellowship is a grant from the Computing Research Association (CRA), funded by Microsoft, that supports early-career computing researchers advancing trustworthy artificial intelligence through interdisciplinary collaboration with the humanistic social sciences. Fellows participate in a year-long structured program including a virtual kickoff, a four-day in-person Trustworthy AI Field School in Cambridge, MA (July 26–31, 2026), quarterly virtual convenings, and mentorship from established scholars. Eligible applicants must be 1–3 years post-PhD (degree conferred May 1, 2023–July 1, 2025), hold a tenure-track faculty or visiting researcher position at a U.S. institution, and have interdisciplinary training in a humanistic social science. Each Fellow receives a $17,000 stipend plus covered travel, lodging, and meals. The application deadline is March 31, 2026.
The Cambridge ERA:AI (Existential Risk and AI) Research Fellowship 2026 is a 10-week immersive research program based at the University of Cambridge designed to support early-career researchers and PhD students exploring frontier AI safety and governance. The program offers a fully funded fellowship with salary, mentorship from leading AI safety researchers at Cambridge, and access to one of the world's premier research environments. Fellows work on original research in AI safety, AI governance, AI alignment, and related areas of existential risk from advanced AI systems. The program commences July 6, 2026 and provides global mentorship connecting fellows with the broader AI safety research community. This is a highly competitive opportunity for researchers who want to make the transition into AI safety and governance research or deepen their existing work in these critical fields.
The Gates Foundation AI to Accelerate Charitable Giving Grand Challenge seeks innovative AI solutions that transform philanthropic giving. The RFP addresses the question: How might AI support donors to give more and give sooner? Projects must address one of three challenge areas: (1) Donor Discovery & Connection — helping donors identify aligned causes through recommendation engines, personalized learning tools, and impact visualization; (2) Convert Intent to Action — reducing barriers preventing motivated donors from completing donations through streamlined giving processes and community-building systems; (3) Foundational Infrastructure — building underlying data systems and standards for philanthropic AI including data pipelines, nonprofit interoperability with AI agents, and fraud detection. Solutions benefiting global health and development in low- and middle-income countries receive priority consideration. Grantees must participate in up to three learning convenings with peer organizations. Applications are submitted through the Gates Foundation portal at submit.gatesfoundation.org.
The Alfred P. Sloan Foundation Metascience and AI Postdoctoral Fellowship supports early career researchers in the social sciences and humanities who are building careers in understanding the implications of AI for the science and research ecosystem. The fellowship provides up to $250,000 over two years with particular emphasis on researchers in philosophy, sociology of science, and metascience. Areas of interest include reproducibility and transparency of machine learning-enabled science, how philosophy of science can inform optimal uses of machine learning for knowledge production, and the relative value of foundation models versus traditional machine learning for scientific discovery. This is not a program for direct AI tool development or general AI ethics; the focus is specifically on AI's impact on the practice and methodology of science. The program is delivered in partnership with UK Research and Innovation (UKRI) and the Social Sciences and Humanities Research Council (SSHRC) of Canada, and includes a fully funded residential summer school. Applications opened February 2026, with virtual office hours held in March 2026.
The Lilly Endowment Artificial Intelligence in Higher Education Initiative is a landmark $500 million multi-phase program to help Indiana colleges and universities develop comprehensive strategies for integrating AI across their institutions. Phase 1 provided planning grants of $125,000 to $300,000 for institutions to explore AI challenges and opportunities, with proposals due December 1, 2025. Phase 2 offers implementation grants of $5 million to $25 million per institution, due May 1, 2026, to fund institutional AI implementation projects. Phase 2 also includes collaboration grants from a $200 million pool for multi-institution partnerships, with concept papers due May 1, 2026 and full proposals due September 25, 2026. The initiative aims to help institutions consider how AI is reshaping teaching and learning, prepare students for an AI-shaped workforce, and develop responsible AI governance frameworks. This program is distinct from the FIPSE Advancing AI in Education Special Projects, a federal program open to all U.S. postsecondary institutions, and from the Spencer Foundation Initiative on AI and Education, which funds research rather than institutional implementation.
Schmidt Sciences' Science of Trustworthy AI RFP supports technical research aimed at improving understanding, prediction, and control of risks from advanced AI systems while enabling their safe deployment. The program funds research across three core aims: (1) characterizing and forecasting misalignment in frontier AI systems, (2) developing generalizable measurements and interventions for AI safety, and (3) overseeing superhuman-capability AI systems and addressing multi-agent risks. Tier 1 awards provide up to $1 million over 1-3 years, while Tier 2 awards range from $1-5 million or more over 1-3 years. Schmidt Sciences also offers compute access, software engineering support through the Virtual Institute for Scientific Software, API credits with frontier model providers, and community engagement opportunities for funded researchers.
Schmidt Sciences invites proposals for its 2026 Science of Trustworthy AI program, supporting technical research that improves our ability to understand, predict, and control risks from frontier AI systems. The program funds work across three research aims: characterizing misalignment in frontier AI systems, developing generalizable measurements and interventions for AI safety, and overseeing AI systems with superhuman capabilities including multi-agent risks. Research areas include interpretability, robustness, alignment, and risk prediction. Tier 1 awards up to $1M support focused investigations, while Tier 2 awards of $1M-$5M+ fund larger multi-year collaborative efforts across multiple institutions. Schmidt Sciences also provides compute resources, software engineering support, and API credits with frontier model providers. Preference is given to multi-PI collaborations. This is a rigorous scientific research program focused on technical AI safety, not policy analysis.
The Science of Trustworthy AI RFP, sponsored by Schmidt Sciences, funds technical research to understand, predict, and control risks from frontier AI systems. Two funding tiers are available: Tier 1 (up to $1M, 1-3 years) and Tier 2 ($1M-$5M+, 1-3 years).
Schmidt Sciences 2026 AI Interpretability RFP is a pilot program seeking new methods for detecting and mitigating deceptive behaviors in AI models. The program focuses on understanding deceptive behaviors in large language models, including sycophancy and knowingly giving harmful advice, problems that are appearing more frequently in frontier AI systems trained with noisy human feedback. Research areas include: developing monitoring and detection methods for model deception; creating targeted steering methods for intervening on model truthfulness; building visualizations or dashboards that communicate model truthfulness to users; applying detection and steering methods to AI debate settings or decision support systems; and studying the role of deception mitigations in multi-agent interactions. This program is distinct from the Schmidt Sciences Science of Trustworthy AI RFP, which focuses more broadly on understanding and controlling frontier AI risks. The Interpretability RFP specifically targets research on detecting deceptive LLM behaviors and developing practical interventions.
The Rising Fund: Responsible AI Funding Cycle is a grant program from The Rising Fund supporting early-stage organizations working to ensure artificial intelligence advances equity and community engagement for marginalized populations across the United States. This funding cycle focuses on organizations that strengthen civic life and address the potential harms of AI on underserved communities. Unrestricted grants are available to eligible applicants. Eligible organizations must be in an early stage and demonstrate a clear focus on enhancing equity, community engagement, and civic participation for marginalized communities. The application deadline is June 13, 2026.
APA AI2050 Prizes is an award from Schmidt Sciences, administered through the American Philosophical Association, that funds published research addressing the "Hard Problems" of Artificial Intelligence — including AI safety, ethics, alignment, and technical scalability. The program awards two prizes per cycle: one for an early-career researcher and one for an established researcher, each receiving $10,000. Eligible applicants include graduate students, faculty, and researchers working at the intersection of philosophy and technical AI disciplines. The 2026 application deadline is June 23, 2026. Submissions should represent original published or forthcoming work that substantively advances understanding of fundamental challenges in AI.
Horizon Europe: Cluster 3 - Civil security for society, Enhancing the Security, Privacy and Robustness of AI Models and Systems (SecureAI) is sponsored by the European Commission under Horizon Europe. This topic funds research to enhance the security, privacy, and robustness of AI models and systems. While not exclusively focused on youth, robust and secure AI systems are foundational for ensuring youth AI safety.
The Office of Naval Research (ONR) Long Range Broad Agency Announcement (BAA) for Navy and Marine Corps Science & Technology seeks proposals for AI research supporting naval applications, including trustworthy AI, human-AI collaboration, and AI for decision support. Emphasis is on basic and applied research (TRL 1-5) advancing artificial intelligence for future naval capabilities.
Metaplanet Holdings, founded by Skype co-founder Jaan Tallinn, provides grants and investments for technology and research that supports long-term human survival, with AI existential safety as a primary focus. The fund supports AI safety research, AI alignment work, governance initiatives, and technological solutions that reduce catastrophic and existential risks from advanced AI systems. Tallinn is one of the most significant individual funders in the AI safety space, having co-founded the Centre for the Study of Existential Risk and the Future of Life Institute. Metaplanet provides flexible funding for projects that may not fit traditional grant structures.
The Vista Institute AI Policy Fellowship supports students and recent graduates in conducting independent research or serving as research assistants with law professors and AI policy experts. Fellows work on critical AI governance, AI law, and AI policy questions. Most fellows are selected through Vista's courses and the AI Law and Policy Workshop, though unsolicited proposals are occasionally funded. The program provides mentored research opportunities at the intersection of AI technology and public policy, helping develop the next generation of AI governance professionals.
Manifund is an innovative regranting and community funding platform that connects AI safety researchers and projects with funders. The platform enables both regranting (where designated regrantors allocate funds to promising projects) and impact certificates (where project creators list their work for community funding). AI safety and existential risk reduction are primary focus areas. Projects can range from small independent research efforts to larger organizational initiatives. The platform provides transparency through public project listings and funding decisions, enabling rapid deployment of funds to promising AI safety work that might not fit traditional grant timelines.
The Effective Altruism Infrastructure Fund (EAIF) provides grants for projects that build the effective altruism community and support high-impact cause areas, with AI safety being a primary focus. The fund supports AI safety field-building, community organizing, research infrastructure, educational programs, career development initiatives, and organizational capacity building in the AI safety ecosystem. EAIF complements the Long-Term Future Fund (which focuses on direct AI safety research) by funding the infrastructure and community that enables safety research to happen. Applications are reviewed on a rolling basis with decisions typically made within 4-8 weeks.
The Auspicious Fund invests in early-stage startups working at the intersection of AI and humanity, with a focus on ensuring AI development benefits people broadly. The fund supports companies building responsible AI products, AI safety tools, AI-powered solutions for social challenges, and technologies that keep humans in the loop of AI systems. The fund provides capital, mentorship, and access to a network of AI researchers and entrepreneurs committed to beneficial AI development.
The Foundational Research Grants (FRG) program, sponsored by the Center for Security and Emerging Technology (CSET), supports exploration of foundational technical topics related to the potential long-term national security implications of AI. Areas of interest include AI assurance for general-purpose systems in open-ended domains, technical tools for external scrutiny of AI to ensure safe and ethical development and use, and frontier AI risks and regulations. The program aligns with AI safety testing and certification by advancing understanding of underlying technical issues relevant to strategic and policy perspectives on AI.
The Safe AI Fund (SAIF), founded by former Y Combinator president Geoff Ralston, is an early-stage venture fund providing $100,000 investments to startups developing tools that enhance AI safety, security, and responsible deployment. Investments are structured as a SAFE with a $10 million post-money cap following the standard YC format. The fund has built a portfolio of over 25 companies across AI agent safety, interpretability, risk management, cybersecurity, and responsible AI deployment. Beyond capital, SAIF provides weekly mentorship office hours with Geoff Ralston and access to a network of leading AI-focused venture capitalists, institutional investors, and strategic partners. The fund predominantly invests in US-based startups but is open to compelling opportunities globally.
Small Business Innovation Research and Small Business Technology Transfer Programs (SBIR/STTR) Phase I FY 2026 Release 1 is sponsored by the U.S. Department of Energy Office of Science. It supports small business R&D addressing DOE mission needs, including advanced scientific computing, AI/ML, and related technologies; AI safety and risk mitigation proposals can align with topics in computational science and high-performance computing.
The Vista Institute AI Policy Fellowship, sponsored by the Vista Institute for AI Policy, offers grant-based fellowships supporting independent research or research assistance positions focused on AI policy. The program targets students and recent graduates who want to contribute to AI governance and policy research.
NSF's ExpandAI program invests $16.3 million to advance AI innovation by strengthening and broadening participation in AI research and education at minority-serving institutions (MSIs) including HBCUs, HSIs, Alaska Native Serving Institutions, and Predominantly Black Institutions. Funding supports strengthening research programs in AI, recruiting faculty and staff with AI expertise, creating bridge programs for prospective graduate students, leading workshops, providing access to research resources, community-building, and seeding ethical and responsible AI practices into education. The program aims to develop a diverse, well-trained national AI workforce.
Bridge to Artificial Intelligence (Bridge2AI) - Network for AI Health Science is sponsored by NIH Common Fund. As part of the Bridge2AI program's second stage, the 'Network for AI Health Science' initiative will bring together scientific experts to develop safety measures for responsible AI use and research in health sciences.
Foresight Institute's AI for Science & Safety Nodes program provides grants of $10,000-$100,000 (with higher amounts available for AI safety focus areas) to individuals, teams, and organizations working on AI-first projects. The program operates from hubs in San Francisco and Berlin (launching April 1, 2026) and supports projects across seven focus areas: AI for Security, Private AI, Decentralized & Cooperative AI, AI for Science & Epistemics, AI for Neuro/Brain-Computer Interfaces, AI for Longevity Biotechnology, and AI for Molecular Nanotechnology. Beyond funding, recipients receive office and event space at SF or Berlin hubs, access to local private compute infrastructure, and connection to the Foresight community. The total annual pool is approximately $3 million. Applications are reviewed on a rolling monthly basis (last day of each month) until capacity is reached. The program is supported by Protocol Labs, Gigafund, and 100 Plus Capital.