Granted

AI Ethics Grants: Who Is Funding Responsible AI Research?

February 24, 2026 · 5 min read

David Almeida

Half a billion dollars from ten foundations. Forty million from a single Open Philanthropy RFP. Eighteen million from NSF's first responsible-technology cohort. The money flowing into AI ethics, fairness, and governance research has never been larger -- and for the first time, philanthropic funders are outpacing the federal government in this space.

That shift matters for researchers and nonprofits chasing these dollars. Federal programs like NSF's Responsible AI portfolio come with rigid timelines, institutional eligibility requirements, and multi-year review cycles. Foundation money moves faster, funds advocacy and policy work the feds won't touch, and increasingly coordinates across funders to avoid duplication. Knowing who is spending what -- and on which slice of the responsible AI problem -- is the difference between a targeted proposal and a shot in the dark.

The Federal Baseline: NSF, NIST, and ReDDDoT

NSF invests over $700 million annually in AI research broadly, but only a fraction targets ethics and societal impact directly. The most relevant program is ReDDDoT (Responsible Design, Development, and Deployment of Technologies), a joint initiative between NSF and philanthropic partners including the Ford Foundation, Patrick J. McGovern Foundation, Pivotal Ventures, the Schmidt Fund for Strategic Innovation, and Siegel Family Endowment. In 2025, ReDDDoT made its inaugural awards: more than $18 million to 44 multidisciplinary teams, with Phase 2 grants of up to $1.5 million over three years. The FY25 solicitation (NSF 24-524) focuses on AI, biotechnology, and disaster mitigation, requiring teams to integrate ethical and societal considerations from the start rather than bolting them on after the fact.

NIST operates differently -- fewer grants, more standards-setting. Its AI Risk Management Framework shapes how industry and government evaluate trustworthy AI, and its measurement science grants fund applied research in AI safety and explainability at $10,000 to $500,000 per year for up to five years. In late 2025, NIST committed $20 million to two new centers -- one for AI in manufacturing, one for AI cybersecurity in critical infrastructure -- run through MITRE. These are not ethics grants in the traditional sense, but they fund the measurement infrastructure that makes AI accountability possible.

Humanity AI: Philanthropy's Coordinated Bet

The biggest development in responsible AI funding is Humanity AI, a $500 million, five-year initiative launched in October 2025 by ten major foundations: the Ford Foundation, MacArthur Foundation, Omidyar Network, Mellon Foundation, Mozilla Foundation, Doris Duke Foundation, Lumina Foundation, Kapor Foundation, David and Lucile Packard Foundation, and Siegel Family Endowment. Co-chaired by Omidyar and MacArthur, it spans five priority areas -- democracy, education, humanities and culture, labor and economy, and security.

Aligned grantmaking began in fall 2025. MacArthur moved first with $10 million in initial grants, including $2 million to the AI Now Institute and $2 million to the Brookings Institution. Rockefeller Philanthropy Advisors will distribute pooled-fund grants starting in 2026. This is not a single pot with a single deadline -- each funder retains its own grantmaking process, which means ten separate application pathways, each with different eligibility criteria and reporting requirements.

For applicants, the implication is clear: a proposal rejected by MacArthur's AI Opportunity program might fit perfectly within Ford's technology-and-society portfolio or Lumina's workforce-and-AI lens. Understanding each funder's angle within Humanity AI matters more than the headline number.

Open Philanthropy and the Technical Safety Pipeline

Open Philanthropy occupies a distinct lane. Its 2025 Technical AI Safety RFP committed roughly $40 million across 21 research directions -- adversarial testing, model transparency, interpretability, theoretical alignment -- with a streamlined application process: a 300-word expression of interest, a two-week turnaround on invitations, then full proposals. Open Philanthropy also signaled willingness to deploy substantially more depending on application quality.

Since 2014, Open Philanthropy has put hundreds of millions into AI safety research. Its grants tend to go to academic labs, independent research organizations, and individual researchers, often funding positions and infrastructure that federal grants rarely cover. The application closed in April 2025, but Open Philanthropy operates on rolling timelines and has historically issued new RFPs as prior cohorts deploy.

Mozilla, Google.org, and the Corporate Layer

Mozilla Foundation's newest program is the Democracy x AI Incubator, which backs ten projects with $1 million in total funding; the two strongest advance to up to $300,000 each over 24 months. Priority areas include algorithmic accountability, information ecosystem resilience, and community-led governance models. Applications opened in early 2026. Mozilla also runs its Fellows Program, supporting up to ten technologists annually who work at the intersection of AI and the public interest.

On the corporate side, Google.org's AI Opportunity Fund committed $75 million to AI skills training across the U.S., with a separate $15 million directed at government workforce AI training through the Partnership for Public Service. Its 2026 Impact Challenges -- AI for Government Innovation (deadline April 3, 2026) and AI for Science (deadline April 17, 2026) -- offer up to $3 million per project, with responsible AI principles baked into the evaluation criteria. Microsoft and Meta invest heavily in internal responsible AI teams and publish frameworks, but neither operates a comparable external grants program at this scale.

Where the Gaps Are -- and Where to Look Next

The Rockefeller Foundation carved out a different niche with its AI Readiness Project, launched in late 2025 to help state governments build capacity for responsible AI adoption, with plans to expand from 30 states to all 50 plus territories and Tribal Nations by 2026. It is funding at least ten state-level pilots and building a public knowledge hub -- practical, implementation-focused work that fills a gap between the research grants from NSF and the advocacy funding from Humanity AI partners.

The overall landscape reveals a division of labor. Federal agencies fund measurement science and technical research. Foundations fund policy, advocacy, community engagement, and workforce readiness. Open Philanthropy funds the deepest technical safety work. Corporate programs fund training and applied deployment. The researchers and nonprofits who land funding in this space are the ones who understand which funder owns which problem -- and write proposals that speak to that specific mandate rather than to "responsible AI" as a vague aspiration.

For organizations navigating this crowded field, Granted can surface the right opportunities across federal, foundation, and corporate funders so you spend less time searching and more time writing proposals that match.
