MacArthur's $500M Humanity AI Bet: Biggest Foundation AI Commitment Yet
February 25, 2026 · 5 min read
Jared Klein
Five hundred million dollars across ten foundations, five priority areas, and a five-year window. The Humanity AI coalition, announced in October 2025, is the largest coordinated philanthropic commitment to shaping how artificial intelligence affects ordinary people -- and the pooled fund starts making grants this year.
That figure dwarfs what any single U.S. foundation has spent on AI policy, workforce readiness, or digital rights in the past decade. But the real story is not the headline number. It is the structure: ten funders, each retaining independent grantmaking alongside a shared pooled fund, creating at least eleven distinct pathways for researchers, nonprofits, and advocacy organizations to secure funding. Understanding who sits at this table -- and what each funder actually cares about -- is the difference between a competitive proposal and a generic one.
Who Is at the Table
The ten founding members span nearly every corner of institutional philanthropy. The MacArthur Foundation and Omidyar Network co-chair the initiative. The remaining eight are the Ford Foundation, Mellon Foundation, Mozilla Foundation, Doris Duke Foundation, Lumina Foundation, Kapor Foundation, David and Lucile Packard Foundation, and the Siegel Family Endowment.
None of the ten has disclosed its individual commitment within the $500 million total. But their existing portfolios tell you where each will likely direct funds. MacArthur's new AI Opportunity Big Bet focuses on the economy and workforce -- particularly in Chicago -- and the foundation has already moved $10 million in initial grants, including $2 million to the AI Now Institute and $2 million to Brookings. Omidyar has a long track record in digital rights and civic tech. Ford runs a substantial technology-and-society program. Mellon anchors the humanities and cultural preservation angle. Mozilla funds open-source tools and algorithmic accountability. Lumina centers workforce education and credentialing. Kapor leads on equity in tech. Packard invests in environmental and scientific applications. Siegel Family Endowment focuses on technology's impact on learning and civic life.
The coalition remains open to additional funders. For applicants, that means the landscape of Humanity AI-aligned dollars will likely grow beyond the initial $500 million.
Five Priority Areas and What They Actually Mean
Humanity AI's grantmaking spans five domains. Each one maps to specific types of work -- and specific types of organizations:
Democracy. Funding for projects that develop frameworks, tools, and partnerships to ensure AI strengthens democratic processes rather than eroding them. Think election integrity research, algorithmic transparency in government decision-making, and civil society capacity building. Ford, Omidyar, and Mozilla are the natural leads here, given their existing democracy portfolios.
Education. AI implemented in the interest of students -- not surveillance tech or edtech for its own sake. Lumina and Siegel Family Endowment have the deepest existing investments. Proposals that address AI literacy, equitable access to AI-powered learning tools, or community college and workforce training pipelines will align most closely.
Humanities and Culture. Protecting artists and creators from IP theft while harnessing AI to expand creative possibility. Mellon and Doris Duke are the anchors. Organizations working on copyright frameworks, AI-generated content governance, and tools that empower rather than replace creative professionals should target this lane.
Labor and Economy. This is MacArthur's primary focus through its AI Opportunity program. The emphasis is on ensuring AI enhances how people work rather than simply automating them out of jobs -- with a particular interest in frontline workers and communities historically excluded from technology's benefits.
Security. Holding AI developers and deployers to protective standards. Proposals addressing automated decision-making in criminal justice, surveillance oversight, data privacy in AI systems, or safety standards for AI in critical infrastructure fit here. Packard's science and engineering focus and Mozilla's technical accountability work make them likely co-leads.
How the Money Flows -- Two Tracks
This is the structural detail most coverage has missed. Humanity AI operates on two parallel tracks, and they have different timelines and application processes.
Track 1: Aligned grantmaking (already underway). Each of the ten foundations directs funding through its own existing grantmaking process toward Humanity AI's priority areas. MacArthur's $10 million in initial grants is the clearest example. This means ten separate application pathways, each with its own eligibility criteria, deadlines, and reporting requirements. A proposal rejected by MacArthur's AI Opportunity program might fit Ford's technology-and-society portfolio or Lumina's workforce lens. Research each funder independently.
Track 2: Pooled fund (grants starting 2026). Rockefeller Philanthropy Advisors serves as fiscal sponsor and is hiring a small staff, starting with an Executive Director, to manage the pooled fund. Grants from this centralized pot begin in 2026, though specific RFP timelines and application mechanics have not been announced. Organizations can sign up for updates at the Humanity AI website.
MacArthur is also actively recruiting a Director of AI Opportunity -- a five-year, Chicago-based role that will shape the Big Bet program's grantmaking strategy. That hire will likely define how MacArthur's share of the $500 million gets deployed.
How to Position a Proposal
The scale disparity is worth acknowledging upfront. OpenAI's nonprofit arm announced $25 billion in initial funding around the same time Humanity AI announced $500 million. As Siegel Family Endowment's Katy Knight put it, the philanthropic sector will need to be strategic about deployment because the scale difference with industry is enormous.
That constraint makes alignment critical. Four strategies for organizations preparing proposals:
Pick a lane. The worst approach is a vague pitch about "responsible AI." Each priority area has distinct evaluation criteria. A proposal about AI-driven voter suppression detection goes to the democracy track. A proposal about AI tutoring equity goes to education. Do not straddle.
Know your funder. A workforce retraining proposal aimed at MacArthur should speak to Chicago and frontline workers. The same concept aimed at Lumina should emphasize credentialing and postsecondary access. Same project, different framing.
Lead with the human impact. The entire premise of Humanity AI is that people -- not just technologists -- should decide how AI gets used. Proposals centered on community participation, participatory design, or grassroots governance will resonate more than purely technical work.
Watch the pooled fund timeline. The Rockefeller Philanthropy Advisors team is still building out. When the first pooled-fund RFP drops, it will likely move fast. Organizations with a draft concept and letters of support already in hand will have a significant advantage.
The Humanity AI coalition represents a rare moment where major foundations have agreed not just on the problem but on a shared framework for addressing it. Whether that coordination survives contact with ten different boards and grantmaking cultures remains to be seen -- but the money is real, the timeline is now, and Granted can help you track every open opportunity as these funders begin deploying capital.
