Humanity AI's First $18M: How a 10-Foundation Coalition Just Set the Public-Interest AI Funding Map
May 16, 2026 · 7 min read
Claire Cummings
On May 13, the Humanity AI coalition disclosed how it plans to spend the first $18 million of the $500 million the ten participating foundations pledged in October 2025. Five million dollars went to ten inaugural grantees at exactly $500,000 each. Three million dollars went to a single anchor initiative — AI Civics, led by Data & Society in partnership with the Digital Public Library of America. The remaining ten million dollars was set aside for a forthcoming open call that the coalition says will launch this summer. For the public-interest AI field, this is the most consequential single funding decision of 2026 to date. It defines the doctrinal frame that the field's largest philanthropic pool will operate inside for the rest of the decade.
The mechanics matter, and the mechanics are unusual. Humanity AI is a pooled philanthropic fund, not merely a statement of coordinated grantmaking intent. The ten founding partners — the Doris Duke Foundation, Ford Foundation, Lumina Foundation, Kapor Foundation, the John D. and Catherine T. MacArthur Foundation, the Mellon Foundation, Mozilla Foundation, Omidyar Network, the David and Lucile Packard Foundation, and the Siegel Family Endowment — agreed in October to put real shared money behind a coordinated thesis. Rockefeller Philanthropy Advisors serves as fiscal sponsor and is hiring an executive director to run the pooled vehicle. That structure is what allowed the first eighteen-million-dollar tranche to land at ten named organizations on a single day, rather than dribbling out across ten separate program officers' calendars over twelve months.
The thesis is that AI policy and AI's impact on society are being shaped almost entirely by the developers building the systems, and the philanthropic field needs to fund a counterweight strong enough to put people, communities, and democratic institutions back inside the loop. The Chronicle of Philanthropy summarized the framing in October as foundations explicitly wanting to "curb AI developers' influence." That framing is reflected in the choice of inaugural grantees. The Distributed AI Research Institute, the AI Now Institute, Partnership on AI, the Center for Democracy and Technology, and the Council on Foreign Relations LEAD AI program are all organizations whose principal activity is producing public-interest AI policy research, governance proposals, or accountability frameworks aimed at the operators of frontier AI systems. The Pulitzer Center funds journalism on AI. TechEquity and Kinfolk Tech are focused on labor and community impact. Student Defense runs its SHAPE AI program inside higher-education advocacy. The Center on Resilience & Digital Justice anchors the civil-liberties end of the portfolio.
What is notably absent is anything that looks like direct foundational AI research or model development. Humanity AI is not funding the next Llama or the next Claude. It is funding the institutional civil society that surrounds, scrutinizes, and constrains the next Llama and the next Claude. That is the signal that grantseekers, especially organizations contemplating applying to the summer open call, should internalize before they touch a draft proposal.
What the $500,000 award size tells you
Ten organizations each received exactly half a million dollars. That is not a coincidence. It is the price point Humanity AI has chosen for the seat at the table — meaningful enough to fund a serious workstream or a small team for a year, modest enough to spread across ten named grantees in the first batch and still leave $10 million for the open call. Compare that to the $3 million single award to the Data & Society AI Civics anchor. The coalition is willing to write a single check six times the size of the standard inaugural grant when it decides one organization will carry a specific cross-coalition initiative. The signal: standard membership in the public-interest AI portfolio is $500K. Anchor responsibility for a named workstream can be six times that.
For grantseekers, the implication is that the realistic ask in the open call is going to land in the $250K-to-$750K range for a project grant. Asking for $2 million as a first-time Humanity AI applicant, without an anchor mandate from the coalition itself, would be reading the room incorrectly. Asking for $100,000 would underuse the vehicle and signal the work is not at coalition scale. The middle of that range is where the open call will live.
The five issue areas — and what they exclude
When the coalition announced in October, it named five issue areas the pooled fund and the aligned grantmaking would prioritize: democracy, education, humanities and culture, labor and economy, and security. The May tranche maps cleanly onto those five. AI Now and CDT cover democracy and policy infrastructure. The Pulitzer Center sits inside humanities and journalism. TechEquity and Kinfolk Tech cover labor. Student Defense covers education. Council on Foreign Relations LEAD AI covers security. The Center on Resilience & Digital Justice and DAIR straddle civil rights and democracy. AI Civics is the cross-cutting public-knowledge initiative.
What this categorization quietly excludes is also informative. Humanity AI is not funding open-source model development, AI safety research that operates inside frontier labs, alignment work in the technical-AI-safety tradition, or AI-for-science applications. Those frames have other large funders — Schmidt Sciences, Open Philanthropy, the NSF's TechAccess and AI-Ready America programs, and several frontier-lab internal grant pools. (Our deep dive on the Schmidt Sciences Science of Trustworthy AI RFP covers the alignment side of the philanthropic AI map, and our analysis of NSF TechAccess covers the federal infrastructure side.) Humanity AI is the public-interest, civil-society, governance-and-accountability quadrant of the same map. Organizations applying for the summer open call should frame their work inside democracy, education, humanities, labor, or security — and not inside the technical-AI-safety frame, which has different funders and a different theory of change.
The summer open call: what to prepare now
Humanity AI says the $10 million open call will launch in summer 2026, with eligibility and deadline details forthcoming. Coalition staff have signaled that the call will prioritize "organizations working at the intersection of AI and public interest, particularly those closest to AI's impact." Three preparatory moves are worth making before the call opens, because organizations that wait until the RFP drops will be writing inside a four-to-six-week window against a national applicant pool that will be deep.
First, identify which of the five issue areas best frames your work and write the case for fit. The ten inaugural grantees are, in effect, the rubric. If your work does not credibly belong on a list that includes DAIR, CDT, AI Now, Partnership on AI, TechEquity, and the Pulitzer Center, the case for funding under Humanity AI is going to be harder than the case for the same work under a different philanthropic frame. Read the public materials from those ten organizations, find the two whose work yours most resembles, and articulate the differentiation.
Second, line up the community partnership. Several of the inaugural grants — most visibly the Center on Resilience & Digital Justice and TechEquity awards — went to organizations whose primary credential is proximity to communities affected by AI deployments. The coalition is not going to fund Washington-based policy shops to the exclusion of frontline groups. If your organization is a research institute or a national policy nonprofit, identify the community-based partner you can credibly co-lead the work with and start the relationship now, not in the proposal-writing window.
Third, understand the fiscal-sponsorship mechanics. Rockefeller Philanthropy Advisors manages the pooled fund, but the individual founding foundations also continue to make aligned grants from their own portfolios. That means a grantseeker pursuing Humanity AI alignment has two parallel paths: the pooled-fund open call, and direct cultivation with the individual program officers at MacArthur, Ford, Mellon, Mozilla, Omidyar, Packard, Lumina, Kapor, Doris Duke, and Siegel whose AI portfolios are now coordinated around the same five issue areas. The strongest grantseekers will pursue both paths in parallel rather than treating the pooled fund as the only entry point.
What the timing means for nonprofit budgets
For organizations whose 2026 operating plan depends on Humanity AI funding, the practical timing is tight. A summer open call, with even a generous review period, points to award decisions in late 2026 and grant start dates in 2027. That means the $10 million pool will not show up in nonprofit operating budgets until next fiscal year. Organizations that need bridge funding to keep public-interest AI work alive through the rest of 2026 will have to look elsewhere — to the individual founding foundations' direct grantmaking, to other AI philanthropy pools, or to general operating reserves. The inaugural eighteen-million-dollar tranche is real money, but the portion granted so far lands in the accounts of the ten inaugural grantees and the AI Civics anchor, not in the field at large. The field-at-large moment is the summer call.
The bigger frame is that Humanity AI has, in a single eighteen-million-dollar announcement, made the public-interest AI civil-society field denser, better-resourced, and more institutionally coordinated than it has been at any point in the technology's history. The next decade of AI policy in the United States is going to be argued largely between frontier developers on one side and a coalition of well-funded civil-society organizations on the other. That coalition now has a $500 million pledge behind it and a coordinated grantmaking infrastructure through which to deploy it. The summer open call is the door through which the next ring of organizations will join it.