Ten Foundations, $500 Million, One Pooled Fund: Humanity AI's First $18M Round Signals the Open Call to Come
May 14, 2026 · 7 min read
Claire Cummings
On October 14, 2025, ten of the largest U.S. philanthropic foundations announced something the field had never quite seen at this scale: a five-year, $500 million pooled fund dedicated entirely to ensuring that artificial intelligence develops in ways that serve people and communities rather than only the technology platforms building it. The coalition called itself Humanity AI. The press release was characteristically careful — long on principles, short on grantees. The field watched and waited to see what an unprecedented coalition of competitors-turned-collaborators would actually fund.
On May 12, 2026, the field got its first answer. Humanity AI announced more than $18 million in grants — $6 million to twelve inaugural grantees at $500,000 each, plus a special $3 million commitment to a Data & Society–led AI Civics initiative, and a forward commitment of $10 million for a forthcoming open call expected to launch this summer. The selection of those twelve organizations, the size of the individual checks, and the design of the open call together tell us almost everything about what kinds of organizations Humanity AI is going to back over the next five years — and which kinds will be left looking elsewhere.
This deep dive walks through the coalition's structure, the deliberate signal embedded in the inaugural twelve, the five priority areas that will drive open call eligibility, and the practical positioning work an organization should be doing now to be ready when the application window opens.
The Coalition: Ten Foundations, One Co-Chaired Structure
The Humanity AI founding members are not a random assemblage. They represent ten of the most influential, longest-tenured, and most ideologically diverse foundations in U.S. philanthropy: the John D. and Catherine T. MacArthur Foundation, Omidyar Network, the Doris Duke Foundation, the Ford Foundation, the Lumina Foundation, the Kapor Foundation, the Mellon Foundation, the Mozilla Foundation, the David and Lucile Packard Foundation, and the Siegel Family Endowment.
Omidyar Network and MacArthur Foundation co-chair the initiative. Rockefeller Philanthropy Advisors (RPA) serves as fiscal sponsor and is managing the pooled fund, building out staffing, and hiring an executive director to lead funder engagement and learning. This is the structural design that distinguishes Humanity AI from the dozen-or-so AI safety and AI ethics funding consortia that emerged in 2023 and 2024. Those earlier efforts were largely aligned-funder collaboratives — each foundation kept its own checkbook, the group shared a strategy memo. Humanity AI is a true pooled fund. Money goes in. Grant decisions come out. The line between "Ford-funded" and "Mellon-funded" disappears at the point of the grant.
That distinction matters operationally. A pooled fund can make decisions faster than a coordinated-funder collaborative because it does not require parallel board approvals at ten separate foundations. It can also take bigger swings — checks of $500K to twelve organizations, plus $3M to one anchor, would require months of internal alignment at most individual foundations but can move through a single Humanity AI grant committee meeting.
Reading the Inaugural Twelve
The twelve organizations selected for the inaugural $500,000 grants are the clearest signal Humanity AI has yet sent about its theory of change. The list:
- AI Now Institute — Policy research on AI accountability across labor, climate, and government
- Center for Democracy and Technology — Civil rights and liberties in the digital age
- Center on Resilience & Digital Justice — Accountability and repair for digital and AI harms
- Council on Foreign Relations LEAD AI — Policy analysis on consequential AI issues
- Distributed AI Research (DAIR) Institute — Community-driven grassroots AI knowledge
- Kinfolk Tech — Art, technology, and collective memory
- Partnership on AI — Collaborative research across academia, civil society, and industry
- Pulitzer Center — Global journalism training on AI coverage
- Student Defense (SHAPE AI) — Practical AI guidance for under-resourced schools
- TechEquity — Industry accountability for economic prosperity
A separate $3 million commitment went to Data & Society, in partnership with the Digital Public Library of America, to anchor the "AI Civics" initiative — a community-governance project that will use the country's library infrastructure as the on-ramp for public deliberation about AI.
What does the list tell us? Three things matter.
First, this is not a Silicon Valley list. None of these organizations are AI safety labs, none are technical alignment research institutes, none are direct-to-deployment policy shops. The selection skews heavily toward civil-society organizations whose primary discipline is governance, accountability, or community organizing — not AI engineering. This is consistent with the founding press release, which explicitly framed Humanity AI as a counterweight to AI developers' influence rather than a contributor to AI capability research.
Second, the $500K grants are deliberately structured as general operating support rather than project funding. For mid-sized nonprofits in policy, journalism, and community organizing, $500K of unrestricted multi-year capacity is transformative. It is also a category of grant that has been getting scarcer in mainstream philanthropy. The coalition is signaling that it will be a meaningful source of the unrestricted general operating support that civil-society AI organizations have been asking for since 2022.
Third, the inclusion of the Pulitzer Center for global AI journalism training and Kinfolk Tech for art and collective memory tells us that Humanity AI is not narrowly defining "AI for the public good" as policy and regulation. The coalition is funding storytelling, cultural memory, and journalism as legitimate AI public-interest work. For organizations whose work touches AI through cultural, narrative, or artistic channels rather than legislative ones, the open call is the place to watch.
The Five Priority Areas
Humanity AI has explicitly named five priority areas, and the open call is expected to fund within them:
1. Democracy. Partnerships and frameworks to protect democratic rights and freedoms in the AI era. The Center for Democracy and Technology and the Council on Foreign Relations LEAD AI grants signal this is a substantial commitment area.
2. Education. Ensuring AI implementation in schools expands knowledge access while strengthening learning outcomes for all students. The Student Defense (SHAPE AI) grant indicates a specific focus on under-resourced K–12 contexts, not on elite university AI literacy.
3. Humanities and Culture. Positioning AI as a creative enhancer rather than replacement, protecting artists' and creators' intellectual property and ownership rights. Mellon Foundation's longstanding humanities portfolio is the dominant influence here; Kinfolk Tech is the inaugural exemplar.
4. Labor and Economy. Using AI to enhance human work performance across communities rather than enabling wholesale workforce replacement. AI Now Institute and TechEquity anchor this priority.
5. Security. Establishing rigorous safety standards for AI development and deployment, from autonomous vehicles to automated lending decisions. This is the priority area where the coalition's framing diverges most clearly from the AI safety lab community — Humanity AI's "security" includes algorithmic decision-making in everyday consumer systems, not only frontier AI alignment.
The $10 Million Open Call: What to Watch For
Humanity AI has committed $10 million to a forthcoming open call expected to launch in summer 2026. The coalition has stated that the application criteria, timeline, and focus areas will be announced "in the coming months." Organizations interested in applying are directed to join the Humanity AI mailing list at humanityai.ai.
A few practical inferences are reasonable from the structure of the first round:
- Average grant size will likely cluster around $500K. If the open call follows the inaugural pattern, the math points to roughly 15–20 grantees from the $10M open call pool. An organization positioning for a $50K project grant is targeting the wrong fund. An organization positioning for a $500K–$1M general operating support grant is in the right range.
- General operating support is more likely than project funding. The inaugural round was structured as institutional capacity-building rather than discrete projects. Organizations should frame proposals around mission alignment, organizational capacity, and ability to absorb a substantial multi-year investment — not narrowly scoped deliverables.
- U.S.-focused civil-society organizations will be prioritized. The inaugural twelve are all U.S.-based with predominantly U.S. policy and community focus. Global organizations should not assume eligibility until the open call criteria are published.
- Track record on AI-and-society work matters. Every inaugural grantee has at least three years of explicit AI public-interest work in their published portfolio. New organizations pivoting to AI as of 2025 will likely face a steeper credibility threshold.
Positioning for the Open Call
For organizations whose work credibly aligns with one of the five priority areas, the next four to twelve weeks are positioning time, not application time. The substantive work to do now:
- Publish a clear, accessible articulation of how your work intersects with AI's impact on people and communities. Humanity AI grantmakers will be reading public-facing materials.
- Document organizational capacity: board structure, audited financials, a multi-year program track record, and the ability to absorb a $500K+ unrestricted grant without operational strain.
- Identify a credible coalition or partnership angle. The AI Civics initiative shows the coalition is willing to anchor partnerships between traditionally separate institutional types (libraries plus research nonprofits, in that case). Original partnership designs may carry weight.
- Track the humanityai.ai signup list closely. When the application window opens, it will likely be short.
For organizations whose work does not align with the five priority areas — particularly technical AI safety research labs, AI-deployment startups, and AI-adjacent academic centers without a civil-society arm — Humanity AI is not the fund. The coalition has been unusually explicit about who it is funding and who it is not, and the inaugural twelve confirm the framing. That clarity is itself a service to the field: it lets organizations spend their proposal-writing time on funders whose theory of change matches their work.
The next signal will come this summer. The $10 million open call is the moment Humanity AI moves from a foundation-curated portfolio to a field-open competition — and it is the first real test of whether a $500 million pooled fund can scale beyond the relationships its ten founding members already had.