Inside the DOGE-ChatGPT Fiasco: How AI Killed 97% of NEH Grants and What It Means for Every Applicant
March 16, 2026 · 6 min read
Claire Cummings
A $349,000 grant to replace an aging HVAC system at a North Carolina museum got flagged as a diversity, equity, and inclusion program. A documentary about Jewish women's slave labor during the Holocaust — DEI. A project to digitize photographs of Appalachian residents — also DEI. In 22 days, the Department of Government Efficiency gutted the National Endowment for the Humanities, terminating roughly 97 percent of active grants using a process that outsourced critical judgment to OpenAI's ChatGPT.
The details, unsealed in depositions filed March 6, 2026, in the US District Court for the Southern District of New York, don't just reveal incompetence at one small agency. They expose a template for AI-assisted decision-making in federal grantmaking that should alarm every organization that depends on government funding — which is to say, nearly all of them.
The 120-Character Verdict
The methodology was breathtakingly crude. DOGE employees fed grant descriptions into ChatGPT with a single prompt: "Does the following relate at all to DEI? Respond factually in less than 120 characters." The chatbot's responses — yes or no, plus a brief rationale — were pasted into a spreadsheet. That spreadsheet became the kill list.
No definition of DEI was provided to the AI. No human reviewed whether the chatbot's reasoning made sense. No subject-matter experts weighed in. The system flagged grants containing terms like "BIPOC," "LGBTQ," "Tribal," and "homosexual" regardless of actual program content. When the High Point Museum's HVAC grant came up, ChatGPT dutifully explained that "improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences." That was enough.
This wasn't a supplement to human judgment. Depositions from two DOGE team members reveal it replaced the analysis NEH's own career staff had already completed. Staff had reviewed the grants and marked many as "N/A" — not violating any executive order. DOGE overrode those assessments.
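The pipeline the depositions describe — one fixed prompt, a verdict per grant, a spreadsheet as the kill list — can be sketched in a few lines. This is an illustrative reconstruction, not DOGE's actual code: `ask_chatbot` is a hypothetical stand-in for the real ChatGPT call, and its crude term-matching is an assumption meant to mimic the failure mode the court filings document.

```python
import csv

# The single prompt quoted in the depositions. No definition of "DEI"
# was supplied to the model, and no human review step followed.
PROMPT = (
    "Does the following relate at all to DEI? "
    "Respond factually in less than 120 characters."
)

def ask_chatbot(prompt_and_description: str) -> str:
    """Hypothetical stand-in for the live ChatGPT call.

    The real system reportedly triggered on demographic terminology
    regardless of program content; this stub imitates that behavior
    to illustrate the failure mode, not the model itself.
    """
    flag_terms = ("BIPOC", "LGBTQ", "Tribal", "homosexual", "diverse")
    text = prompt_and_description.lower()
    if any(term.lower() in text for term in flag_terms):
        return "Yes - references identity-related terms."
    return "No."

def build_kill_list(grants: dict[str, str], out_path: str) -> list[str]:
    """Write a verdict-per-grant spreadsheet; return IDs flagged 'Yes'."""
    flagged = []
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["grant_id", "verdict"])
        for grant_id, description in grants.items():
            verdict = ask_chatbot(f"{PROMPT}\n\n{description}")
            writer.writerow([grant_id, verdict])
            if verdict.startswith("Yes"):
                flagged.append(grant_id)
    return flagged
```

Run against a description like the High Point Museum's — an HVAC replacement pitched as serving "diverse audiences" — the stub flags it, just as the real system did. The point of the sketch is how little machinery sits between a keyword hit and a terminated grant.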
How the 22-Day Dismantling Unfolded
The timeline, reconstructed from court filings and Inside Higher Ed's reporting, moves fast:
March 12, 2025: DOGE's Small Agencies Team, led by GSA employee Justin Fox and his superior Nate Cavanaugh, met with NEH leadership.
Late March 2025: Fox drafted termination letters for grants the ChatGPT spreadsheet had flagged. Internal emails show Fox warning NEH Acting Chair Michael McDonald: "We're getting pressure from the top," and expressing his preference that McDonald "remain on our side."
April 1, 2025: NEH canceled approximately $100 million in grants and terminated 65 percent of its employees.
McDonald, who served as NEH General Counsel for over two decades before becoming Acting Chair, later acknowledged in his deposition that he felt "much less confident" about terminating the flagged grants but proceeded "in the interest of time." The termination letters themselves contained factual errors, citing executive orders that did not exist.
Within three weeks, an agency that had operated since 1965 — supporting the preservation of historical records, archaeological research, language documentation, and public humanities programming across all 50 states — was functionally hollowed out.
The Legal Challenge
Three of the most prominent humanities organizations in the country — the American Council of Learned Societies (ACLS), the American Historical Association (AHA), and the Modern Language Association (MLA) — filed suit in May 2025 and moved for summary judgment in March 2026. Their arguments target three constitutional violations:
First Amendment: The grant terminations were viewpoint-based. Projects were flagged not because they failed to meet programmatic criteria but because they touched subjects — Jewish history, Native American languages, LGBTQ experiences, racial history — that the administration associated with disfavored perspectives. One Jewish Telegraphic Agency investigation found that multiple Jewish-themed grants were tagged as "DEI" and canceled.
Equal Protection (Fifth Amendment): The ChatGPT flagging system disproportionately targeted grants involving racial minorities, indigenous communities, and LGBTQ populations — not through deliberate targeting of those groups, but through the chatbot's pattern-matching on demographic terminology.
Separation of Powers: DOGE staff — not NEH leadership, and not Congress — directed the terminations. The NEH's authorizing statute vests grantmaking authority in the agency's chair and advisory council. Congress appropriated the funds. An ad hoc efficiency team from the General Services Administration short-circuited both.
The discovery process also revealed that McDonald and DOGE staff communicated via Signal with auto-delete enabled, violating Federal Records Act requirements for preserving official government communications. And in a detail that drew particular scrutiny, McDonald allegedly solicited the Tikvah Fund for a single-source award valued at $10 million after the mass terminations — raising questions about whether the purge cleared space for politically favored organizations.
Why This Matters Beyond the Humanities
If you're a researcher whose NIH or NSF grant has nothing to do with humanities, you might be tempted to dismiss this as someone else's problem. That would be a mistake.
The NEH episode is a proof of concept. The same ChatGPT-and-spreadsheet methodology could be applied to any federal grant portfolio. The General Services Administration is already soliciting public comments on a proposal requiring all 220,000+ federal grantees to certify they don't engage in "diversity, equity, and inclusion" initiatives. By April 2025, the NSF had canceled 1,574 grants, approximately 90 percent of them described as "related to DEI." NIH has terminated dozens of active research grants. The Department of Education has cut over $600 million in teacher-training grants.
The pattern is consistent: keyword-based screening, minimal human review, aggressive timelines, and institutional leadership that defers to external pressure rather than defending programmatic integrity.
As Granted News reported, the lawsuit has already produced the most detailed public record of how AI-assisted grant review actually works inside the current administration. That record is damning — not because the technology is inherently flawed, but because it was deployed without guardrails, definitions, or accountability.
What Grant Applicants Should Do Now
The practical implications are immediate:
Audit your language. Review active and pending grant applications for terminology that pattern-matching systems might flag. This doesn't mean sanitizing your work — it means understanding what automated reviewers look for and ensuring your descriptions emphasize programmatic outcomes rather than identity-category labels.
Document everything. If your grant is terminated or modified, preserve all communications. The NEH plaintiffs' case was strengthened enormously by discovery. Federal agencies are required to maintain records of decision-making processes, and AI-generated spreadsheets are discoverable.
Watch the certification requirement. The GSA's proposed DEI certification for all federal grantees has a comment period ending in late March. If finalized, it would apply across every federal agency, not just those that have already conducted purges. Institutions that receive any federal funding — universities, hospitals, nonprofits, state agencies — would need compliance protocols.
Understand your legal position. Courts have already blocked some of the administration's most aggressive grant actions. The NEH lawsuit is testing whether AI-assisted, viewpoint-based grant termination violates constitutional protections. Depending on the outcome, organizations that lost funding may have grounds for reinstatement.
Diversify your funding base. The 2026 philanthropy outlook shows private foundation giving holding steady. Organizations that have relied primarily on federal grants should be actively pursuing foundation, corporate, and state-level funding. The One Big Beautiful Bill Act's new above-the-line charitable deduction may increase individual giving, creating additional non-federal revenue streams.
The Deeper Question
The NEH case will ultimately test a question that goes well beyond humanities funding: Can the federal government use AI to make consequential decisions about who receives public money — and if so, under what constraints?
The depositions suggest that the current answer is "with no constraints at all." A chatbot with no training on grant policy, no definition of the terms it was asked to evaluate, and no mechanism for appeal or review became the de facto decision-maker for $100 million in public funding. The humans in the loop — career staff who had actually read the grants — were overridden.
For the 220,000+ organizations that receive federal grants, the stakes of this lawsuit extend far beyond the humanities. The methodology DOGE used at the NEH wasn't a one-off experiment. It was a template. And whether that template becomes standard practice or gets struck down in court will shape the federal funding landscape for years to come.
If you're navigating a shifting federal landscape and need to identify alternative funding sources, tools like Granted can help you search across federal, state, and foundation opportunities — and build competitive proposals before deadlines close.