DARPA Launches CLARA Program to Fund AI That Can Prove Itself
March 7, 2026 · 2 min read
Jared Klein
DARPA has opened its newest AI research program to proposals, and the premise is blunt: modern machine learning is powerful but fundamentally untrustworthy for high-stakes applications. The Compositional Learning-And-Reasoning for AI Complex Systems Engineering (CLARA) program wants to fix that.
Why DARPA Is Betting on Provable AI
The gap CLARA targets is well-documented across both academic literature and operational failure reports. Today's ML systems achieve impressive benchmark performance but cannot explain their reasoning, fail unpredictably on edge cases, and resist formal verification. In defense, healthcare, and critical infrastructure contexts, that opacity is a dealbreaker.
CLARA's solution: fund research that tightly integrates machine learning with automated reasoning — not loose coupling or post-hoc explanation layers, but genuine compositional architectures where logical proofs and hierarchical reasoning blocks are part of the system from the ground up.
DARPA's own framing sets the bar high: "Assurance under CLARA means verifiability with strong explainability to humans, based on automated logical proofs and hierarchical, vetted logic building blocks."
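To make the integration idea concrete, here is a toy sketch (not from the solicitation — every name and rule below is hypothetical) of the pattern CLARA describes: a learned component whose outputs are gated by an automated-reasoning check, so each result carries machine-checkable evidence rather than a bare prediction.

```python
# Toy illustration of ML + automated-reasoning composition.
# The "learned" and "reasoning" parts are stand-ins, not real CLARA tools.

from dataclasses import dataclass

@dataclass
class Certificate:
    """Machine-checkable evidence that an output satisfies a logical rule."""
    claim: str
    holds: bool

def learned_controller(speed: float) -> float:
    """Stand-in for an ML component: proposes a braking force."""
    return 0.5 * speed  # pretend this came from a trained model

def reasoning_check(speed: float, force: float) -> Certificate:
    """Stand-in for an automated-reasoning block: checks a simple
    logical invariant about the proposed action."""
    ok = 0.0 <= force <= speed  # invariant: brake force bounded by speed
    return Certificate(claim=f"0 <= {force} <= {speed}", holds=ok)

def assured_controller(speed: float) -> tuple[float, Certificate]:
    """Composition: the learned proposal only passes with its certificate;
    a failed check falls back to a vetted safe default."""
    force = learned_controller(speed)
    cert = reasoning_check(speed, force)
    if not cert.holds:
        force = 0.0  # vetted fallback action
        cert = reasoning_check(speed, force)
    return force, cert

force, cert = assured_controller(10.0)
print(force, cert.holds)  # 5.0 True
```

The point of the pattern, as CLARA frames it, is that the check is not a post-hoc explanation layered on top: the system's contract is that no output leaves the composition without its certificate.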
Two Technical Areas, Open-Source Required
The program is structured across two technical areas. TA1 funds development of new high-assurance machine learning/automated reasoning (ML/AR) composition approaches — the theory, algorithms, and working code. TA2 builds a software composition library that integrates validated TA1 tools into a common framework.
Awards reach up to $2 million per team. Universities, nonprofits, and small businesses are all eligible. One notable requirement: all software deliverables must be released under a permissive open-source license, meaning successful CLARA research will feed directly into the broader AI safety ecosystem.
April 10 Deadline, Fast Award Timeline
Full proposals are due April 10, 2026, with DARPA targeting award execution by June 9 — a 120-day turnaround from posting to contract. That pace is aggressive even by DARPA standards and suggests the agency views verifiable AI as an urgent capability gap.
Researchers working at the intersection of formal methods, automated reasoning, and machine learning should review the full solicitation details and consider whether their work fits CLARA's integration-first philosophy. For broader context on every active DARPA AI program accepting applications, in-depth analysis is available on the Granted blog.