
DARPA Is Funding AI You Can Actually Prove Works — and the Code Must Be Open Source

February 25, 2026 · 4 min read

David Almeida

There is a quiet crisis in AI deployment that no amount of benchmark performance will fix: nobody can prove these systems will do what they claim. A large language model might score well on medical licensing exams, but no hospital will stake patient safety on a system whose reasoning cannot be formally verified. A military planner might use AI to generate courses of action, but trusting those recommendations in a contested environment requires more than "it usually works."

DARPA's answer is CLARA — the Compositional Learning-And-Reasoning for AI Complex Systems Engineering program — and its solicitation, published February 10, landed with a set of requirements that should make AI safety researchers and formal methods specialists sit up straight. The program offers up to $2 million per award over 24 months, proposals are due April 10, 2026, and every piece of software produced must be released as open source, with Apache 2.0 the preferred license.

What CLARA Actually Wants

The core premise is deceptively simple: current AI systems are either fast and opaque (machine learning) or transparent and slow (automated reasoning). CLARA wants both properties in the same system.

Program manager Benjamin Grosof, working out of DARPA's Defense Sciences Office, has laid out a technical vision that goes well beyond bolting a logic module onto a language model. The solicitation explicitly criticizes the industry's current approach of "tacking specialized automated reasoning components onto a large language model," calling the result a system with "weak assurance and a lack of real safeguards."

Instead, CLARA wants tight compositional integration of ML and AR components — Bayesian systems, neural networks, and logic programs — with hierarchical structure and transparent operation. The program defines assurance as "verifiability with strong explainability to humans, based on automated logical proofs and hierarchical, vetted logic building blocks." That is a high bar, and it is intentional.
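To make "compositional integration" concrete, here is a toy Python sketch — entirely illustrative, not drawn from the solicitation — in which a stubbed ML confidence score is composed with hard logical rules, and every decision carries a human-readable trace. It is a cartoon of the proof-carrying behavior CLARA asks for, under the assumption that components expose their evidence rather than just their outputs.

```python
def ml_score(features):
    """Stand-in for a learned model: returns P(action is safe)."""
    # Toy linear score clamped into [0, 1].
    raw = 0.8 * features["redundancy"] - 0.5 * features["risk"]
    return max(0.0, min(1.0, raw))

RULES = [
    # (name, predicate over features) -- every rule must hold for approval.
    ("no_single_point_of_failure", lambda f: f["redundancy"] > 0.5),
    ("risk_below_threshold",       lambda f: f["risk"] < 0.7),
]

def assured_decision(features, threshold=0.5):
    """Compose ML confidence with rule checks; return verdict + trace."""
    trace = []
    score = ml_score(features)
    trace.append(f"ml_score = {score:.2f} (threshold {threshold})")
    ok = score >= threshold
    for name, rule in RULES:
        holds = rule(features)
        trace.append(f"rule {name}: {'holds' if holds else 'violated'}")
        ok = ok and holds
    return ok, trace

approved, proof = assured_decision({"redundancy": 0.9, "risk": 0.2})
```

The point is the shape, not the arithmetic: the logical rules are not a post-hoc filter bolted onto the model — the verdict is only ever produced together with the trace that justifies it.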

Two Technical Areas, One Open-Source Library

The program is split into two tracks. Technical Area 1 (TA1) funds the development of new high-assurance ML/AR composition approaches, including theory, algorithms, and working software implementations. This is where the fundamental research happens — new architectures for integrating probabilistic inference with logical verification, new methods for compositional reasoning, new ways to produce proofs alongside predictions.

Technical Area 2 (TA2) takes a different angle: building a software composition library that integrates and validates TA1 tools into a common framework. Think of it as the middleware layer — the infrastructure that makes individually verified components work together as a system.
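As a rough sketch of what such a composition layer might look like — purely hypothetical, since the actual CLARA framework is yet to be built — imagine every component conforming to a common interface that returns a result plus evidence, with a pipeline that chains components and concatenates their evidence into one trace:

```python
from typing import Protocol

class AssuredComponent(Protocol):
    """Common interface: every component returns a result plus evidence."""
    def run(self, x: dict) -> tuple[dict, list[str]]: ...

class Pipeline:
    """Chains components, concatenating their evidence into one trace."""
    def __init__(self, *components: AssuredComponent):
        self.components = components

    def run(self, x: dict) -> tuple[dict, list[str]]:
        trace: list[str] = []
        for c in self.components:
            x, evidence = c.run(x)
            trace.extend(evidence)
        return x, trace

class Doubler:
    """Example 'learned' transform (stubbed as arithmetic)."""
    def run(self, x):
        return {"v": x["v"] * 2}, [f"doubled {x['v']} -> {x['v'] * 2}"]

class NonNegativeCheck:
    """Example verifier: halts the pipeline if its constraint fails."""
    def run(self, x):
        assert x["v"] >= 0, "constraint violated: v must be non-negative"
        return x, ["checked v >= 0"]

result, trace = Pipeline(Doubler(), NonNegativeCheck()).run({"v": 3})
```

The design choice worth noticing is that the library enforces the interface, not the internals: any mix of neural, Bayesian, or logic-based components can be swapped in, so long as each one surrenders its evidence to the shared trace.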

Both tracks converge on the open-source requirement. DARPA is not just tolerating open source here; it is mandating it. All software developed during the program must be released under a commercialization-friendly license, with Apache 2.0 preferred. For a defense agency, that is a notable stance — it signals that DARPA sees the value of this work extending far beyond classified applications.

Where DARPA Sees This Going

The solicitation outlines three application domains that hint at the program's ambitions.

Course-of-action planning is the most obviously defense-relevant of the three. Military planners need AI systems that can evaluate complex scenarios and explain why they recommend specific actions — not just produce outputs, but produce proofs that those outputs follow from stated assumptions and constraints.

Multi-condition medical guidance targets the healthcare bottleneck head-on. Patients with multiple chronic conditions create combinatorial complexity that overwhelms even experienced clinicians. An AI system that can reason across drug interactions, comorbidities, and treatment guidelines — and prove its reasoning is sound — would be transformative. But only if that proof is legible to the physicians who have to act on it.

Supply chain and logistics rounds out the set. Global supply chains involve thousands of interacting variables, and the consequences of AI errors can cascade across continents. Verifiable reasoning about inventory levels, shipping routes, and demand forecasts would make AI-driven logistics systems trustworthy enough for actual deployment.

Who Should Apply

CLARA sits at an intersection that relatively few research groups occupy. You need expertise in both machine learning and formal methods — probabilistic programming, automated theorem proving, satisfiability solvers, type-theoretic approaches to verification, or neurosymbolic AI architectures.

University research groups with strengths in formal verification, programming language theory, or probabilistic AI should look closely. So should groups working on neurosymbolic reasoning, where the integration of neural and symbolic components is already a core research question.

Small businesses and defense contractors with AI safety practices are also well-positioned, particularly if they can demonstrate existing work on verifiable AI systems. The $2 million ceiling and 24-month period of performance make this accessible to smaller teams — you do not need a massive lab to compete.

One subtle but important detail: DARPA held an information session on February 19 to walk through the solicitation. If you missed it, the presentation slides and a FAQ document are available on the CLARA program page. Read them before writing your proposal.

The Timeline Is Tight

Proposals are due April 10, 2026, through SAM.gov under solicitation DARPA-PA-25-07-02. DARPA's target is to execute awards by June 9, within 120 calendar days of posting. That is an aggressive timeline by government standards, and it suggests the agency considers this work urgent.

For researchers who have been publishing on AI safety, formal verification of ML systems, or neurosymbolic architectures, CLARA represents a rare opportunity: DARPA funding for work that is both technically demanding and genuinely needed, with an open-source mandate that ensures the results reach the broader research community.

The April 10 deadline leaves roughly six weeks to assemble a team and write a proposal — tight but manageable if you start now. Granted can help you structure your response to match what DARPA program managers expect and identify collaborators who complement your technical profile.
