ARPA-H's IGoR Program: Why Only Three Teams Will Win the Five-Year AI Biomedical Research Contract
May 14, 2026 · 7 min read
David Almeida
On May 5, 2026, the Advanced Research Projects Agency for Health announced the launch of its Intelligent Generator of Research (IGoR) program — a five-year, multi-team initiative to compress biomedical discovery cycles by at least an order of magnitude using AI-driven hypothesis generation, automated experimentation, and continuously refined models of biological systems. ARPA-H expects to award approximately three Other Transaction agreements covering all four of IGoR's technical components. Solution summaries are due June 25, 2026 at 12:00 PM Eastern. Full proposals follow on August 6, 2026.
For application teams who saw the announcement and immediately reached for a familiar template — a single PI, a single institution, a narrowly scoped tool proposal — the structure of this opportunity is going to feel disorienting. IGoR is not an R01 with a bigger budget. It is not an SBIR with a longer performance period. It is a consortium contract, structured as an Other Transaction agreement, that requires the awardee to deliver a working closed-loop research system spanning hypothesis generation, experiment design, automated wet-lab execution, and model refinement. Three awards. Four technical components per award. Five years of execution.
This deep dive unpacks why IGoR's design favors integrated multi-institution consortia over single labs, what the three-phase structure tells us about ARPA-H's commercialization expectations, how the June 25 solution summary functions as a high-stakes gate (and what survives it), and the realistic eligibility landscape for academic medical centers, biotech firms, AI labs, and contract research organizations considering a bid.
The "Three OT Agreements, Four Components" Math
ARPA-H's stated intent — approximately three Other Transaction agreements, each addressing all four technical components — is the most consequential signal in the announcement. It tells you three things immediately.
First, this is not a program where ARPA-H expects to assemble a portfolio of best-of-breed point solutions. The agency could have structured IGoR as twelve awards of $25M each, with each team owning a single component. It did not. The choice to fund three teams across all four components means ARPA-H wants vertically integrated systems that work end-to-end — a hypothesis-generation module that talks to an experiment-design module that drives an automated wet-lab platform that feeds results back into a model that refines the next hypothesis. That round trip has to happen inside a single awardee, not across awardees.
Second, the four-component breadth means no single institution can credibly bid alone. A top AI lab can probably build the hypothesis-generation and modeling layers, but it cannot run the wet-lab loop. An academic medical center can run the wet-lab loop, but it does not have a foundation-model team. A contract research organization can execute experiments at scale, but it lacks the disease-area scientific leadership. The winning teams will be three-to-five-institution consortia anchored by a credible prime — usually a research-intensive university, a major nonprofit research institute, or a large biotech — with explicit subcontracting relationships to specialty partners.
Third, the "approximately three" award count is a competitive signal. ARPA-H's recent OT programs have typically attracted twenty to sixty solution summaries; converting to three full awards implies a roughly 5–15% selection rate at the summary stage. Teams that arrive at the August 6 full proposal already represent the top decile of submissions, and the final selection is made among a small group of strong consortia. This is not a numbers game where weak proposals get carried by ambitious topline pitches. The summary has to do real work.
The Three-Phase Structure Is a Commercialization Plan in Disguise
IGoR's five years break down into three phases that mirror ARPA-H's broader thesis about how transformative biomedical capabilities get built and transitioned out of the agency.
Phase I (months 1–18) is concept and component development. Teams stand up the four components, demonstrate that each works independently against a single disease area chosen at award, and prove out the foundational AI infrastructure. This phase is where most program execution risk lives — if a team's hypothesis-generation module turns out not to scale, or its wet-lab automation cannot handle the experimental throughput the AI is demanding, the team will not survive the gate review.
Phase II (months 19–36) is cross-team integration and interoperability demonstrations. ARPA-H's framing here is unusual: it expects the three awardee teams to interoperate, not just internally but across consortia. That likely means standardized data formats, shared evaluation harnesses, and joint experimental protocols. The agency is treating IGoR as a platform-building exercise where the three awardees collectively define a new infrastructure layer for biomedical research, not as three independent product builds.
Phase III (months 37–60) is scaling, transition, commercialization planning, and extension into a second disease area. The two-disease-area requirement matters: ARPA-H is testing whether the system generalizes. A platform that works only for one cancer subtype is not a platform — it is a custom-built pipeline. The commercialization-planning language signals that the agency expects awardees to surface, by year five, a credible path out of the program: a spin-out, a licensing deal, or a contract-research service offering.
This phasing has two practical implications for proposal teams. The disease area chosen at the start needs to be ambitious enough that success demonstrates platform value, but tractable enough that a working closed loop is feasible within 18 months. And the proposal should already name a credible second disease area for Phase III, with a sketch of why the platform's components will generalize.
The June 25 Solution Summary Is the Real Filter
ARPA-H's two-stage process — solution summaries first, then full proposals — is deliberately designed to discourage teams that have not done the consortium-assembly work upfront. Summaries are typically 8–15 pages, with the agency primarily evaluating three things: technical approach credibility, team composition and capability, and management plan for a multi-institution consortium.
The technical-approach evaluation at the summary stage is not about whether the team has solved the science. It is about whether the team has a defensible answer to a harder question: how does your hypothesis-generation module avoid producing a long tail of plausible-but-wrong hypotheses that exhausts the experimental budget? Every credible IGoR proposal has to engage with the well-known failure mode of AI-driven biomedical research — high apparent novelty, low experimental validation rate. A summary that hand-waves through this section will not advance.
The team-composition evaluation is where consortium structure becomes load-bearing. Reviewers will look for named PIs at each component layer, real subcontracting commitments (not letters of interest), and a track record of cross-institutional collaboration. A consortium assembled in the four weeks before the summary deadline will read as exactly that. The teams that have been quietly coordinating since the program's pre-solicitation discussions in early 2026 have a structural advantage that cannot be closed with proposal-writing effort alone.
The management plan is the most underweighted section in most academic-led proposals. ARPA-H runs OT agreements with hands-on program managers who expect monthly technical reviews, structured deliverables, and rapid pivots when components fail their gate criteria. A consortium that proposes a traditional academic governance structure — a steering committee meeting quarterly, decisions by consensus — will struggle to clear this bar. The plan needs an empowered program lead with clear authority to reallocate budget across institutions as the program develops.
The Realistic Eligibility Landscape
ARPA-H's Other Transaction authority allows it to award to a wider range of entities than traditional NIH or NSF grants — including for-profit firms, nontraditional contractors, and consortia structured as LLCs or limited partnerships. For IGoR specifically, the eligibility landscape segments into four realistic prime-applicant types.
Research-intensive academic medical centers and nonprofit research institutes with strong AI/ML and translational science capabilities are the natural prime applicants. Institutions like the Broad Institute, the Whitehead Institute, MD Anderson, Memorial Sloan Kettering, and similar peers have the disease-area depth, biorepository access, and computational infrastructure to anchor a credible consortium. Their weakness is typically wet-lab automation throughput, which they will need to subcontract to specialized partners.
Large biotech firms with internal AI research divisions — Genentech, Recursion, Insitro, and similar — can prime, but they will need to bring genuinely open scientific partners to credibly deliver a research platform rather than an internal commercial pipeline. ARPA-H is sensitive to programs that look like subsidized product development.
AI-native research organizations like Schmidt Sciences-backed institutes, Arc Institute, or the Chan Zuckerberg Biohub can prime if they can credibly partner with disease-area scientific leadership. Their advantage is foundation-model and automation expertise; their gap is clinical and biological domain depth.
Contract research organizations with AI-augmented capabilities can prime, but they face the steepest credibility climb: ARPA-H program managers will probe hard on scientific leadership and novel-discovery capacity rather than execution-only capacity.
For teams that do not fit any of these archetypes — a single lab with a great hypothesis-generation algorithm, a startup with a wet-lab automation platform — the realistic path is not to prime but to be a sought-after subcontracting partner to one of the three or four credible primes that will emerge over the next six weeks. The window to position is closing fast.
What to Do Before the June 25 Solution Summary
For teams already in motion, the priority work over the next six weeks is consortium-formation closure: signed teaming agreements with subcontract scopes, named PIs at each component, a designated program lead with budget authority, and a defensible answer to the hypothesis-validation-rate question. For teams considering whether to enter at this stage, the honest assessment is that priming a credible bid in six weeks is implausible — but joining an existing consortium as a specialized component partner is still feasible, particularly for groups with differentiated wet-lab automation, biorepository access, or specialized model-evaluation expertise.
IGoR is a program designed to fund three winning consortia and accept the rest of the field as program-shaping inputs. The teams that treat the solution-summary deadline as the moment of competition have already lost; the teams that have been assembling since March are running out the clock.