WRF's $200K Bet on Agentic AI in Water Utilities: RFP 5394 Closes May 20
May 16, 2026 · 6 min read
Arthur Griffin
The Water Research Foundation posted Request for Proposals 5394, "Evaluating Scalability, Reproducibility, and Impact of GenAI and Agentic AI in the Water and Wastewater Sector," under its Emerging Opportunities Program in March 2026, with proposals due by 3:00 p.m. Mountain Time on Wednesday, May 20. Up to $200,000 is available for a single research team, and the application window closes in four days. The RFP is a small line on the sector's research funding map, a single award at a foundation-scale budget, but the structure of the work statement is doing something the sector-utility research community has not previously done at this scale: WRF is asking the research team to integrate the core functions of the National Institute of Standards and Technology (NIST) AI Risk Management Framework directly into the technical scope, not as a compliance overlay applied after the technical work is done.
That distinction matters more than the dollar amount suggests. The water and wastewater sector operates roughly 50,000 community water systems and 16,000 wastewater treatment facilities across the United States, the vast majority of which are publicly owned and operated by municipal or special-district authorities with limited internal AI engineering capacity. The sector's AI adoption pattern over the last three years has been characterized by individual utility pilots — DC Water and HRSD's Agentic AI framework development, WSSC Water's resource recovery operations work, and a handful of other lighthouse projects — without a shared evaluation framework for which pilots are scaling well, which are stalled, and which are surfacing cybersecurity and operational-safety concerns that have not been publicly documented. WRF is positioning RFP 5394 as the field-defining evaluation study that fills that gap.
What the work statement actually asks for
The technical scope is specific. The selected research team must identify at least four unique and meaningful GenAI applications that have been successfully implemented at a water or wastewater utility, then pilot each of those applications at a minimum of two additional utilities. That structure, four applications across at least six utilities in total, is designed to test whether AI applications that have worked in a single-utility context can be transferred to peer utilities with different operational scales, data infrastructures, and workforce configurations. One of the four applications must focus on knowledge transfer and staff training, recognizing that the operational bottleneck in utility AI adoption is rarely the model itself but rather the workforce capacity to integrate the model into existing operations. The remaining three can address customer service support, human resources and employee management, reporting, or other utility functions where GenAI has demonstrated initial promise.
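To make the pilot matrix concrete, here is a minimal sketch of one configuration that satisfies the work statement's minimums. The utility names and application labels are placeholders, not actual RFP participants; the point is only the arithmetic: four applications, each originating at one utility and piloted at two additional utilities, can involve six distinct utilities and eight new pilot deployments.

```python
# Hypothetical pilot matrix for the RFP 5394 GenAI scope. Names are
# illustrative placeholders, not real utilities or confirmed applications.
pilot_plan = {
    "knowledge_transfer_training": {"origin": "Utility A", "pilots": ["Utility E", "Utility F"]},
    "customer_service_support":    {"origin": "Utility B", "pilots": ["Utility E", "Utility F"]},
    "hr_employee_management":      {"origin": "Utility C", "pilots": ["Utility E", "Utility F"]},
    "regulatory_reporting":        {"origin": "Utility D", "pilots": ["Utility E", "Utility F"]},
}

# Count distinct utilities touched by the study and new pilot deployments.
distinct_utilities = {u for app in pilot_plan.values()
                      for u in [app["origin"], *app["pilots"]]}
pilot_instances = sum(len(app["pilots"]) for app in pilot_plan.values())

print(len(distinct_utilities))  # 6 distinct utilities in this configuration
print(pilot_instances)          # 8 new pilot deployments (4 apps x 2 utilities)
```

Other configurations are possible under the same minimums (for example, pilot utilities that do not overlap across applications would push the distinct count higher), which is exactly the transferability question the study design is meant to exercise.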
The Agentic AI component of the work statement is structured differently. The team is asked to catalog Agentic AI applications currently in use within the sector, capture lessons learned from those applications, and reproduce a single low-effort Agentic AI application at another utility to demonstrate the transferability framework. The asymmetric treatment — pilot four GenAI applications, but only reproduce one Agentic AI application — reflects the field's current maturity gap. GenAI applications in the sector are scattered enough that a meaningful sample exists to evaluate. Agentic AI applications are early enough that WRF is more interested in cataloging the existing landscape than in stress-testing transferability across multiple instances.
The NIST AI Risk Management Framework integration is the requirement that elevates the technical scope above what a conventional pilot-study research project would look like. The RFP requires the team to integrate cybersecurity protocols into operations, implement secure development practices into model training and deployment, and adopt the NIST AI RMF core functions as guardrails on the pilot work. The four core functions of the framework — Govern, Map, Measure, and Manage — are not technically prescriptive in themselves, but their integration into a multi-utility pilot study forces the team to produce documentation that other utilities can use as a template for their own AI adoption work rather than as a post-hoc compliance artifact.
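One way to read "integration rather than overlay" is that each of the four core functions accumulates evidence as the pilot proceeds, instead of being checked off at the end. The sketch below is an assumption about how a research team might track that; the artifact names are invented examples, not RFP requirements or NIST-prescribed outputs.

```python
# Minimal sketch of tracking NIST AI RMF core functions (Govern, Map,
# Measure, Manage) as in-flight guardrails on a pilot. Artifact strings
# are hypothetical examples, not items from RFP 5394 or the NIST framework.
from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    name: str
    artifacts: list = field(default_factory=list)  # evidence logged during the pilot

    def record(self, artifact: str) -> None:
        self.artifacts.append(artifact)

rmf = {name: RmfFunction(name) for name in ("Govern", "Map", "Measure", "Manage")}

# Entries are logged as the technical work happens, so the documentation is
# a by-product of the pilot rather than a post-hoc compliance artifact.
rmf["Govern"].record("AI use policy approved by utility leadership before deployment")
rmf["Map"].record("Inventory of data sources feeding the customer-service model")
rmf["Measure"].record("Monthly accuracy and drift metrics for the deployed model")
rmf["Manage"].record("Incident-response runbook covering model rollback")

# Any core function with no artifacts flags a gap before the pilot closes.
incomplete = [f.name for f in rmf.values() if not f.artifacts]
print(incomplete)  # [] once every core function has at least one artifact
```

Documentation produced this way is reusable by the next utility almost by construction, which is what distinguishes the RFP's structure from a conventional pilot study with a compliance appendix.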
Why this RFP exists now
The sector-utility AI conversation has shifted noticeably over the last twelve months. WSSC Water's $150,000 collaboration with research partners to advance AI for resource recovery operations, announced in March, signaled that publicly owned utilities are now willing to put procurement budget against AI research rather than treating it as a vendor-driven activity. The American Water Works Association and the Water Environment Federation both published organizational positions on AI adoption in the sector during 2025, with both organizations explicitly flagging the absence of shared evaluation methods as a barrier to responsible adoption. The Federal Energy Regulatory Commission and the Environmental Protection Agency have signaled in separate proceedings that they are monitoring sector AI adoption for cybersecurity and operational-safety implications without yet proposing regulatory action.
Into that environment, WRF's RFP 5394 functions as the sector's first attempt to produce a shared methodology before regulators do it for them. The research team that wins the award will, over the performance period, produce the documentation that AWWA and WEF can recommend to member utilities, that EPA and state primacy agencies can reference in their oversight work, and that vendor companies offering AI services to utilities can use to structure their offerings against a defensible evaluation framework. That positioning makes the $200,000 budget more strategically valuable than the dollar amount alone would suggest.
Who can credibly compete on four days' notice
Four days from the date of this analysis to the May 20 deadline is not enough time for a research team without existing utility relationships to put a competitive proposal together. The RFP requires the proposing team to identify, by the time of submission, a credible plan for which utilities will host the pilots and which applications will be evaluated. That implies the proposing team needs either existing memoranda of understanding with utilities willing to host pilots or sufficient relationship depth that the utility partnerships can be confirmed within the proposal narrative. The serious bidders are likely already drafting.
Eligible applicants include universities, research organizations, utilities, and public agencies. Universities with existing applied-water-engineering programs — Stanford's Codiga Resource Recovery Center, Arizona State's Biodesign Institute, the University of Michigan's Water Center, and a small number of others — are positioned to compete on technical depth and on the breadth of utility partnerships they have built through prior WRF-funded work. Engineering consulting firms with utility-services practices — Hazen and Sawyer, Carollo, Black & Veatch, and others — are positioned to compete on the strength of their utility-client relationships and on the practitioner-grade documentation expectations the work statement implies. Independent applied-AI research organizations without prior sector-utility footprints face the steepest climb in the four-day window.
What downstream utilities and vendors should do regardless of who wins
For utility general managers and operations directors reading this, the practical implication is that within roughly twelve to eighteen months of the award decision that follows the May 20 proposal deadline, the sector will have a publicly available framework for evaluating GenAI and Agentic AI pilots, with documentation specific to four utility-function categories and lessons-learned cataloging across a wider set of Agentic AI applications. Utility leaders weighing AI adoption decisions during 2026 should treat publication of the RFP 5394 deliverables as the sector's next major informational milestone and, where feasible, time their adoption decisions accordingly.
For vendors offering GenAI or Agentic AI services to utilities, the implication is that the procurement environment in 2027 and beyond will be structured around a shared evaluation methodology against which each vendor's offering will be measured. Vendors that engage proactively with the eventual research team, providing case-study access, technical documentation, and operational data, will be positioned to align their offerings with the framework as it is being built. Vendors that wait until the framework is published will be left to retrofit against documentation they did not help shape.
The broader sector-utility AI funding landscape
RFP 5394 is the most prominent but not the only sector-utility AI funding opportunity active during May. The Bureau of Reclamation has issued multiple applied-AI research solicitations under its Science and Technology Program for FY2026, the Department of Energy's Industrial Efficiency and Decarbonization Office has included water-utility energy optimization within its FY26 funding announcements, and several state-level public utility commissions have begun separately funding AI pilots at investor-owned water utilities under utility-rate-case allowances. The WRF opportunity is distinctive primarily because it operates at the cross-utility-evaluation layer rather than at the individual-utility-pilot layer that most of the adjacent funding supports.
For researchers and consulting firms whose work programs span utility infrastructure, AI engineering, and standards development, the May 20 deadline is the most immediate action item in the sector. For utilities and vendors observing from the sidelines, the post-award deliverables timeline — typically twelve to eighteen months for a WRF Emerging Opportunities Program project — is the more relevant horizon. Either way, the structure WRF has chosen for RFP 5394, with NIST AI RMF integration baked into the technical scope rather than appended as a compliance requirement, signals where the sector's AI evaluation work is heading.