AI Compute Grants & GPU Credits: Every Program for Researchers
February 25, 2026 · 5 min read
David Almeida
Compute is the bottleneck for most AI research, and the bottleneck is expensive. A single training run on a frontier model can cost millions in GPU hours. But scattered across federal agencies, national laboratories, and corporate programs are billions of dollars in free compute allocations that most researchers never apply for — either because they do not know the programs exist or because the application processes feel opaque. Here is every major program worth your time, with current deadlines and specifics on what each one actually provides.
Browse our AI Grants page for current opportunities beyond compute access.
NSF ACCESS: The Easiest On-Ramp to Free Supercomputing
The NSF ACCESS program (Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support) replaced XSEDE in 2022 and remains the single broadest path to free compute for U.S.-based researchers. It provides access to dozens of HPC systems — including GPU clusters — at no cost, with or without an existing NSF grant.
ACCESS uses a tiered allocation system. Explore allocations require only a one-page abstract and are approved within days, giving researchers enough credits to benchmark code and run pilot experiments. Discover and Accelerate tiers scale up with proportionally more justification. Maximize allocations, the largest awards, follow a semi-annual review cycle: the current submission window runs June 15 through July 31, 2026, with awards starting October 1. The previous window closed January 31, and those awards began April 1.
The key advantage is low friction. A postdoc with a one-page description of their project can have GPU access within a week through Explore. No PI status is required. Apply at allocations.access-ci.org.
DOE Leadership Computing: Exascale Machines for the Asking
The Department of Energy operates three programs that together allocate the majority of time on America's most powerful supercomputers — including the exascale systems Frontier at Oak Ridge and Aurora at Argonne, plus Perlmutter at NERSC. Researchers worldwide are eligible, not just DOE-funded investigators.
INCITE (Innovative and Novel Computational Impact on Theory and Experiment) is the flagship. It reserves up to 60 percent of available node-hours on Frontier, Aurora, and Polaris. Individual awards typically range from 500,000 to 1,000,000 node-hours on Aurora or Frontier, with larger allocations possible for exceptional proposals. The 2026 call accepted submissions from April through June 2025; the 2027 call will open on a similar timeline. Apply at doeleadershipcomputing.org.
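To put those node-hour figures in the GPU-hour terms most AI researchers think in, here is a minimal back-of-the-envelope conversion. It assumes the published node configurations — 4 MI250X accelerators per Frontier node and 6 GPUs per Aurora node — and ignores the fact that usable throughput per accelerator varies by workload.

```python
# Rough conversion of leadership-computing node-hour awards into GPU-hours.
# Assumes published node specs: 4 MI250X accelerators per Frontier node,
# 6 GPUs per Aurora node. Actual effective throughput varies by workload.

GPUS_PER_NODE = {"Frontier": 4, "Aurora": 6}

def gpu_hours(system: str, node_hours: int) -> int:
    """Convert a node-hour allocation to equivalent accelerator-hours."""
    return node_hours * GPUS_PER_NODE[system]

# A typical INCITE award of 500,000 node-hours on Frontier:
print(gpu_hours("Frontier", 500_000))  # 2,000,000 GPU-hours
```

Even at the low end of the typical INCITE range, an award translates to millions of accelerator-hours — orders of magnitude beyond what cloud research credits provide.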
ALCC (ASCR Leadership Computing Challenge) targets high-risk, high-payoff research aligned with DOE mission areas. The 2026-2027 cycle provides time on Frontier, Aurora, Polaris, and Perlmutter. New this cycle: ALCC accepts multi-year proposals of up to three years, and the submission process has been simplified to a single proposal without a pre-proposal stage. The next call is expected to open in November 2026 for the 2027-2028 allocation year.
ERCAP (Energy Research Computing Allocations Process) distributes time on NERSC's Perlmutter, which houses over 7,000 NVIDIA A100 GPUs. Perlmutter allocations are split into separate CPU and GPU pools, and GPU requests require demonstrating code readiness. The 2027 allocation year submissions typically open in August and close in early October. Current guidance is at docs.nersc.gov/allocations.
NAIRR: The National AI Research Resource
The NAIRR Pilot, led by NSF in partnership with 13 federal agencies and 28 industry partners, was created to democratize access to AI compute, datasets, and pre-trained models. It is currently transitioning from the pilot phase to a permanent operations center under solicitation NSF 25-546.
NAIRR aggregates resources from contributors including Microsoft (which committed $20 million in Azure credits), NVIDIA, and several national laboratories. Access is open to researchers, educators, and students at U.S.-based academic institutions, nonprofits, federal agencies, tribal agencies, and even startups with federal grants. Graduate students can apply with a faculty advisor's support letter. Some resources — particularly pre-trained models and datasets — are available without a formal proposal.
What makes NAIRR distinct is breadth: it is not just raw compute, but an integrated ecosystem of cloud credits, HPC allocations, curated datasets, and software platforms. Current allocation opportunities are listed at nairrpilot.org/opportunities/allocations.
Argonne's APEX Program: Staff Support Plus GPU Time
For AI researchers specifically, the ALCF's APEX (AI Program for EXploration) program at Argonne National Laboratory stands apart from pure allocation programs. APEX pairs leadership-scale computing time on Aurora with dedicated ALCF staff collaborators and an ALCF-funded postdoctoral researcher embedded in each project. It targets novel applications of AI in science — introducing new methods or bringing established techniques into entirely new domains.
APEX projects run two years with a one-year renewal review. Proposals must demonstrate a clear need for leadership-scale resources, which can include large-scale training, inference, simulation, and data analytics. The most recent call closed February 27, 2026; watch alcf.anl.gov for the next cycle.
Cloud Provider and Industry Programs
The three major cloud providers each run research credit programs that operate year-round with rolling applications:
AWS Cloud Credit for Research provides credits for building cloud-hosted research tools and science-as-a-service applications. Faculty and staff awards have no hard cap; student awards are capped at $5,000. Credits expire after one year. Note that as of February 2026, Free Tier accounts are ineligible. Apply at aws.amazon.com/cloud-credit-for-research.
Google Cloud Research Credits offer up to $5,000 for faculty, postdoctoral researchers, and nonprofit lab researchers, and $1,000 for PhD students. Applications are accepted on a rolling basis and require a brief research proposal. Credits cover Compute Engine, Cloud Storage, BigQuery, and most other services. Apply at edu.google.com/programs/credits/research.
Microsoft Azure Research Credits provide compute for proof-of-concept work, workload migration, and tool development. Microsoft also channels significant Azure resources through NAIRR, including up to $3.5 million per grand challenge project. Start at microsoft.com/azure-academic-research.
The NVIDIA Academic Grant Program goes beyond cloud credits to offer hardware and dedicated GPU hours: up to 30,000 H100 80GB hours, or up to eight RTX PRO 6000 GPUs, or up to two DGX Spark desktop supercomputers for qualifying projects in robotics, autonomous vehicles, 5G/6G, and federated learning. Apply at nvidia.com/academic-grant-program.
Building a Compute Portfolio
The researchers who secure the most compute treat it like a funding portfolio. An INCITE allocation covers flagship training runs. An ACCESS Explore allocation handles prototyping. Cloud credits from two or three providers cover burst inference and data preprocessing. NAIRR fills gaps with specialized datasets and pre-trained checkpoints.
None of these programs are mutually exclusive, and most review committees view applications favorably when the PI can demonstrate existing allocations elsewhere — it signals that the work is real and the code is ready. The biggest mistake is applying to only one.
If you are mapping the full funding landscape for an AI research program — compute, direct grants, and everything in between — Granted can surface opportunities across federal, state, and foundation sources faster than any manual search.
