AI Grant Budget Templates: How to Price GPU Compute, Data, and Talent

February 25, 2026 · 6 min read

Claire Cummings

A $500,000 NSF award sounds generous until you realize a single H100 GPU costs $3.90 per hour on AWS — and training a mid-sized language model can burn through 10,000 GPU-hours before you have a publishable result. Most principal investigators building AI research budgets are guessing at numbers that program officers and review panels can immediately spot as unrealistic. The gap between what PIs think compute costs and what it actually costs is where proposals die.

Browse our AI Grants page for current opportunities across NSF, NIH, DOE, and DARPA.

GPU Compute: What a Training Run Actually Costs

The price of GPU compute has dropped sharply over the past year, but it still dominates most AI research budgets. Here are the numbers you need for a defensible budget justification.

NVIDIA H100 (current standard for large-scale training): on-demand rates run roughly $3.90 per hour on AWS, with most major cloud providers falling in the $3.00-$4.00 per hour range.

NVIDIA A100 (still widely used, increasingly affordable): priced well below H100 rates on the same providers, and often sufficient for fine-tuning and smaller-scale experiments.

For budget justification purposes, plan on $3.00-$4.00 per H100-hour at on-demand rates from a major cloud provider. Reviewers know these numbers. If you budget $8/hour because you pulled a stale figure from 2024, you look uninformed. If you budget $0.50/hour citing a spot market that might not exist when your grant starts, you look reckless.

Translating to real projects: Training a 7-billion-parameter model from scratch typically requires 2,000-5,000 H100-hours. Fine-tuning an existing foundation model on domain-specific data might take 100-500 H100-hours. A year of iterative experimentation — hyperparameter sweeps, ablation studies, failed runs — can easily total 5,000-15,000 GPU-hours. At $3.50/hour, that is $17,500 to $52,500 in compute alone.
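The arithmetic above is worth scripting so you can regenerate figures when rates change. A minimal sketch, with the rate constant pinned to the $3.50/hour mid-range figure used above (function and constant names are illustrative):

```python
# Illustrative GPU compute-cost estimator. The rate is the mid-range
# on-demand H100 figure cited above; swap in current provider pricing.
H100_RATE_USD_PER_HOUR = 3.50

def compute_cost(gpu_hours: float, rate: float = H100_RATE_USD_PER_HOUR) -> float:
    """Return the on-demand cost in USD for a given number of GPU-hours."""
    return gpu_hours * rate

# A year of iterative experimentation: 5,000-15,000 GPU-hours.
low, high = compute_cost(5_000), compute_cost(15_000)
print(f"${low:,.0f} - ${high:,.0f}")  # $17,500 - $52,500
```

Rerunning this with next year's rates is also an easy way to produce the out-year estimates your budget justification needs.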

For multi-year projects, budget conservatively for year one and note in your justification that cloud GPU prices have been declining 30-40% annually. Reviewers appreciate PIs who acknowledge price volatility rather than locking in a single rate across a three-year award.

Free Compute: NAIRR and ACCESS Allocations

Before you budget a single dollar for cloud compute, check whether your project qualifies for the National AI Research Resource (NAIRR) Pilot. This NSF-led program provides free access to GPU clusters, cloud environments, AI-ready datasets, and pre-trained models — roughly 3.77 exaFLOPS of total compute capacity across federal partners including DOE national laboratories.

Eligibility covers US-based researchers and educators at academic institutions, nonprofits, federal agencies, and even startups with existing federal grants. Graduate students can apply with a faculty support letter. The application requires a three-page proposal uploaded through the NAIRR Pilot portal, and start-up allocations (up to three months on a single resource) are reviewed within two weeks.

The NAIRR Pilot is transitioning to a permanent operation center under NSF solicitation 25-546, which means the resource pool will expand. If your timeline allows it, applying for a NAIRR allocation first and using commercial cloud as a backup line item shows reviewers you are being fiscally responsible with federal dollars.

The older ACCESS program (successor to XSEDE) also provides free HPC allocations, though its GPU resources are more limited than NAIRR's AI-focused infrastructure. Both programs can be cited as cost-sharing in your budget justification.

Data Labeling and Annotation Costs

The second budget line that trips up AI proposals is data. If your project involves supervised learning, reinforcement learning from human feedback, or any form of curated training data, you need annotation costs.

Per-label pricing (most common for vision and NLP tasks): simple classification labels run from a few cents up to roughly $0.25 each, with bounding boxes on specialized imagery around $0.30 per box.

Hourly rates apply for complex or domain-specific annotation (medical, legal, or scientific data that requires subject-matter experts) and run well above crowdwork rates, so budget those hours separately from per-label work.

A dataset of 50,000 labeled medical images at $0.30 per bounding box with an average of four annotations per image costs $60,000. That number shocks PIs who assumed a graduate student could do it manually, but reviewers at NIH and NSF know exactly how long annotation takes and will question budgets that omit it.
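The medical-imaging figure above is simple multiplication, but PIs routinely underestimate it because labels per item compound with dataset size. A quick sketch of the same calculation (names are illustrative):

```python
# Illustrative annotation-cost check: items x labels-per-item x rate.
def annotation_cost(n_items: int, labels_per_item: float, rate_per_label: float) -> float:
    """Total annotation cost in USD."""
    return n_items * labels_per_item * rate_per_label

# 50,000 images, four bounding boxes per image, $0.30 per box.
print(f"${annotation_cost(50_000, 4, 0.30):,.0f}")  # $60,000
```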

Enterprise annotation contracts with providers like Scale AI or Labelbox run $93,000 to $400,000+ annually for sustained labeling pipelines. For most grant-funded projects, a per-label or hourly engagement is more appropriate and easier to justify.

ML Talent: Postdocs, Engineers, and the Salary Gap

The biggest tension in AI grant budgets is compensation. Federal pay scales were not designed for a labor market where entry-level machine learning engineers command $150,000 and senior ML researchers exceed $250,000 at companies competing for the same talent pool.

Academic postdocs (NIH/NSF pay scales): the NRSA Year 1 stipend is $62,652, before fringe benefits.

Research scientists and staff engineers (university rates): typically low six figures; the sample budget below uses $110,000 for a research programmer.

Industry comparison (for context, not for your budget): entry-level ML engineers command around $150,000, and senior ML researchers exceed $250,000.

You cannot budget industry rates on a federal grant. But you can — and should — budget realistically for the talent your project requires. If you need someone who can implement transformer architectures, manage distributed training across a GPU cluster, and debug CUDA memory errors, a $62,652 postdoc stipend may not attract that person. Many agencies allow research scientist titles at higher pay bands. NSF's budget justification format lets you explain why a specific role requires specific compensation. Use that space.

For SBIR/STTR proposals, the calculus shifts. Small businesses can budget market-rate salaries because the expectation is commercialization, and reviewers understand you are competing with industry for hires. Budget $140,000-$180,000 for a mid-level ML engineer on Phase II and justify it with regional salary data.

Putting It All Together: A Sample Year-One Budget

Here is what a realistic year-one compute budget looks like for a mid-scale AI research project on a $300,000 annual award:

Line Item                                                    Cost
GPU compute (4,000 H100-hours at $3.50/hr)                $14,000
Cloud storage and networking (S3, data transfer)           $3,600
Data annotation (20,000 labeled samples at $0.25/label)    $5,000
Postdoc (1.0 FTE, Year 1 NRSA + fringe at 30%)            $81,448
Research programmer (0.5 FTE at $110,000 + fringe)        $71,500
Software licenses (experiment tracking, annotation tools)  $4,000
Conference travel (2 venues)                               $6,000
Indirect costs (estimated 50% MTDC)                       $92,774
Total                                                    $278,322

The remaining $21,678 gives you room for cost overruns on compute — which happen on every AI project — or additional annotation rounds that reviewers will expect you to need.
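It is worth reproducing a table like this in a few lines of code so the arithmetic stays consistent when a rate changes. A sketch of the sample budget above; the 30% fringe and 50% MTDC indirect rates are this article's working assumptions, not universal values, and your institution's negotiated rates will differ:

```python
# Sketch of the sample year-one budget above. Fringe (30%) and the
# 50% MTDC indirect rate are assumptions from the article's example.
direct_costs = {
    "GPU compute (4,000 H100-hrs @ $3.50)":     4_000 * 3.50,            # 14,000
    "Cloud storage and networking":              3_600,
    "Data annotation (20,000 @ $0.25/label)":    20_000 * 0.25,           # 5,000
    "Postdoc (NRSA $62,652 + 30% fringe)":       round(62_652 * 1.30),    # 81,448
    "Research programmer (0.5 FTE + fringe)":    110_000 * 0.5 * 1.30,    # 71,500
    "Software licenses":                         4_000,
    "Conference travel (2 venues)":              6_000,
}

mtdc = sum(direct_costs.values())  # all items here count toward MTDC
indirect = mtdc * 0.50             # 50% indirect rate applied to MTDC
total = mtdc + indirect
remaining = 300_000 - total        # headroom on a $300,000 annual award
```

Note that every line item here happens to fall inside the modified total direct cost base; equipment purchases or subaward amounts above the first $25,000 would be excluded from MTDC and change the indirect figure.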

One detail that separates funded proposals from rejected ones: cite your sources in the budget justification. Reference specific cloud provider pricing pages. Name the NAIRR allocation you applied for. Link to the NIH NRSA stipend notice. Reviewers reward specificity because it signals you have actually scoped the work rather than rounding to the nearest $50,000.

For researchers navigating the maze of AI funding mechanisms, solicitation requirements, and budget formats across multiple agencies, Granted can help you move from scattered pricing research to a polished, submission-ready proposal.
