Granted

Platform

Built for Grant Discovery at Scale

Hybrid retrieval, federated multi-provider search, and a learned scoring model—engineered to surface the right grants from 67,000+ opportunities in under 200ms.

Search Architecture

Four-Stage Hybrid Retrieval

Every query passes through a cascading pipeline that combines lexical precision with semantic understanding. Each stage narrows and reranks candidates until only the most relevant grants remain.

67,000+

grants indexed

12

data sources

<200ms

scoring latency

$0.003

per query

01

<1ms

query time

Full-Text Search

PostgreSQL ts_rank over indexed grant titles, descriptions, and eligibility criteria. Sub-millisecond candidate retrieval from 67,000+ opportunities.
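
A minimal sketch of this stage, assuming a hypothetical grants table with a precomputed tsvector column and a GIN index over it; the table and column names and the candidate cap are illustrative, not the production schema.

```python
# Stage-1 lexical pass: PostgreSQL full-text search ranked by ts_rank.
# `grants.search_tsv` is an assumed precomputed tsvector over title,
# description, and eligibility text, backed by a GIN index.
import psycopg

FTS_SQL = """
SELECT id, title,
       ts_rank(search_tsv, websearch_to_tsquery('english', %(q)s)) AS rank
FROM grants
WHERE search_tsv @@ websearch_to_tsquery('english', %(q)s)
ORDER BY rank DESC
LIMIT %(k)s
"""

def fulltext_candidates(conn: psycopg.Connection, query: str, k: int = 500):
    """Return top-k lexical candidates for later stages to widen and rerank."""
    with conn.cursor() as cur:
        cur.execute(FTS_SQL, {"q": query, "k": k})
        return cur.fetchall()
```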

02

recall lift

Query Expansion

LLM-generated synonym sets and domain-specific rewrites broaden recall without sacrificing precision. A query for "youth STEM" also surfaces "K-12 science education" and "after-school technology programs."
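
A hedged sketch of how such expansion might be wired up, using the OpenAI client as a stand-in; the model name, prompt, and fan-out width are assumptions for illustration, not Granted's actual configuration.

```python
# LLM-backed query expansion: ask for domain-specific rewrites, then run
# each rewrite through the same lexical stage and union the candidates.
from openai import OpenAI

client = OpenAI()

def expand_query(query: str, n: int = 3) -> list[str]:
    """E.g. 'youth STEM' -> 'K-12 science education',
    'after-school technology programs'."""
    resp = client.chat.completions.create(
        model="gpt-4.1-mini",  # assumed model choice
        messages=[{
            "role": "user",
            "content": (
                f"Rewrite this grant-search query {n} ways using synonyms and "
                f"adjacent funding terminology, one rewrite per line:\n{query}"
            ),
        }],
    )
    rewrites = [r.strip("-• ").strip()
                for r in resp.choices[0].message.content.splitlines()]
    return [query] + [r for r in rewrites if r][:n]
```

Unioning the candidate sets from the original query and its rewrites is where the recall lift comes from.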

03

768-d

vectors

Embedding kNN

Dense vector search with pgvector finds semantically similar grants that keyword matching misses. Captures conceptual overlap across different funding vocabularies.
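
A sketch of the dense pass, assuming a vector(768) embedding column indexed with pgvector; `<=>` is pgvector's cosine-distance operator, and the schema names and candidate cap are illustrative.

```python
# Stage-3 semantic pass: approximate kNN over pgvector embeddings.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

KNN_SQL = """
SELECT id, title, 1 - (embedding <=> %(qvec)s) AS cosine_sim
FROM grants
ORDER BY embedding <=> %(qvec)s
LIMIT %(k)s
"""

def knn_candidates(conn: psycopg.Connection,
                   query_embedding: np.ndarray, k: int = 200):
    """Nearest neighbors by cosine distance; catches matches keywords miss."""
    register_vector(conn)  # adapts numpy arrays to the pgvector type
    with conn.cursor() as cur:
        cur.execute(KNN_SQL, {"qvec": query_embedding, "k": k})
        return cur.fetchall()
```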

04

60.3%

P@5

Cross-Encoder Reranking

A fine-tuned cross-encoder jointly attends to the full query–document pair, producing calibrated relevance scores that outperform bi-encoder similarity alone.
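
In sketch form, using sentence-transformers' CrossEncoder; the checkpoint named here is a public placeholder standing in for the fine-tuned model.

```python
# Stage-4 reranking: jointly encode each (query, document) pair.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # placeholder

def rerank(query: str, candidates: list[dict], top_k: int = 20) -> list[dict]:
    """Score every query-document pair jointly and keep the top_k."""
    pairs = [(query, f"{c['title']} {c['description']}") for c in candidates]
    scores = reranker.predict(pairs)  # one relevance score per pair
    ranked = sorted(zip(candidates, scores), key=lambda t: t[1], reverse=True)
    return [c for c, _ in ranked[:top_k]]
```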

Federated Search

Five Providers, Five Web Indices

For open-web grant discovery, we query five LLM providers simultaneously—each backed by a distinct search index. Results are fused under a single learned scoring function that ignores every model's self-reported relevance score.

Gemini

Google Search index

GPT-4.1

Bing/OpenAI index

Claude

Anthropic web search

Grok

X/Twitter + web index

Perplexity

Independent web index
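
In sketch form, the fan-out might look like the following; search_provider stands in for the five provider-specific SDK calls, and the result shape and field names are assumptions for illustration.

```python
# Concurrent fan-out to five providers; self-reported relevance scores
# are discarded before candidates reach the learned scoring function.
import asyncio

PROVIDERS = ("gemini", "gpt-4.1", "claude", "grok", "perplexity")

async def search_provider(name: str, query: str) -> list[dict]:
    """Placeholder: wire up each provider's web-search SDK here."""
    raise NotImplementedError

async def federated_search(query: str) -> list[dict]:
    batches = await asyncio.gather(
        *(search_provider(p, query) for p in PROVIDERS),
        return_exceptions=True,  # one slow or failed provider can't sink the query
    )
    candidates = []
    for batch in batches:
        if isinstance(batch, Exception):
            continue
        for item in batch:
            item.pop("relevance_score", None)  # drop self-reported confidence
            candidates.append(item)
    return candidates  # rescored by the 15-feature model
```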

Key Insight

We discard every model's self-reported confidence score and re-evaluate all candidates through our own 15-feature scoring function—trained on 1,034 labeled query–grant pairs.

Scoring Model

15-Feature Learned Relevance

A gradient-boosted model scores every candidate grant across five feature categories. Trained on 1,034 labeled query–grant pairs, validated at 60.3% Precision@5, then distilled into a lightweight scorer for sub-200ms inference.

Text

  • BM25
  • TF-IDF overlap
  • Title match

Semantic

  • Cosine similarity
  • Cross-encoder score
  • Query expansion hits

Metadata

  • Agency match
  • Category alignment
  • Eligibility fit

Freshness

  • Days to deadline
  • Posted recency
  • Update frequency

Penalty

  • Expired flag
  • Duplicate detection
  • Low-quality signals
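
As a sketch, the 15 features above could be assembled into a vector and scored by a gradient-boosted model; the feature extractors are stubbed, the feature names are paraphrases of the list above, and the trained model is assumed to be loaded from an offline artifact.

```python
# Learned relevance: 15-dim feature vector scored by a gradient-boosted model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = [
    "bm25", "tfidf_overlap", "title_match",               # text
    "cosine_sim", "cross_encoder", "expansion_hits",      # semantic
    "agency_match", "category_align", "eligibility_fit",  # metadata
    "days_to_deadline", "posted_recency", "update_freq",  # freshness
    "expired", "duplicate", "low_quality",                # penalty
]

def featurize(grant: dict) -> np.ndarray:
    """Assemble the 15-dim vector; each extractor fills one slot above."""
    return np.array([grant["features"][name] for name in FEATURES])

def score(model: GradientBoostingRegressor, grants: list[dict]) -> list[float]:
    """`model` is trained offline on the 1,034 labeled query-grant pairs."""
    X = np.stack([featurize(g) for g in grants])
    return model.predict(X).tolist()
```

Distillation then compresses this scorer so that inference adds effectively no latency at query time.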

1,034

labeled query–grant pairs

15

scoring features

60.3%

Precision@5

~0ms

distilled inference

Data Pipeline

12 Sources, Real-Time Ingestion

We ingest grant data from 12 federal and institutional sources, normalize schemas, deduplicate listings, and maintain freshness with daily sync jobs. 99.98% of opportunities are fully tagged with eligibility, category, and deadline metadata.

Grants.gov
SAM.gov
NSF
SBIR.gov
NIH RePORTER
Commerce.gov
USDA NIFA
ED.gov
EPA Grants
HUD Exchange
State SFAs
Foundation RFPs
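
A hedged sketch of the dedup pass; the canonical key (agency, normalized title, deadline) is an assumption about how cross-source duplicates might be collapsed, not the exact production rule.

```python
# Deduplicate normalized listings across sources, keeping the freshest copy.
import re
import unicodedata

def normalize_title(title: str) -> str:
    t = unicodedata.normalize("NFKC", title).casefold()
    return re.sub(r"[^a-z0-9 ]+", " ", t).strip()

def dedupe(listings: list[dict]) -> list[dict]:
    seen: dict[tuple, dict] = {}
    for g in listings:
        key = (g["agency"], normalize_title(g["title"]), g["deadline"])
        # keep the most recently updated copy of each duplicate cluster
        if key not in seen or g["updated_at"] > seen[key]["updated_at"]:
            seen[key] = g
    return list(seen.values())
```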

67,000+

opportunities

99.98%

fully tagged

Daily

sync cadence

Grant Writing Engine

Six Steps from RFP to Polished Draft

Every grant has unique requirements. Granted's workflow ensures each one is identified, addressed, and woven into a draft that speaks directly to your funder.

Step 01

RFP Analysis

Upload your RFP or grant guidelines. Granted’s AI reads the full document and identifies every required section, evaluation criterion, and compliance requirement.

Step 02

Requirement Discovery

The system discovers the grant’s full structure—from project narratives and budget justifications to data management plans and letters of support.

Step 03

Grant Writing Coach Q&A

A grant writing coach asks targeted questions about your organization, team qualifications, project goals, and budget. Your answers ground every section in your real data.

Step 04

Coverage Tracking

Track which requirements have been addressed and which need attention. See coverage percentage in real time as the coach gathers information.
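
A minimal sketch of what coverage tracking reduces to, with illustrative field names; the real system ties each requirement to the coach's Q&A state.

```python
# Real-time requirement coverage: requirements found in step 02,
# marked addressed as the coach Q&A fills them in.
from dataclasses import dataclass, field

@dataclass
class CoverageTracker:
    requirements: set[str]                      # extracted from the RFP
    addressed: set[str] = field(default_factory=set)

    def mark_addressed(self, requirement: str) -> None:
        if requirement in self.requirements:
            self.addressed.add(requirement)

    @property
    def coverage_pct(self) -> float:
        """The real-time percentage surfaced to the user."""
        return 100 * len(self.addressed) / max(1, len(self.requirements))
```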

Step 05

Section-by-Section Drafting

Each section is drafted individually using your specific answers and the RFP’s requirements. No generic templates, no placeholders.

Step 06

Purpose-Built, Not General-Purpose

General-purpose AI doesn’t read your RFP, track coverage, or ground output in your data. Granted does—because it was built for this one job.

For submission to NeurIPS 2026

Read the Technical Paper

Full methodology, ablation studies, and benchmark results for the hybrid retrieval pipeline and knowledge-distilled scoring model.

What Makes This Different

ChatGPT, Claude, and other general-purpose AI tools are powerful writers—but they weren't designed for grant proposals. Here's what Granted does that they don't.

Capability | Granted | Generic AI
Reads and parses your full RFP | ✓ | ✗
Identifies every required section automatically | ✓ | ✗
Asks targeted questions about your organization | ✓ | ✗
Tracks requirement coverage in real time | ✓ | ✗
Grounds every paragraph in your specific data | ✓ | ✗
Produces section-by-section drafts, not one-shot output | ✓ | ✗
Knows the structure of grant proposals | ✓ | ✗

Your Data Stays Yours

Everything you upload to Granted—your RFP, your coach answers, your drafts—is private to your account. We never use your data to train models, and we never share it with third parties.

Start winning grants today

Stop wrestling with blank pages and generic AI output. Upload your RFP and let Granted build a proposal that's grounded in your work.

Talk to sales