The ARIA Scaling Trust Programme is a £50 million grant programme from the Advanced Research and Invention Agency (ARIA) funding research teams to develop secure coordination infrastructure for AI agents. The programme seeks to enable AI agents to securely coordinate, negotiate, and verify with one another on behalf of humans.
Phase 1 funding targets open-source coordination infrastructure and fundamental research moving from empirical to theory-driven guarantees in agentic coordination. ARIA also offers opportunity seeds up to £500,000 for individual research teams. Eligible applicants include research organizations, universities, and technology companies working on AI safety and coordination challenges.
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
We're funding research at the edge of what is technologically or scientifically possible. Our programmes are designed to advance complex, large-scale ideas which require coordinated investment and management across disciplines and institutions. To build a programme, each Programme Director directs the review, selection, and funding of a portfolio of projects which work in tandem to drive breakthroughs.
Opportunity seeds (up to £500k)
With smaller budgets and less structure than programmes, seeds support individual research teams to uncover new pathways that could inspire future programmes or might justify additional support as a standalone project. These calls are currently open for applications – make sure to read the calls for proposals before applying.
Massively Scalable Neurotechnologies: TA1 Concept papers
We’re accepting concept papers for TA1 of our new £50m programme, Massively Scalable Neurotechnologies. This programme seeks radically new ways to deliver responsive neurotechnologies to the brain without brain surgery.
We believe that some of the most radical solutions to this delivery challenge may come from Creators who do not traditionally consider themselves neurotechnologists. So whether you are a synthetic biologist developing engineered cells or a biotechnologist developing new vectors to access the brain — we want to hear from you.
Learn more about this call
Scalable Neural Interfaces

Scaling Trust: Full proposals
We’re accepting applications for funding within our £50m Scaling Trust programme. The programme’s goal is to create the capability for AI agents to securely coordinate, negotiate, and verify with one another on our behalf.
To kickstart Phase 1 of this programme, we are seeking to fund teams to develop open-source coordination infrastructure and perform fundamental research that moves us from empirical to theory-driven guarantees in agentic coordination.
Learn more about this call
Trust Everything, Everywhere

Enduring Atmospheric Platforms: Full proposals
ARIA is launching a programme backed by at least £50 million to unlock the stratosphere as a persistent operating environment. The goal is to solve the interdependent challenges of flight and energy to create a resilient and sustainable platform layer between Earth and space.
Success will be measured by a single, galvanising demonstration: keeping a platform aloft for one week while maintaining line-of-sight to a fixed ground point and continuously powering a 300W payload. This technical breakthrough will provide the physical backbone required for next-generation advanced communications, serving as a critical enabler for the projected £13–20 trillion annual economic potential of AI.
Learn more about this call

Rolling opportunity seeds
Building on our previous funding calls for opportunity seed projects, we’re launching an open rolling call for proposals as an experiment across multiple opportunity spaces. We’re keen to learn from this process and use the lessons to make future calls stronger and more effective.
We're looking to fund projects within the Mathematics for Safe AI, Nature Computes Better, Scoping Our Planet and Scalable Neural Interfaces opportunity spaces, with up to £500k each.
Read the call for proposals
Apply now
Mathematics for Safe AI, Nature Computes Better, Scoping Our Planet, Scalable Neural Interfaces

If you are disabled or have a long-term health condition, ARIA can offer support to help you with our funding application process or when you are carrying out your project.

We seek out exceptional scientists and engineers and empower them to turn their ideas into reality.
Based on current listing details, eligibility is open to researchers, institutions, and organizations working on AI agent coordination, trust infrastructure, and related areas. Rolling Opportunity Seeds are open to individual research teams. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates a £50,000,000 total programme, with Opportunity Seeds of up to £500,000 per project. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current target date is March 24, 2026. Build your timeline backwards from this date to cover registrations, approvals, attachments, and final submission checks.
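The backward-planning advice above can be sketched as a quick date calculation. This is a minimal example assuming the March 24, 2026 target date from the listing; the milestone names and lead times are illustrative placeholders, not sponsor requirements.

```python
from datetime import date, timedelta

# Target date taken from the listing; verify against the official notice.
deadline = date(2026, 3, 24)

# Illustrative lead times (days before the deadline) -- placeholders only,
# not sponsor-mandated milestones. Adjust to your own process.
milestones = [
    ("Drafting begins", 90),
    ("Registrations complete", 45),
    ("Internal approvals", 21),
    ("Attachments finalized", 7),
    ("Final submission checks", 2),
]

# Work backwards: subtract each lead time from the deadline.
for task, lead_days in milestones:
    print(f"{(deadline - timedelta(days=lead_days)).isoformat()}: {task}")
```

Swapping in the real registration and approval lead times for your institution turns this into a working submission checklist.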
Federal grant success rates typically range from 10–30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help you research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3–6 months from submission to award notification. Foundation grants may be faster, often 1–3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
Scaling Trust is a funding programme from the Advanced Research and Invention Agency (ARIA), a UK government R&D funding body established to unlock scientific and technological breakthroughs. The programme funds research and development focused on trust and trustworthiness in emerging technologies. ARIA supports work from the full R&D ecosystem — including startups, universities, and research organisations — and actively encourages cross-disciplinary and cross-sector collaboration. Eligible applicants are UK-based researchers and organisations. ARIA operates as an open funding agency, reaching across disciplines and institutions to break down research silos and discover new pathways at the frontier of what is scientifically or technologically possible.
ARIA's Scaling Trust programme is a £50 million (~$63 million) initiative to create the capability for AI agents to securely coordinate, negotiate, and verify with one another on behalf of humans. In Phase 1, the programme seeks to fund teams developing open-source coordination infrastructure and performing fundamental research that advances from empirical to theory-driven guarantees in agentic coordination. The opportunity space is titled 'Trust Everything, Everywhere' and is led by Programme Director Alex Obadia. Rather than running conventional open-ended grant competitions, ARIA's Programme Directors define bold scientific missions and build tailored cohorts of researchers and innovators to deliver them. The programme addresses the critical challenge of enabling trustworthy multi-agent AI systems that can autonomously coordinate while maintaining security and verifiability guarantees. Research areas include cryptographic protocols for agent verification, game-theoretic coordination mechanisms, formal methods for trust guarantees, and open-source infrastructure for agent-to-agent communication.