No specific deadline is listed on the page. One call, "Risks from internal deployment of frontier AI models", is currently in progress; previous calls are closed.
Foundational Research Grants (FRG) is sponsored by Center for Security and Emerging Technology (CSET). FRG supports the exploration of foundational technical topics related to the potential national security implications of AI over the long term.
Current areas of interest include AI assurance for general-purpose systems in open-ended domains, technical tools for external scrutiny of AI, frontier AI risks and regulations, and AI security and nonproliferation.
Foundational Research Grants (FRG) supports the exploration of foundational technical topics that relate to the potential national security implications of AI over the long term. In contrast to most CSET research, which is performed in-house by our team of fellows and analysts, FRG funds external projects by technical teams.
The program aims to advance our understanding of underlying technical issues to shed light on questions of interest from a strategic or policy perspective. Current areas of interest include:
- AI assurance for general-purpose systems in open-ended domains: Machine learning systems are rapidly becoming larger, more complex, more capable, and more general-purpose. Existing assurance approaches for systems with automated or autonomous capabilities do not appear to be well suited to the kinds of large-scale deep learning systems currently being developed and deployed. FRG is interested in whether, and how rapidly, assurance approaches suitable for such systems are likely to be developed, both now and for the long term.
- Technical tools for external scrutiny of AI: As AI's impact on the world grows, so does the need for external scrutiny of privately held AI systems, to ensure that they are being developed and used in safe and ethical ways. But granting access to outsiders has ramifications for the privacy, security, and intellectual property of AI developers. There are early indications that different technical methods, including approaches using privacy-preserving tools and hardware-based features, can help reduce these tensions. FRG is interested in investigating how well these tools can work in practice.
- Frontier AI risks and regulations: The term "frontier AI" has begun to be used to refer to general-purpose AI systems that are at or just beyond the current cutting edge. These systems raise a range of questions and policy challenges, which FRG is interested in exploring.
- AI security and nonproliferation: As AI systems become more capable, it will be important that their developers are able to prevent unauthorized actors from accessing or using them. FRG is interested in supporting work that could make this more feasible.
Grantees include:
- Anthony Corso and Mykel Kochenderfer at Stanford University, for work on the progress and outlook for the reliability of AI systems
- The Python Software Foundation, for work to improve security incident reporting infrastructure for the Python Package Index
- OpenMined, for work supporting the Christchurch Call Initiative on Algorithmic Outcomes
- The Center for AI Safety, for work on measuring and mitigating AI deception
- Peter Henderson at Princeton University, for work on the safety of different model release strategies
- Tim G. J. Rudner at New York University, for work on quantifying uncertainty in large language models
- Percy Liang and Daniel Ho at Stanford University, for work on understanding what type of model access is needed to meaningfully audit frontier models
- The Partnership on AI, for a framework for post-deployment monitoring of AI agents
- Jessica Newman, Deepika Raman, and Anthony Barrett at the UC Berkeley Center for Long-Term Cybersecurity and its AI Security Initiative, for work on defining intolerable risk thresholds for frontier AI systems
Calls for research ideas:
- [In progress] Risks from internal deployment of frontier AI models: Full details [PDF]
- [Closed] Expanding the toolkit for frontier model releases: Full details [PDF]
- [Closed] AI assurance for general-purpose systems in open-ended domains: Full details [PDF]
FRG is directed by Helen Toner, with support from Andrea Guerrero. To learn more, please review this policy.
Based on current listing details, eligibility includes external technical teams. Applicants should confirm final requirements in the official notice before submission.
Current published award information is not specified; amounts vary by project. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
There is no fixed target date; the program uses rolling deadlines or periodic funding windows. Build your timeline backwards from the relevant call's deadline to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.