Opens Feb 18, 2026; Deadline May 17, 2026, 11:59pm AoE; Notification Summer 2026
The Science of Trustworthy AI RFP is sponsored by Schmidt Sciences, which invites proposals for technical research to understand, predict, and control risks from frontier AI systems. Two funding tiers are available: Tier 1 (up to $1M, 1-3 years) and Tier 2 ($1M-$5M+, 1-3 years).
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
2026 Science of Trustworthy AI RFP - Schmidt Sciences
Opens: Feb 18, 2026, 07:00 AM (EST)
Deadline: May 17, 2026, 11:59 PM (EDT)

Request for Proposals: Science of Trustworthy AI
Due: May 17, 2026, by 11:59pm AoE
Tier 1: Up to $1M (1-3 years)
Tier 2: $1M-$5M+ (1-3 years)
Webinar: March 11, 2026, 10-11am ET. View webinar here
April 15, 2026, 2-3pm ET. Register here
Contact: trustworthyai@schmidtsciences.org

Schmidt Sciences invites proposals for the Science of Trustworthy AI program, which supports technical research that improves our ability to understand, predict, and control risks from frontier AI systems while enabling their trustworthy deployment. This Request for Proposals is grounded in our Research Agenda, which defines the scientific scope and priorities.
The questions in each subsection guide what we consider in scope; they are not an exhaustive checklist. Proposals need not match any question(s) verbatim, but should clearly advance the underlying scientific objectives of our research agenda and explain why the work advances the science of trustworthy AI.
We expect strong proposals—especially at funding Tier 2—to take a clear stand on a small number of core questions and pursue them deeply, rather than addressing many agenda items superficially.
The research agenda has three connected aims:
Aim 1: Characterize and forecast misalignment in frontier AI systems. Understand why frontier AI training-and-deployment safety stacks still result in models learning effective goals that fail under distribution shift, pressure, or extended interaction.
Aim 2: Develop generalizable measurements and interventions. Advance the science of evaluations with decision-relevant construct and predictive validity, and develop interventions that control what AI systems learn (not just what they say).
Aim 3: Oversee AI systems with superhuman capabilities and address multi-agent risks. Extend oversight and control to regimes where humans cannot directly evaluate correctness or safety, and address risks that arise from interacting AI systems. Preference will be given to proposals from collaborations among multiple PIs and labs.
For Aim 3, we are considering grouping projects together to accelerate empirical progress on effective superhuman oversight. More broadly, we encourage collaboration across this agenda and expect to support shared compute and targeted convenings where helpful. We invite applicants to apply to either or both funding tiers.
Applicants may submit more than one proposal to each tier.
Tier 1: Up to $1M (1-3 years)
Tier 2: $1M-$5M+ (1-3 years)
Although we expect to fund projects at both tiers, we are most interested in ambitious Tier 2 proposals that, if successful, would change what the field believes is possible for understanding, measuring, or controlling risks from frontier AI systems.
Schmidt Sciences aims to support the compute needs of ambitious and risky AI research. Applicants may request either funding for compute or access to Schmidt Sciences' computing resources (subject to availability and terms). The computing resources offer access to cutting-edge GPUs and CPUs, accompanied by large-scale data storage and high-speed networking.
Please see the application form for more information.
Beyond compute, Schmidt Sciences offers a range of support:
- Software engineering support through the Virtual Institute for Scientific Software
- API credits with frontier model providers
- Opportunities to engage with the program's community through convenings and workshops

We invite individual researchers, research teams, research institutions, and multi-institution collaborations across universities, national laboratories, institutes, and non-profit research organizations.
We are open globally and encourage collaborations across geographic boundaries. Indirect costs must be at or below 10% to comply with our policy. Proposals will be evaluated holistically by Schmidt Sciences staff and external reviewers.
Key considerations include:
Research Agenda Fit. Does the proposal clearly engage with the intention behind the scientific questions and objectives in the research agenda?
Scientific Quality and Rigor. Is the proposed work technically sound, well-motivated, and capable of producing generalizable insight?
Potential Impact. If successful, would the work materially advance the science of trustworthy AI, and is there a plausible argument that it will meaningfully reduce risks from frontier AI systems (ideally through ambitious, field-shaping contributions)?
Feasibility and Scope. Is the project appropriately scoped for the requested budget and duration?
Team Expertise. Is the team well-suited to execute the proposed work, with relevant technical expertise, sufficient capacity, and a level of time commitment commensurate with the ambition of the project?
Cost Effectiveness. Is the proposed budget reasonable and well-justified given the project's goals and planned activities?
Tiers 1 and 2 have the same selection criteria, with a higher bar for Tier 2 projects. For Tier 2, priority will be given to projects that are demonstrably a primary focus for the lead investigator(s).
Common reasons proposals are non-competitive:
- Proposals lack a core focus
- Proposals suggest tools, benchmarks, or evaluations without a credible validity argument (e.g., construct validity, predictive validity, robustness under optimization pressure)
- Proposals describe vague methods ("we will explore...") instead of concrete activities, experiments, analyses, and baselines
- Proposals do not state clearly what would happen if the project succeeds, and what we learn if it fails
Portal login or registration may be required to access the full application.
Key questions and narrative sections extracted from the solicitation.
Research Agenda Fit: Does the proposal clearly engage with the scientific questions and objectives in the research agenda?
Scientific Quality and Rigor: Is the proposed work technically sound, well-motivated, and capable of producing generalizable insight?
Potential Impact: Would it materially advance the science of trustworthy AI and reduce risks from frontier AI systems?
Feasibility and Scope: Is the project appropriately scoped for the requested budget and duration?
Team Expertise: Is the team well-suited to execute the proposed work?
Cost Effectiveness: Is the proposed budget reasonable and well-justified?
Scoring criteria used to review proposals for this grant.
Based on current listing details, eligibility includes: Individual researchers, research teams, research institutions, and multi-institution collaborations across universities, national laboratories, institutes, and non-profit research organizations. Open globally. Indirect costs must be at or below 10%. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates Tier 1: up to $1M; Tier 2: $1M-$5M+ (1-3 years). Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current target date is May 17, 2026. Build your timeline backwards from this date to cover registrations, approvals, attachments, and final submission checks.
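As a rough sketch, that backward planning can be scripted; the milestones and day offsets below are illustrative assumptions, not sponsor requirements:

```python
from datetime import date, timedelta

# Deadline from the listing; milestone offsets are illustrative assumptions.
deadline = date(2026, 5, 17)  # 11:59pm AoE per the RFP
milestones = {
    "Kickoff with co-PIs and draft outline": 60,  # days before deadline
    "Portal registration confirmed": 30,
    "Budget and indirect-cost sign-off": 21,
    "Full draft to internal review": 14,
    "Final submission checks": 2,
}
# Print milestones in chronological order, earliest first.
for task, days_before in sorted(milestones.items(), key=lambda kv: -kv[1]):
    print(f"{deadline - timedelta(days=days_before)}  {task}")
```

Adjust the offsets to match your institution's internal review and approval lead times.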
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
Schmidt Sciences' Unconventional Compute RFP funds research on hardware fundamentally different from conventional CPUs and GPUs, co-designed with training and inference methods suited for their constraints. Open to universities and nonprofit research institutions globally, this pilot program seeks evidence that unconventional hardware can solve real-world problems beyond benchmark metrics. Track I awards $50,000-$150,000 for 6-12 month projects; Track II awards $150,000-$750,000 for 12-18 months. The lightweight application requires only a short online form and a five-page project narrative. Deadline is April 30, 2026.
Schmidt Sciences' Science of Trustworthy AI RFP supports technical research aimed at improving understanding, prediction, and control of risks from advanced AI systems while enabling their safe deployment. The program funds research across three core aims: (1) characterizing and forecasting misalignment in frontier AI systems, (2) developing generalizable measurements and interventions for AI safety, and (3) overseeing superhuman-capability AI systems and addressing multi-agent risks. Tier 1 awards provide up to $1 million over 1-3 years, while Tier 2 awards range from $1 million to $5 million or more over 1-3 years. Schmidt Sciences also offers compute access, software engineering support through the Virtual Institute for Scientific Software, API credits with frontier model providers, and community engagement opportunities for funded researchers.