This listing may be outdated. Verify details at the official source before applying.
Schmidt Futures AI Safety Fundamental Research is sponsored by Schmidt Futures. It invests in fundamental AI safety research, with opportunities for educational programs and training on safe AI development.
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
Science of Trustworthy AI - Schmidt Sciences
Advancing the Fundamental Science of Trustworthy AI
The science of trustworthy AI is nascent. Today, many demonstrations of unsafe behavior are isolated, and mitigations do not generalize reliably across model families, deployment contexts, or capability regimes.
Therefore, we aim to support basic technical research that improves our ability to understand, predict, and control risks from frontier AI systems while simultaneously enabling their trustworthy deployment.
Opportunities for Funding: 2026 Science of Trustworthy AI RFP
Every day, AI technology is becoming more consequential. As a result, the potential impact of safety failures is increasingly severe.
We need mature decision-relevant evaluations. Today’s evaluations often fail exactly where we need them most: under distribution shift, long-horizon interaction, tool use, and optimization pressure. Many tests are brittle, highly correlated, or easy to “train to”, and stylized scenarios can be misleading if they do not reflect deployment-like contexts.
We need a rigorous science of evaluation with construct validity, predictive validity, and clear evidence standards for when results justify real-world decisions.
The research capacity needed for trustworthy AI lags the pace of deployment.
Frontier systems are being deployed rapidly, but the infrastructure for trustworthy assessment and oversight is not keeping pace, especially for large, ambitious projects that require substantial compute, multidisciplinary expertise, and time. Accelerating progress will require sustained support for high-ambition, field-shaping research rather than incremental work.
Academics are underleveraged in trustworthy AI research.
Currently, safety research on the largest AI models is conducted primarily by leading AI labs. Yet despite the vast private capital flowing into AI development, commercial incentives for foundational, pre-product safety science are often weaker than incentives for capability and product improvements, especially when benefits are diffuse or long-horizon.
Foundational advances in the science of trustworthy AI therefore function as a global public good, motivating targeted philanthropic support for academic and nonprofit researchers.
Deepen our understanding of safety properties of AI systems
Build a rigorous science of evaluation with construct and predictive validity
Advance trustworthy AI approaches resistant to obsolescence from fast-evolving technology
Support a global community of researchers advancing the science of trustworthy AI
Our research agenda organizes priorities around three connected aims:
Characterize and forecast misalignment in frontier AI systems: understand why frontier AI training-and-deployment safety stacks still result in models learning effective goals that fail under distribution shift, pressure, or extended interaction.
Develop generalizable measurement and intervention: advance the science of evaluations with decision-relevant construct and predictive validity, and develop interventions that control what AI systems learn (not just what they say).
Oversee AI systems with superhuman capabilities and address multi-agent risks: develop oversight and control methods for settings where direct human evaluation of correctness or safety isn't feasible, and address risks that emerge from interacting AI systems.
Understanding Safety in AI Systems
Inference-time Compute RFP
Dr. Sanjeev Arora, Princeton University
Dr. Eugene Bagdasarian & Dr. Shlomo Zilberstein, University of Massachusetts Amherst
Dr. Yoshua Bengio, Mila - Quebec Artificial Intelligence Institute
Dr. Nicolas Flammarion, EPFL Swiss Federal Technology Institute of Lausanne
Dr. Adam Gleave and Kellin Pelrine, FAR.AI, and Dr. Thomas Costello, American University and MIT
Dr. Tatsu Hashimoto, Stanford University
Dr. Matthias Hein, University of Tübingen and Dr. Jonas Geiping, ELLIS Institute Tübingen
Dr. Zhijing Jin, University of Toronto & Dr. Mrinmaya Sachan, ETH Zürich
Dr. Daniel Kang, University of Illinois Urbana-Champaign
Dr. Mykel Kochenderfer, Stanford University
Dr. Zico Kolter, Carnegie Mellon University
Dr. Sanmi Koyejo, Stanford University
Dr. David Krueger, University of Cambridge
Dr. Anna Leshinskaya, University of California-Irvine
Dr. Bo Li, University of Illinois Urbana-Champaign
Dr. Sharon Li, University of Wisconsin-Madison
Dr. Evan Miyazono and Alexandre Rademaker, Atlas Computing
Dr. Karthik Narasimhan, Princeton University
Dr. Arvind Narayanan, Princeton University
Dr. Aditi Raghunathan, Dr. Aviral Kumar and Dr. Andrea Bajcsy, Carnegie Mellon University
Dr. Maarten Sap and Dr. Graham Neubig, Carnegie Mellon University
Dr. Dawn Song, University of California-Berkeley
Dr. Huan Sun, Dr. Yu Su and Dr. Zhiqiang Lin, The Ohio State University
Dr. Florian Tramèr, ETH Zurich
Dr. Ziang Xiao, Johns Hopkins University and Dr. Susu Zhang, University of Illinois Urbana-Champaign
Quantifying and Mitigating Privacy Risks in Multi-Agent Systems
Tracing and Eliminating Harmful Capabilities Across Model Generations
A Dual-Framework for Grounding AI Reasoning: Measuring Faithfulness via Code Execution and Debugging
Causally Grounded Inference-Time Intervention for Robust Model Alignment
Emerging Risks from Inference-Time Compute
Evaluating Preparedness via Model Organism Spectrum
Faithfulness and Safety of Latent Chain-of-Thought Reasoning
Fundamental Limitations of the Test Time Compute Paradigm
Generalization and Hidden Tendencies in LLMs
Improving the Legibility of Inference-Time Reasoning for Scalable Oversight
INSPECT: Inference-time Safety for Physically Embodied Chain-of-Thought in Robotics
Investigating Hawthorne Effects in Large Reasoning Models
Large Language Model Safety in Inference-Time Motivated Reasoning
Preventing Reward Hacking at Inference-Time with Reinforcement Learning
Quantifying, Attributing, and Improving Faithfulness in Chains of Thought
Reasoning with Foresight: Safe Inference via Q-Value Guided Decoding
Safely Confident Reasoning
Safety in RL-Enabled Goal-Directed Agents
Toward Safe Reasoning: Test-Time Unlearning of Sensitive Knowledge Beyond Final Answers
Utility Engineering and Moral Scaffolding for Safer Reasoning Models
Will Chain-of-Thought Monitoring Significantly Improve Safety?
Reasoning Up the Instruction Ladder: Inference-Time Deliberation for Safer AI
Current Advisory Board Members
Associate Professor of Computer Science
Technical Staff at OpenAI
Senior Program Officer at Open Philanthropy
Fund Manager, AI Safety Tactical Opportunities Fund
More AI programs and initiatives: Science of Trustworthy AI; Accelerate Programme for Scientific Discovery
Scoring criteria used to review proposals for this grant.
Based on current listing details, eligibility includes academic researchers, nonprofits, and think tanks. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates $500,000 - $10,000,000. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current listing indicates rolling deadlines or periodic funding windows. Build your timeline backwards from your target submission date to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10% to 30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help you research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.