1,000+ Opportunities
Find the right grant
Search federal, foundation, and corporate grants with AI — or browse by agency, topic, and state.
The AI Science Safety Program is a grant from Schmidt Sciences that funds foundational technical research aimed at improving the ability to understand, predict, and control risks from frontier AI systems while enabling their trustworthy deployment.
The 2026 Science of Trustworthy AI Request for Proposals supports high-ambition, field-shaping research in areas including rigorous AI evaluation methodologies with construct and predictive validity, safety under distribution shift and long-horizon interaction, and oversight infrastructure for large-scale AI systems.
The program specifically targets safety-research gaps where academic researchers are underleveraged relative to industry labs. Eligible applicants include research institutions and organizations. Schmidt Sciences seeks to accelerate progress on safety research that keeps pace with the rapid deployment of frontier AI systems.
Get alerted about grants like this
Save a search for “Schmidt Sciences” or related topics and get emailed when new opportunities appear.
Search similar grants →

Extracted from the official opportunity page/RFP to help you evaluate fit faster.
Science of Trustworthy AI

Advancing the Fundamental Science of Trustworthy AI

The science of trustworthy AI is nascent. Today, many demonstrations of unsafe behavior are isolated, and mitigations do not generalize reliably across model families, deployment contexts, or capability regimes.
Therefore, we aim to support basic technical research that improves our ability to understand, predict, and control risks from frontier AI systems while simultaneously enabling their trustworthy deployment.

Opportunities for Funding: 2026 Science of Trustworthy AI RFP

Every day, AI technology is becoming more consequential. As a result, safety failures are potentially incredibly harmful.
We need mature decision-relevant evaluations.

Today's evaluations often fail exactly where we need them most: under distribution shift, long-horizon interaction, tool use, and optimization pressure. Many tests are brittle, highly correlated, or easy to “train to”, and stylized scenarios can be misleading if they do not reflect deployment-like contexts.
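To make that failure mode concrete, here is a minimal, self-contained sketch. It is our illustration, not material from the RFP, and every name and probability in it is invented: a toy "model" that learned to predict from a spurious cue scores well on an in-distribution test set, then collapses once the cue decorrelates from the label.

```python
# Toy illustration (not from the RFP): a model that latched onto a spurious
# cue looks reliable on an in-distribution test set but fails under a mild
# distribution shift. All names and numbers here are invented.
import random

random.seed(0)

def make_dataset(n, spurious_agreement):
    """Each example is (core_signal, spurious_cue, label); the spurious cue
    matches the label with probability `spurious_agreement`."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        core = label if random.random() < 0.80 else 1 - label  # genuine signal
        cue = label if random.random() < spurious_agreement else 1 - label
        data.append((core, cue, label))
    return data

def spurious_model(example):
    """A 'model' that learned to predict from the spurious cue alone."""
    _, cue, _ = example
    return cue

def accuracy(model, data):
    return sum(model(ex) == ex[2] for ex in data) / len(data)

in_dist = make_dataset(10_000, spurious_agreement=0.95)  # cue tracks the label
shifted = make_dataset(10_000, spurious_agreement=0.50)  # correlation broken

print(f"in-distribution accuracy: {accuracy(spurious_model, in_dist):.2f}")  # ~0.95
print(f"post-shift accuracy:      {accuracy(spurious_model, shifted):.2f}")  # ~0.50
```

An evaluation run only on the in-distribution split would report roughly 95% accuracy and miss the failure entirely, which is exactly the brittleness described above.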
We need a rigorous science of evaluation with construct validity, predictive validity, and clear evidence standards for when results justify real-world decisions.
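As a toy illustration of predictive validity (again ours, not the program's): one operational question is whether a benchmark's ranking of models tracks a deployment outcome that actually matters. The benchmark scores and incident rates below are fabricated, and the rank correlation is computed from scratch so the sketch has no dependencies.

```python
# Toy illustration (not from the RFP) of predictive validity: does a
# benchmark's ranking of models predict a deployment-relevant outcome?
# All model scores and incident rates below are fabricated.

def ranks(xs):
    """Map each value to its rank (1 = smallest); assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation for tie-free data."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# Hypothetical benchmark scores for five models (higher = better)...
benchmark = [71.2, 74.8, 69.5, 80.1, 77.3]
# ...and the outcome we actually care about: observed safety incidents
# per 10k deployment sessions (lower = better).
incident_rate = [4.1, 3.2, 5.0, 3.9, 2.6]

# Negate incident rates so that higher = safer on both axes.
rho = spearman(benchmark, [-x for x in incident_rate])
print(f"rank correlation, benchmark vs. deployment safety: {rho:+.2f}")
```

On these fabricated numbers the correlation comes out around +0.70: informative but far from decisive, which is the gap between "scores well on a benchmark" and "justifies a real-world decision".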
The research capacity needed for trustworthy AI lags the pace of deployment.

Frontier systems are being deployed rapidly, but the infrastructure for trustworthy assessment and oversight is not keeping pace—especially for large, ambitious projects that require substantial compute, multidisciplinary expertise, and time. Accelerating progress will require sustained support for high-ambition, field-shaping research rather than incremental work.

Academics are underleveraged in trustworthy AI research.
Currently, safety research for the largest AI models is primarily conducted by leading AI labs. Yet despite vast private capital flowing into AI development, commercial incentives for foundational, pre-product safety science are often weaker than incentives for capability and product improvements, especially when benefits are diffuse or long-horizon.
Foundational advances in the science of trustworthy AI therefore function as a global public good, motivating targeted philanthropic support for academic and nonprofit researchers.
The program's goals are to:

- Deepen our understanding of safety properties of AI systems
- Build a rigorous science of evaluation with construct and predictive validity
- Advance trustworthy AI approaches resistant to obsolescence from fast-evolving technology
- Support a global community of researchers advancing the science of trustworthy AI

Our research agenda organizes priorities around three connected aims:

1. Characterize and forecast misalignment in frontier AI systems: understand why frontier AI training-and-deployment safety stacks still result in models learning effective goals that fail under distribution shift, pressure, or extended interaction.

2. Develop generalizable measurement and intervention: advance the science of evaluations with decision-relevant construct and predictive validity, and develop interventions that control what AI systems learn (not just what they say).

3. Oversee AI systems with superhuman capabilities and address multi-agent risks: develop oversight and control methods for settings where direct human evaluation of correctness or safety isn't feasible, and address risks that emerge from interacting AI systems (a toy sketch of one such oversight pattern follows this list).
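For flavor only, here is one minimal pattern that such oversight research studies, loosely in the spirit of trusted-monitoring work from the AI-control literature: an untrusted agent proposes actions, a cheaper trusted monitor scores how suspicious each one looks, and only the most suspicious actions are escalated for expensive review. This sketch is ours, not a method prescribed by Schmidt Sciences; every function, keyword list, and threshold is a hypothetical stand-in.

```python
# Illustrative sketch (not prescribed by the program): oversee an untrusted
# agent by scoring each proposed action with a cheap trusted monitor and
# escalating only suspicious actions for expensive review. Every function,
# marker list, and threshold here is a hypothetical stand-in.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str

def untrusted_agent(task: str) -> Action:
    # Stand-in for a frontier model proposing an action for the task.
    return Action(description=f"run migration script for: {task}")

def trusted_monitor(action: Action) -> float:
    # Stand-in for a weaker-but-trusted model returning a suspicion score
    # in [0, 1]; here, a crude keyword heuristic.
    risky_markers = ("rm -rf", "exfiltrate", "disable logging", "migration")
    hits = sum(marker in action.description for marker in risky_markers)
    return min(1.0, hits / 2)

def expensive_audit(action: Action) -> bool:
    # Stand-in for costly review (humans, debate, cross-examination, ...).
    print(f"AUDIT requested for: {action.description!r}")
    return True  # pretend the audit approved the action

def oversee(task: str, audit_threshold: float = 0.4) -> Optional[Action]:
    action = untrusted_agent(task)
    if trusted_monitor(action) >= audit_threshold and not expensive_audit(action):
        return None  # blocked
    return action

print(oversee("quarterly database cleanup"))
```

The hard research questions are exactly what this sketch stubs out: how robust the monitor is against an adversarial agent, how to set the audit budget, and what escalation should mean when humans cannot directly judge correctness either.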
Understanding Safety in AI Systems

- Tests of Compositional Generalization as an “Upper Bound” on AI Safety Risks
- Unlocking Multi-Agent System Safety with Dynamic Islands of Trust
- LLM-Derived Guardrail for Frontier Models as a Step Towards a Cautious Scientist AI
- Robust LLM-based Scoring of Agent Alignment
- Deception & Misinformation: Elicitation and Testing
- A Meta-analysis Approach to Understanding LM Capabilities
- Long-Term Safety Behavior of Agents
- Mechanistic Interpretability to Detect Test Set Contamination
- Benchmarks for AI Agents and Cybersecurity
- Adaptive Stress Testing for Automated Unsupervised Large Language Model Testing and Failure Classification
- Understanding the Mechanism of Adversarial Transfer Across AI Models
- Beyond Simple Scaling: A Multi-Dimensional Family of Scaling Laws for Interventions in Pre-Training and Model Safety Optimization
- What Counts as Contamination? How Generalization Could Confound Evaluation
- Mechanistic Cognitive Alignment for Morally Guided Decision-Making
- Multiagent-Based T&E Environment Construction
- A Conformal Safety Assurance Framework for Large Language Models
- Formal Verification of Software Robustness and Controllability of Language Model-Based Agents Built for Automating Tasks in Software Engineering
- Five Nines of Reliability for AI Agents
- Multi-Agent AI Safety via Dynamic Games
- OpenAgentSafety: Measuring and Mitigating Safety Harms of LLM-based AI Agent Interactions
- Exploring AI Safety in Free-Form, Evolutionary Multi-Agent Systems
- Evaluating and Defending Multimodal Computer-Use Agents under Adversarial Attacks with a Realistic Online Environment
- Formalizing Prompt Injection Defenses
- BenchCraft: An AI-powered Toolkit to Create Valid and Efficient Benchmarks with Measurement Theory and Human-AI Collaboration
- Quantifying and Mitigating Privacy Risks in Multi-Agent Systems
- Tracing and Eliminating Harmful Capabilities Across Model Generations
- A Dual-Framework for Grounding AI Reasoning: Measuring Faithfulness via Code Execution and Debugging
- Causally Grounded Inference-Time Intervention for Robust Model Alignment
- Emerging Risks from Inference-Time Compute
- Evaluating Preparedness via Model Organism Spectrum
- Faithfulness and Safety of Latent Chain-of-Thought Reasoning
- Fundamental Limitations of the Test Time Compute Paradigm
- Generalization and Hidden Tendencies in LLMs
- Improving the Legibility of Inference-Time Reasoning for Scalable Oversight
- INSPECT: Inference-time Safety for Physically Embodied Chain-of-Thought in Robotics
- Investigating Hawthorne Effects in Large Reasoning Models
- Large Language Model Safety in Inference-Time Motivated Reasoning
- Preventing Reward Hacking at Inference-Time with Reinforcement Learning
- Quantifying, Attributing, and Improving Faithfulness in Chains of Thought
- Reasoning with Foresight: Safe Inference via Q-Value Guided Decoding
- Safely Confident Reasoning
- Safety in RL-Enabled Goal-Directed Agents
- Toward Safe Reasoning: Test-Time Unlearning of Sensitive Knowledge Beyond Final Answers
- Utility Engineering and Moral Scaffolding for Safer Reasoning Models
- Will Chain-of-Thought Monitoring Significantly Improve Safety?
- Reasoning Up the Instruction Ladder: Inference-Time Deliberation for Safer AI

Current Advisory Board Members

- Associate Professor of Computer Science
- Technical Staff at OpenAI
- Senior Program Officer at Open Philanthropy
- Fund Manager, AI Safety Tactical Opportunities Fund
Based on current listing details, eligibility includes: research institutions and organizations. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates that funding amounts vary based on project scope and sponsor guidance. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
This opportunity currently uses rolling deadlines or periodic funding windows rather than a single due date. Build your timeline backwards from your intended submission window to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help you research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
The Academic Grant Program is sponsored by NVIDIA. It seeks proposals from full-time faculty members at accredited academic institutions using NVIDIA technology to advance work in Simulation and Modeling, Data Science, and Robotics and Edge AI. Proposals for the NVIDIA Graduate Fellowship Program are also invited, focusing on AI, robotics, and autonomous vehicles.
The Public Diplomacy Section (PDS) is pleased to invite eligible applicants to submit program ideas to implement the American Cybersecurity Enhancement Program (ACEP) for Thai Entrepreneurs. PDS Bangkok prioritizes selecting the best-qualified proposal from applicants that show clear alignment with and capability to advance shared goals and U.S. government priorities and interests, highlighting U.S. innovation, entrepreneurship, and leadership. Applicants must demonstrate their intent to effectively and efficiently administer U.S. government funds in a way that strengthens the bilateral relationship between the United States and Thailand. This notice is subject to the availability of funding.

Goal: The ACEP aims to introduce and leverage American technology, innovation, and standards to improve cybersecurity systems and create a more secure and safer digital environment in Thailand, thereby strengthening the partnership between Thailand and the United States. This program will assist and prepare Thai entrepreneurs in mitigating the risks and damages of cyberattacks, stolen data, and financial losses.

Objectives: The ACEP focuses on enhancing Thai entrepreneurs' knowledge and skills in cybersecurity and introducing more secure systems by learning from American approaches and companies. This program also creates opportunities for Thai businesses to gain firsthand experience in implementing advanced cybersecurity measures, and it will encourage and create favorable conditions for U.S. business and economic partnership in Thailand.

Target Audience: 45-60 beginning to mid-level entrepreneurs and SMEs that have been in business for 1 to 5 years with an interest in improving data safeguarding and cybersecurity systems. Proposed program activities should demonstrate strong ties to U.S. expertise, technology, and companies; this can include partnerships with U.S. organizations, the involvement of U.S. experts in the project, or collaboration with U.S. businesses.

Funding Opportunity Number: OFOP0001959. Assistance Listing: 19.040. Funding Instrument: CA. Category: O. Award Amount: $35K – $60K per award.
NVIDIA Graduate Fellowship Program is a grant from NVIDIA providing up to $60,000 per award to PhD students conducting research that advances accelerated computing and its applications. Now in its 25th year, the program invites nominations from doctoral students pushing the boundaries of artificial intelligence, robotics, autonomous vehicles, and related fields. Recipients receive not only research funding but also access to NVIDIA technology, products, and engineering expertise, along with a mandatory in-person summer internship. Students are nominated by their faculty advisors and selected based on academic achievement and research area alignment.