Compute for AI Safety is sponsored by various organizations. The program provides compute grants specifically for AI safety research, offering GPU credits, cloud computing access, or dedicated hardware for alignment experiments, interpretability research, and safety evaluations.
Compute for AI Safety Compute Grant | AI Safety Directory
Last updated: March 29, 2026

Several organizations provide compute grants specifically for AI safety research, recognizing that access to computational resources is a key bottleneck for safety researchers. These grants provide GPU credits, cloud computing access, or dedicated hardware for alignment experiments, interpretability research, and safety evaluations.
Programs include offerings from cloud providers, AI labs, and safety-focused organizations. Eligible applicants include nonprofit research organizations; topics span alignment, interpretability, evaluation, and red teaming.

Access to computational resources is one of the most significant bottlenecks limiting AI safety research.
Several organizations have recognized this challenge and established programs specifically providing compute grants — including GPU credits, cloud computing access, and dedicated hardware — for AI safety researchers.
These programs are critical for enabling alignment experiments, interpretability analysis, red-teaming evaluations, and safety benchmarking on large-scale models that would otherwise be prohibitively expensive for academic and independent researchers.
Compute grant programs for AI safety are offered by major cloud providers including Google Cloud, Amazon Web Services, and Microsoft Azure, as well as by hardware manufacturers like NVIDIA and safety-focused organizations like the Center for AI Safety. These programs collectively provide millions of dollars worth of computing resources annually to the safety research community.
By removing the compute bottleneck, these programs enable researchers to conduct experiments at scales relevant to understanding and improving the safety of frontier AI systems. Application processes vary by provider, but most accept proposals through the sponsoring organization's website.
Applications typically require a description of the safety research to be conducted, the computational resources needed, and the expected research outputs. Applicants should clearly explain why significant compute is necessary for their specific safety research question and how the requested resources will be used.
Most compute grant programs accept applications on a rolling basis, though some operate on quarterly or annual review cycles. Researchers can apply to multiple programs simultaneously to maximize their available resources. Applications should include realistic estimates of compute requirements, including the types of GPUs or TPUs needed, expected hours of usage, and the models or datasets to be used.
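The budgeting step above can be sketched as a simple cost estimate. The GPU types, hourly rates, and experiment hours below are illustrative placeholders (not quotes from any provider's price list), but the structure — cost per experiment plus a contingency margin for failed runs — mirrors what reviewers typically expect to see justified.

```python
# Hypothetical compute-budget sketch for a grant application.
# All rates and hours below are assumed example values.

GPU_RATES_USD_PER_HOUR = {  # assumed on-demand cloud rates
    "A100-80GB": 3.70,
    "H100-80GB": 9.00,
}

experiments = [
    # (name, gpu_type, num_gpus, wall-clock hours)
    ("interpretability sweep", "A100-80GB", 8, 120),
    ("adversarial robustness eval", "A100-80GB", 4, 200),
    ("cross-scale alignment eval", "H100-80GB", 8, 60),
]

def experiment_cost(gpu_type, num_gpus, hours):
    """Cost of one experiment in USD at the assumed hourly rate."""
    return GPU_RATES_USD_PER_HOUR[gpu_type] * num_gpus * hours

total = 0.0
for name, gpu_type, num_gpus, hours in experiments:
    cost = experiment_cost(gpu_type, num_gpus, hours)
    total += cost
    print(f"{name}: {num_gpus}x {gpu_type} for {hours} h -> ${cost:,.0f}")

# Headroom for failed runs, debugging, and hyperparameter retries.
CONTINGENCY = 0.20
print(f"Total with {CONTINGENCY:.0%} contingency: ${total * (1 + CONTINGENCY):,.0f}")
```

Itemizing per-experiment costs this way also makes it easy to show reviewers how the allocation would be prioritized if only part of the request were funded.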
Programs typically award credits valid for six to twelve months.

What Makes a Strong Application

Strong compute grant applications clearly articulate why large-scale computation is essential for the proposed safety research.
Applications that describe specific experiments requiring significant compute — such as interpretability analysis of large models, adversarial robustness testing at scale, or alignment evaluation across model sizes — are most compelling. Applicants should demonstrate familiarity with the computing infrastructure they are requesting.
Applications that would produce publicly available results, tools, or benchmarks are generally prioritized, as compute grants aim to maximize the public benefit of the provided resources. Researchers with a track record of productive compute usage and published safety research are competitive.
Clear plans for efficient resource utilization, including estimated costs per experiment and prioritization of compute allocation, strengthen applications.

Frequently Asked Questions

Can independent researchers without institutional affiliation access compute grants?

Some programs accept independent researchers, though many require affiliation with an academic institution or nonprofit organization.
Researchers without institutional affiliation can often work through fiscal sponsors. Check specific program eligibility requirements carefully.

How much compute do safety research grants typically provide?
Compute grants range from $5,000 to $100,000 or more in cloud credits, depending on the program and the research needs. Some programs provide direct access to GPU hardware rather than cloud credits. The amounts are typically sufficient for meaningful safety experiments.
Can I combine compute grants from multiple providers?

Yes. Researchers frequently combine compute grants from different providers to meet their research needs.
There is generally no restriction on receiving compute credits from multiple sources simultaneously. Some researchers use different providers for different aspects of their research.

Related Programs

Center for AI Safety Research Grants: CAIS grants for technical safety research, governance, and AI risk reduction initiatives.
Open Philanthropy AI Safety Research Grants: Major funder of AI safety research supporting alignment, governance, and technical safety work globally.
Machine Intelligence Research Institute: Nonprofit conducting foundational mathematical research on AI alignment and safety.
Alignment Research Center: Nonprofit researching AI alignment techniques, including eliciting latent knowledge and scalable oversight.

Related Organizations and Tools

An AI safety company building reliable, interpretable AI systems and the Claude family of AI assistants.
An AI research and deployment company working on safe and beneficial artificial general intelligence.
An open-source LLM vulnerability scanner that probes AI models for prompt injection, toxicity, and other weaknesses.
Microsoft's open-source CLI tool for security assessment of machine learning models.
Microsoft's open-source Python framework for red-teaming and risk identification in generative AI systems.
An open-source framework for evaluating, testing, and red-teaming LLM prompts and applications.
Based on current listing details, eligibility includes: AI safety researchers, academic institutions, nonprofit research organizations. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates that amounts vary. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current target date is rolling, with periodic funding windows. Build your timeline backwards from this date to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
Other Opportunities

NIH Research Project Grant (Parent R01 Clinical Trial Not Allowed) is sponsored by the National Institutes of Health (NIH), with various Institutes and Centers participating, including NIAMS, NINDS, and NCCIH. The R01 grant mechanism supports investigator-initiated research projects. Many NIH institutes, including NIAMS and NINDS, welcome pain research applications through their parent R01 announcements. This can include cellular and molecular research on pain mechanisms, CNS and PNS mechanisms of pain perception and modulation, and development of animal models of pain relevant to chronic musculoskeletal pain and central sensitization.
The Displaced Livelihoods Initiative (DLI) Round V is the fifth and final funding round supporting research to improve livelihoods for displaced populations and host communities globally. Grants range from $4,000 to $250,000 across multiple categories, including exploratory research, impact evaluations, and evidence use. This round prioritizes inclusion, especially refugee-led organizations and researchers with lived experience of displacement. Research must align with priority areas such as coping strategies, small business development, income-generating activities, gender roles, and relationships between displaced populations and host communities. Mandatory Expressions of Interest are due May 15, 2026.
2026 Pride Fund for LGBTQIA+ Business Owners (sponsor not specified in the listing). The 2026 Pride Fund provides microgrants to LGBTQIA+-owned small businesses across the U.S. Grant funds can be used flexibly for expenses such as operations, equipment, staffing, or growth.
Research on Circular Economy, Smart Manufacturing, and Energy-Efficient Microelectronics is sponsored by U.S. Department of Energy (DOE) Advanced Materials & Manufacturing Technologies Office (AMMTO). This funding opportunity supports innovative technology R&D across the manufacturing sector with a focus on circular economy, smart manufacturing, and energy-efficient microelectronics. While the stated deadline for full applications has passed, AMMTO frequently issues similar solicitations, and this highlights a relevant area of interest for the DOE.
America's Seed Fund (SBIR/STTR) - Cybersecurity and Authentication is sponsored by U.S. National Science Foundation (NSF). Supports startups and small businesses to translate research into products and services, including cybersecurity and authentication, to secure national defense and protect the public. Includes research requiring privacy and security-preserving resources for artificial intelligence.