This listing may be outdated. Verify details at the official source before applying.
The page indicates an annual cycle with rolling applications accepted; no fixed deadline date is shown.
Meta Research Awards: AI Safety and Security is a grant from Meta that funds academic research on building AI systems that are safe, fair, and transparent.
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
Meta Responsible AI Research Awards Grant (Up to $150K · Recurring) | AI Safety Directory
Meta Responsible AI Research Awards
Last updated: March 10, 2026
Deadline: Annual cycle
Geographic scope: Global
Meta's Responsible AI Research Awards fund academic research on building AI systems that are safe, fair, and transparent.
The program supports work on content safety, misinformation detection, algorithmic fairness, and the societal impacts of large language models. Award recipients gain access to Meta's research teams, datasets, and computational resources to advance responsible AI research with real-world applicability.
Tags: content safety, fairness, responsible ai, llm security, evaluation
Related grants
Amazon Responsible AI Grants: Amazon grants for AI fairness, explainability, and responsible deployment research.
Google DeepMind Safety Research: DeepMind-funded external research on alignment, interpretability, and AI system reliability.
Related organizations
Behavioral AI platform protecting against AI-generated phishing, business email compromise, and social engineering.
Independent UK institute ensuring AI and data technologies serve the public interest through ethics and policy research.
AI red-teaming company specializing in adversarial attacks, jailbreaks, and continuous LLM security testing.
Related tools
Adversarial Robustness Toolbox: Open-source library by IBM Research for defending ML models against adversarial attacks.
Adversarial Robustness Evaluation Test: Automated evaluation framework for assessing RAG system reliability, faithfulness, and context relevance.
IBM's open-source toolkit for detecting and mitigating bias in machine learning models with 70+ fairness metrics.
Anthropic's open-source framework for evaluating LLM safety across honesty, harmlessness, and helpfulness.
Real-time AI gateway providing enterprise guardrails for LLM safety, prompt injection detection, and policy enforcement.
Related guides
LLM Guardrails: The Complete Guide to AI Safety Guardrails (2026). Everything you need to know about LLM guardrails: what they are, why they matter, top tools, implementation patterns, and best practices for securing AI systems.
Prompt Injection Attacks: Types, Examples & Defenses. A comprehensive guide to prompt injection attacks: how they work, the different types, real-world examples, and defense strategies for securing LLM applications.
Blue Teaming in AI Security: Strategy, Tools & Best Practices. A complete guide to AI blue teaming, the defensive operations function for monitoring, detecting, and responding to security threats against AI systems.
Based on current listing details, eligibility includes: Academic researchers, university faculty, and research institutions focused on content safety, algorithmic fairness, and transparent AI systems. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates up to $150,000. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current target is rolling deadlines or periodic funding windows rather than a single fixed date. Build your timeline backwards from the relevant window to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10% to 30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
2026 Myopia Research Grant is a grant from the American Academy of Optometry Foundation (AAOF) and Meta Reality Labs Research that funds longitudinal research on myopia development and progression. The program offers three awards of $500,000 each, focused on open-science studies using the VEET device to collect observational visual environment data and examine its relationship to myopia outcomes. Two of the three awards are reserved exclusively for Fellows of the American Academy of Optometry; the third is open to any qualified researcher with an institutional affiliation. For-profit affiliations are not eligible. Institutions must accept direct costs only. Applications for the 2026 cycle are due May 31, 2026.
Meta Research PhD Fellowship 2026 is a competitive fellowship from Meta Research providing two years of paid tuition and fees plus a $42,000 annual stipend to outstanding PhD students conducting research in AI and related fields. Fellows receive an annual total of over $84,000 in support, a paid visit to Meta headquarters for the Fellowship Summit, and opportunities to intern with Meta research teams. Eligible applicants are full-time PhD students at accredited universities working on AI, machine learning, computer vision, NLP, or related research areas. The application deadline is September 20, 2026.
Academic Grant Program is sponsored by NVIDIA. NVIDIA's Academic Grant Program seeks proposals from full-time faculty members at accredited academic institutions using NVIDIA technology to advance work in Simulation and Modeling, Data Science, and Robotics and Edge AI. Proposals for the NVIDIA Graduate Fellowship Program are also invited, focusing on AI, robotics, and autonomous vehicles.
Manufacturing USA Institute – AI for Resilient Manufacturing is sponsored by National Institute of Standards and Technology (NIST). NIST is seeking applications to establish and operate a Manufacturing USA institute focused on leveraging artificial intelligence to strengthen the resilience of U.S. manufacturers, particularly concerning supply chain networks. The institute will conduct applied R&D projects and cultivate a skilled workforce.