Free · No account required · Powered by AI across the world's largest grants + funders database
Currently focused on US federal, state, and foundation grants.
AI for Science & Safety Nodes is sponsored by Foresight Institute.
Official opportunity description and requirements excerpt:
AI for Science & Safety Nodes: funding, compute, and office space in San Francisco and Berlin. Two new hubs offer project funding, office and community spaces, and local compute for ambitious researchers and builders who use AI to advance science and safety.
Ecosystem for decentralized, AI-driven progress Artificial intelligence is accelerating the pace of discovery across science and technology. But today’s AI ecosystem risks centralizing compute, talent, and decision-making power – concentrating capabilities in ways that could undermine both innovation and safety. To counter this development, we are building a decentralized network of Nodes dedicated to AI-powered science and safety.
Each Node combines grant funding with office and community spaces, programming and in-house compute to accelerate project development. The goal is to empower researchers and builders with a mission-aligned ecosystem, where AI-driven progress remains open, secure, and aligned with human flourishing. You’re welcome to apply for all three types of support, or select one or two based on your needs.
Please note that we prioritize projects that want to be active members of our hubs. Funding-only applications are considered only in exceptional cases. Use this form to apply.
Application deadlines are on the last day of each month. The AI Nodes will open in San Francisco and Berlin in early 2026. The rest of this page outlines what kinds of projects we are excited to support and the terms of our grants.
To keep up with and leverage increasing AI capabilities, we give priority to projects that use AI as the primary engine for progress across our focus areas. The goal is to enable science and safety to accelerate in tandem with AI – for the safe and beneficial evolution of intelligence. We are excited to fund and support work in the following areas.
Traditional security paradigms, often reactive, piecemeal and human-driven, cannot scale to match the speed, scale, and complexity of AI-supported attacks. We seek to support self-improving defense systems where AI autonomously identifies vulnerabilities, generates formal proofs, red-teams, and strengthens the world’s digital infrastructure.
Based on current listing details, eligibility includes individual researchers, students, and technologists working on technical safety and epistemic tools using AI. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates grants of $10,000 - $100,000. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The program uses rolling deadlines with periodic funding windows. Build your timeline backwards from the relevant deadline to cover registrations, approvals, attachments, and final submission checks.
Application snapshot: rolling monthly deadlines; published funding range $10,000 - $100,000; eligibility guidance covers individual researchers, students, and technologists working on technical safety and epistemic tools using AI.
Use the official notice and source links for final requirements, attachment checklists, allowable costs, and submission instructions before applying.
To ensure that AI progress occurs openly without sacrificing privacy, we want to support work that applies AI to enhance confidential compute environments, scale privacy mechanisms for handling data, and design infrastructure that distributes trust. This includes projects building a local, private compute stack and AI setup.
3. Decentralized & Cooperative AI. We fund work that builds decentralized intelligence ecosystems – where AI systems can cooperate, negotiate, and align – so societies remain resilient in a multipolar world. We are especially interested in projects that enable peaceful human–AI co-existence and create new AI-enabled mechanisms for cooperation.
4. AI for Science & Epistemics. In addition to applying AI to specific problems, we need better platforms, tools, and data infrastructure to accelerate AI-guided scientific progress generally. Similarly, to get our sense-making ready for rapid change, we are interested in funding work that applies AI to improve forecasting and general epistemic preparedness.
5. AI for Neuro, Brain-Computer Interfaces & Whole Brain Emulation. We are interested in work that uses frontier models to map, simulate, and understand biological intelligence – building the foundations for hybrids between human and artificial cognition, from brain-computer interfaces to whole brain emulation. We care about this domain specifically for its potential to improve humanity’s defensive position as AI advances.
6. AI for Longevity Biotechnology. We want to fund work that applies AI to make progress on scientific frontiers in longevity biotechnology – from biostasis and replacement, to gene therapy and exosomes.
7. AI for Molecular Nanotechnology. We support work that uses AI to make progress on scientific frontiers in molecular nanotechnology – from design and simulation, to construction and assembly of nanomachines.
Grants connected to the hubs Grantees are invited to build together in Berlin or San Francisco. To create community among mission-aligned projects, we strongly prioritize applicants who want to be an active part of our spaces (free of charge). We will accept “funding-only” projects only in exceptional cases.
Grantees are invited to events advancing the frontier of their field. Grantees are expected to join one of our travel-paid workshops in Berlin or San Francisco to connect with other grantees who are building relevant projects. In addition, you can propose and expect plenty of other events, sprints, and other collaborations in the nodes throughout the year.
Local, private compute is available for eligible projects. Tell us in the application how much compute you need and for what purpose, and we will provide eligible projects with a compute budget or access to local compute resources, especially for privacy-oriented projects. How much funding can be requested?
We award around $3M in total funding annually. Grants typically range from $10,000 to $100,000, with higher amounts awarded to the AI safety-oriented focus areas and smaller amounts to longevity biotech and molecular nanotech projects. What is required of applicants?
To create community among mission-aligned projects, we strongly prefer applicants who plan to use the nodes in San Francisco or Berlin. We will accept “funding-only” projects only in exceptional cases. What are the application deadlines?
Application deadlines are on the last day of every month. We review applications on a monthly basis until the nodes are at capacity, so we recommend that you apply as soon as you are ready. How do I apply? By completing this application form – also linked at the top of this page.
What is the review process? The approximate review time is two months after each application deadline. You can request fast processing, but we may not be able to honor it.
Review time for smaller funding amounts may be faster. Proposals are first reviewed in-house for fit and quality. Strong submissions are sent to technical advisors for further evaluation.
If your proposal advances, we may follow up with written questions or a short call. Unfortunately, due to the number of applications we receive, we are unable to provide individual feedback to unsuccessful applicants. Who can apply? We accept applications from individuals, teams, and organizations.
Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.
What are the evaluation criteria?
Impact on reducing existential risks from AI: the extent to which the project can reduce existential risks associated with AI, focusing on achieving significant advancements in AI safety within short timelines.
Feasibility within short AGI timelines: the project’s ability to achieve meaningful progress within the anticipated short timeframes for AGI development. We prioritize projects that can demonstrate concrete milestones and deliverables in the next 1-3 years.
Alignment with our focus areas: the degree to which the project addresses one or more of the focus areas outlined on this page.
Capability to execute: the qualifications, experience, and resources of the applicant(s) to successfully carry out the proposed work. Strong teams with proven expertise in AI safety or related fields will be prioritized.
High-risk, high-reward potential: the level of risk involved in the project, balanced with the potential for substantial, transformative impact on the future of AI safety. We encourage speculative, high-risk projects with the potential to drive significant change if successful.
Preference for open source: we prefer open source projects, unless there are specific reasons preventing it.
Please note that the AI safety criteria do not apply to the AI for longevity biotechnology and molecular nanotechnology focus areas.
What are the funding terms? We fund both short-term and longer projects. Grants are typically paid in one lump sum.
However, for larger projects spanning multiple years, payments may be made in tranches, with each subsequent tranche contingent upon the successful completion and reporting of previous milestones. We can fund overhead costs up to 10% of direct research costs, where these directly support the funded work.
Successful applicants must pass our due diligence process, which includes confirming your connections to Foresight Institute, disclosing any ongoing criminal proceedings, bankruptcy, etc., and sharing an itemized budget, project plan, and organizational documents. Grants are subject to basic reporting requirements.
Grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones. Tax obligations vary by country and organization type. Applicants are responsible for understanding and complying with any applicable tax requirements.
Our long-term vision is a global, decentralized network of AI Nodes dedicated to the use of AI to further science and human safety. We’re starting with our own hubs in San Francisco and Berlin, but are also interested in collaborating to set up independent Nodes led by others elsewhere.
We’re envisioning each Node serving as its region’s independent hub: equipped with local compute infrastructure to support a community of researchers and builders advancing secure, safe, private cooperative AI and AI for science. Nodes in the network could share compute, talent, and progress, where aligned, while retaining their own character and governance: forming a meaningful alternative to centralized AI development.
For the right teams, we can offer limited incubation support such as compute know-how, smaller seed funding, plus integration into our wider network. Are you already building an AI Node, or interested in establishing one in your area? Fill out the form below as an expression of interest to start a conversation.
Further questions or feedback?
Please contact us at [email protected].
Funded before current program: Cooperative AI Foundation; Institute for Advanced Consciousness Studies; NYU and University of Basel; SciAI Center – Cornell University; University of Massachusetts Lowell / Harvard University; The Cronin Group, University of Glasgow; Biomolecular Nanotechnology Lab; Center for AI Risk Management & Alignment; Carnegie Mellon University; Washington University in St. Louis; Decentralized Cooperation Foundation (DCF); Institute of Life Sciences; Salk Institute for Biological Studies; University of Pennsylvania; Christian Schroeder de Witt; Georgia Institute of Technology.
This program is supported by private funders as well as Protocol Labs, Gigafund, and 100 Plus Capital.