The Hoffman-Yee Research Grants program, offered by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), funds interdisciplinary research teams at Stanford University advancing human-centered AI across three focus areas: developing novel intelligence technologies, designing AI that augments rather than replaces humans, and understanding and guiding the societal impact of AI.
Proposals must address significant scientific, technical, or societal challenges requiring interdisciplinary collaboration. Letters of Intent are due January 28, 2026. The 2026 cycle particularly invites proposals leveraging AI to drive advances in scientific discovery, education, and robotics.
Eligible applicants are Stanford faculty-led interdisciplinary teams.
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
Hoffman-Yee Research Grants
Open. Letters of Intent due on January 28, 2026.
Sciences (Social, Health, Biological, Physical)

The Institute for Human-Centered AI (HAI) is seeking to fund interdisciplinary Stanford project teams in support of the HAI research focus areas:
Intelligence — research that aims to develop novel technologies inspired by the depth and versatility of human intelligence.
Augment Human Capabilities — research that aims to design and create AI technologies that augment humans rather than replace them.
Human Impact — research that aims to understand and guide the global societal impact of AI technologies for the greater good.

Proposals should address significant scientific, technical, or societal challenges requiring an interdisciplinary team to make significant progress.
We are looking for bold approaches with the potential to achieve lasting solutions that positively impact the way AI is applied, developed, or studied. HAI hopes to foster a culture of AI research in which technological advancements are inextricably linked to research about their potential societal impacts.
This year, we are particularly interested in funding proposals that leverage AI to drive advances in scientific discovery, transform education, and advance robotics. Robotics research related to flagship projects in the Stanford Robotics Center (SRC) funded through the Hoffman-Yee program will have access to space and equipment in the SRC.
Each of the winning teams will receive up to $500,000 in year one, with the opportunity to receive up to $2,000,000 more over the following two years. Teams will compete for year two and three funding through a presentation at a public symposium, a private interview, and a progress report. A subset of the teams will be selected for subsequent funding.
We expect to award six to eight grants.

Related news and events:
Stanford HAI Awards $2.75M in Hoffman-Yee Grants: This year's winners propose innovative, bold ideas pushing the boundaries of artificial intelligence.
Stanford HAI Announces Hoffman-Yee Grants Recipients for 2024: Six interdisciplinary research teams received a total of $3 million to pursue groundbreaking ideas in the field of AI.
Stanford HAI Announces Four Hoffman-Yee Grantees: The second round of funding will sponsor teams that leverage AI to focus on real-world problems in health care, education, and society.
2023 Hoffman-Yee Symposium (conference), Sep 19, 2023, 9:00 AM - 5:30 PM
Four Research Teams Awarded New Hoffman-Yee Grant Funding

Research highlights:
DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines
Omar Khattab, Matei Zaharia, Christopher Potts

The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks.
Unfortunately, existing LM pipelines are typically implemented using hard-coded “prompt templates”, i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, or imperative computational graphs where LMs are invoked through declarative modules.
DSPy modules are parameterized, meaning they can learn how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric, by creating and collecting demonstrations.
We conduct two case studies, showing that succinct DSPy programs can express and optimize pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, DSPy can automatically produce pipelines that outperform out-of-the-box few-shot prompting as well as expert-created demonstrations for GPT-3.5 and Llama2-13b-chat. On top of that, DSPy programs compiled for relatively small LMs like 770M-parameter T5 and Llama2-13b-chat are competitive with many approaches that rely on large and proprietary LMs like GPT-3.5 and on expert-written prompt chains. DSPy is available at https://github.com/stanfordnlp/dspy.
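To make the programming model concrete, here is a minimal sketch using the open-source dspy package. The calls shown (dspy.LM, dspy.ChainOfThought, BootstrapFewShot) follow the project's public documentation, but exact names and signatures vary across versions; the model identifier and the one-example training set are placeholder assumptions, not part of the abstract.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Point DSPy at a language model (the model id is a placeholder).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A declarative module: state *what* transformation you want
# ("question -> answer"); the compiler decides *how* to prompt for it.
qa = dspy.ChainOfThought("question -> answer")

# A metric for the compiler to maximize while bootstrapping demonstrations.
def exact_match(example, pred, trace=None):
    return example.answer.strip().lower() == pred.answer.strip().lower()

trainset = [
    dspy.Example(question="What is the capital of France?",
                 answer="Paris").with_inputs("question"),
]

# "Compiling" optimizes the pipeline against the metric by creating and
# collecting demonstrations, as the abstract describes.
compiled_qa = BootstrapFewShot(metric=exact_match).compile(qa, trainset=trainset)
print(compiled_qa(question="What is the capital of Italy?").answer)
```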
Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs
Krista Opsahl-Ong, Michael J Ryan, Josh Purtell, David Broman, Christopher Potts, Matei Zaharia, Omar Khattab
Language Model Programs, i.e. sophisticated pipelines of modular language model (LM) calls, are increasingly advancing NLP tasks, but they require crafting prompts that are jointly effective for all modules. We study prompt optimization for LM programs, i.e. how to update these prompts to maximize a downstream metric without access to module-level labels or gradients.
To make this tractable, we factorize our problem into optimizing the free-form instructions and few-shot demonstrations of every module and introduce several strategies to craft task-grounded instructions and navigate credit assignment across modules.
Our strategies include (i) program- and data-aware techniques for proposing effective instructions, (ii) a stochastic mini-batch evaluation function for learning a surrogate model of our objective, and (iii) a meta-optimization procedure in which we refine how LMs construct proposals over time. Using these insights we develop MIPRO, a novel algorithm for optimizing LM programs.
MIPRO outperforms baseline optimizers on five of seven diverse multi-stage LM programs using a best-in-class open-source model (Llama-3-8B), by as high as 13% accuracy. We have released our new optimizers and benchmark in DSPy at http://dspy.ai.
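MIPRO ships in DSPy as the MIPROv2 optimizer, and the sketch below shows roughly how such an optimizer is invoked. The argument names (metric, auto) follow the public docs but may differ by version; the model id and one-example trainset are illustrative assumptions.

```python
import dspy
from dspy.teleprompt import MIPROv2

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model id

# A pipeline whose instructions and few-shot demonstrations MIPRO will
# jointly optimize against the metric below.
program = dspy.ChainOfThought("question -> answer")

def metric(example, pred, trace=None):
    # Credit the prediction if it contains the gold answer.
    return example.answer.lower() in pred.answer.lower()

trainset = [
    dspy.Example(question="Who wrote Hamlet?",
                 answer="Shakespeare").with_inputs("question"),
]

# MIPROv2 proposes task-grounded instructions and demonstrations per
# module, then searches over them using mini-batch evaluations and a
# surrogate model of the objective, as the abstract outlines.
optimized = MIPROv2(metric=metric, auto="light").compile(program, trainset=trainset)
```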
Equitable Implementation of a Precision Digital Health Program for Glucose Management in Individuals with Newly Diagnosed Type 1 Diabetes
Priya Prahalad, David Scheinker, Manisha Desai, Victoria Y Ding, Franziska K Bishop, Ming Yeh Lee, Johannes Ferstad, Dessi P Zaharieva, Ananta Addala, Ramesh Johari, Korey Hood, David Maahs
Few young people with type 1 diabetes (T1D) meet glucose targets. Continuous glucose monitoring improves glycemia, but access is not equitable.
We prospectively assessed the impact of a systematic and equitable digital-health-team-based care program implementing tighter glucose targets (HbA1c < 7%), early technology use (continuous glucose monitoring starts <1 month after diagnosis) and remote patient monitoring on glycemia in young people with newly diagnosed T1D enrolled in the Teamwork, Targets, Technology, and Tight Control (4T Study 1).
The primary outcome was HbA1c change from 4 to 12 months after diagnosis; the secondary outcome was achieving the HbA1c targets. The 4T Study 1 cohort (36.8% Hispanic and 35.3% publicly insured) had a mean HbA1c of 6.58%, with 64% achieving HbA1c < 7% and a mean time in range (70-180 mg dl-1) of 68% at 1 year after diagnosis. Clinical implementation of the 4T Study 1 met the prespecified primary outcome and improved glycemia without unexpected serious adverse events.
The strategies in the 4T Study 1 can be used to implement systematic and equitable care for individuals with T1D and translate to care for other chronic diseases.

Smart Start—Designing Powerful Clinical Trials Using Pilot Study Data
Johannes Ferstad, Priya Prahalad, Dessi P Zaharieva, Emily Fox, Manisha Desai, Ramesh Johari, David Scheinker, David Maahs
Digital health interventions may be optimized before evaluation in a randomized clinical trial. Although many digital health interventions are deployed in pilot studies, the data collected are rarely used to refine the intervention and the subsequent clinical trials.
We leverage natural variation in patients eligible for a digital health intervention in a remote patient-monitoring pilot study to design and compare interventions for a subsequent randomized clinical trial. Our approach leverages patient heterogeneity to identify an intervention with twice the estimated effect size of an unoptimized intervention.
Optimizing an intervention and clinical trial based on pilot data may improve efficacy and increase the probability of success. (Funded by the National Institutes of Health and others; ClinicalTrials.gov number, NCT04336969.)
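As a back-of-the-envelope illustration of why this matters (our example, not the authors' method): in a standard two-sample power calculation, doubling the detectable effect size cuts the required sample size roughly fourfold.

```python
# Standard two-sided, two-sample power calculation (normal approximation).
from math import ceil
from scipy.stats import norm

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate participants per arm to detect a standardized
    effect size (Cohen's d)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / effect_size) ** 2)

print(n_per_arm(0.2))  # unoptimized intervention: ~393 per arm
print(n_per_arm(0.4))  # doubled effect size: ~99 per arm
```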
Project video: Angèle Christin, Michael S. Bernstein, Jeffrey Hancock, Chenyan Jia, Jeanne Tsai, Chunchen Xu

Project video: Christopher K Tokita, Kevin Aslett, William P Godel, Zeve Sanderson, Joshua A Tucker, Jonathan Nagler, Nathaniel Persily, Richard Bonneau

ReMix: Optimizing Data Mixtures for Large Scale Imitation Learning
Joey Hejna, Chethan Anand Bhateja, Yichen Jiang, Karl Pertsch, Dorsa Sadigh

Increasingly large robotics datasets are being collected to train larger foundation models in robotics.
However, despite the fact that data selection has been of utmost importance to scaling in vision and natural language processing (NLP), little work in robotics has questioned what data such models should actually be trained on.
In this work we investigate how to weigh different subsets or "domains" of robotics datasets during pre-training to maximize worst-case performance across all possible downstream domains using distributionally robust optimization (DRO). Unlike in NLP, we find that these methods are hard to apply out of the box due to varying action spaces and dynamics across robots.
Our method, ReMix, employs early stopping and action normalization and discretization to counteract these issues. Through extensive experimentation on both the Bridge and OpenX datasets, we demonstrate that data curation can have an outsized impact on downstream performance.
Specifically, domain weights learned by ReMix outperform uniform weights by over 40% on average and human-selected weights by over 20% on datasets used to train the RT-X models.
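For intuition, here is a toy sketch of the group-DRO-style reweighting idea (a paraphrase under stated assumptions, not the authors' released code): domain weights take an exponentiated-gradient step toward domains with high excess loss, so pre-training emphasizes worst-case domains. The reference losses, learning rate, and normalization below are simplifications; ReMix's exact objective, action normalization, and early stopping differ.

```python
import numpy as np

def update_domain_weights(weights, domain_losses, reference_losses, lr=0.1):
    """One exponentiated-gradient (group-DRO-style) step: upweight
    domains whose loss exceeds a per-domain reference baseline."""
    excess = np.asarray(domain_losses) - np.asarray(reference_losses)
    new_w = weights * np.exp(lr * excess)  # shift mass to hard domains
    return new_w / new_w.sum()             # keep a valid distribution

# e.g., three robot-data domains starting with equal weight
w = np.full(3, 1 / 3)
w = update_domain_weights(w,
                          domain_losses=[1.2, 0.7, 0.9],
                          reference_losses=[1.0, 0.8, 0.9])
print(w)  # mass shifts toward the first (hardest) domain
```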
Hoffman-Yee Symposium 2025 (conference), Oct 14, 2025, 9:30 AM - 4:10 PM
Stanford AI Scholars Find Support for Innovation in a Time of Uncertainty: Stanford HAI offers critical resources for faculty and students to continue groundbreaking research across the vast AI landscape.

Letters of Intent (LOI) are due January 28, 2026 at 11:59pm PT. Please submit using this application form.
LOIs will be used by the review committee to select teams to submit full proposals (due April 1, 2026). Award recipients will be notified in June 2026. LOIs and full proposals must be self-contained with no links to additional information and must observe the maximum length limits listed below.
You are welcome to use AI to augment your ability to write the proposal. However, you are responsible for the accuracy and novelty of the content you submit.

Provide a non-technical summary of the proposed project (2 pages maximum including all information, 12 pt. font, at least ½” margins). In essence, explain what you want to do, why it is important, and who is on your team to make it happen. Please cover the following:

Problem statement and approach: What is the problem, and what are your approaches to and objectives in solving it?
How is it done today, and what are the limits of current practice? What is new in your approach, and why do you think it will be successful? What parts of the HAI research focus areas (outlined above) does your project cover?
HAI is working to develop a framework on how to conduct human-centered AI research. One way we think about it is here. In what ways, if any, does your research approach relate to this framework?
Who cares? If you succeed, what difference will it make? What are the risks and payoffs?
Who is on your team and how does their expertise help?
Approximately how much will it cost? (A detailed budget is not required.)
High-level timeline and 3-5 planned project milestones

Full Proposal Outline and Format (for teams selected in the first round)
12 pt. font, at least ½” margins

Abstract (maximum of 500 words): Provide a non-technical summary of the proposed project. The summary should address the same questions as above from the LOI.

Research Project Proposal (maximum of 5 pages including figures, not including references): Describe the project in sufficient technical detail that it can be assessed by domain experts.
Provide background and motivation, research objectives and methods, potential impact, and pathway to implementation of the solution.

Collaboration Plan (maximum of 1 page): Given the different vocabularies, objectives, and timeframes of different disciplines, describe the processes that will be implemented to facilitate sustained, meaningful collaboration among your team. This section must show how the resulting collaboration leads to results that are much greater than the sum of the individual contributions, and must identify the essential contributions that each PI brings. We are looking for programs where the collaboration leads to new ideas and new learnings, not just a pipeline of research results.
Participant List (length limited only by the number of participants): Provide a list of the project team members including:
Affiliation (e.g., department name if inside Stanford, organization name if outside Stanford)
Project Role (e.g., Principal Investigator, Co-Principal Investigator, senior team member, postdoctoral scholar, graduate student, etc.)

The suggested minimum number of total PIs/Co-PIs per project proposal is 4 and the suggested maximum is 6. Projects are expected to be genuinely interdisciplinary, as demonstrated by the participation of PIs from at least three different departments or schools. Other participating faculty and non-academic team members should be listed as “senior team member”. PIs/Co-PIs should only be people who plan to be deeply involved in the project. Unassigned project team members can be identified by a generic title and number (e.g., Graduate Student 1, Graduate Student 2, etc.).

3-Year Budget Justification (maximum of 1 page): Describe how the 3-year project funds will enable the success of the project team. The project budget may not exceed $500,000 for year one and $1M for each of years two and three.
This is not a detailed, line-item budget; finalists will be asked to submit a detailed budget at a later date. See an example here.

Ethics and Society Review (ESR) statement (1-2.5 pages PDF): The ESR process aims to create space for project teams to stop and think about the potential ethical dilemmas and societal challenges that could follow from their work. As part of the ESR process, our panel may ask for more detail in response. Please utilize the ESR Statement Instructions for details about what goes into an ESR statement.

Email a single PDF with your full proposal to hai-grants@lists.stanford.edu by April 1, 2026.

Principal Investigators (PIs/Co-PIs) must be Stanford faculty members and be eligible per Stanford policy.
A faculty member can be associated with no more than two proposals (e.g., as PI, Co-PI, or senior personnel), and each faculty member may serve as the PI for only one proposal. To be considered, proposals must satisfy the project proposal guidelines.
Both Letters of Intent (LOI) and full proposals will be evaluated based on the following criteria:
Likelihood of the project initiating and sustaining meaningful interdisciplinary collaborations across the University and beyond. Projects are encouraged to span at least two of the three HAI focus areas.
Boldness, ingenuity, and potential for transformative impact of the proposed research, especially in comparison to research typically supported by existing funding mechanisms.
The project's capability to educate, train, and prepare the next generation of leaders to take on the AI challenges of the future.

Proposals will be evaluated in multiple rounds of review according to the selection criteria stated above.
The Hoffman-Yee Proposal Review Committee comprises individuals from different disciplines across the University. Reviewers will keep proposals and the information that they contain strictly confidential. Full proposals will go through a scientific review and an ethics review.
Six months after receipt of funds, each winning team will be expected to present their work at an HAI Directors meeting. Additionally, at the end of year one, winning teams will be expected to present at a public symposium and meet with a selection committee for a private interview. Based on the symposium presentation and private interview, a subset of the teams will be selected for subsequent funding for the next two years.
Winning teams must agree to the terms for inventions, patents, and licensing set forth in the Stanford University Research Policy Handbook and be willing to participate in HAI activities including, but not limited to, research seminars, periodic workshops, and the review of proposals for future grant programs.
Funded projects are expected to acknowledge the support of “the Stanford Institute for Human-Centered Artificial Intelligence (HAI)” on any publications, talks, presentations, or websites that result from or mention the research supported by this grant. For any questions related to your project proposal, please contact us at hai-grants@lists.stanford.edu.
Key questions and narrative sections extracted from the solicitation.
Problem Statement & Approach (LOI, part of 2-page max): Define the problem and solution approaches; describe current practice limitations; explain novel aspects of proposed methodology; describe alignment with HAI research focus areas (Intelligence, Augmenting Human Capabilities, or Human Impact); explain connection to human-centered AI research framework.
Impact (LOI, part of 2-page max): Identify stakeholder importance; describe expected outcomes and significance; provide risk-benefit analysis.
Team, Budget, Timeline (LOI, part of 2-page max): Describe team composition and relevant expertise; provide approximate budget (detailed budget not required for LOI); provide high-level timeline with 3-5 project milestones.
Full Proposal: Self-contained document observing stated length limits with no external links (submitted only if LOI is selected for advancement).
Scoring criteria used to review proposals for this grant are the selection criteria listed above.
Based on current listing details, eligibility includes: Principal Investigators must be Stanford Academic Council Faculty or Medical Center Line Faculty meeting Stanford PI eligibility policies. Each faculty member may be associated with no more than two proposals total and serve as PI on only one. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates up to $500,000 in year one, with up to $2,000,000 additional over years two and three. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current target date is January 28, 2026. Build your timeline backwards from this date to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.