No deadline information is visible on the page; most fellows are selected through Vista's courses and AI Law and Policy Workshop rather than an open application cycle.
The Vista Institute AI Policy Fellowship is sponsored by the Vista Institute for AI Policy, which offers grant-based fellowships supporting independent research or research assistance positions focused on AI policy. The program targets students and recent graduates who want to contribute to AI governance and policy research.
Vista sponsors students and recent graduates to undertake independent research with mentor guidance or to serve as research assistants with law professors and other AI policy experts. Most fellows are selected through Vista's courses and the AI Law and Policy Workshop. While we anticipate funding few unsolicited proposals, you are encouraged to reach out here if you have a project idea seeking funding or mentorship support.
FALL 2025 RESEARCH FELLOWS

Hilal Aka is a joint JD–MPP candidate at Georgetown University Law Center and Harvard Kennedy School, focusing on AI, technology policy, and national security. She previously worked at the Center for a New American Security and the Information Technology and Innovation Foundation on U.S. technology and innovation policy, and as an economic consultant on antitrust matters. She holds a BA in Economics and Mathematics from Wellesley College. Hilal is building a corporate legal compliance benchmark for AI systems with Joe Kwon and Prof. Noam Kolt, testing how AI agents handle context-dependent illegality across areas such as securities regulation, bankruptcy, and insider trading rules.

Joel Naoki Christoph is a PhD researcher in economics at the European University Institute.
His work focuses on macroeconomics, AI governance, and international security. As a Vista AI Law and Policy Fellow, he works with Professor Gabriel Weil on the law and economics of AI liability, with emphasis on judgment proofness, insurance design, and administrative penalties for high-impact AI risks.
He has previously held research fellowships at the Centre for the Governance of AI, the Harvard Kennedy School, and the Atlantic Council, working on compute governance and the political economy of emerging technologies.

Colette Le Brannan graduated with a B.S. from Stanford in 2019 and a J.D. from Yale Law School in 2023. She has experience in child welfare, environmental, and insurance law. She is particularly interested in litigation and how AI will fit into or influence liability frameworks. Colette is working with Dr. Anat Lior on multiple projects relating to AI class action litigation, AI agency and respondeat superior, and guidelines addressing the unauthorized practice of law involving AI.

Joe Kwon is building a corporate law compliance benchmark for AI systems with Hilal Aka and Prof. Noam Kolt. Joe currently works on technical AI policy and governance. Previously, he worked as a research engineer in industry, and on AI and cognitive science research in academia.
Under the guidance of Prof. Peter Salib, Mark is researching the role of safety evaluations in proving developer fault for harms caused by AI systems. Mark is also a Junior Research Scholar at ILINA, a Research Fellow at the Centre for AI Risk Management and Alignment (CARMA), and a Researcher at the University of Cape Town AI Initiative.
At ILINA, his current research focuses on the role of law and policy in strengthening model evaluations. At CARMA, he is working with Abra Ganz to determine what whistleblower protections US-based AI safety evaluation organisations have when reporting concerns about frontier AI companies.
At UCT, he is part of a group conducting Africa-oriented model safety evaluations at the African Hub on AI Safety, Peace and Security to guide the policy and governance of highly capable AI within the continent. Mark holds an undergraduate law degree (top student, first class honors) from Strathmore University.
Through the Vista Fellowship, Matt is researching the regulatory, institutional, and technical frictions slowing AI's transition from digital applications to physical-world impact in the biotech sector with Abi Olvera (Golden Gate Institute). He is also currently conducting research into cloud export control policy with Onni Aarne (IAPS), and works part-time as a Senior Program Associate at ERA.
Previously, Matt was a research fellow at Convergence Analysis, where he published a report mapping the economic factors shaping AI diffusion across sectors. Matt has a private-sector hardware background, most recently as a product manager at IBM. He holds an MPhil in technology policy from the University of Cambridge and a BA in mathematics and economics from Washington University in St. Louis.

Patrick is currently working part-time with Nikhil Mulani of Augur on a report on public and private investment across the frontier AI supply chain, from raw materials to finished models, including recommendations for the US government. Previously, he was a summer fellow with ERA Cambridge, where he researched China's efforts to indigenize EUV technology for its semiconductor supply chain.
Patrick is also a facilitator for the Center for AI Safety's 'AI Safety, Ethics, and Society' course. He has a policy degree from Columbia SIPA and has previously interned across the US government and the UN system. Before pivoting to AI, Patrick worked variously as a classroom teacher, a full-time public health researcher, and a conservationist in New Mexico.
Starting January 2026, Patrick will be a Winter Fellow with GovAI in London.

FALL 2024 RESEARCH FELLOWS

Vista sponsored Janelle to serve as a Research Assistant to Professors Matthew Tokson and Yonathan Arbel. Vista sponsored Joey to serve as a Research Assistant to Professor Gabriel Weil.
The Vista Institute for AI Policy is a fiscally sponsored project of Rethink Priorities, EIN 84-3896318.
Based on current listing details, eligibility includes: open to students and recent graduates globally with an interest in AI policy research; supports both independent research projects and research assistance positions within the institute. Applicants should confirm final requirements in the official notice before submission.
No award amounts are currently published; see the official notice. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
This program uses rolling deadlines or periodic funding windows rather than a single target date. Build your timeline backwards from your chosen window to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes: AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.