Trust & Safety Research Award is sponsored by Google Research. This award supports research efforts across disciplines related to trust and safety in technology.
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
Trust, Safety, Security, and Privacy Research

Google is committed to supporting researchers who are working to create a positive societal impact with technology. Our Trust, Safety, Security, & Privacy Research Award focuses on work to improve digital trust, safety, privacy, and security across the online ecosystem.
We’re seeking research proposals and will provide unrestricted gifts to support research efforts across disciplines and areas of interest related to trust, safety, security, and privacy in technology. We welcome proposals from disciplines including, but not limited to, computer science, legal studies, public policy, social sciences, psychology, and human-computer interaction. See 2025 recipients.
Please check back later for details on future application cycles. This year’s call can focus on any area of trust, safety, security, or privacy research where frontier AI is not central to the research. We have four areas of primary interest:

Scams & Financial Fraud: how to characterize, improve detection of, and reduce harm across the ecosystem of online scams. Of particular interest:
- Detailed investigations into longer-running scams (those spanning more than a single moment, e.g., romance scams, pig butchering)
- Reporting dynamics: improving scam reporting processes and addressing the limitations of scam reporting

Protecting At-Risk Groups: at-risk groups are those that may face a greater risk of experiencing harm online or may have a more difficult time recovering. By understanding their experiences, we can make the internet safer for everyone. Of particular interest:
- Work focused on the technology needs and use patterns of teenagers; for example, appropriate technology use from a child development perspective
- Tests of how solutions developed for one at-risk group may have limited efficacy for other groups
- Studies that focus on the intersection of two risk factors (e.g., low-income health care workers, or female politicians and public figures)

Frameworks and Taxonomies: instilling structure into spaces that can serve as the foundation for policy development or improved technical enforcement. Of particular interest:
- Systematic reviews to create a taxonomy of harm types in subareas such as mental health or financial harms
- Frameworks to describe specific dynamics in sociotechnical systems that can lead to harm
- Design patterns for Safety-by-Design approaches to emerging technology development

Computational Thinking & Literacy:
- Understanding and improving the digital literacy of people who make decisions or share information about AI (politicians, legislators, journalists), particularly through AI transparency artifacts (e.g., model cards)
- Repeatable methods to evaluate digital/AI literacy programs and adapt them across contexts

We will also accept proposals on topics including: user and measurement studies, content moderation, hate speech, phishing and malware, software vulnerabilities and exploits, tailored advertising and profiling, harassment, violent extremism, applied cryptography, differential privacy, impacts of manipulated or synthetic media, hardware security and side-channel analysis, regulatory impacts (such as from the General Data Protection Regulation, the Digital Services Act, etc.), or other areas of trust and safety, privacy, or security research and practice.
We will pay particular attention to proposals with a collaborative focus, submitted by research teams whose PIs come from two different countries or two different disciplines. Submissions to this call may have AI elements, but proposals focused on frontier AI systems or solutions should be submitted to the AI for Privacy, Safety, and Security call. Award amounts vary by topic, up to $100,000 USD.
Funding is intended to support the advancement of the proposed research, with an intended coverage of about one year of work. Funds will be disbursed as unrestricted gifts to the university or degree-granting research institution and are not intended for overhead or indirect costs. In the case of cross-institutional collaborations, we will distribute funds to a maximum of two institutions per proposal.
Open to professors (assistant, associate, etc.) at a university or degree-granting research institution. Applicants may serve as Principal Investigator (PI) or co-PI on only one proposal per round. There can be a maximum of two PIs per proposal.
Proposals must be related to computing or technology. Review criteria:
- Faculty merit: the faculty member is accomplished in research, community engagement, and open-source contributions, with the potential to contribute to responsible innovation.
- Research merit: the proposed research aligns with Google Research interests, is innovative, and is likely to have a significant impact on the field.
- Proposal quality: the research proposal is clear, focused, and well organized, and it demonstrates the team's ability to successfully execute the research and achieve significant impact.
- AI ethics principles: the research proposal strongly aligns with Google's AI Principles.

We will host info sessions with live Q&A. RSVP to attend here.
Based on current listing details, eligibility includes: Researchers in computer science, legal studies, public policy, social sciences, psychology, human-computer interaction, and other relevant disciplines. Applicants should confirm final requirements in the official notice before submission.
Current published award information indicates unrestricted grants. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current listing indicates rolling deadlines or periodic funding windows. Build your timeline backwards from your target date to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes: AI tools like Granted can help you research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.
Google Academic Research Awards (GARA) - AI for Privacy, Safety, and Security Research Award is sponsored by Google Research. This award supports early-career faculty members conducting innovative research in computer science, artificial intelligence, and related fields, specifically focusing on work that leverages frontier AI models to improve digital safety, privacy, and security.
Research Scholar Program is a grant from Google Research providing unrestricted gifts of up to $60,000 to support early-career faculty members conducting innovative research in computer science, artificial intelligence, and related fields. Eligible applicants are professors who received their PhD within the past seven years and hold faculty positions at accredited universities worldwide, including the United Kingdom. The program provides funding, mentorship opportunities, and engagement with Google researchers. No specific application deadline is posted; applicants should monitor the Google Research website for open cycles.
Research on Circular Economy, Smart Manufacturing, and Energy-Efficient Microelectronics is sponsored by U.S. Department of Energy (DOE) Advanced Materials & Manufacturing Technologies Office (AMMTO). This funding opportunity supports innovative technology R&D across the manufacturing sector with a focus on circular economy, smart manufacturing, and energy-efficient microelectronics. While the stated deadline for full applications has passed, AMMTO frequently issues similar solicitations, and this highlights a relevant area of interest for the DOE.
NIST Small Business Innovation Research (SBIR) Phase II Program - Quantum Information Science is sponsored by National Institute of Standards and Technology (NIST). This program allocates funding to small businesses for prototyping innovative technologies in areas including quantum information science, artificial intelligence, and semiconductors. These Phase II awards follow successful Phase I feasibility studies.