This listing may be outdated. Verify details at the official source before applying.
Application deadline: March 31, 2026 (12:00 midnight PT)
The CRA Trustworthy AI Research Fellowship for Early Career Scholars is sponsored by the Computing Research Association (CRA) and funded by Microsoft. This fellowship supports early-career computing researchers advancing trustworthy artificial intelligence through interdisciplinary collaboration with the humanistic social sciences.
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
CRA Trustworthy AI Research Fellowship for Early Career Scholars
Preparing the Next Generation of Trustworthy AI Research Leaders
The CRA Trustworthy AI Research Fellowship for Early Career Scholars, funded by Microsoft, supports early-career computing researchers who are advancing trustworthy artificial intelligence (AI) through interdisciplinary collaboration with the humanistic social sciences.
Inspired by the 2022 National Academies report Fostering Responsible Computing Research, the fellowship is designed to help scholars integrate ethical, societal, and human-centered perspectives directly into AI research, strengthening both technical innovation and its broader social impact. Following a successful inaugural year, CRA has launched the second cohort of CRA Trustworthy AI Research Fellows for the 2026-2027 program cycle.
The fellowship brings together a national cohort of early-career scholars for structured training, sustained peer engagement, mentoring, and collaborative work focused on responsible and socially grounded AI research.
What the Fellowship Offers
Over one year, Fellows participate in:
- A virtual kickoff meeting in June 2026
- A four-day, in-person Trustworthy AI Field School (July 26-31, 2026, Cambridge, MA), co-located with the Sloan Foundation’s Metascience & AI Summer School and Northeastern University’s AI + Data Ethics (AIDE) Summer Training Program
- Quarterly virtual convenings throughout the fellowship
- Opportunities to engage in or lead Trustworthy AI initiatives at CRA partner institutions, including CRA member and CAHSI institutions
- Opportunities for mentorship from established scholars and practitioners advancing trustworthy computing
Selected Fellows receive a $17,000 stipend, along with support for travel, lodging, and meals associated with in-person fellowship activities.
Through the fellowship, participants:
- Advance their own research at the intersection of computing and humanistic social sciences
- Contribute to shared frameworks and language for trustworthy AI
- Build durable professional networks with peers and mentors across academia, industry, and policy
- Help shape scalable models for interdisciplinary training in responsible computing
Who the Fellowship Is For
The CRA Trustworthy AI Research Fellowship is designed for early-career computing researchers who:
- Are working within 1-3 years post-PhD (conferred between May 1, 2023 and July 1, 2025)
- Have interdisciplinary training or research experience in a humanistic social science
- Are interested in community-based, public-interest, or socially responsible AI research
- Hold a tenure-track faculty or visiting researcher position (e.g., postdoc or fellowship) at a U.S. institution of higher education
The application portal for the 2026-2027 cohort of the CRA Trustworthy AI Research Fellowship is now open.
Application deadline: March 31, 2026 (12:00 midnight PT). For questions, contact Janine Myszka at jmyszka@cra.org. Applications opened on February 2, 2026, for the 2026-2027 CRA Trustworthy AI Research Fellowship for Early Career Scholars, funded by Microsoft.
This fellowship supports early-career computing researchers with interdisciplinary training or research experience in the humanistic social sciences who are working to integrate ethical, societal, and human-centered considerations into artificial intelligence (AI) research and development. The application deadline is March 31, 2026 (12:00 midnight PT). The application is intentionally short and straightforward.
Applicants will be asked to submit:
- Educational and professional background (must have a PhD conferral date between May 1, 2023 and July 1, 2025)
- Short written responses (up to 500 words each) to the following questions:
  - What secondary training or research experience have you had in humanistic social sciences? (Please include field and accreditation type, if any.)
  - Why are you interested in becoming a CRA Trustworthy AI Research Fellow?
  - Please describe your experience with or interest in community-based approaches to trustworthy AI research and public interest technologies.
No letters of recommendation are required. All materials must be submitted through the official online application portal.
If you have questions about eligibility or the application process, please contact Janine Myszka at jmyszka@cra.org.
CRA Trustworthy AI Research Fellowship Field School
The CRA Trustworthy AI Research Fellowship Field School is a core component of the one-year CRA Trustworthy AI Research Fellowship.
This four-day, in-person program equips Fellows — early-career computing scholars advancing trustworthy and responsible AI — to deepen engagement with the humanistic social sciences and strengthen the societal impact of their research. Fellows participate in intensive tutorials, research workshops, and mentorship sessions with leaders across computing, humanistic social sciences, ethics, and policy.
Past mentors include Bobby Kleinberg, James Mickens, Desmond Patton, and Moshe Vardi. The Field School emphasizes interdisciplinary exchange, methodological fluency, and collaborative agenda-setting.
A Unique Interdisciplinary Environment
In 2026, the Field School will be co-located in Cambridge, Massachusetts with the Sloan Foundation’s Metascience & AI Summer School and Northeastern University’s AI + Data Ethics (AIDE) Summer Training Program.
This creates a multi-cohort environment for shared keynotes, lightning talks, informal discussion, and cross-program collaboration — while preserving a focused cohort experience centered on trustworthy AI research.
By the end of the Field School, Fellows will have:
- Expanded methodological fluency across computing and humanistic social sciences
- Deeper interdisciplinary professional networks spanning academia, industry, and policy
- Clearer research trajectories for advancing trustworthy and responsible AI
- Momentum to contribute to scalable models for interdisciplinary AI training and research
The Field School serves as a catalyst for sustained collaboration throughout the fellowship year and beyond, reinforcing CRA’s commitment to developing the next generation of leaders in trustworthy AI research.
Seeking Early-Career Computing Scholars Passionate About Integrating Social Science Insights with Trustworthy AI Research
The CRA Trustworthy AI Research Fellowship is designed for early-career computing researchers who combine strong technical expertise with interdisciplinary experience in the social sciences. Ideal candidates are committed to advancing trustworthy AI and eager to lead collaborative, cross-disciplinary work.
To be eligible for the next cohort, applicants must meet all of the following criteria:
- Are 1-3 years post-PhD (conferred between May 1, 2023 and July 1, 2025)
- Disciplinary Background: Hold a primary doctoral degree in a computing-related discipline or other closely related computing field
- Interdisciplinary Experience: Have secondary training or substantial research experience in at least one social science discipline, including but not limited to communication and media studies, studies of vulnerable populations, and science and technology studies (STS)
- Institutional Affiliation: Hold a faculty, postdoctoral, or visiting researcher position at an institution of higher education in the United States.
For questions about eligibility, please contact Janine Myszka at jmyszka@cra.org.
Answers to Commonly Asked Questions
Who is eligible to apply for this fellowship?
Early-career scholars (1-3 years post-PhD, conferral date between May 1, 2023 and July 1, 2025) with a primary doctoral degree in a computing-related field and interdisciplinary training or research experience in a social science field are eligible. Applicants must hold a tenure-track faculty or visiting researcher position (e.g., postdoc or fellowship) at a U.S. institution of higher education.
For more details, please visit the Eligibility tab.
What costs are covered by the fellowship?
Fellows receive a $17,000 stipend plus support for travel, lodging, and meals associated with the four-day, in-person Trustworthy AI Field School.
What if my discipline isn’t explicitly listed?
Applicants whose interdisciplinary training or research experience closely aligns with the fellowship’s goals are encouraged to apply and should clearly describe that alignment in their application.
When is the next opportunity to apply?
Applications for the 2026-2027 CRA Trustworthy AI Research Fellowship are now open and must be submitted by March 31, 2026 (12:00 midnight PT). For further questions, please contact Janine Myszka at jmyszka@cra.org.
Supporting a Vibrant, Connected, and Socially Responsible Computing Research Community
The Computing Research Association (CRA) catalyzes computing research by uniting industry, academia, and government.
CRA counts among its members nearly 300 North American organizations active in computing research and works with these organizations to represent the computing research community and to effect change that benefits both computing research and society at large.
By leading the computing research community, informing policymakers and the public, and promoting the development of an innovative and responsible computing research workforce, CRA is able to carry out its mission of catalyzing computing research.
Connect with the CRA Trustworthy AI Research Fellowship Team
If you have questions or need additional information about the CRA Trustworthy AI Research Fellowship, please contact Janine Myszka at jmyszka@cra.org.
Taslima Akter is an Assistant Professor of Computer Science at the University of Texas at San Antonio.
Her research focuses on accessibility and privacy for blind and low-vision individuals engaging with AI technologies. With secondary training in accessibility studies and a strong record of interdisciplinary collaboration, she centers community-informed design.
Akter earned her PhD and MS in Computer Science from Indiana University Bloomington, where her dissertation examined how to reduce privacy risks for blind and low-vision users of camera-based assistive technologies. She holds a BS in Computer Science and Engineering from the Bangladesh University of Engineering and Technology (BUET).
Her work draws on participatory methods to surface how AI systems may reinforce stigma, bias, and inequity, particularly for disabled communities. She will use the CRA Trustworthy AI Research Fellowship to co-develop inclusive AI frameworks that prioritize equity and lived experience, and to help build a shared foundation for trustworthy AI that reflects the needs and values of marginalized users.
Diana Freed is an Assistant Professor of Computer and Data Science at Brown University, a Visiting Scholar at Harvard Law School’s Petrie-Flom Center, and a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard. Her work focuses on human-centered security, privacy, and AI governance in healthcare and social services, with an emphasis on accountability and equity.
She holds an MA in Counseling and Clinical Psychology and completed post-graduate clinical training, which informs her interdisciplinary research.
She will use the CRA Trustworthy AI Research Fellowship to deepen her work on algorithmic fairness and social impact, particularly in high-stakes domains such as healthcare, legal services, and digital safety, where AI intersects with vulnerable populations and complex systems of trust and accountability.
Vinitha Gadiraju is an Assistant Professor of Computer Science at Wellesley College.
Her research investigates the relationships between disabled people and generative AI technologies, focusing on perception, trust, and harm mitigation in interactions with tools such as chatbots. Her work draws on perspectives from human-computer interaction, education, and sociology and follows a community-based research approach.
Gadiraju earned her PhD and MS in Computer Science from the University of Colorado Boulder, where her research explored how technology can support visually impaired children and their learning through participatory and naturalistic design methods. She completed her BS in Computer Science with a minor in Psychology at the University of Oregon.
Her experience includes work at Google Research’s People + AI Research (PAIR) initiative, where she led focus groups examining how large language models perpetuate harms toward the disability community.
Through the CRA Trustworthy AI Research Fellowship, Gadiraju aims to explore trust, harm identification, user safety, and relationship formation in sensitive AI use contexts, such as health navigation and companionship, and advocate for disability-inclusive AI development.
Dhruv “DJ” Jain is an Assistant Professor of Computer Science and Engineering at the University of Michigan, with a courtesy appointment in the School of Information and an affiliate appointment in the Medical School. His research spans human-computer interaction, accessible computing, and Deaf/disability studies.
He co-designs AI systems with Deaf and disabled communities, using participatory design, autoethnography, and mixed-methods fieldwork to surface how AI technologies intersect with lived experience. Jain earned his PhD and MS in Computer Science & Engineering from the University of Washington, where he also completed a minor in Disability Studies.
He holds an MS from the MIT Media Lab and a BS in Computer Science & Engineering from the Indian Institute of Technology Delhi, where he also minored in Sociology. His work focuses on audio-based AI, human-centered guardrails, and equitable design practices that enhance accessibility and mitigate bias.
Through the CRA Trustworthy AI Research Fellowship, Jain aims to help integrate accessibility and Deaf/disability studies into broader conversations around AI development, contributing to more inclusive, socially responsible, and human-centered AI systems.
Yasmine Kotturi is an Assistant Professor of Human-Centered Computing in the Information Systems Department at the University of Maryland, Baltimore County.
She designs and builds sociotechnical systems that foster worker resilience by centering relational, community-driven practices, particularly by scaffolding peer support among people navigating precarious forms of employment and entrepreneurship.
Her research combines human-computer interaction with insights from labor studies and feminist theory to develop approaches such as community-based software engineering that shift power in how AI systems are built and used. Kotturi collaborates closely with community partners to develop AI-powered tools and infrastructures grounded in community expertise, including projects like BizChat (https://bizchat-io.vercel.app/). She earned her PhD and MS in Human-Computer Interaction from Carnegie Mellon University, where she specialized in computer science, and holds a BS in Cognitive Science from the University of California, San Diego. Through the CRA Trustworthy AI Research Fellowship, Kotturi aims to transform computing pedagogy and equip future technologists to navigate the ethical and societal stakes of AI development.
Calvin Liang is a Mancosh Postdoctoral Fellow in Communication Studies at Northwestern University. His research investigates how AI mediates intimacy and advances health equity. He brings interdisciplinary expertise from human-centered design, communication studies, and human factors engineering.
Liang holds a PhD in Human Centered Design & Engineering from the University of Washington, an MS in Human Factors Engineering, and a BS in Engineering Psychology from Tufts University. Through the CRA Trustworthy AI Research Fellowship, he aims to gain guidance in responsibly developing AI systems that support digital intimacy and health equity.
Lindsay Sanneman is an Assistant Professor in the School of Computing and Augmented Intelligence at Arizona State University. Her research on Transparent Value Alignment explores how AI systems can align with human goals through explainability and mutual understanding. She is especially interested in bridging technical innovation with human-centered evaluation to ensure trustworthy outcomes in real-world settings.
She earned her PhD in Autonomous Systems from the Massachusetts Institute of Technology, where she also collaborated across disciplines to incorporate perspectives from cognitive psychology and human factors into her work.
Through the CRA Trustworthy AI Research Fellowship, she aims to integrate social science perspectives into AI alignment and transparency research, and help build interdisciplinary frameworks for more responsible AI systems.
Jayshree Sarathy is a Senior Research Fellow and incoming Assistant Professor of Computer Science at Northeastern University.
Her work integrates Science and Technology Studies (STS) and computing to examine what responsible and trustworthy AI looks like in the context of public-sector infrastructures. She focuses on bridging epistemic gaps around data, evaluating the social implications of AI, and building tools and materials that align technical development with ethical commitments.
She earned her PhD and SM in Computer Science from Harvard University and her BS in Computer Science from Yale University. Her interdisciplinary approach draws on training in both computer science and the social sciences, and is informed by collaborations with organizations such as the U.S. Census Bureau and the Wikimedia Foundation.
Through the CRA Trustworthy AI Research Fellowship, she aims to build scalable frameworks that reflect both computational and social understandings of technology.
Lucretia Williams is a Research Scientist at Howard University’s Institute of Human-Centered AI and director of the ATHENA Lab (Advancing Technologies in Health, Education, and New Ventures in AI).
Her research spans AI ethics, health, and education, and is rooted in community-based design approaches. She focuses on ensuring that AI technologies are transparent, safe, and culturally responsive—designed with, not just for, communities historically excluded from technological innovation.
She earned her PhD in Informatics from the University of California, Irvine, where her dissertation explored the design and evaluation of culturally responsive digital mental health technology for racial-ethnic minorities. She also holds a BS in Psychology, with a minor in Business Administration, from Howard University.
Across her work, she integrates qualitative and participatory methods to reimagine what it means for AI to be trustworthy in practice. Through the CRA Trustworthy AI Research Fellowship, she aims to expand her multidisciplinary training and collaborate with scholars, practitioners, and policymakers to contribute to the national discourse on the societal implications of AI and the development of ethical, inclusive systems.
Key questions and narrative sections extracted from the solicitation.
What secondary training or research experience have you had in humanistic social sciences? (Please include field and accreditation type, if any.)
Why are you interested in becoming a CRA Trustworthy AI Research Fellow?
Please describe your experience with or interest in community-based approaches to trustworthy AI research and public interest technologies.
Based on current listing details, eligibility includes: early-career computing researchers 1-3 years post-PhD (conferral between May 1, 2023 and July 1, 2025), with a primary doctoral degree in a computing-related field and interdisciplinary training in the humanistic social sciences. Applicants must hold a faculty, postdoctoral, or visiting researcher position at a U.S. institution of higher education, and should confirm final requirements in the official notice before submission.
Current published award information indicates a $17,000 stipend plus travel, lodging, and meals for in-person activities. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current target date is March 31, 2026. Build your timeline backwards from this date to cover registrations, approvals, attachments, and final submission checks.
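As a rough illustration of this backward-planning step, the sketch below computes candidate working dates from the March 31, 2026 deadline. The milestone names and lead times are hypothetical assumptions for illustration, not requirements drawn from the solicitation; adjust them to your institution's internal review process.

```python
from datetime import date, timedelta

# Final deadline from the listing: March 31, 2026 (12:00 midnight PT).
DEADLINE = date(2026, 3, 31)

# Hypothetical lead times (days before the deadline); these are assumptions
# for illustration, not dates specified by CRA.
MILESTONES = [
    ("Confirm eligibility and register on the application portal", 45),
    ("Draft the three short written responses (up to 500 words each)", 30),
    ("Collect feedback from a mentor or colleague", 14),
    ("Final proofread and submission check", 3),
]

for task, lead_days in MILESTONES:
    target = DEADLINE - timedelta(days=lead_days)
    print(f"{target.isoformat()}  {task}")
```

Running this prints each suggested start date next to its task, so you can check whether any internal approval windows overlap before the deadline.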
Federal grant success rates typically range from 10-30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.