1,000+ Opportunities
Find the right grant
Search federal, foundation, and corporate grants with AI — or browse by agency, topic, and state.
The 2025 call for proposals has concluded; award recipients have until December 31, 2025, to complete their projects. No future call dates have been announced.
Research Grants from the Notre Dame-IBM Technology Ethics Lab fund researchers and teams working on ethical and effective human-AI collaboration in real-world settings. The 2025 Call for Proposals focused on designing solutions that leverage AI to augment human tasks while ensuring those collaborations are ethical, inclusive, and beneficial to society.
Areas of interest include AI deployed in critical sectors such as healthcare, education, and public services, with attention to medium- and long-term societal impacts. Eligible applicants include university faculty and research teams. Funded projects must be completed by December 31, 2025 for the 2025 call cycle.
Get alerted about grants like this
Save a search for “Notre Dame–IBM Technology Ethics Lab” or related topics and get emailed when new opportunities appear.
Search similar grants →
Extracted from the official opportunity page/RFP to help you evaluate fit faster.
Notre Dame–IBM Technology Ethics Lab › 2025 Call for Proposals Projects
The Lab's 2025 Call for Proposals focused on how to design effective solutions for safe and ethical human-AI collaboration in real-world settings. The objective is to foster developments that leverage AI to augment human tasks and ensure that these collaborations are ethical, inclusive, and beneficial to society at large.
As AI systems become increasingly sophisticated, there is a pressing need to understand and optimize how they can work alongside humans in various domains and to anticipate potential medium- to long-term impacts of such applications in critical sectors. Information about the award winners is included below. Teams will have until December 31, 2025, to complete their projects.
Digital Afterlives? Mourning, Memory, and Grief Tech
Principal investigators: Joseph Davis (University of Virginia) and Micah E. Lott (Boston College); Notre Dame collaborator: Paul Scherz, Theology; additional team member: William Hasselberger (Catholic University of Portugal)
New AI tools—“ghostbots”—can analyze data from a specific deceased person, like text messages, emails, and videos, to create an interactive digital companion that simulates them.
This project will examine the reasons behind the rise of ghostbots, the ethical and religious concerns raised by the technology, and the role of grief and mourning in human lives.
Organized the Digital Afterlives: AI, Memory, Mourning conference, Universidade Catolica Portuguesa, October 2025
Enhancing Human-AI Collaboration and Policy in Emergency Response: Ethical Deployment of AI-Enabled Drones
Principal investigator: Ricardo Morales (Brown University); Notre Dame collaborator: Jane Cleland-Huang, Computer Science and Engineering; additional team members: Kaitlin Harris (US Air Force/SAF/AQRE), Demetrius Hernandez (University of Notre Dame), and Tristian Hernandez
The ability to integrate AI-enabled drones into emergency response operations offers the potential to significantly enhance the situational awareness and decision-making of emergency response teams.
However, balancing AI autonomy with human oversight introduces complex ethical and operational challenges. This project aims to inform future Federal Aviation Administration regulations by developing ethical guidelines for the deployment of AI drones in emergency response.
Navigating the black box: Operational lenses for AI-enabled drone governance, MIT Science Policy Review, August 2025
Digital Moral Twins: From Bioethical Principles to AI Ethics and Back Again
Principal investigator: Jeffrey P. Bishop (Saint Louis University); Notre Dame collaborator: Paul Scherz, Theology; additional team members: Emily Dumler-Winckler (Saint Louis University), Lydia Dugdale, MD (Columbia University Medical Center), Jason T. Eberl (Saint Louis University), S. Matthew Liao (New York University), and Devan Stahl (Baylor University)
In the case of an incapacitated medical patient, surrogate decision-makers often struggle to predict the patient’s desired healthcare choices. This project seeks to evaluate one proposed solution to this problem: a personalized patient preference predictor (i.e., “P4”) AI technology.
Drawing on the most recent advances in the field of bioethics, this project will also assess whether core bioethical principles can be applied to the emerging field of artificial intelligence.
Building AI Text Classifiers with Peacebuilders: A Human-AI Collaboration to Improve Conflict Analysis and Resolution
Principal investigator: Allan Cheboi (Build Up); Notre Dame collaborator: Lisa Schirch, Peace Studies; additional team members: Julie Hawke and Will O’Brien (University of Notre Dame)
By inviting peacebuilders into the process of designing and developing AI text classifier technologies, developers can not only increase these practitioners’ awareness of AI and willingness to integrate it into peacebuilding work, but also improve the quality and relevance of AI-generated classifications for peacebuilding around the world.
(Digital) Companionship in the Digital Age: On Human-AI Relationships and the Ethical Landscape Surrounding Artificial Others
Principal investigators: Robert Clowes (NOVA University of Lisbon) and Kesavan Thanagopal (University of Notre Dame); Notre Dame collaborator: Diego Gómez-Zará, Computer Science
AI companion apps like Replika, Character.AI, and Kuki allow users to create “artificial others” that can provide conversation, emotional support, and judgment-free interactions. Are the simulated responses of AI companions ethically problematic? Can AI companions be viewed as “persons” in a philosophical sense?
And what are the ethical responsibilities of app developers when releasing such technology into society?
Organized the 1st Symposium on Generative Companionship in the Digital Age: On Human-AI Relationships and the Ethical Landscape Surrounding Artificial Others, University of Twente, July 1-3, 2025
Image Descriptions Are Less Reliable Than They Appear: Support for Blind Users Assessing Capabilities of AI-Powered Access Technology
Principal investigator: Amy Pavel (University of Texas at Austin); Notre Dame collaborator: Toby Li, Computer Science; additional team member: Meng Chen (University of Texas at Austin)
Millions of blind and sight-impaired people across the world now use AI technologies such as ‘vision language models’ to access visual information in their daily lives.
But these models can produce errors that can go unnoticed by users, as they are difficult to identify without sight. How do we ensure the meaningful agency of those utilizing AI tools when direct verification of AI outputs is impractical or impossible?
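The publication listed just below approaches this by surfacing variation across repeated AI descriptions of the same image. As a rough sketch of that idea (all names here are hypothetical, and the describe_image stub stands in for sampling a real vision language model), low agreement between samples can flag descriptions a blind user should treat with caution:

```python
from itertools import combinations

def describe_image(image_path: str, seed: int) -> str:
    """Hypothetical stand-in for a vision language model call; a real
    implementation would query a VLM with sampling enabled."""
    canned = [
        "a brown dog running on a sandy beach",
        "a dog playing near the ocean on sand",
        "a horse galloping across a beach",  # a plausible, hard-to-spot VLM error
    ]
    return canned[seed % len(canned)]

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def agreement_score(image_path: str, n_samples: int = 5) -> float:
    """Mean pairwise similarity across repeated descriptions; low agreement
    suggests the description may be unreliable and worth flagging."""
    descriptions = [describe_image(image_path, s) for s in range(n_samples)]
    pairs = list(combinations(descriptions, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

score = agreement_score("beach_photo.jpg")
print(f"agreement = {score:.2f}" + ("  -> flag for review" if score < 0.5 else ""))
```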
Surfacing Variations to Calibrate Perceived Reliability of MLLM-generated Image Descriptions, accepted to The 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), October 2025
2024 Call for Proposals Projects
The focus of the 2024 CFP was “The Ethics of Large-Scale Models.” Large-scale models are drawing significant public interest, driven by an increased focus on applications that leverage this technology, such as ChatGPT, LLaMa, DALL-E, or Midjourney. Large-scale models are artificial intelligence (AI) systems pre-trained on large datasets for general-purpose use, often adapted for a specific task using a fine-tuning process.
Their adoption and impact are accelerating, driven by their potential in various contexts. With the increased adoption of large-scale models, multiple stakeholders have raised concerns regarding the ethical issues surrounding their design, development, and use.
Examples of areas for research and scholarship include, but are not limited to, the following:
Society, Community, and Culture
Interdisciplinary (developing trustworthy systems, design principles, human-AI collaboration, ROI)
Information about the award winners is included below.
Awardees: Emillie de Keulenaar (University of Groningen); Notre Dame Faculty Collaborator: Lisa Schirch, Keough School of Global Affairs
This project looks at how large language models reproduce conflicts imbued in minority-language training datasets, and then proposes frameworks for training and fine-tuning models for bridge-building applications. First, it examines how five closed and open-source LLMs — GPT-3.5 and 4, Llama 2, BERT, Mistral, and Gemini — respond to hundreds of historically, politically, socially, or morally controversial questions, or "controversy prompts", in one majority language (English) and at least two minority languages with a conflict history (for example, Armenian and Azerbaijani, Arabic and Hebrew, or other pairs).
By comparing results, I look at the discursive ways in which LLMs respond to controversy prompts and pinpoint significant divergences, gaps, and biases that obstruct the exchange of information, context, and other elements necessary for dialogue across language groups.
Then, I formulate a framework to retrain or fine-tune LLMs with additional datasets and bridge-building methods, developed in consultation with peacebuilding and computation experts at the Notre Dame-IBM Technology Ethics Lab, the Council for Tech and Social Cohesion, and the UN DPPA Innovation Cell.
Retrained or fine-tuned models can be used for content moderation in search, recommendation and other applications, which, when queried with sensitive and conflict-prone keywords ("genocide in Gaza", "Nagorno Karabakh", etc.), may prioritise content that facilitates contextual, historical and other understanding across languages.
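As a rough illustration of the controversy-prompt comparison step described above, here is a minimal sketch; query_model is a hypothetical stub for whatever API each model exposes, and the word-overlap measure is a placeholder, since a real study would need translation-aware semantic similarity to compare answers across languages:

```python
from itertools import combinations

MODELS = ["gpt-3.5", "gpt-4", "llama-2", "bert", "mistral", "gemini"]

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stub; a real run would call each provider's API."""
    return f"[{model}] response to: {prompt}"

def overlap(a: str, b: str) -> float:
    """Word-overlap placeholder for a proper multilingual similarity metric."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cross_language_divergence(model: str, prompt_by_lang: dict) -> float:
    """1 minus mean pairwise answer similarity across languages; high values
    flag controversy prompts where language groups get different stories."""
    answers = [query_model(model, p) for p in prompt_by_lang.values()]
    pairs = list(combinations(answers, 2))
    return 1 - sum(overlap(a, b) for a, b in pairs) / len(pairs)

# One controversy prompt in a majority and two minority languages (stand-ins).
controversy_prompt = {
    "en": "What happened in Nagorno-Karabakh in 2020?",
    "hy": "<same question in Armenian>",
    "az": "<same question in Azerbaijani>",
}
for model in MODELS:
    print(model, round(cross_language_divergence(model, controversy_prompt), 2))
```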
Contextualising AI Ethics in Higher Education: Comparing the Ethical Issues Raised by Large-Scale Models in Higher Education Across Countries and Subject Domains
Awardees: Wayne Holmes, Caroline Pelletier (Institut "Jožef Stefan"); Notre Dame Faculty Collaborator: Ying (Alison) Cheng, Psychology
Many Higher Education (HE) institutions have published policies on the use of large-scale models (LSMs), a set of Artificial Intelligence (AI) technologies, in teaching and learning.
Such policies often appear to focus exclusively on academic integrity but rarely acknowledge variations in the meaning of ‘integrity’ across different cultural contexts and disciplines. In response, this study aims to examine more contextual understandings of LSM/AI ethics in HE. The project will begin with a systematic comparison of LSM policies across two national contexts in which they are used extensively (US/UK).
This will be followed by interviews of teaching faculty to examine how those policies are interpreted and enacted across a range of HE subject domains, from arts to natural sciences. Finally, the outcomes of the policy review and interviews will inform a scalable, mixed-methods survey of faculty from the two countries and the varied subject domains, to reveal generalisable insights.
Cultural Context-Aware Question-Answering Systems: An Application to the Colombian Truth Commission Documents
Awardees: Luis Gabriel Moreno Sandoval (Pontificia Universidad Javeriana); Notre Dame Faculty Collaborators: Matthew Sisk and Anna Sokol, Lucy Family Institute for Data & Society, and Maria Prada Ramírez, Kroc Institute for International Peace Studies
The goal of this project is to create a Question-Answering System for the Colombian Truth Commission, the first-ever digital archive of the peace process.
However, the archive contains a vast amount of data, making manual analysis techniques impractical. Therefore, the system is necessary to facilitate navigation and understanding of documents. This initiative ensures global access to the archive and explores digital approaches used in peace processes.
It contributes valuable knowledge to improve future peacebuilding and conflict resolution efforts worldwide.
Engaging End Users in Surfacing Harmful Algorithmic Behaviors in Large-Scale AI Models
Awardees: Wesley Hanwen Deng, Motahhare Eslami, Ken Holstein, Jason Hong (Carnegie Mellon University); Notre Dame Faculty Collaborator: Toby Jia-Jun Li, Computer Science and Engineering
Traditional methods of testing AI models for harmful algorithmic behaviors, such as algorithm auditing, can fail to detect major issues given these methods’ reliance on small groups of AI experts.
Recent research has shown that end-users, armed with their relevant cultural knowledge and lived experience, can surface harmful algorithmic behaviors that are overlooked by expert-led AI auditing and red teaming. However, there remains a notable absence of tools, guidelines, and processes to facilitate public participation in surfacing harmful algorithmic behaviors in large-scale AI models.
To bridge this gap, we propose designing, developing, and evaluating a user-centered, interactive tool that effectively engages end-users in onboarding, exploring, reporting, and discussing the potential harmful AI behaviors exhibited in large-scale AI models. Our goal is to enhance meaningful public engagement to cultivate a responsible and ethical landscape for large-scale AI models.
Ethical Deployment of Generative AI Systems in the Public Sector: A Practitioner's Playbook
Awardees: Dhanyashri Kamalakkannan, Shyam Krishnakumar, Titiksha Vashist (The Pranava Institute); Notre Dame Faculty Collaborator: Georgina Curto Rex
Large-scale AI systems, particularly multimodal Large Language Models (LLMs), hold immense potential to transform how governments regulate, deliver essential services, and interface with citizens.
LLM-powered technology solutions are expected to play an important role in making public services more accessible, enhancing and personalising the delivery of public goods such as education and health, improving hiring and personnel management, and making policy processes more participatory across governments.
However, the deployment of generative AI in the public sector, whether to improve public service delivery, augment state capacity, or create new public goods, differs in its larger purpose from end-user or enterprise applications and gives rise to substantially different ethical challenges than its application in the private sector.
Public deployments of AI need to be citizen-centric, keeping the public good at their core by enabling democratic values such as trust, accountability, transparency, and the protection of rights, and by ensuring adequate public oversight mechanisms.
This project seeks to create an ethical framework for the public deployment of generative AI, translating ethical principles and existing global and national-level guidelines into a practical, accessible practitioner’s playbook that key decision-makers in government and companies can use as an ethical fitness check to mitigate potential harms before deploying AI across the public sector.
Ethical LLM-based Approach to Improve Early Childhood Development in Children with Cancer in LMICs
Awardees: Horacio Márquez-González (Hospital Infantil de México Federico Gómez); Notre Dame Faculty Collaborators: Nitesh Chawla, Computer Science and Engineering, and Angélica Garcia Martínez, Lucy Family Institute for Data & Society
This project aims to develop a prototype integrating Large Language Model (LLM) and Automated Speech Recognition (ASR) technologies to resolve communication barriers between caregivers and teachers, inside and outside the National Institute of Pediatrics, Hospital Infantil de México Federico Gómez (HIMFG), in Mexico City.
This prototype will allow health workers and teachers to address critical early childhood development (ECD) dimensions for children with cancer, supporting access to community health, nutrition, education, and parental care programs. Success relies on actively engaging teachers and caregivers in assessing needs (phase 1) and ensuring continuous improvement (phase 2), fostering inclusivity, and positively impacting prioritized groups.
This holistic approach to health, nutrition, education, and parenting care services is anticipated to improve social and economic development in population subgroups. The ethical implications of this technology will be explored, and the team will discuss expansion to other countries and regions.
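A minimal sketch of the ASR-to-LLM handoff such a prototype implies, using the open-source whisper package for speech recognition; the simplify_for_caregiver function is a hypothetical stand-in for the project's actual LLM component:

```python
import whisper  # pip install openai-whisper

def simplify_for_caregiver(transcript: str) -> str:
    """Hypothetical stub: a real prototype would prompt an LLM to rewrite a
    clinician's or teacher's note in plain, caregiver-friendly Spanish."""
    return f"(plain language) {transcript}"

def relay_message(audio_path: str) -> str:
    # Step 1: transcribe the spoken note with an ASR model.
    asr = whisper.load_model("base")
    transcript = asr.transcribe(audio_path, language="es")["text"]
    # Step 2: rewrite the transcript for the recipient before delivery.
    return simplify_for_caregiver(transcript)

print(relay_message("teacher_note.wav"))
```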
Generative AI and the Social Value of Artifacts: The Case for Saving Photo Morgues
Awardees: Kafui Attoh (CUNY School of Labor and Urban Studies), Jamie Kelly (Vassar College); Notre Dame Faculty Collaborator: Don Brower, Center for Research Computing
How LLMs Modulate our Collective Memory and its Ethical Implications
Awardees: Jasna Čurković Nimac (Catholic University of Croatia); Notre Dame Faculty Collaborator: Nuno Moniz, Notre Dame-IBM Technology Ethics Lab and Lucy Family Institute for Data & Society
This project will assess the impact of large language models (LLMs) such as GPT on the formation of collective memory.
In traditional terms, collective memory is a dynamic product of data selectively provided mainly from the institutions of memory (archives, museums, media, schools). However, AI changes how we access and use data in a public space.
The main research of this project centers on the social and educational uses of AI: how it reshapes the process of declarative memory-making and influences an ethically sustainable and impartial social framework for the construction of collective memory. Practically, how may our use of LLMs be shaping our collective memory concerning critical historical events?
Our hypothesis is that the probabilistic matrix of tools such as GPT is prone to officialise the narratives most represented in their training data, opening troubling avenues of action in cognitive warfare, with a significant potential to shape our collective memory.
To test this hypothesis, we will examine how GPT describes several controversial world events in different languages, assessing differences in historical factuality and representation across languages and, where differences exist, their practical and ethical implications for education, digital policy, and peacebuilding.
How Well Can GenAI Predict Human Behavior? Auditing State-of-the-Art Large Language Models for Fairness, Accuracy, Transparency, and Explainability (FATE)
Awardees: Jon Chun, Katherine Elkins (Kenyon College); Notre Dame Faculty Collaborator: Yong Suk Lee, Keough School of Global Affairs
This research project targets a pivotal issue at the intersection of technology and ethics: surfacing how Large Language Models (LLMs) reason in high-stakes decision-making over humans.
Our central challenge is enhancing the explainability and transparency of opaque black-box LLMs, and our specific use case is predicting recidivism—a real-world application that influences sentencing, bail, and early release decisions. To the best of our knowledge, this is the first study to integrate and contrast three different sources of ethical decisions: human experts, statistical machine learning (ML), and LLMs.
Methodologically, we propose a novel framework that combines state-of-the-art (SOTA) qualitative analyses of LLMs with SOTA quantitative performance of traditional statistical ML models. Additionally, we compare these two approaches with documented predictions by human experts.
This multi-model human-AI approach aims both to surface faulty predictions across all three sources and to correlate patterns of valid and faulty reasoning by LLMs. This configuration offers a more comprehensive evaluation of their performance, fairness, and reliability, essential for building trust in LLMs.
The anticipated outcomes of our project include a test pipeline to analyze and identify discrepancies and edge cases in both predictions and the reasoning behind them. This pipeline includes automated API scripts, an array of simple to complex prompt engineering strategies, as well as various statistical analyses and visualizations.
The pipeline architecture will be designed to generalize to other use cases and accommodate future models and prompt strategies to provide maximal reuse for the AI safety community and future studies. This project not only seeks to advance the field of XAI but also to foster a deeper understanding of how AI can be aligned with ethical principles.
By highlighting the intricacies of AI decision-making in a context fraught with moral implications, we underscore the urgent need for models that are not only technologically advanced but also ethically sound and transparent.
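A toy version of such a comparison pipeline, sketched under stated assumptions: scikit-learn's logistic regression stands in for the statistical ML baseline, synthetic data stands in for a real recidivism dataset, and llm_predict is a hypothetical stub for a prompted LLM. The useful output is the disagreement set, where stated reasoning can be inspected:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for a recidivism dataset: 4 features, binary label.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
X_train, y_train, X_test = X[:150], y[:150], X[150:]

ml_model = LogisticRegression().fit(X_train, y_train)
ml_preds = ml_model.predict(X_test)

def llm_predict(features):
    """Hypothetical stub: a real pipeline would prompt an LLM with a case
    description and parse out both its prediction and stated rationale."""
    pred = int(features[0] > 0.2)
    return pred, f"weighed feature[0]={features[0]:.2f} most heavily"

# Surface disagreements between model families for human expert review.
for i, case in enumerate(X_test):
    llm_pred, rationale = llm_predict(case)
    if llm_pred != ml_preds[i]:
        print(f"case {i}: ML={ml_preds[i]} LLM={llm_pred} | rationale: {rationale}")
```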
Impact of Generative Artificial Intelligence - ChatGPT - on Higher Education in the Global South: Ethics and Sustainability
Awardees: Helen Titilola Olojede, Felix Kayode Olakulehin (National Open University of Nigeria); Notre Dame Faculty Collaborator: Nitesh Chawla, Computer Science and Engineering
LLMs and a Well-Rounded Approach to Human Flourishing
Awardees: Avigail Ferdman (Technion-Israel Institute of Technology); Notre Dame Faculty Collaborator: Don Howard, Philosophy
Human flourishing ought to be an important ethical concern in Large-Scale Models, but it has yet to receive systematic scholarly attention.
This research addresses that gap by combining perfectionism—an ethical approach to human flourishing—with an analysis of Large Language Model (LLM) environments. According to developmental perfectionism, humans flourish when they develop and exercise their capacities (to know, create, be sociable, exercise willpower) in well-rounded ways. Capacities are shaped by affordances—action possibilities in the environment.
The research will analyze properties of LLM environments (e.g., content generation, speedy data analysis, user interface) according to their affordances (or constraints) for the competent exercise of a well-rounded combination of human capacities.
This will provide a new lens from which to evaluate the goodness of LLM design, deployment and use, as well as an opportunity to offer an LLM ethics that goes beyond its current focus on risks and harms.
Mitigating Ethical Risks in Large Language Models through Localized Unlearning
Awardees: Alberto Blanco-Justicia, Josep Domingo-Ferrer, Najeeb Jebreel, David Sánchez (Universitat Rovira i Virgili); Notre Dame Faculty Collaborator: Nuno Moniz, Notre Dame-IBM Technology Ethics Lab and Lucy Family Institute for Data & Society
During training, large language models (LLMs) can memorize sensitive information or capture biased/harmful patterns present in their training data, which can then be delivered to end users at inference time.
These undesirable behaviors undermine societal values and raise ethical risks. The overarching goal of our proposal is to develop an effective and efficient localized unlearning method that mitigates ethical risks in LLMs without compromising their utility.
To achieve this goal, we plan to: i) precisely locate the minimal internal components of LLMs responsible for undesirable behaviors; ii) implement efficient target interventions on these components to unlearn those behaviors; and iii) evaluate our method using standard LLMs and data sets. Our expected outcome is to make LLMs more ethically compliant.
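A toy sketch of the locate-then-intervene loop on a small PyTorch network follows; this illustrates the general idea under stated assumptions, not the team's method, and a real approach would target specific LLM components (e.g., individual MLP neurons) and verify that utility on benign data is preserved:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Stand-in for inputs that elicit the undesirable behavior and the
# unwanted outputs the model currently produces for them.
x_bad = torch.randn(16, 8)
y_bad = torch.zeros(16, dtype=torch.long)

# 1) Locate: rank first-layer weights by gradient-times-weight attribution
#    on the unwanted behavior.
loss_before = F.cross_entropy(model(x_bad), y_bad)
loss_before.backward()
w = model[0].weight
attribution = (w.grad * w).abs()

# 2) Intervene: zero only the top-k most responsible weights, leaving the
#    rest of the network untouched (the "localized" part).
top = torch.topk(attribution.view(-1), k=12).indices
with torch.no_grad():
    w.view(-1)[top] = 0.0

# 3) Evaluate: the unwanted outputs should become harder to produce
#    (cross-entropy on them should rise); a real evaluation would also
#    check that accuracy on benign data is unaffected.
loss_after = F.cross_entropy(model(x_bad), y_bad)
print(f"loss on unwanted behavior: {loss_before.item():.3f} -> {loss_after.item():.3f}")
```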
Research-Based Theater: An Innovative Method for Communicating and Co-Shaping AI Ethics Research & Development
Awardees: Anastasia Aritzi, Christoph Lütge, Franziska Poszler (Peter Löscher Chair of Business Ethics & Institute for Ethics in Artificial Intelligence, Technical University of Munich); Notre Dame Faculty Collaborator: Carys Kresny, Film, Television, and Theatre
Seeing the World through LLM-Colored Glasses - Detecting Biases and Deficiencies in Language Model Presentation of Underrepresented Topics
Awardees: Muhammad Ali, Ricardo Baeza-Yates, Shiran Dudy, Resmi Ramachandranpillai, Thulasi Tholeti (Northeastern University Institute for Experiential AI); Notre Dame Faculty Collaborator: Toby Jia-Jun Li, Computer Science and Engineering
Sources of information on the internet such as Wikipedia have exhibited long-standing disparities in representation across demographic dimensions.
Women and gender non-conforming individuals, racial and ethnic minorities, and people from the Global South have all faced difficulties in finding members of their communities or related topics in resources that are considered “definitive” tools for information. The creation of large language models (LLMs) like ChatGPT has introduced a new mediator of information that is increasingly being used in education and beyond.
This project will study differences in how LLMs present information about underrepresented topics. Through queries about public figures and geographic locations, we will build metrics to measure disparities in discovery rate, consistency, and sentiment of model responses.
These metrics will allow us to better understand the real-world implications of adoption of LLM tools and how future access to information might be skewed or limited by this new technology.
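A minimal sketch of what such metrics could look like in code, assuming a hypothetical ask_model stub in place of the model under audit and a crude word-list sentiment score where a real audit would use a calibrated classifier:

```python
from itertools import combinations

POSITIVE = {"celebrated", "renowned", "acclaimed"}
NEGATIVE = {"controversial", "obscure", "notorious"}

def ask_model(name: str) -> str:
    """Hypothetical stub; a real audit would query the LLM under test,
    returning "" when it refuses or does not recognize the subject."""
    if name.startswith("Underrepresented"):
        return ""
    return f"{name} is a celebrated public figure."

def discovery_rate(names: list) -> float:
    """Fraction of figures the model produces any answer for."""
    return sum(bool(ask_model(n)) for n in names) / len(names)

def consistency(name: str, n: int = 3) -> float:
    """Mean pairwise word overlap across repeated answers about one figure."""
    answers = [ask_model(name) for _ in range(n)]
    def overlap(a, b):
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 1.0
    pairs = list(combinations(answers, 2))
    return sum(overlap(a, b) for a, b in pairs) / len(pairs)

def sentiment(name: str) -> int:
    """Crude lexicon score: positive minus negative term counts."""
    words = ask_model(name).lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

group_a = ["Majority Figure 1", "Majority Figure 2"]
group_b = ["Underrepresented Figure 1", "Underrepresented Figure 2"]
print("discovery gap:", discovery_rate(group_a) - discovery_rate(group_b))
print("sentiment gap:", sentiment(group_a[0]) - sentiment(group_b[0]))
```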
Technology Transfer and Culture in Africa: Large Scale Models in Focus
Awardees: Catherine Botha, Franklyn Echeweodor, Anthony Isong, Edmund Ugar (University of Johannesburg); Notre Dame Faculty Collaborator: Jaimie Bleck, Political Science
The proposed project comprises a focused, multi-disciplinary investigation of how technology transfer affects culture in Africa in the context of large-scale models.
The project will yield three deliverables: one national workshop, one international conference held in South Africa, and one journal special issue devoted to the theme. The impact of technology transfer on culture is an underexplored theme in the literature, and the impact of large-scale models has only recently begun to attract attention, though not from the perspective of technology transfer.
We contend that the theme would benefit from a multi-disciplinary interrogation, to direct policy and law-making within the African context as well as to benefit makers of technologies. A carefully considered theoretical grounding for policy and other decision-making in the area of technology transfer and its impact on culture is, in our view, a first step in understanding this rich topos.
The Ethics of Using Large-Scale Models: Investigating Literacy Interventions for Generative AI
Awardees: Ranjit Singh, Emnet Tafesse (Data & Society Research Institute); Notre Dame Faculty Collaborator: Karla Badillo-Urquiola, Computer Science and Engineering
This project will explore literacy as a precondition for the ethical use of large-scale generative AI (GAI) models.
We will investigate how literacy interventions for students and parents become a site for empirical ethics by building their capacity to handle novel concerns around the increasing use of AI in ordinary settings. We focus on two kinds of literacy efforts: (1) events for community college students that take a gamified approach to testing GAI models; and (2) surveys that document families’ anxieties and aspirations around GAI.
While the first effort utilizes events as interventions that collect data on students’ concerns, the second effort collects data on the concerns of families that will, in turn, inform the design of new interventions. Our analysis will reflect on the nature of the critical thinking skills that these interventions produce, and how these skills mutually shape the ethics of using large-scale models.
The Influence of Virtual Avatar Race and Gender on Trust and Performance: Understanding How the Appearance of LLM-Enabled Avatars Influences Work in Virtual Reality
Awardees: Lisa van der Werff, Theo Lynn (Dublin City University); Notre Dame Faculty Collaborator: Timothy Hubbard, Management & Organization
This project investigates interactions between individuals and virtual avatars embodied by Large Language Models (LLMs) within virtual reality (VR), focusing on the influence of avatar race and gender.
As new technologies like LLMs, VR, and conversational virtual avatars converge, they redefine the future of work, enabling unique collaborations between humans and AI. This project aims to understand how workplace diversity within these technological advancements impacts worker dynamics.
Through a series of laboratory experiments, we will explore whether and how the race and gender of LLM-embodied avatars affect user interactions, trust levels, and task performance. The study leverages a trust lens to examine biases and differences in engagement with avatars, challenging existing notions of diversity and inclusion.
By addressing ethical considerations and providing evidence-based insights, the project seeks to inform future design and policy decisions regarding the deployment of virtual avatars in professional settings and the ethical integration of AI in the workplace.
2023 Call for Proposals Projects
The focus of the 2023 CFP was “Auditing AI.” As AI becomes more sophisticated, auditing it will involve increasingly complicated ethical, social, and regulatory challenges. Dimensions that require auditing must be identified, agreed upon, and measured. AI auditors must be trained.
Policies must be developed to govern the operations, credentialing, and impact of audits.
Potential areas for research and scholarship included the following:
Regulatory frameworks for AI audits
Methodologies for AI audits
Skills for future AI auditors
Teaching methodologies for AI audits
How AI audits may impact various sectors and industries
Suggested best practices for AI audits
Adoption and deployment of AI audits
Information about the award winners is included below.
AI Audits for Who? Asian Perspectives on Rebuilding Public Trust via Community Ethics and Conflict Resolution Mechanisms
Awardees: Mark Findlay, Sharanya Shanmugam, Zhang Wenxi, and Willow Wong (Singapore Management University)
The governance of artificial intelligence (AI) to mitigate societal and individual harm through ethics-by-design calls for equal attention to responsible data use before public trust can be conferred on AI technologies.
Since trust is fundamentally rooted in community relationships, AI regulators seeking public acceptance toward AI innovation must attend to community-centric pathways to integrate data subjects’ voices in AI ethical decision-making.
While traditional actuarial methods in financial audits can draw on a diverse range of evidence to determine legal compliance, the researchers suggest that community interests and data subjects’ voices should not be absent from AI audit models. This research proposal will explore Singaporean (and Asian) perspectives on AI regulation to inform the motivations for using AI audits to rebuild public trust.
Research analysis on the proposed scope and methodologies of AI audits will be followed by recommendations on the relevant skillsets for future AI auditors.
Algorithm Auditing, the Book
Awardee: Christian Sandvig (University of Michigan)
Would-be algorithm auditors presently have little guidance to begin learning about their research method.
This project proposes to use an experimental writing process—the “book sprint”—to produce a short book about algorithm auditing. The book would be written by successful auditors and their legal advisors in a concise, accessible style, and published by a respected press.
As a crossover title it would aim at both potential auditors (including algorithm designers, investigative journalists, and academic researchers) and others who wish to understand this area (including policymakers, regulators, and the general public). It will be grounded in the longstanding social scientific audit study literature, also known as correspondence studies or paired testing, giving it a distinctive voice.
These foundational ideas will be updated and applied to contemporary systems by leading researchers in computing, employing real-world examples from cutting-edge contemporary audits.
Audit4SG: Toward an Ontology of AI Auditing for a Web Tool to Generate Customizable AI Auditing Methodologies for AI4SG
Awardees: Cheshta Arora (Independent Researcher), Debarun Sarkar (Independent Researcher)
This project aims to develop an ontology of AI auditing, which will be used to build an auditing web tool.
The target users of the tool are external AI auditors of AI4SG (AI for Social Good), who will be able to generate customizable AI auditing methodologies. Along with the custom AI auditing methodology generated by the user, the tool will provide the user with a report card noting the pros and cons of the chosen AI auditing methodology.
The web tool will be a proof of concept whose underlying ontology will contribute to the field of relational ethics in AI auditing that can account for diverse interests, multi-directional processes, multi-scalar networks of actors, institutions, data, algorithms, infrastructure, values, and knowledges.
For heuristic purposes and based on the team’s domain expertise, the project will limit itself to three domains of AI4SG: economic empowerment, education, and equality and inclusion.
Best Practices for Communicating and Writing AI Audits
Awardee: John Gallagher (University of Illinois Urbana-Champaign)
This project aims to provide AI auditors with the discipline-specific training required to reach multiple expertise levels in their reporting, while navigating potential pitfalls of ambiguity.
It will achieve this goal via the analysis and synthesis of one-on-one interviews conducted with 107 machine-learning scientists and researchers (118 interviews, over 86 recorded hours). This dataset, gathered for an unfunded project, contains unanalyzed responses to direct questions about communication with AI scientists, domain experts (non-AI), and the public.
This large amount of process-based information requires trained human coders to identify best practices and themes. Drawing upon communication frameworks from the field of writing studies, these findings will be synthesized into training modules and disseminated to the academic community.
Building a Model of Participation of Children, Families, and Communities in AI Audits for Educational Services in Brazil
Awardees: Bruno Bioni, Marina Garrote, Marina Meira, Júlia Mendoça (Data Privacy Brasil Research Association)
This project will concentrate on methodologies for meaningful participation of children, families, and communities in AI audits, focusing on educational technologies in Brazil.
Both awareness and tools to promote participation are lacking and needed, given i) the penetration of AI-based solutions in schools in Brazil and ii) the current process to regulate AI nationally, which has so far paid little attention to this issue.
The same reasons for concern also reveal the good timing for the project, with a community of technology and children's rights activists and scholars that has become increasingly engaged in recent years. The second goal is to move forward in building a model for stakeholder participation geared toward this specific public.
The scope is justified by a favorable context on three fronts: hits and misses in law-mandated participation models, a robust framework for child protection, and a problematic but timely process of AI regulation. The activities will consist of workshops and qualitative research to produce a roadmap proposal.
A Capability Approach to Ethics-Based Auditing in Medical AI
Awardees: Mark Graves (AI & Faith), Emanuele Ratti (University of Bristol)
Recently, it has been proposed to address the ethical challenges posed by AI tools by taking inspiration from auditing processes.
This approach has been called ethics-based auditing (EBA), and it is based on an underlying conception of ethics that draws significantly from the recent principled turn of AI ethics, which is notoriously fraught with difficulties. This project proposes an alternative framework for EBA that is not based on AI principlism. In particular, it aims to conceptualize EBA on the basis of the capability approach.
Rather than checking for compliance with vague principles, EBAs should investigate the impact of AI tools on capabilities. The team formulates a preliminary characterization of capability-based EBA in medical AI. Deliverables will consist of a manuscript delineating the framework, a prototype AI tool that can be used for both internal and external auditing, and a conference paper.
Course Development for EU AI Act
Based on current listing details, eligibility includes: Researchers and teams, often involving university faculty. Applicants should confirm final requirements in the official notice before submission.
Current published award information does not specify a funding amount; teams have until December 31, 2025, to complete projects for the 2025 call. Always verify allowable costs, matching requirements, and funding caps directly in the sponsor documentation.
The current listing indicates rolling deadlines or periodic funding windows rather than a single target date. Build your timeline backwards from the relevant date to cover registrations, approvals, attachments, and final submission checks.
Federal grant success rates typically range from 10% to 30%, varying by agency and program. Build a strong proposal with clear objectives, measurable outcomes, and a well-justified budget to improve your chances.
Requirements vary by sponsor, but typically include a project narrative, budget justification, organizational capability statement, and key personnel CVs. Check the official notice for the complete list of required attachments.
Yes — AI tools like Granted can help you research funders, draft proposal sections, and check compliance. However, always review and customize AI-generated content to reflect your organization's unique strengths and the specific requirements of the solicitation.
Review timelines vary by funder. Federal agencies typically take 3-6 months from submission to award notification. Foundation grants may be faster, often 1-3 months. Check the program's timeline in the official solicitation for specific dates.
Many federal programs offer multi-year funding or allow competitive renewals. Check the official solicitation for continuation and renewal policies. Non-competing continuation applications are common for multi-year awards.