Montreal AI Ethics Institute

Looking before we leap: Expanding ethical review processes for AI and data science research

May 21, 2023

🔬 Research Summary by Ismael Kherroubi Garcia, trained in business management and the philosophy of the social sciences. He is the founder and CEO of Kairoi, the AI Ethics and Research Governance Consultancy.

[Original paper by Mylene Petermann (Ada Lovelace Institute), Niccolo Tempini (Senior Lecturer in Data Studies at the University of Exeter’s Institute for Data Science and Artificial Intelligence), Ismael Kherroubi Garcia (Kairoi), Kirstie Whitaker (Alan Turing Institute), Andrew Strait (Ada Lovelace Institute)]


Overview: Products and services built through AI and data science research can substantially affect people’s lives, so such research must be conducted responsibly. In many corporate and academic research institutions, a primary mechanism for assessing and mitigating research risks is the Research Ethics Committee (REC), known in some regions as an Institutional Review Board (IRB). This report explores the role of academic and corporate RECs in evaluating AI and data science research and provides recommendations to businesses, academia, policymakers, and funders working in this context.


Introduction

AI tools are widely applicable, and their different uses raise different controversies. Consider the case of ChatGPT being deployed in an experiment by a nonprofit healthcare provider. In short, the experiment consisted of mental health supporters using ChatGPT to “write more supportive responses quickly.” Meanwhile, the nonprofit’s founder claimed the study to be “exempt” from requiring informed consent from users. This stance fueled countless debates on social media. These are precisely the types of questions RECs are best equipped to manage.

Since the 1960s, RECs have been empowered to review research before it is undertaken and to reject proposals unless the proposed research design meets certain ethical standards. However, RECs were generally established to handle biomedical research, and the current role, scope, and function of most academic and corporate RECs need revising for the novel challenges that AI and data science research pose.

Through an extensive literature review, workshops, and interviews with experts from academia and industry, we identify six major challenges RECs face when working in the context of AI and data science research. We make eight recommendations to research institutes, industry, and the broader AI and data science ecosystem.

Key Insights

Challenges faced by RECs

  1. Many RECs lack the resources, expertise, and training to appropriately address the risks that AI and data science pose
  2. Traditional research ethics principles are not well suited to AI research, as they assume the close researcher-subject relationship found in biomedical research
  3. Specific principles for AI and data science research are still emerging and are not consistently adopted by RECs
  4. Multi-site and public-private partnerships can exacerbate existing challenges of governance and consistency in decision-making processes
  5. RECs struggle to review potential harms and impacts that arise throughout AI and data science research
  6. Corporate RECs lack appropriate transparency concerning their processes

Recommendations

For academic and corporate RECs

#1: Incorporate broader societal impact statements from researchers. 

AI and data science research communities have called for researchers to incorporate moral considerations at various stages of their work, from peer review to conference submissions. RECs can support these efforts by incentivizing researchers to engage in reflexive exercises to consider and document the broader societal impacts of their research.

#2: RECs should adopt multi-stage ethics review processes of high-risk AI and data science research.

Many challenges that AI and data science raise will arise at different research stages. RECs should experiment with requiring multiple evaluation stages for high-risk research. For example, a REC can evaluate projects at both the point of data collection and the point of publication.

#3: Include interdisciplinary and experiential expertise in REC membership.

Many of the risks of AI and data science research can only be understood by engaging with diverse experiences and expertise. RECs must be interdisciplinary to address the myriad issues AI and data science can pose across domains, and they must incorporate the perspectives of those impacted by the research and its outputs.

For academic and corporate research institutions

#4: Create internal training hubs for researchers and REC members, and enable cross-institutional knowledge sharing. 

Cross-institutional knowledge-sharing can ensure institutions do not develop standards of practice in silos. Training hubs should collect and share information on the ethical issues and challenges AI and data science research might raise, including case studies supporting reflexive exercises. In addition to our report, we have developed a resource of six case studies highlighting the ethical challenges RECs might face.

#5: Corporate labs must be more transparent about their decision-making and engage more with external partners.

Corporate labs face specific challenges regarding AI and data science reviews. While many are better resourced and have experimented with broader societal impact thinking (compared to academic RECs), some of these labs have faced criticism for being opaque about their decision-making processes. Many of these labs make consequential decisions about their research without engaging with local, technical, or experiential expertise that resides outside their organizations.

For funders, conference organizers, and the broader research ecosystem

#6: Develop standardized principles and guidance for AI and data science research.

National research governance bodies like UKRI should work to create a new set of ‘Belmont 2.0’ principles that offer standardized approaches, guidance, and methods for evaluating AI and data science research. Developing these principles should draw on diverse perspectives from different disciplines and communities impacted by AI and data science research, including multinational perspectives, particularly from regions historically underrepresented in the development of past research ethics principles.

#7: Actors across the research ecosystem should incentivize a responsible research culture.

AI and data science researchers lack the incentives to reflect on and document the societal impacts of their research. Different actors in the research ecosystem can encourage ethical behavior. Funders, for example, can create requirements that researchers develop societal impact statements to receive a grant. Meanwhile, conference organizers and journal editors can encourage researchers to include such statements when submitting research. By creating incentives throughout the research ecosystem, ethical reflection can become more desirable and be rewarded.

#8: Policymakers should increase funding and resources for ethical AI and data science research reviews. 

There is an urgent need for institutions and funders to support RECs, including paying for the time of staff and funding external experts to engage in questions of research ethics. The traditional approach to RECs has treated their labor as voluntary and unpaid. RECs must be properly resourced to meet AI and data science challenges.

Between the lines

There is no need to reinvent the wheel for AI and data science research. RECs and broader research governance departments have been around for decades. Our report highlights the opportunity to tap into this rich resource. And we have come a long way since RECs first emerged.

We now know we need a shift in academia from a culture of “publish-or-perish” to one of research integrity. We also have evidence that diverse teams fuel innovation. These are aspects our report points to and which RECs in general, not just in AI and data science, can work to improve.

More specifically, we know data science education needs more focus on ethics. We know AI systems risk exacerbating racial and societal inequalities. We know there are many sets of moral principles we can write into AI ethics frameworks. However, we must move on from debating values to operationalizing responsible AI practices. While RECs will need adapting and adequate resourcing, they can help drive the movement toward responsible AI and data science research.

