Montreal AI Ethics Institute


Democratizing AI ethics literacy

Engaging the Public in AI’s Journey: Lessons from the UK AI Safety Summit on Standards, Policy, and Contextual Awareness

November 29, 2023

✍️ Column by Connor Wright, our Partnerships Manager.


Overview: The Montreal AI Ethics Institute is a partner organization of the Partnership on AI (PAI). Our Partnerships Manager, Connor, attended PAI's UK AI Safety Summit fringe event in London on the 24th and 25th of October, 2023. Impressed by the variety of speakers and perspectives, his main takeaway was the importance of public engagement when deploying AI systems. This aligns strongly with MAIEI's mission of democratizing AI ethics literacy: building civic competence so that the public is better prepared and informed about the nuances of AI systems and can properly engage in how those systems are governed, from both a technical and a policy perspective.


Introduction

With speakers ranging from all-female panels to members of the House of Lords, I was seriously impressed by the variety of speakers and the diversity of thought on offer. Paired with the release of PAI's guidance on safe foundation model deployment, the conference placed a clear emphasis on AI standards, policy, and research. Below, I dive into my main takeaways from the event and show how public engagement will be key to long-term AI success.

Key Insights

Public engagement

Without a doubt, what resonated most strongly with me was the Forum’s discussion of public engagement, in particular the point that public engagement is more than just focus groups and surveys. For example, David Leslie of the Alan Turing Institute noted how they consulted citizens’ juries in their 2019 work. They learned that the local community really cared about fairness, safety, and bias mitigation being prioritized in AI systems, something they had not appreciated before. That is to say, it’s one thing to know how a convolutional neural network (CNN) works, but it’s another to know how it impacts people. This links to a presentation by Keoni Mahelona, which emphasized that the groundwork for AI services deployed in indigenous languages should be done by those who speak the language – Google Translate performed nowhere near well enough when translating such languages.

Such calls for better public engagement reveal that there are no templates or canonical examples to follow when it comes to public engagement. More work therefore needs to be done to establish these processes, which in turn means a greater need to incentivize those doing the work. Incentives could come from leadership buy-in and internal metrics that demonstrate the impact of public engagement; how to foster that buy-in and those metrics remains to be seen.

The importance of context

What struck me during conversations about regulating AI models was that, when evaluating models, we focus too much on capabilities and not enough on context. Instead of measuring how the AI performs in the field, we focus on its performance in the lab, creating unforeseen problems when the model is deployed. Focusing on context instead can generate a deeper understanding of the raison d’être of released technologies and of how companies and governments act regarding AI.

To illustrate, Lucy Poole of the Australian Government remarked that their approach to AI is influenced by distance. Rather than being forerunners of the AI process, they prefer to be “fast followers,” learning from the mistakes of others and attempting to avoid repeating them. In this way, they have the luxury of reflecting on AI technologies before deploying them, in stark contrast to other countries such as the US. By considering the environment in which AI technologies are situated, we can better understand how they will perform and why they exist in the first place.

Policy and standards

Given that this event was part of the fringe events surrounding the UK AI Safety Summit, there was a strong emphasis on ways forward for policy and standard-setting in the AI space. Unfortunately, adhering to standards, and compliance in general, is the last thing businesses want to consider. Furthermore, AI standards are like plug sockets: they differ from country to country (for example, in their emphasis on privacy). Hence, starting any AI development process by establishing the direction you want to take with AI will help ease that burden by clarifying the considerations needed to head in that direction.

In this vein, Jaisha Wray advised that we “be quick, but don’t hurry”: act swiftly, but not so hastily that we make mistakes. By not hurrying, we can also emphasize AI’s dependency on humans for its data, better allowing us to focus on human rights frameworks when it comes to AI.

Between the lines

I was left very impressed by the organization and the quality of the speakers at this event, which provided a platform for many different kinds of thought at a time when it is sorely needed. Above all, I appreciated the emphasis on the importance of public engagement; panelists made sure to mention that this groundwork is hard, but that this should not distract from its importance. We can then opt for public engagement proportional to the risk and scale of the AI technology: the more serious the anticipated impact, the stronger the imperative to engage the consumers and recipients of the technology. Public engagement should not be a tick-box exercise, and the organizations and companies that follow this line of action most thoroughly will be the most successful in the long run.


