Engaging the Public in AI’s Journey: Lessons from the UK AI Safety Summit on Standards, Policy, and Contextual Awareness

November 29, 2023

✍️ Column by Connor Wright, our Partnerships Manager.


Overview: The Montreal AI Ethics Institute is a partner organization of Partnership on AI (PAI). Our Partnerships Manager, Connor, attended PAI’s UK AI Safety Summit fringe event in London on the 24th and 25th of October, 2023. Impressed by the variety of speakers and perspectives, Connor came away with one main takeaway: the importance of public engagement when deploying AI systems. This aligns strongly with MAIEI’s mission of democratizing AI ethics literacy, which aims to build civic competence so that the public is better informed about the nuances of AI systems and better prepared to engage, from both a technical and a policy perspective, in how those systems are governed.


Introduction

With speakers ranging from all-female panels to members of the House of Lords, I was seriously impressed by the variety and diversity of thought on offer. Paired with the release of PAI’s guidance on safe foundation model deployment, the event placed a clear emphasis on AI standards, policy, and research. Below, I dive into my main takeaways from the event and show why public engagement will be the key to long-term AI success.

Key Insights

Public engagement

Without a doubt, what resonated most strongly with me was the Forum’s discussion around public engagement, and in particular the point that public engagement is more than just focus groups and surveys. For example, David Leslie of the Alan Turing Institute noted how the Institute consulted citizens’ juries in its 2019 work and learned that the local community really cared about fairness, safety, and bias mitigation being prioritized in AI systems, something the researchers had not appreciated before. That is to say, it’s one thing to know how a convolutional neural network (CNN) works; it’s another to know how it impacts people. This links to a presentation by Keoni Mahelona, who emphasized that the groundwork for AI services deployed in Indigenous languages should be done by those who speak the language; Google Translate performed nowhere near well enough when translating such languages.
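To make that contrast concrete, here is a minimal Python sketch of one simple fairness check, the demographic parity difference. It is my own hypothetical illustration (nothing presented at the event), showing the kind of measurable signal a team could report when a community asks for fairness and bias mitigation to be prioritized:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: array of 0/1 model decisions (e.g., approvals)
    group:  array of 0/1 group membership labels
    A value near 0 means the model selects both groups at similar rates.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Hypothetical toy data: the model approves 80% of group 0 but 40% of group 1
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.4, a large gap
```

A near-zero gap does not prove a system is fair on its own, but even a simple metric like this translates a community’s priorities into something a team can track.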

Such calls for better public engagement also reveal that there are no templates or canonical examples to follow. Establishing these processes therefore requires extra work, which in turn means a greater need to incentivize those who have to do it. Those incentives could come from leadership buy-in and from business metrics that demonstrate the impact of public engagement, though how to foster such buy-in and metrics remains to be seen.

The importance of context

What struck me during the conversations about regulating AI models was that, when evaluating models, we focus too much on capabilities and not enough on context. Instead of measuring how an AI system behaves once deployed in the field, we focus on its performance in the lab, which creates unforeseen problems at deployment time. Focusing on context instead can generate a deeper understanding of the raison d’être of released technologies and of how companies and governments act regarding AI.
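As a toy illustration of that lab-versus-field gap, the Python snippet below (my own sketch with made-up data, not an evaluation from the event) trains a classifier that leans on a shortcut feature which holds under benchmark conditions but breaks in deployment, so its lab score overstates how it performs in context:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Labels depend weakly on a causal feature; a 'shortcut' feature
    agrees with the label at rate `spurious_corr`."""
    y = rng.integers(0, 2, size=n)
    x_causal = y + rng.normal(0.0, 1.0, size=n)  # weak real signal
    agrees = rng.random(n) < spurious_corr
    x_shortcut = np.where(agrees, y, 1 - y) + rng.normal(0.0, 0.1, size=n)
    return np.column_stack([x_causal, x_shortcut]), y

# "Lab": the shortcut agrees with the label 95% of the time
X_train, y_train = make_data(5000, spurious_corr=0.95)
model = LogisticRegression().fit(X_train, y_train)

X_lab, y_lab = make_data(2000, spurious_corr=0.95)      # benchmark conditions
X_field, y_field = make_data(2000, spurious_corr=0.50)  # shortcut breaks in the field
print("lab accuracy:  ", model.score(X_lab, y_lab))     # high
print("field accuracy:", model.score(X_field, y_field)) # much lower
```

The specific numbers matter less than the pattern: a capability measured in the lab can quietly depend on context that will not hold once the system is deployed.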

To illustrate, Lucy Poole of the Australian Government remarked that Australia’s approach to AI is influenced by distance. Rather than being forerunners, they prefer to be “fast-followers,” learning from the mistakes of others and trying to avoid repeating them. In this way, they have the luxury of reflecting on AI technologies before deploying them, in stark contrast with countries such as the US. By considering the environment in which AI technologies are situated, we can better understand how they will perform and why they exist in the first place.

Policy and standards

Given that this event was part of the fringe events surrounding the UK AI Safety Summit, there was a strong emphasis on ways forward for policy and standard-setting in the AI space. Unfortunately, adhering to standards, and compliance in general, is often the last thing businesses want to consider. Furthermore, AI standards are like plug sockets: they differ from country to country (for example, in the emphasis placed on privacy). Hence, starting any AI development process by establishing which direction you want to take with AI will ease that burden by clarifying the considerations needed to head in that direction.

In that spirit, Jaisha Wray advised that we “be quick, but don’t hurry”: act swiftly, but not so hastily that we make mistakes. By not hurrying, we can also keep sight of AI’s dependency on humans for its data, which makes it easier to center human rights frameworks when it comes to AI.

Between the lines

I was left very impressed by the organization and the quality of speakers at this event, which provided a platform for many different kinds of thought at a time when that is sorely needed. Above all, I appreciated the emphasis on the importance of public engagement; panelists were careful to note that this groundwork is hard, but that its difficulty should not distract from its importance. We can then opt for public engagement proportional to the risk and scale of an AI technology: the more serious its anticipated impact, the stronger the imperative to engage the consumers and recipients of that technology. Public engagement should not be a tick-box exercise, and the organizations and companies that take it most seriously will be the most successful in the long run.

