Montreal AI Ethics Institute

Democratizing AI ethics literacy


Engaging the Public in AI’s Journey: Lessons from the UK AI Safety Summit on Standards, Policy, and Contextual Awareness

November 29, 2023

✍️ Column by Connor Wright, our Partnerships Manager.


Overview: The Montreal AI Ethics Institute is a partner organization of Partnership on AI (PAI). Our Partnerships Manager, Connor, attended PAI's UK AI Safety Summit fringe event in London on October 24–25, 2023. Impressed by the variety of speakers and perspectives, his main takeaway was the importance of public engagement when deploying AI systems. This aligns strongly with MAIEI's mission of democratizing AI ethics literacy: building civic competence so that the public is better prepared and informed about the nuances of AI systems, and can properly engage in how those systems are governed from both a technical and a policy perspective.


Introduction

With speakers ranging from all-female panels to members of the House of Lords, I was seriously impressed by the variety of speakers and the diversity of thought on offer. Timed to coincide with PAI's release of its guidance on safe foundation model deployment, the event emphasized AI standards, policy, and research. Below, I dive into my main takeaways from the event and show why public engagement will be the key to long-term AI success.

Key Insights

Public engagement

Without a doubt, what resonated most strongly with me was the Forum's discussion of public engagement, in particular the point that public engagement is more than just focus groups and surveys. For example, David Leslie of the Alan Turing Institute noted how the Institute consulted citizens' juries in its 2019 work. They learned that the local community really cared about fairness, safety, and bias mitigation being prioritized in AI systems, something they had not appreciated before. That is to say, it is one thing to know how a convolutional neural network (CNN) works, but it is another to know how it impacts people. This links to a presentation by Keoni Mahelona, which emphasized that the groundwork for AI services deployed in indigenous languages must be done by those who speak the language: Google Translate performed nowhere near well enough when translating such languages.

Such calls for better public engagement reveal that there are no templates or canonical examples to follow when it comes to public engagement. More work is therefore needed to establish these processes, which in turn means a greater need to incentivize those doing that work. Those incentives could come from leadership buy-in and from business metrics that show the impact of public engagement; how to foster such buy-in and metrics remains to be seen.

The importance of context

What struck me during conversations surrounding the regulation of AI models was that, when evaluating models, we focus too much on capabilities and not enough on context. Instead of measuring how an AI system performs in the field, we focus on its performance in the lab, creating unforeseen problems when the model is deployed. Focusing on context instead can generate a deeper understanding of the raison d'être of released technologies and of how companies and governments act on AI.

To illustrate, Lucy Poole of the Australian Government remarked that Australia's approach to AI is influenced by distance. Rather than being forerunners of the AI process, they prefer to be "fast followers," learning from the mistakes of others and attempting to avoid repeating them. In this way, they have the luxury of reflecting on AI technologies before deploying them, in stark contrast to countries such as the US. By considering the environment in which AI technologies are situated, we can better understand how they will perform and why they exist in the first place.

Policy and standards

Given that this event was part of the fringe programme surrounding the UK AI Safety Summit, there was a strong emphasis on the way forward for policy and standard-setting in the AI space. Unfortunately, adhering to standards, and compliance in general, is the last thing businesses want to think about. Furthermore, AI standards are like plug sockets: they differ from country to country (for instance, in the emphasis placed on privacy). Hence, starting any AI development process by establishing the direction you want to take with AI will ease that burden by clarifying the considerations needed to head in that direction.

In that spirit, Jaisha Wray advised that we "be quick, but don't hurry": act swiftly, but not so hastily that we make mistakes. By not hurrying, we can also emphasize AI's dependency on humans for its data, better allowing us to centre human rights frameworks when it comes to AI.

Between the lines

I was left very impressed by the organization and the quality of speakers at this event, which provided a platform for many different kinds of thought at a time when that is so needed. Above all, I appreciated the emphasis on the importance of public engagement; panelists were careful to note that this groundwork is hard, but that this should not distract from its importance. We can then opt for public engagement proportional to the risk and scale of an AI technology: the more serious the calculated technological impact, the stronger the imperative to engage the consumers and recipients of the technology. Public engagement should not be a tick-box exercise. The organizations and companies that follow this line of action most thoroughly will be the most successful in the long run.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.