✍️ Column by Connor Wright, our Partnerships Manager.
Overview: The Montreal AI Ethics Institute is a partner organization of the Partnership on AI (PAI). Our Partnerships Manager, Connor, attended PAI's UK AI Safety Summit fringe event in London on the 24th and 25th of October, 2023. Impressed by the variety of speakers and perspectives, his main takeaway was the importance of public engagement when deploying AI systems. This aligns strongly with MAIEI's mission of democratizing AI ethics literacy: building civic competence so that the public is better prepared and informed about the nuances of AI systems and can properly engage in how those systems are governed from a technical and policy perspective.
Introduction
With speakers ranging from all-female panels to members of the House of Lords, I was seriously impressed by the variety of speakers and the diversity of thought on offer. Paired with PAI's release of its safe foundation model guidance, the conference placed a strong emphasis on AI standards, policy, and research. Below, I dive into my main takeaways from the event and show how public engagement will be the key to long-term AI success.
Key Insights
Public engagement
Without a doubt, what resonated most strongly with me was the Forum’s discussion of public engagement. In particular, it highlighted that public engagement is more than just focus groups and surveys. For example, David Leslie of the Alan Turing Institute noted how they consulted citizens’ juries in their 2019 work. They learned that the local community really cared about fairness, safety, and bias mitigation being prioritized in AI systems, something they had not appreciated before. That is to say, it’s one thing to know how a convolutional neural network (CNN) works; it’s another to know how it impacts people. This links to a presentation by Keoni Mahelona, which emphasized that the groundwork for AI services deployed in indigenous languages should be done by those who speak the language – Google Translate performed nowhere near well enough when translating such languages.
Such calls for better public engagement reveal that there are no templates or canonical examples to follow when it comes to public engagement. Establishing these processes therefore requires more work, which means a greater need to incentivize those who have to do it. Those incentives could come from leadership buy-in and business metrics that show the impact of public engagement. How to foster such buy-in and metrics remains to be seen.
The importance of context
What struck me during conversations about regulating AI models was how, when evaluating models, we focus too much on capabilities and not enough on context. Instead of measuring how the AI performs in the field, we focus on its performance in the lab, creating unforeseen problems when the model is deployed. Focusing on context instead can generate a deeper understanding of the raison d’être of the released technologies and of how companies and governments act regarding AI.
To illustrate, Lucy Poole of the Australian Government remarked that their approach to AI is influenced by distance. Rather than being forerunners of the AI process, they prefer to be “fast-followers,” learning from the mistakes of others and attempting to avoid repeating them. In this way, they have the luxury of reflecting on AI technologies before deploying them, which contrasts starkly with other countries such as the US. By considering the environment in which AI technologies are situated, we can better understand how they will perform and why they exist in the first place.
Policy and standards
Given that this event was one of the fringe events surrounding the UK AI Safety Summit, there was a strong emphasis on the way forward for policy and standard-setting in the AI space. Unfortunately, adhering to standards, and compliance in general, is the last thing businesses want to consider. Furthermore, AI standards are like plug sockets: they differ from country to country (for example, in their emphasis on privacy). Hence, starting any AI development process by establishing which direction you want to take with AI will help ease that burden by clarifying the considerations needed to head in that direction.
In doing so, Jaisha Wray advised that we “be quick, but don’t hurry” – act swiftly, but not so hastily that we make mistakes. By not hurrying, we can also emphasize AI’s dependence on humans for its data, which better allows us to focus on human rights frameworks when it comes to AI.
Between the lines
I was left very impressed by the organization and the quality of speakers at this event, as it provided a platform for many different kinds of thought at a time when this is so needed. Above all, I appreciated the emphasis on the importance of public engagement: panelists made sure to mention that this groundwork is hard, but that this should not distract from its importance. In this way, we can opt for public engagement proportional to the risk and scale of an AI technology: the more serious the calculated technological impact, the stronger the imperative to engage the consumers and recipients of the technology. Public engagement should not be a tick-box exercise. The organizations and companies that follow this line of action most thoroughly will be the most successful in the long run.