By Abhishek Gupta
Let’s start with a brief history of this series of events, which has been running for over a year now and counts 1,000+ people from the Montreal community as its members.
How did it all start?
It started as an experiment in making a far-reaching topic accessible to a broad audience, many of whom do not come from a background in artificial intelligence or ethics (and this is a great thing, as I’ll highlight soon). What began as a handful of people coming together once every two weeks to learn more about the emerging topic of AI ethics quickly grew into a strong movement: a multitude mobilizing to shape policy, voice diverse views and push decision makers to consider ethics, privacy and inclusivity concerns when deploying AI-enabled solutions.
The most recent event saw us fill the room to capacity with people from very diverse walks of life, which was reflected in the quality of insights and the variety of subjects and angles brought up when thinking about the topic. A big shoutout to Daniel Tarantino from Arup and Global Shapers Montréal for hosting us and providing food and beverages to keep us energized through the discussions!
The session took the form of a brief introduction, after which attendees broke into small groups to discuss the reading material that had been shared and to answer the two guiding questions for the day.
Why are the sessions run the way they are?
A quick note on how the sessions are run, because it provides an important framework for understanding the insights generated by the group, and I hope it will serve as a resource for others looking to organize similar activities.
The choice to break up into small groups, even when a large attendance results in many groups, comes from my belief that the dynamics of larger groups inhibit some voices from being heard; a format of 3–4 people per group creates higher accountability and encourages everyone to actively contribute to the discussion.
The readings themselves are mandatory, and quite a few people manage to complete them, which allows for a richer discussion and largely removes the need to start from the basics. The goal here is two-fold: first, to let people digest the material and build competence; second, to make the time everyone spends at the event more valuable.
The guiding questions give a loose framework to the discussions and encourage people to stay on theme, which becomes especially important when the subject has wide-ranging implications and can easily drift off-topic.
Finally, the group discussions are followed by presentations from each group on what they discussed, with questions and comments from the other attendees adding another layer of nuance and context.
Coming back to the most recent meetup, these were the guiding questions that I posed to the attendees:
1) What are 3 concrete steps that citizens can take to better engage city officials in addressing the issues raised in the articles?
2) What are 3 ways that we can enhance public competence when it comes to meaningfully discussing ethics, inclusion and privacy issues in using AI in smart cities?
The meetup readings covered both the positive and negative outcomes emerging as some cities adopt these solutions, as well as the future scenarios that might arise as the technology matures and becomes more widely deployed.
The following are some of the insights that came out of the meetup:
One of the groups came up with some interesting ideas on how to engage city officials, especially through storytelling, scenario planning and narrative building. There were also suggestions to host seminars in public spaces, particularly public libraries and, more generally, spaces that are accessible and welcoming to all, as a way to spark dialogue. Publishing action plans and even embedding AI ethics into high-school curricula were also suggested.
We now have a regular attendee from a Montreal high school, and it is always fascinating to hear his perspective as a digital native who has grown up accepting AI-enabled solutions around him as an everyday occurrence.
One of the attendees had worked with the smart city department of Ville de Montréal on innovative projects. His team had written future city scenarios in the form of short novels set in 2025, using controversial cases as triggers for discussion and policy making, and walked through them with city officials.
Will storytelling play a key role in shaping these discussions?
An intriguing point here is the emphasis on storytelling as a means of mass communication. Movies are a case in point, but all too often they showcase dystopias, which ultimately shapes how people perceive the advent of AI in different parts of their lives. Perhaps this is an opportunity to create new narratives that highlight both the capabilities and the limitations of AI in cinematic format for wider reach. Something along the lines of Bright Mirror vs. Black Mirror?
One of the things I’ve been working on as part of the initiatives at the Montreal AI Ethics Institute is translating this knowledge and mapping it to the decision-making process as a tool to aid government workers. This came up at the meetup as well: one of the groups called for a clearly articulated participation process to involve residents in making choices about how these solutions are deployed in their cities. Inviting youth to participate in the various taskforces would add a degree of realism about how technology is used on the ground. The group strongly advised placing accountability mechanisms around these public consultations because of their far-reaching implications; the selection of such a group could even follow the jury duty selection process as a starting point. Another group mentioned that there should be transparency requirements when deploying AI-based projects in smart cities.
Something that will play an increasingly important role is enhancing thinking through encouraged cognitive dissonance, especially in today’s highly polarized and groupthink-prone landscape.
Education and room to make errors
Another group brought up a seldom-voiced perspective: allowing policy makers and government officials more room to make mistakes, especially when trying out new AI technologies. They suggested that the preference for more conservative technologies perhaps stems from a fear of public failure. While that is a necessary step in realizing the benefits of AI, a complementary piece, an agile regulatory system, will be crucial to the success of these experiments. Capacity building around AI within these institutions is essential, as they are often “legislating horses as cars are coming onto the roads”.
Ethics education on AI for students, professionals and government workers, structured through town halls, training programs or classes, was raised as a crucial need to address today’s deficit in our understanding of these issues. There was also a call for public officials to educate themselves on AI and ethics, since they are the ones making policies that affect millions of people.
The current pipeline through which ideas germinate in the populace and make their way to decision-makers roughly follows the path: activism -> academia -> policy-making. Activism plays the important role of highlighting issues that are not yet popular, making the ideas more commonplace and accessible, and inviting participation from a wider segment of society. Academia frames hypotheses from these movements and provides empirical data to back them up.
There also seems to be a lack of awareness of initiatives happening in different parts of the world that are trying to address similar issues; a GitHub of sorts for these projects would help cross-pollinate ideas and build a network for sharing best practices. While some of the larger ideas would need the backing of public organizations to take on those risks, experimentation is necessary at all levels of society to surface novel solutions to these ethical, privacy and inclusivity concerns.
Developing checklists that push for ethical adherence in such projects would be a necessary complement to the experimentation described above.
One of the questions raised by the groups captured the attention of all the attendees:
Montreal has a lot of AI experts locally, so it is easy for city officials to consult them, but what about cities that don’t have this kind of easy access? How do we make this a process that scales globally?
Another important aspect of this experimentation phase is setting success metrics a priori, in consultation with the public, to ensure they are in line with the expectations, norms and values of that city.
An analogy that resonated with a lot of people: if we are asked to vote on things that happen in the physical realm of the city, why shouldn’t that also be the case when algorithms might govern different aspects of it?
Sharing responsibility and accountability with the technical folks who develop these solutions was another frequently raised point. An idea I liked: Montreal has a lot of innovation labs experimenting with different ideas; perhaps they are the democratic tools that can showcase what a future smart city might look like.
Concluding thoughts
My biggest takeaway was that a topic with so many implications, one that requires wide-ranging expertise, is best tackled by bringing together an eclectic group of people and providing a loose framework within which they can debate and discuss their ideas.
There are certainly many lessons to be learned about what to watch out for and how best to integrate informed and competent policy making when it comes to the use of AI-enabled solutions in a smart city context. My hope is that city officials can use this as a starting point for discussion and enlist the help of people from diverse backgrounds in ensuring that the evolution of our cities takes the path of strong ethics, privacy and inclusion.
Here are the readings mentioned in the article above:
Mandatory Readings:
1) What It Means To Lead An Inclusive City In 2018–And Into The Future https://www.fastcompany.com/40542775/what-it-means-to-lead-an-inclusive-city-in-2018-and-into-the-future
2) Mayor de Blasio Announces First-In-Nation Task Force To Examine Automated Decision Systems Used By The City https://www1.nyc.gov/office-of-the-mayor/news/251-18/mayor-de-blasio-first-in-nation-task-force-examine-automated-decision-systems-used-by
3) What Artificial Intelligence Reveals About Urban Change https://www.citylab.com/life/2017/07/what-ai-has-to-say-about-the-theories-of-urban-change/533211/
4) AI in Smart Cities: Privacy, Trust and Ethics https://newcities.org/the-big-picture-ai-smart-cities-privacy-trust-ethics/
5) Stop Saying ‘Smart Cities’ https://www.theatlantic.com/technology/archive/2018/02/stupid-cities/553052/?utm_source=twb
Adventurous Readings:
1) Inclusive AI: Technology and Policy for a Diverse Urban Future http://citris-uc.org/wp-content/uploads/2017/07/Inclusive-AI_CITRIS_2017.pdf
2) AI and the City https://medium.com/urban-us/ai-the-city-a4f40c1a13d7