Op-ed by Abhishek Gupta, founder of the Montreal AI Ethics Institute.
“So it’s not only important for you to participate in this technological future, but think about an alternative future where your imagination gets to shape what kind of technology we’re building.”
— Dr. Timnit Gebru (VentureBeat)
2020 certainly was a year for the books from many different perspectives. We don’t need a reminder of all the things that went wrong. It felt like the field of AI ethics was itself a microcosm of everything that was going on. Towards the end of the year, I felt that the injustices and trouble that Dr. Gebru endured as part of her work at Google added more fuel to our mission at the Montreal AI Ethics Institute and to our calls for building civic competence in AI ethics. This is aptly captured in the quote above from Dr. Gebru, which I’d like us to use as a rallying cry as we head into 2021.
What is civic competence?
Civic competence refers to the ability of people from all walks of life to meaningfully participate in discussions on a particular subject. Meaningful participation here means showing up with a solution mindset and a fundamental understanding of the issues, what has been tried before, and what realistic actions we can take to move the discussion forward in a productive manner.
At least, that is my working definition for it and what I am striving to help develop in the community through my work at the Montreal AI Ethics Institute.
What does it mean in the field of AI ethics?
The field of AI ethics, whether you are a consummate insider or a concerned citizen, is rife with discussions on privacy, bias, fairness, accountability, transparency, governance, responsibility, and many other ideas, some of which overlap with each other and some of which don’t.
One thing that is important to consider here is that the impact of AI will be felt by all, whether you choose to directly engage with it or not. So, if we want to shape the technical and policy measures in the domain such that the technology creates positive change, we need to engage in a way that is nuanced and doesn’t apply a blanket approach to any of these areas. Blanket approaches lead to measures that either stifle innovation or leave the field so unregulated and unmonitored that it continues to inflict harm on people.
Why do we need it?
There are three main reasons why I think civic competence in AI ethics is important:
- the counterweight role
- a more meaningful way towards diversity
- ongoing basis for capturing harms in society from AI systems
1. The counterweight role
There are many AI ethics initiatives from governments, academia, industry, and civil society. Each of them runs with a particular agenda that shapes the technical and policy measures used to govern the technology and its deployments.
But they often miss out on one facet or another by virtue of their composition, funding sources, approaches to investigation, and so on. Broad-based civic competence puts many more critical eyes on the issues and surfaces blind spots in ways that would just not be possible with a handful of people who (often) have similar backgrounds.
2. A more meaningful way towards diversity
We come from different walks of life, with different lived experiences, cultures, languages, abilities, frames of mind, and the myriad other ways in which all of us are unique; trying to capture all of that richness in a single instrument is a recipe for failure.
We don’t often think about it, but when you crowdsource from a large number of people, there is often the impression that many of the suggestions will be of low quality, perhaps because of unfamiliarity with the domain, trolling behaviour, and so on. Yet crowdsourcing also surfaces gems that would otherwise go unexplored and unarticulated. We’ve seen this happen first-hand in the workshops we’ve hosted at the Montreal AI Ethics Institute, where absolutely brilliant suggestions that “groups of experts” would have been expected to come up with (but don’t) emerge from the discussions.
There are many facets of diversity (and there are many scholars far more educated on this who articulate its dimensions) that just can’t be foreseen by a single individual or even a group of individuals. Leveraging the crowd is certainly an approach that can help us be as thorough as possible.
3. Ongoing basis for capturing harms in society from AI systems
Finally, real-time monitoring of harms from AI systems is not an easy task. This is certainly the case when a multi-use technology like AI can be deployed in the field in many different ways, by both benign and malicious actors.
Empowering people to think more critically about common patterns of harm can help them become a crowdsourced watchdog: surfacing places where things might be going wrong, sharing them with journalists (as an example), and drawing research and development attention to them so that we can start to address the problems.
How can I help?
There are many ways that we can achieve this and the list below is just a starting point to think about some of these issues:
- supporting grassroots organizations
- becoming familiar with the nuances
- showing up every day even when you feel that the debate isn’t moving forward
1. Supporting grassroots organizations
There are many ways that you can play your part in helping to build civic competence in the space. An easy one is to support local organizations doing this sort of work. This includes both informal efforts, such as donations or ticket purchases, and more formal commitments, such as pledging sums to bolster their mission. I often get asked whether small amounts make a difference; for organizations that are run by volunteers or are self-funded, you can rest assured that every dollar does.
And you don’t always need to look at financial support as the only avenue; dedicating time and other in-kind support is also a great way to help your local efforts.
2. Becoming familiar with the nuances
A key thing to consider when building civic competence is maintaining the level of nuance required to meaningfully move the conversation forward. This means paying careful attention to the issues, not jumping to conclusions, and taking the time to gather the interdisciplinary perspectives that can help us make informed decisions and rally for the right changes in the ecosystem.
This comes from engaging with each other and learning from a wider spectrum of folks than one might typically be used to. This can be onerous, but the long-term payoff from this sort of investment will be well worth it.
3. Showing up every day even when you feel that the debate isn’t moving forward
And this might be the hardest ask, but ultimately also the most impactful: we need to keep pushing for civic competence even when short-term results look disappointing, progress is slow, and there aren’t many people supporting the mission.
I have personally faced this with the work we have done at the Montreal AI Ethics Institute. Pre-pandemic, our activities and workshops were largely limited to in-person events hosted with a range of community partners in Montreal. Since then, we have become a digital-first institute, bringing on board researchers from different disciplines and institutions, and welcoming participants and collaborators from around the world.
I encourage everyone to keep this idea front and centre in all the work that you do in AI ethics in 2021, setting us up for success both this year and in the years to come.
To learn more about my work, you can visit: https://atg-abhishek.github.io