Contribution by Monika Viktorova (@mviktoro), a tech strategist who grew up reading too much sci-fi and stays up thinking about ethics and AI way past her bedtime.
The Montreal AI Ethics Institute’s State of AI Ethics is a quarterly report that covers the most pressing and emerging issues in AI ethics and draws out deep insights to help readers plan for the future of ethical tech. You can view all the State of AI Ethics Reports and Panels on this page, including resources mentioned in the live chat.
If there was a theme to close out a fraught year, the MAIEI State of AI Ethics panel captured it: worry. 2020 saw us grappling with the growing challenges of un- and under-regulated tech. The unabated spread of algorithmically disseminated misinformation hobbled effective public health messaging in response to the global COVID-19 pandemic, likely increasing transmission of and the death toll from the virus. The continued collapse of the media ecosystem, driven by the consolidation of ad revenue by tech giants like Facebook, Google and Apple, left 52% of Trump voters erroneously believing that President Trump won the election. The proliferation of facial recognition technology in policing, municipal surveillance, and travel prompted backlash worldwide.
Against this backdrop, panelists Abhishek Gupta, Amba Kak, Katya Klinova, Danit Gal, and Rumman Chowdhury, along with moderator Victoria Heath, discussed where the field of AI (and tech) ethics is today, and where it needs to go. Chowdhury, a pioneer in the AI ethics field, pointed to the growing public awareness of the very real challenges and dangers AI presents. She outlined how this change propelled a mindset shift away from conceptualizing AI as science fiction. As more people understand how ubiquitous AI is in their daily lives, and that their relationship to the technology isn’t always benign, public pressure on legislators to regulate AI grows.
While this is ostensibly a win for the academics, researchers, and civil society organizations at the forefront of activism to regulate AI, there is still a lot of work to do. Harnessing public pressure has proven successful in pushing back on some forms of AI-enabled surveillance and policing: think of the bans on facial recognition technology in cities like Portland, San Francisco, and Boston. Where public pressure, and sometimes even internal pressure, has been less successful is with the purveyors of AI themselves, tech giants like Google, Facebook, and Amazon. Most recently, Google’s firing of esteemed researcher and AI ethics pioneer Dr. Timnit Gebru has had a chilling effect on hopes for cross-collaboration between academia and industry.
Dr. Gebru’s dismissal stems from a dispute over her ability to publish novel research on potential bias in large language models, a new type of language technology that also underpins Google’s search engine. Google’s firing of a prominent Black woman researcher over research that is potentially unflattering for the company raises serious questions for the AI ethics field. How effective is the corporate adoption of ethical principles and diversity & inclusion mandates when confronted with competing financial and business pressures? How does a field that relies on self-policing and voluntary adoption of ethical practices make change? And crucially, as Dr. Gebru herself asked, how are the researchers and whistleblowers who raise the alarm about AI ethics violations protected from retaliation?
Although the panel took place a day before news of Dr. Gebru’s firing broke, the panelists touched on some of the underlying tensions that #Gebrugate exposed. Pushes for meaningful accountability, especially when they come from outsiders like activists or members of the public, are often dismissed by those with financial interests in the technology. Larger organizations that focus on AI ethics but partner closely with tech companies have been resistant to certain forms of criticism or accountability. This has also contributed to a siloing of the discipline, with the same 30-40 global experts rehashing the same discussions over and over again. Meaningful engagement with the communities suffering the worst impacts of AI is discussed but rarely achieved.
Some of the field’s isolation from the people it is ostensibly trying to help is also a result of broader dynamics in tech. The labour from the Global South that powers AI, repetitive and low-paid work like entering and cleaning up data or creating datasets for model training, is made largely invisible. The sleek branding of the tech companies that rely on this cheap labour erases the fact that humans are, as Katya Klinova, Program Lead at Partnership on AI, pointed out, the ones making the AI “intelligent”. These dynamics of erasing labour, especially racialized labour, from a product trickle into assumptions about who should be making the guiding decisions about mitigating that product’s potential downstream impacts. The rooms where those decisions are made remain closed off to the people whose labour supports the product and who are often most impacted by it.
As Amba Kak, Director of Global Policy and Programs at AI Now, and Danit Gal, Technology Advisor and AI Lead at the UN Secretary-General’s Office, pointed out, while the AI ethics field has been saturated with principles, guidelines, and “best practices”, many of these efforts either reinvent the wheel or leverage each other to the exclusion of diverse voices. AI ethics work is overwhelmingly Eurocentric, disseminated globally and prescriptively without being adapted to the context of different jurisdictions. Conversations around AI ethics are also sometimes missed by Western researchers, who tend to focus on official documents and can overlook localized progress happening outside of government. Broadening the field of AI ethics, opening its doors to those who are marginalized or overlooked, and pushing for racial justice in both the academic and the practical work being done is a critical need for the discipline. If AI ethics does not make strides to become more global, diverse, and inclusive, it risks becoming irrelevant or even harmful.
This doesn’t mean we should lose hope. But it does mean that we, the collective we of researchers, legislators, tech workers, business leaders, consumers, and members of the public, need to acknowledge the work that has to be done. We have to decide what kind of world we want to build: one that is increasingly alienating, discriminatory, fraught with conflict, and unsustainable? Or one that benefits the many instead of the few, develops sustainably, equitably distributes the gains from technology, and finally lives up to the lofty promise of AI as a force for good?
How to Get Involved
If you are looking to get involved with some of the groundbreaking work from the State of AI Ethics panelists, check out the callouts below or reach them on Twitter.
- Rumman Chowdhury is building Parity AI, a scientifically rigorous AI/ML model assessment platform that evaluates algorithmic and process fairness, transparency, and explainability. Keep up on the Parity website or on Twitter at @ruchowdh
- Katya Klinova is thinking about the future of work at Partnership on AI, and about practical steps for protecting workers through the changes automation will bring. Reach her on Twitter at @klinovakatya
- Amba Kak is building a community blog to create counter-narratives to the Western-centric #aiethics dialogue. A call for scholars and practitioners will go out at the end of 2020. For more news, follow her on Twitter at @ambaonadventure
- Danit Gal is looking to build on the expertise of the networks she has built in the Global South to understand the minimum set of tangible tools needed to support their work. More news to come on Twitter at @DanitGal