✍️ Column by Muriam Fancy (@muriamfancy), our Network Engagement Manager.
Technology is evolving at an exciting but also alarming pace. We have seen numerous cases illustrating the consequences of deploying technology without foundational governance mechanisms to mould its behaviour and its effects. Just as importantly, we lack adequate mechanisms to protect communities, especially marginalized communities, from the harms that follow when AI is left ungoverned and unregulated.
The question of who is governing AI is significant. One of the primary reasons is that, depending on which stakeholder sets the governance framework, that framework may sit outside state legislation and policy altogether. The distinction between private and public stakeholder governance is essential because the normative responsibilities and desired outcomes of deploying the technology vary greatly between the two. Baseline design variables such as accuracy, fairness, and explainability continue to evolve, change, and vary depending on whom the AI application is serving. Creating pan-sector definitions that do not work jointly together will inevitably cause harm at scale.
As Mittelstadt notes, these stakeholders’ fiduciary responsibilities vary greatly, creating a divide and ultimately a gap in regulation through which communities fall and face these technologies’ harms. Most often, AI is developed by the private sector and deployed by the public sector, yet their understandings of user groups, their definitions of AI, and their data-collection practices do not align.
The motivations of the two sectors also vary greatly. With private companies looking to increase profit to meet their bottom line, and with governments lacking AI literacy, public bodies often deploy technologies without conducting adequate risk assessments. Schiff et al. note that the ethical frameworks of these stakeholders also differ, as do their processes for reviewing these technologies. The variables each side cares about are therefore unlikely to match, which makes it difficult to deploy AI responsibly.
In what way does the question of who governs AI have real-world implications? The case brought forward by Dr. Petra Molnar in her study of AI deployed at the border demonstrates the diverging motivations of the private and public sectors, both steeped in racist discourse against migrant communities. She cites examples of private companies using AI to track the migration and separation of families in order to enforce deportation and detention practices in the US. The state’s politicization of migration made it possible for AI to be deployed without any governance or regulation to protect migrant communities.
Dr. Molnar found a similar issue in her study in Greece, where automated technology was deployed without consent, including on minors, and operated without governance from either private or public stakeholders. In Canada, Clearview AI scraped the social media data of millions of Canadians and fed it into a machine learning algorithm that labelled people by gender and age, a product that police forces then used. The gross human rights violations that came from deploying this product exposed a significant gap in Canadian law, much of which does not apply to Clearview’s actions; the Privacy Commissioner of Canada did, however, find several of the company’s activities in violation of PIPEDA.
What is the way forward for effective governance of these technologies? The first and foremost mechanism should be inclusive participation by public citizens and by those with varying statuses (i.e., refugee, immigrant, stateless). I propose this step because, ultimately, it is citizens and those residing in the state who will face the consequences of the technology.
As Dr. Molnar notes in her paper, governance models set by the private and public sectors reflect the power structures within our society. When these stakeholders exclude the voices and concerns of affected communities, they reproduce those bodies’ systemically racist and discriminatory norms and widen the divide between public and private stakeholders on one side and marginalized and affected communities on the other. Like Dr. Molnar, this column calls for an inclusive, participatory framework that ensures public citizens and affected communities are equitably included in the conversations that shape how these technologies are designed and deployed.
The second mechanism is better dialogue between public and private sector actors. The foundational principles that dictate how these technologies can exist within a state or internationally are understood and implemented differently by each. Joint conversations between the two sectors, together with inclusive participation from public citizens and affected communities, will help change the discourse and build holistic governance frameworks.
As Medeiros notes in her paper, jointly developed mechanisms and discussion are needed because the two sectors understand regulation, and how it relates to policy and law, differently. Private stakeholders need to adopt state policies, which should in turn aim to protect public citizens and be thought through with private companies. If such a joint effort occurs, we can establish a baseline framework that continues to evolve alongside the technology while remaining inclusive, human-centric, and grounded in human rights at its core.