This event recap was co-written by Muriam Fancy (our Network Engagement Manager) and Connor Wright (our Partnerships Manager), who co-hosted our “The State of AI Ethics in Spain and Canada” virtual meetup in partnership with OdiseIA earlier in February.
Room 1
Having been surprised by the amount of synergy between the Spanish and Canadian approaches to AI ethics, I have listed below what I believe to be the 5 main takeaways from the Room 1 discussion. Short and snappy for your viewing, and incisive for your learning, these takeaways really show the value of community engagement (just like we have done with our relaunch of the learning communities here). This is what I selected:
- The need for the right AI environment
Room 1 rightfully pointed out the need to create an adequate environment where people can capitalise on their base level of AI knowledge to move forward in discussions on the subject. Such conversations, in order to fully benefit those participating, need to involve an interdisciplinary element, owing to AI's vast outreach into so many different aspects of society. While self-study can lead you to many interesting and fruitful sources of knowledge, AI covers so many diverse topics that the amount of information out there becomes overwhelmingly difficult to engage with. Engaging in a multi-disciplinary conversation can provide much-needed insights into these different areas AI touches upon, while also enlightening your own approach. In this way, establishing the right environment to allow this to take place is key.
Establishing this environment also helps to prioritise the next takeaway I have drawn out; namely further establishing AI ethics before fully relying on AI itself.
- Ethics before AI
As has already been demonstrated in areas ranging from IBM's attempts at facial recognition technology to potential future hot topics such as AI in warfare (offered by our founder Abhishek Gupta), the need to further push for the adoption of different AI ethical systems is more than apparent. One of Room 1's suggestions was that a system based on widely accepted human rights would help create the necessary consensus. Trying to establish an ethical system from scratch, which philosophers have been attempting for millennia, may not be the optimal approach towards an AI ethics system, especially given the diversity of ethical values throughout different societies. As a result, basing AI ethics on human rights could yield a better uptake across the globe.
What would then be required to pursue this uptake is international cooperation on these issues. So, what would this look like?
- What international cooperation would look like
AI is a global problem, meaning that physical boundaries matter less, just as they did with the eruption of the internet. In the same sense, the boundaries between different organizational entities should matter less too. Room 1 advocated for a multi-levelled international approach in pursuit of the end goal of an AI ethics system, whereby universities, observatories (like OdiseIA) and governments will be crucial in facilitating the discussion throughout all levels of the country in question, allowing the government at hand to best communicate its AI needs.
In this sense, it seems there's a lot of collaboration that needs to take place. So, what is international cooperation actually useful for?
- What international cooperation can help us do
What international coordination brings in buckets are opportunities to generate an outward-looking perspective on AI issues and to acknowledge their global nature, rather than having countries focus solely on their immediate surroundings. While there will be language and cultural barriers, recognising AI's full scale will allow for the entrance of international perspectives that previously may not have been accounted for. It is this outward perspective generated by international cooperation that Room 1 deemed essential in generating anything like a consensus surrounding the topic of AI ethics.
As mentioned above, the right environment needs to be in place before anything like international cooperation can take place. This environment also includes participants acquiring base-level knowledge, something Room 1 didn't think was being pursued enough.
- Generating the understanding required to prepare future AI professionals
A consensus was reached on the need for more adequate preparation of upcoming AI professionals, centred around raising awareness of how interconnected AI is with vast areas of human life. Here, technologists need to be made aware of the consequences of their decisions, but an effort also needs to be made from the social sciences, law and other fields to understand the technical aspects of the topic. In this way, lawyers, economists, historians, environmentalists and more are to be made aware of the part they have to play in the AI discussion, just as technologists, coders and software developers have a place in helping to shape the resultant laws. For this to be done appropriately, the preparation cannot be based solely on a single ethics course in an engineering programme and a single engineering course in a social sciences degree. Doing so would be an injustice to both the disciplines involved and the mammoth task at hand.
Once the multi-layered and interdisciplinary nature of AI is fully acknowledged, alongside the right environment, the important discussions surrounding AI ethics can really help to propel international cooperation forward. This will be of benefit to us all.
Room 2
It was clear from the outset, with the presentation by OdiseIA, that there are similar overlapping strategies and future actions between the two countries. Both countries have a similar aim of creating an environment that is conducive to designing, developing and deploying ethical AI. Despite the similarities, the policy and legal landscapes these countries exist in, and therefore the strategies and frameworks they are developing to engage in this conversation, are quite different. The differences in culture, language, and national/international ties for Canada and Spain made for an interesting conversation!
- Strategies to balance innovation and regulation
The attendees who brought forward the Spanish perspective noted the importance of finding a balance between creating policies and regulations to promote responsible AI and creating strategies to promote the innovation and development of AI. Drawing the distinction between “high impact” and “low impact” algorithms is important for noting how certain technologies, like facial recognition technology, can be further regulated, which in turn can indicate future strategies for innovation, with specific emphasis on regulation in certain sectors such as healthcare and technology.
In comparison, the Canadian discourse in AI ethics is to promote and implement legal and policy frameworks of regulation across as many sectors as possible in order to ensure the safety of public citizens. The cross-sectoral impacts of these technologies need to be addressed, and the discourse and actions brought forward by Canadian leaders in the space push towards greater regulation within sectors and on particular technologies.
Both countries require national legal and policy frameworks that focus on regulating AI in order to innovate and deploy more ethical technology. However, conversations between Canada and Spain can and should continue at forums and bodies such as the UN.
- AI fiction – what does this mean for the future?
In order to implement the conversations and ideas on designing, developing, and deploying ethical AI, we need professionals and public citizens to be prepared to be a part of this movement. Our friends in Spain noted that AI practitioners there are not well prepared to move forward and promote a future with ethical AI. Members noted that students and practitioners are unaware of some larger concepts in AI, making it difficult to teach and access education on what constitutes ethical AI. This is largely due to the fictionalization of these technologies by popular media. The gap in knowledge of these technologies is another barrier to engaging effectively with students and practitioners to prepare for a better future. The greatest fear with these technologies is privacy, specifically the issue of collecting personal information.
- Empowering the public through knowledge
The gap highlighted above needs to be addressed; a recommendation that was put forward was placing investment and emphasis on public engagement initiatives led by public interest community organizations. Members in the room noted that the lack of knowledge about AI and its impacts either promotes the fictional discourse around the technology or means public citizens do not question the impact of these technologies (such as the gathering of personal data). The source of both consequences is the low level of digital literacy around AI and other technologies. Empowering the public through accessible education and forums to learn about AI is necessary to enable cross-sectoral professionals and citizens to participate in the discussion on the development and impact of AI.
- How is your personal data being used?
A significant concern brought forward by our members in Spain is the issue of privacy and personal data. Frameworks such as the GDPR provide a foundation for countries to develop greater national-level policies on data protection (an issue that is currently being faced in Canada). Members in the group suggested that, in addition to the recommendation on providing more accessible education, there should be greater transparency on how technology is utilizing AI and personal data. As members in the room discussed, this would help close the divide in understanding on such issues, making it easier for public citizens to point to cases of when AI was used and the type of data involved.
- Practical AI Ethics
It's clear that there are multiple stakeholders involved in both developing and deploying AI. The discourse in both countries is similar on this issue: there need to be more conversations between the private and public sectors, but also with public citizens. Excluding public citizens from these conversations on developing frameworks to implement AI ethics practices, and to regulate or measure the risk of AI technology, would be harmful to public safety and inclusion.
This concern is not singular to one state; both Canada and Spain require more transparent and cross-sectoral discourse to effectively implement AI ethics frameworks before these technologies are deployed.