Summary contributed by Alexandrine Royer, administrative coordinator at The Foundation for Genocide Education.
*Authors of full paper & link at the bottom
Mini-summary: Facilitating international cooperation among AI leaders is a necessary component of ensuring the success of AI ethics and good governance. The authors affirm that misunderstanding between cultures, alongside cultural mistrust, is often more of a barrier to achieving cross-cultural cooperation than fundamental differences between regions. Even in cases where fundamental differences are present, the authors argue that productive cross-cultural cooperation remains attainable by concentrating on practical issues over abstract values and by prioritizing areas of AI where agreement on principles and standards is both possible and necessary. They offer recommendations to facilitate cooperation, such as the translation and multilingual publication of key documents, researcher exchange programs, and the development of research agendas on cross-cultural topics.
As AI development continues to expand rapidly across the globe, reaping its full potential and benefits will require international cooperation in the areas of AI ethics and governance. Cross-cultural cooperation can help ensure that positive advances and expertise in one part of the globe are shared with the rest, and that no region is left disproportionately negatively impacted by the development of AI. At present, a series of barriers limits states' capacity for cross-cultural cooperation, ranging from the challenges of coordination to cultural mistrust. For the authors, misunderstandings and mistrust between cultures are often more of a barrier to cross-cultural cooperation than fundamental differences in ethical principles. Other barriers include language, a lack of physical proximity, and immigration restrictions that hamper possibilities for collaboration. The authors argue that despite these barriers, it is still possible for states to reach a consensus on principles and standards for certain areas of AI.
The researchers Seán S. ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu define cross-cultural cooperation as different populations and cultural groups working together to ensure that AI is developed, deployed and governed in societally beneficial ways. They make the distinction that cross-cultural cooperation on AI does not entail that all parts of the world must follow, or have imposed upon them, the same standards. Rather, it involves identifying areas where global agreement is needed and necessary, and others where cultural variation and a plurality of approaches are desirable.
The authors focus their study on the misunderstandings between the "West" (Europe and North America) and the "East" (East Asia). Both regions have been recognized for their fast-developing AI, as well as for their active steps in developing ethical principles and governance recommendations for AI. The authors argue that the competitive lens through which technological progress is framed, one common example being discourses of an AI race between the US and China, creates greater cross-cultural misunderstanding and mistrust between the two regions. They posit that such misunderstanding and mistrust are among the biggest barriers to international cooperation, more so than fundamental disagreements on ethical and governance issues. These misunderstandings must be corrected before they become entrenched in intellectual and public discussions.
The history of political tensions between the US and China and their different founding philosophical traditions have led to the perception that Western and Eastern ethical traditions are fundamentally in conflict. This idea of an East/West divide, despite being overly simplistic and ignoring the differences in values within each region, repeatedly manifests in discourses surrounding the development of ethical AI. Claims of differences between the two regions, according to ÓhÉigeartaigh et al., often rest on unexamined concepts and a lack of empirical evidence.
One example is data privacy, where it is supposed that China's laws are laxer than those of Europe and the US. The authors point out that this view may now be outdated, as Beijing recently banned 100 apps for data privacy infringements. They argue that those working in AI ethics must achieve a more nuanced understanding of how privacy may be prioritized differently when it conflicts with other key values, such as security. Another example is China's much-demonized social credit score system (SCS). The authors argue that Western publications fail to underscore how the measures in the SCS are largely aimed at tackling fraud and corruption in local governments, and how blacklisting and mass surveillance already exist in the US. Researchers in both regions will need to work on reversing such assumptions and building greater mutual understanding.
The misunderstandings between the two regions are also partly due to the language barrier, which limits the opportunities for shared knowledge. Researchers in China, Japan and Korea tend to have a greater knowledge of English, while only a small fraction of North American and European researchers know Mandarin, Japanese or Korean. Even in cases where key documents are translated, misunderstandings can arise due to subtleties in language. One case is the Beijing principles, in which China's goal of AI leadership was misinterpreted as a claim to AI dominance. Commentators concentrated on this miswording instead of focusing on the overlapping principles of human privacy, dignity, and AI for the good of humankind listed in the document.
For ÓhÉigeartaigh et al., constructive progress in AI ethics and governance can be achieved without finding consensus on philosophical issues or resolving decades of political tension between nations. The dialogue should shift toward areas where cooperation between states is crucial, such as military technology and arms development, over areas where it may be more appropriate to respect a plurality of approaches, such as healthcare. The delineation of where global standards for AI ethics and governance are needed should be informed by diverse cross-cultural perspectives that consider the needs and desires of different populations. As in the case of nuclear weapons treaties, overlapping consensus on norms and practical guidelines can be achieved even when countries have different political or ethical justifications for these principles.
For the authors, academia has a large role to play in facilitating cross-cultural cooperation on AI ethics and governance and identifying further areas where it is possible. Research initiatives that promote the free-flowing and intercultural exchange of ideas can help foster greater mutual understandings. Diverse academic expertise will also be needed to outline where fundamental differences do exist and whether value alignment is possible.
ÓhÉigeartaigh et al. are optimistic that academia and wider civil society can actively shape the principles behind binding international regulations. They refer to cases where both groups successfully intervened to shape issues of global importance, such as campaigns for the ban of lethal autonomous weapons and the abandonment of Google's Project Maven.
To achieve greater cross-cultural cooperation, the authors offer a series of further recommendations and calls to action, which include:
– Developing AI ethics and governance research agendas requiring cross-cultural cooperation. This is aimed at a global research community that can support international policy cooperation.
– Translating key papers and reports. This includes higher-quality and multiple translations that explore the nuances and context of language.
– Alternating the continents hosting major AI research conferences and ethics and governance conferences. This can allow for greater international and multilingual participation, as well as reduce the cost and time commitment for scholars and AI experts to take part.
– Establishing joint and/or exchange programs for PhD students and postdocs. International fellowships will expose researchers to different cultures early in their careers and build their capacity to reach mutual understanding.
Greater efforts aimed at achieving a more nuanced understanding between AI superpowers will help reduce mistrust and correct assumptions of fundamental differences. As the authors argue, cross-cultural cooperation is possible even among countries with divergent ethical principles, by delineating the areas that necessitate cooperation from those that can support a diversity of values, and by concentrating on practical issues. In the absence of cross-cultural cooperation, the competitive pressures and tensions between states will lead to underinvestment in safe, ethical and socially beneficial AI. It will also make it more difficult to ensure that applications of AI function well across national and regional boundaries. The authors conclude that as AI systems become more capable and ubiquitous, cultivating deep cooperative relationships on AI ethics and governance should be an immediate and pressing challenge for the global community.
Original paper by Seán S. ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu: https://link.springer.com/article/10.1007/s13347-020-00402-x