
Research summary: Overcoming Barriers to Cross-Cultural Cooperation in AI Ethics and Governance

July 18, 2020

Summary contributed by Alexandrine Royer, administrative coordinator at The Foundation for Genocide Education.

*Authors of full paper & link at the bottom


Mini-summary: Facilitating international cooperation among AI leaders is a necessary component of ensuring the success of AI ethics and good governance. The authors affirm that misunderstanding between cultures, alongside cultural mistrust, is often more of a barrier to achieving cross-cultural cooperation than fundamental differences between regions. Even where fundamental differences are present, the authors argue that productive cross-cultural cooperation remains attainable by concentrating on practical issues over abstract values and by prioritizing areas of AI where agreement on principles and standards is both possible and necessary. They offer recommendations to facilitate cooperation, such as translating and publishing key documents in multiple languages, establishing researcher exchange programs, and developing research agendas on cross-cultural topics.

Full summary:

As AI development continues to expand rapidly across the globe, reaping its full potential and benefits will require international cooperation in the areas of AI ethics and governance. Cross-cultural cooperation can help ensure that positive advances and expertise in one part of the globe are shared with the rest, and that no region is left disproportionately negatively impacted by the development of AI. At present, a series of barriers limits states' capacity for cross-cultural cooperation, ranging from the challenges of coordination to cultural mistrust. For the authors, misunderstandings and mistrust between cultures are often more of a barrier to cross-cultural cooperation than fundamental differences in ethical principles. Other barriers include language, a lack of physical proximity, and immigration restrictions that hamper possibilities for collaboration. The authors argue that despite these barriers, it is still possible for states to reach a consensus on principles and standards for key areas of AI.

The researchers Seán S. ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu define cross-cultural cooperation as different populations and cultural groups working together to ensure that AI is developed, deployed and governed in societally beneficial ways. They make the distinction that cross-cultural cooperation on AI does not mean that the same standards must be followed by, or imposed on, all parts of the world. Rather, it involves identifying areas where global agreement is needed and necessary, and others where cultural variation and a plurality of approaches are required and desirable.

The authors focus their study on the misunderstandings between the “West” (Europe and North America) and the “East” (East Asia). Both areas of the globe have been recognized for their fast-developing AI, but also for their active steps in developing ethical principles and governance recommendations for AI. The authors argue that the competitive lens through which technological progress is framed, one common example being discourses of an AI race between the US and China, creates greater cross-cultural misunderstanding and mistrust between the two regions. They posit that these misunderstandings and mistrust, rather than fundamental disagreements on ethical and governance issues, are some of the biggest barriers to international cooperation. These misunderstandings must be corrected before they become entrenched in intellectual and public discussions.

The history of political tensions between the US and China and their different founding philosophical traditions have led to the perception that Western and Eastern ethical traditions are fundamentally in conflict. This idea of an East/West divide, despite being overly simplistic and ignoring the differences in values within each region, repeatedly manifests in discourses surrounding the development of ethical AI. Claims of differences between the two regions, according to ÓhÉigeartaigh et al., often rest on unexamined concepts and a lack of empirical evidence.

One example is data privacy, where it is supposed that China’s laws are laxer than those of Europe and the US. The authors point out that this view may now be outdated, as Beijing recently banned 100 apps for data privacy infringements. They argue that those working in AI ethics must achieve a more nuanced understanding of how privacy may be prioritized differently when it comes into conflict with other key values, such as security. Another example is China’s much-demonized social credit system (SCS). The authors argue that Western publications fail to underscore how the measures in the SCS are largely aimed at tackling fraud and corruption in local governments, and how blacklisting and mass surveillance already exist in the US. Researchers in both regions will need to work on reversing assumptions and building greater mutual understanding.

Part of the misunderstanding between the two regions is also due to the language barrier, which limits the opportunities for shared knowledge. Researchers in China, Japan and Korea tend to have a greater knowledge of English, while only a small fraction of North American and European researchers know Mandarin, Japanese or Korean. Even when key documents are translated, misunderstandings can arise from subtleties in language. One case is the Beijing AI Principles, where China’s stated goal of AI leadership was misinterpreted as a claim to AI dominance. Commentators concentrated on this miswording instead of focusing on the overlapping principles of human privacy, dignity, and AI for the good of humankind listed in the document.

For ÓhÉigeartaigh et al., constructive progress in AI ethics and governance can be achieved without finding consensus on philosophical issues or resolving decades of political tension between nations. The dialogue should shift toward areas where cooperation between states is crucial, such as military technology and arms development, over areas where it may be more appropriate to respect a plurality of approaches, such as healthcare. The delineation of where global standards for AI ethics and governance are needed should be informed by diverse cross-cultural perspectives that consider the needs and desires of different populations. As in the case of the nuclear weapons ban treaty, overlapping consensus on norms and practical guidelines can be achieved even when countries have different political or ethical justifications for these principles.

For the authors, academia has a large role to play in facilitating cross-cultural cooperation on AI ethics and governance and in identifying further areas where such cooperation is possible. Research initiatives that promote the free-flowing, intercultural exchange of ideas can help foster greater mutual understanding. Diverse academic expertise will also be needed to outline where fundamental differences do exist and whether value alignment is possible.

ÓhÉigeartaigh et al. are optimistic that academia and wider civil society can actively shape the principles behind binding international regulations. They refer to cases where both groups successfully intervened to shape issues of global importance, such as campaigns for the ban of lethal autonomous weapons and the abandonment of Google’s Project Maven.

To achieve greater cross-cultural cooperation, the authors offer a series of further recommendations and calls to action, which include:

– Developing AI ethics and governance research agendas that require cross-cultural cooperation. This is aimed at building a global research community that can support international policy cooperation.

– Translating key papers and reports. This includes producing higher-quality and multiple translations that explore the nuances and context of the language.

– Alternating the continents where major AI research conferences and ethics and governance conferences are held. This can allow for greater international and multilingual participation, as well as reduce the cost and time commitment for scholars and AI experts to take part.

– Establishing joint and/or exchange programs for PhD students and postdocs. International fellowships will expose researchers to different cultures early in their careers and build their capacity to reach mutual understanding.

Greater efforts aimed at achieving a more nuanced understanding between AI superpowers will help reduce mistrust and correct assumptions of fundamental differences. As the authors stress, cross-cultural cooperation is possible even among countries with divergent ethical principles, by delineating the areas that necessitate cooperation from those that can support a diversity of values, and by concentrating on practical issues. In the absence of cross-cultural cooperation, the competitive pressures and tensions between states will lead to underinvestment in safe, ethical and socially beneficial AI. It will also raise concerns for applications of AI that cross national and regional boundaries. The authors conclude that as AI systems become more capable and ubiquitous, cultivating deep cooperative relationships on AI ethics and governance is an immediate and pressing challenge for the global community.


Original paper by Seán S. ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu: https://link.springer.com/article/10.1007/s13347-020-00402-x

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
