Montreal AI Ethics Institute

Democratizing AI ethics literacy

Confucius, cyberpunk and Mr. Science: comparing AI ethics principles between China and the EU

October 4, 2022

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Pascale Fung and Hubert Etienne]


Overview: The ethical approaches to AI adopted by China and Europe initially seem similar. However, this comparative analysis showcases how their inspirations and attitudes differ significantly.


Introduction

The ethical approaches of the Chinese National New Generation Artificial Intelligence Governance Professional Committee and the European High-level Expert Group on AI seem similar at first, but reflect contrasting aims. After exploring the collectivist attitude championed by China and the individualist outlook adopted by Europe, I note how these approaches differ on privacy. I then consider how China’s insulation from dystopian pop-culture portrayals of technology has influenced its attitude towards AI, before exploring some similarities between the two mindsets. I conclude that, despite the differences, understanding alternative approaches leads to more fruitful discussions.

Key Insights

Collectivism vs individualism

Confucian principles have shaped East Asian culture for centuries into a collectivist mould. A system of governance for the people, rather than by the people, has ensued, with a ruling elite put in charge to do the right thing. In this view, harmony is achieved by tempering the extreme passions of the citizenry so that the ruling elite can carry out its objectives. This emphasis on the community gives rise to principles such as ā€œharmony and friendshipā€ (p. 3) within the Chinese approach to AI. These principles are not strict rules but guidelines for how AI developers should design their products for the good of Chinese society.

On the other hand, where Chinese regulation promotes, EU regulation prevents. Within Europe, AI regulation is designed to prevent harm and potential abuses of power by the political elite. Instead of inclusion, EU law advocates for protection, a stance inspired by the 18th-century Enlightenment. Hence, instead of welcoming technology as in China, European law is used to guard against the possibility of government harm. The fear of AI being used to surveil the population is consequently far more prominent in Europe than in China, and this is reflected in the types of laws being passed. As a result, Chinese law imposes relatively softer restrictions, calling for principles such as transparency to be ā€˜improved’, as opposed to the European diktat that ā€˜AI must be transparent’.

Privacy

A clear difference between the two can be seen concerning privacy. Within the EU, the GDPR is designed to protect the individual. In China, data privacy law instead targets private agents and malicious actors. Here, parents and the State regularly have access to children’s data in order to fulfil their role of guiding and protecting, which provides a stark contrast to the sentiment of individual privacy held in Europe. China’s promising economic growth has further solidified the immense amount of trust placed in the State.

Pop culture

What China did not experience for a long time was exposure to Western pop culture. Being closed to the world before the 1980s, it missed the dystopia and cyberpunk hype. Hence, while a common theme in Western sci-fi is that AI will take over and enslave humans, within the Chinese context AI is portrayed as a companion. These attitudes towards technology are signalled in the protectionist laws of Europe and the attitude of encouragement within China.

Similarities

Nevertheless, there are some similarities between the two sets of laws. Both sides adopt an organisational approach to AI principles, utilising governance bodies to develop and disseminate AI laws. This commonality is reinforced by both sides opting for a scientific method: drawing on mathematicians, technologists and data scientists as their references for expertise on AI, both sides want their approaches backed by empirical evidence.

Between the lines

A common theme throughout this comparison is that context and history are king. The regulatory and technological mindset has not been crafted in the last few years, but sown, nurtured and grown over centuries. Hence, when analysing our own approach to technology, we can find ourselves questioning the very starting point of it all. By understanding the origins of other contexts, we can better comprehend why their approaches differ from ours. This may not inspire tolerance, but it can lead to more comprehensive discussion.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
