
The Chinese Approach to AI: An Analysis of Policy, Ethics, and Regulation

March 29, 2021

🔬 Research summary by Dr. Marianna Ganapini (@MariannaBergama), our Faculty Director.

[Original paper by Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang & Luciano Floridi]


Overview: This paper explores China’s current AI policies, its future plans, and the ethical standards it is developing. The authors zoom in on China’s country-wide strategic effort, the ‘New Generation Artificial Intelligence Development Plan’ (AIDP). The strategic aims of the plan can be divided into three main goals: international competition, economic development, and social governance.


Introduction

It is no news to anyone that China is a leading force in AI, but what is its strategy for becoming an AI superpower, and what exactly is it trying to achieve? In this paper, the authors shed some light on China’s current policies and future plans for AI, while also looking at the AI ethics standards China is trying to develop. The bottom line is that China aims to become the world leader in AI by rapidly developing breakthrough technologies that will revolutionize the field. However, China is poised to face several ethical challenges given its authoritarian political system, which also sees technology as a means of maintaining control over its own citizens.

China and AI

The authors zoom in on China’s country-wide strategic effort, the ‘New Generation Artificial Intelligence Development Plan’ (AIDP), focusing in particular on three “strategic areas”: international competition, economic growth, and social governance.

The AIDP was released by China’s State Council (the country’s top administrative body) in 2017 and was the first “unified document that outlines China’s AI policy objectives”, with the goal of making China “the world centre of AI innovation by 2030” and making AI the driving force behind China’s future economic and industrial development.

The plan is meant to function as a catalyst for boosting the development of AI technology by private companies and local governments. It designates internationally established private companies as ‘AI National Champions’ (e.g. Alibaba, Baidu). “Being endorsed as a national champion involves a deal whereby private companies agree to focus on the government’s strategic aims. In return, these companies receive preferential contract bidding, easier access to finance, and sometimes market share protection”.

Similarly, local governments are empowered to incentivize the development of new technologies while also working to fulfill “national government policy aims”. It is important to stress that private companies and local governments are given a lot of leeway in how to proceed: they are provided only “few specific guidelines”, and “[t]his enables companies to cherry-pick the technologies they want to develop and provides local governments with a choice of private-sector partners”.

The strategic aims of the plan can be divided into three main goals: international competition, economic development, and social governance.

International competition

China’s main focus in international competition is to develop breakthrough AI military technologies and overtake the US. For instance, China has been developing cyber warfare and cyberattack strategies to gain valuable knowledge and intelligence, and it clearly sees technology, and AI in particular, as a way to gain strategic military advantages over the US. At the same time, however, top officials in China seem aware of the danger of AI fostering a “potential military escalation” and see the need for cooperation in mitigating potential risks (especially those posed by lethal autonomous weapons).

Economic development

AI is considered the future driving force behind China’s economic growth, which is key to sustaining the country’s economic and industrial expansion of the last few decades. The potential of AI as an economic force also comes with added risks, as it can disrupt the labour market and negatively affect low- and medium-skilled jobs. Though China is preparing for these structural changes, “[e]stimates show that, by 2030, automation in manufacturing might have displaced a fifth of all jobs in the sector […]. These changes are already underway, with robots having replaced up to 40% of workers in several companies.” This process may actually worsen China’s domestic economic inequalities.

Social governance

The AIDP explicitly tackles the social challenges China faces, from economic inequality to the lack of a “well-established welfare system” and rapidly worsening “environmental degradation”. Technology is seen as key to producing meaningful changes in the healthcare sector, and AI is perceived as a tool to address some of China’s environmental problems, especially air pollution.

Similarly, AI will be used to administer justice in a potentially more efficient and transparent way, with the stated goal of fixing some long-standing problems of China’s judicial system. This attempt has already raised some eyebrows: using AI to administer justice has often led to even more injustice and unfairness. China is also using technology to run its Social Credit System more efficiently, a system that relies on an extensive amount of personal data.

Social governance in China also means smart cities and surveillance technologies. Possibly the most egregious example of this is the surveillance program adopted in the autonomous region of Xinjiang, where so-called “potential terrorists” were tracked through facial recognition and other invasive surveillance technologies.

Ethics of AI in China 

Given the risks and opportunities of the massive development of AI technologies, “the AIDP outlines a specific desire for China to become a world leader in defining ethical norms and standards for AI”. On this basis, ethical principles and guidelines have been put forward which “bear some similarity to those supported in the Global North” in their emphasis on transparency, privacy, accountability, and respect for human welfare. Yet there will be important differences in how these principles are understood and applied because, as the authors point out, “China’s AI ethics needs to be understood in terms of the country’s culture, ideology, and public opinion”. For instance, after its timid past efforts to protect data and personal privacy, China is now trying to enforce some privacy regulations.

However, China is also struggling to define exactly what type of data needs to be protected and how. The main ethical challenge, in this case, is how to square the idea of personal privacy with an authoritarian political system in which the government is de facto not constrained by the regulations meant to protect its citizens (as the mass surveillance programs in use make clear).

Similarly, in medical ethics China’s main goal is “societal welfare” rather than individual wellbeing. According to the authors, this boils down to the idea that personal medical data will be shared widely and used by the medical community to find cures that benefit society at large, with little concern for the privacy and rights of individuals.

In conclusion, though there seems to be growing concern about the ethical challenges posed by AI, China is still very much struggling to tackle these problems, given a political and social system that often uses morally dubious strategies and tools to maintain control over its own citizens.

Between the lines

We believe this paper represents an important step toward developing a well-informed analysis of the risks and opportunities of AI in non-Western countries. It is important to acknowledge the ethical limitations of the development of AI in China while also objectively reporting the current efforts to overcome them. What is now needed is a careful comparative analysis of the shortcomings of the Global North relative to China, because it is not clear that the problems we see in China are absent, in some form, from Western countries. For instance, it would be important to understand how Europe and the US fare compared to China on issues such as privacy and transparency in AI.

