
Online public discourse on artificial intelligence and ethics in China: context, content, and implications

January 12, 2022

🔬 Research summary by Yishu Mao, Researcher in global governance of AI.

[Original paper by Yishu Mao & Kristin Shi-Kupfer]


Overview: The societal and ethical implications of artificial intelligence (AI) have sparked vibrant online discussions in China. This paper analyzed a large sample of these discussions, which offer a valuable source for understanding the future trajectory of AI development in China as well as implications for the global dialogue on AI governance.


Introduction

China’s emergence as a global leader in AI makes it all the more important to understand the technology’s development trajectory in this specific cultural context. Despite the party-state’s control over the public sphere, societal discourse in China can shed light on what sociotechnical future a population of over one billion can imagine and is currently negotiating. With the aim of examining online public discourse on the ethical issues around AI in China, this paper asked the following questions:

1. How are the ethical and societal implications of AI being discussed?

2. Who is shaping the discussions?

3. What are the similarities and differences between the opinions of different stakeholders?

4. What are the implications of Chinese public discourse for global dialogue in AI governance?

A content analysis of a large sample of posts on two Chinese social media platforms, WeChat public accounts and Zhihu, found that the participants in these discussions were diverse and that they addressed a broad range of concerns associated with the application of AI in various fields. Some even offered recommendations on how to tackle these issues.
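At its core, a content analysis of this kind amounts to assigning categories to each post (author type, sector, concern) and then tallying those categories per platform. The sketch below is purely illustrative and is not the authors' actual pipeline: the `CodedPost` record, the `tally` helper, and the sample labels are hypothetical, loosely echoing the author and concern categories mentioned in this summary.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical coded post. In the study itself, posts were collected from
# WeChat public accounts and Zhihu and assigned categories by the researchers;
# the fields and labels here are illustrative assumptions.
@dataclass
class CodedPost:
    platform: str      # "wechat" or "zhihu"
    author_type: str   # e.g. "academia", "media", "industry", "general public"
    concern: str       # e.g. "privacy", "responsibility", "employment", "humanity"

def tally(posts, attribute):
    """Count how often each value of `attribute` appears, broken down by platform."""
    counts = {}
    for post in posts:
        counts.setdefault(post.platform, Counter())[getattr(post, attribute)] += 1
    return counts

# Toy records for illustration only; these are not data from the paper.
sample = [
    CodedPost("wechat", "academia", "privacy"),
    CodedPost("wechat", "media", "responsibility"),
    CodedPost("zhihu", "general public", "employment"),
    CodedPost("zhihu", "general public", "humanity"),
]

print(tally(sample, "author_type"))
print(tally(sample, "concern"))
```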

Who is shaping the discussions?

The two social media platforms were chosen for this study because of the different characteristics of their user bases, and the analysis of author types confirmed that the prominent voices on each platform were indeed quite different. The majority of authors on WeChat came from academia, the media, or industry, representing cultural elites, while on Zhihu most were members of the general public and individual IT professionals. This difference provides the background for analyzing how and why the opinions expressed on the two platforms differ.


The discussions on AI ethics in China have been shaped to some extent by international deliberations. Analysis of foreign references demonstrated that deliberations in the USA and Europe have exerted great influence on the research of Chinese scholars and even tech companies, at least at the discursive level. Chinese researchers have extensively explored Western philosophies concerning science, technology, and their social implications, and they have, in turn, informed the Chinese public about global initiatives concerning the governance of AI. Although Zhihu users appeared less receptive to high-level international deliberations on AI ethics and governance, they were tuned in to the imaginaries of AI created in science fiction produced in the US. These parallel engagements with other parts of the world (predominantly the Western world) at both levels, cultural elites and the general public, provide a positive outlook for future societal dialogues on this topic at the international level.

How are AI ethics discussed?

Context of discussions

Despite a shared focus on general philosophical questions about human–machine relations, the discussions on WeChat and Zhihu emphasized different AI applications across specific sectors. On WeChat, a recent surge of academic papers about AI in healthcare made this the most discussed field, while the impact of AI on labor received the least attention. Conversely, labor issues around AI attracted the most attention on Zhihu, and AI in healthcare the least. Zhihu users' lack of interest in fields seemingly less relevant to them was further demonstrated by the near absence of discussion of the use of AI in military scenarios.


Variety of concerns

A variety of concerns were raised on both platforms, but the different emphasis given to different issues is noteworthy. On WeChat, where most authors came from academia, the media, and industry, the majority of concerns related to individuals, such as responsibility, privacy, and bias. On Zhihu, where most authors were members of the general public, more emphasis was given to concerns at the societal level, especially concerns for the future of humanity. This can be partly attributed to the general public's fascination with science fiction; however, their concerns over employment, inequality, and autonomy, which featured far less prominently in the discussions on WeChat, show how different social groups have different priorities when considering the ethical issues around AI.


Recommendations for AI governance

Among those who gave concrete recommendations for AI governance, there was again a clear difference in emphasis between the cultural elites on WeChat and the members of the general public on Zhihu: the former stressed the role of governments, while the latter stressed people's own responsibility. Although authors on both platforms recommended multi-stakeholder and technical approaches, Zhihu authors showed less trust in letting companies regulate themselves and less interest in international collaboration.


Between the lines

These findings offer valuable ground for understanding the future trajectory of AI development in China. The diverse perceptions of AI and the wide range of concerns identified in the online discourse suggest that the Chinese state and Chinese companies may well continue to develop AI, but not without addressing the concerns raised by Chinese society.

Several implications for the global dialogue on AI governance, and directions for further research, can also be drawn from the paper's findings, which demonstrate both the influence of international developments in AI governance on the Chinese domestic discourse and the cultural specificities within it. First, although actors in China appeared familiar with the global academic literature as well as policy developments, the lack of advocacy for international collaboration in the Chinese public sphere is noteworthy and warrants further research. Despite differences in philosophical traditions as well as political and economic priorities, the possibility of agreement on the practical implications of values such as security and privacy needs to be examined through empirical evidence. In addition, as demonstrated in this paper, the general public has expressed a sense of anxiety about a future permeated with AI, in which their jobs and even their humanity could be threatened. This contradicts the widely held view that Chinese people hold more positive attitudes toward digital technologies, and it shows similarities with attitudes found in Western societies. The reasons for Chinese society's observed acceptance of technologies may lie deeper, in beliefs about how to cope with change and competition. More research is needed to understand the thinking that underpins Chinese people's attitudes towards AI technologies, especially from a comparative perspective.

