Sociological Perspectives on Artificial Intelligence: A Typological Reading

March 15, 2021

🔬 Research summary by Nga Than (@NgaThanNYC), a doctoral candidate in the Sociology program at City University of New York – The Graduate Center.

✍️ This is part 7 of the ongoing Sociology of AI Ethics series; read previous entries here.

[Original paper by Zheng Liu]


Overview: This paper surveys the existing sociological literature on AI. The author provides researchers new to the field with a typology of three analytical categories: scientific AI, technical AI, and cultural AI. The paper argues that these perspectives track the development of AI from a scientific field in the 20th century to its widespread commercial application, and then to a socio-cultural phenomenon in the first two decades of the 21st century.


Introduction

Sociological interest in AI and society has grown as AI systems have been commercialized and AI technologies have become pervasive across domains. The research literature has exploded as a consequence, and navigating it is challenging for those who have just entered the field of the sociology of AI. Zheng Liu draws on literature found on Google Scholar, Web of Science, and Scopus to create a roadmap of the state of sociology of AI research. The author first clarifies the different ways the concept of AI has been used in sociological research, and then details three broad categories of research: scientific AI, technical AI, and cultural AI.

Four Conceptions of AI

  • AI is “the science and engineering of making intelligent machines”: a science that aims to make machines perform human tasks.
  • AI refers to the scientific research field, or an epistemic community: a social space of trained professionals whose job is to create and disseminate knowledge. The proliferation of AI research labs in academia and industry, and of conferences such as NeurIPS, ICLR, ICML, and FAccT, shows how this epistemic community has become larger and more dynamic. The community is governed by “power relations between agents and groups of agents” within the field.
  • AI as algorithms, or sets of mathematical instructions given to computer programs. Here AI is conceptualized as a meta-technology and analyzed in terms of its various sub-technologies and applications.
  • AI development as a distinctive technological and socio-cultural phenomenon, part of the digital revolution. AI has been studied alongside other digital technologies such as big data, the Internet of Things, and cloud computing to evaluate the broader social, economic, cultural, and political effects of these technologies.

Scientific AI 

Early sociological research on AI comes from the scientific AI perspective, which approaches AI as a scientific research field or as a system of knowledge. Questions explored include how AI research is conducted by social actors in social environments, and is therefore a socially constructed enterprise, and what AI implies for the nature of human knowledge and for the human-machine relation. Scholars in this area often come from the sociology of science, science and technology studies (STS), and human-computer interaction. Research has shown that AI development within labs and research institutes is shaped by power relations within the field of AI and by its competition with other fields for resources. Another consensus is that, far from being objective, AI systems embody and reproduce their developers’ cultural values.

Recent work in this area asks important questions: if AI is implemented in social environments, assumes social roles, enacts social practices, and forms social relations, then “how [do] AI systems penetrate and transform social institutions, and in the process redefine social life?” Another approach draws on actor-network theory, which posits that objects and ideas exist within social relations, and that humans and machines should therefore be considered similarly in social analysis. It argues that AI systems function like human social actors to form social relations and construct social realities.

A third thread in this category tackles AI’s implications for human society and for human-machine relations. For example, machines can now generate knowledge, changing how knowledge is produced, received, and utilized. Other scholars, however, have argued that machines are unable to read context, and contend that machines therefore lack this distinctly human ability.

Future research in this category should interrogate questions of power relations, given the increasingly important role of corporations and governments in funding and shaping AI research. Scholars should examine how this influence has encountered resistance from the AI community, and what such power struggles imply for AI research.

Technical AI 

Research in this category studies AI as a meta-technology and analyzes its various applications and sub-technologies. Automation in the workplace has been widely studied by sociologists (Lei 2021). The main argument is that different types of work are affected by AI differently: mechanical work can be automated more intensively than work that requires more complex human inputs. Researchers examining the social impacts of autonomous vehicles have argued that for these complex machines to function in society, there must be better “social learning,” which requires participation from multiple stakeholders such as AI developers, roboticists, governments, and other social actors. However, this line of research remains largely theoretical and speculative, with little empirical work examining AI on the ground.

Another important research agenda within this category examines the use of automated systems for military purposes. The main question is to what degree one should trust automated systems when they make high-stakes decisions such as targeting, firing at, or “killing” other humans. The central theoretical concern is the distinction between automation and autonomy: even though automated systems are highly complicated, they are not necessarily autonomous. Decisions made by such systems in warfare thus have important ethical consequences because “the machines are not accountable for their decisions.”

Another line of research examines algorithms, which are considered powerful social institutions. Researchers such as Virginia Eubanks (2017) and Safiya Noble (2018) have argued that algorithmic decision-making can be biased, discriminatory, and misleading. Algorithms “automate inequality,” often making discriminatory decisions that “subject the poor and underprivileged to even dire circumstances.” The bone of contention is that algorithms, though presented as neutral, are socially insensitive: they reinforce existing social inequality and are thus morally and politically problematic.

Cultural AI 

Compared with the two categories above, cultural AI research is still “a budding field.” Research in this category views AI development as a social phenomenon and examines its interactions with the wider social, cultural, economic, and political conditions in which it develops and by which it is shaped. The popularization of AI and of AI culture can help to “widen and deepen the unequal power relations in society.”

Researchers have also examined the social construction of AI, focusing on how different groups leverage different cultural resources to develop AI narratives that advance their differing agendas. Cultural framing has proven effective in the case of robot development in Japan. One implication of these findings is that responsible AI can be shaped and created by “shaping AI developers’ perception of what makes socially popular AI and by influencing the social design of AI through active cultural framing and deframing of AI.” In other words, responsible AI development can be fostered through the concerted efforts of multiple stakeholders.

Between the Lines

The typology outlined in this paper helps researchers navigate the sociology of AI literature and serves as a useful reference for considering the social evolution of AI. The author argues for establishing the sociology of AI as a subfield of sociology in its own right. Yet even though the author aimed to be thorough in surveying the literature, the majority of the research cited comes from the Global North; almost none of it concerns AI development and deployment in the Global South. Another missing piece is a transnational comparative perspective. Given the fast-changing global landscape of AI research, production, and deployment, there are many opportunities for sociologists to move beyond a Western-centric perspective and examine AI in the Global South.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
