
AI agents for facilitating social interactions and wellbeing

May 28, 2023

🔬 Research Summary by Hiro Taiyo Hamada, a neuroscientist at Araya Inc., Japan.

[Original paper by Hiro Taiyo Hamada and Ryota Kanai]


Overview: AI agents increasingly serve human beings, but their current applications are largely limited to individual users. A relatively unexplored possibility is for AI agents to mediate social interactions in ways that promote well-being. This paper examines the potential of AI agents to act as social mediators for group well-being, along with the social impact and new ethical issues that may emerge.


Introduction

Science fiction works such as "Klara and the Sun" by Nobel laureate Kazuo Ishiguro often depict social interactions between AI agents and human beings. Many studies have been conducted, and real applications to support human well-being have been introduced. However, current AI applications focus on individual users or non-social domains such as automation, possibly because of the complexity of language. This paper explores the possibility of AI agents mediating human social interactions to promote well-being. Human beings belong to multiple social groups, such as families, colleagues, and sports clubs, and our well-being is also shaped by social connectedness. Drawing on the literature on human group dynamics, the paper summarizes two approaches to intervening in human interactions and the potential ethical issues they may raise.

Key Insights

Human Well-being and AI

The COVID-19 pandemic endangered human well-being through a loss of social connectedness, including decreased belongingness and increased loneliness. Well-being has been studied intensively through individual affective, cognitive, and social evaluations. Many AI applications also target human well-being by analyzing individual emotional states and risks of mental disorders and by intervening through apps and social media. For example, many chatbots based on psychological therapies provide feedback to users to support their individual mental states.

On the other hand, people belong to multiple social communities, and well-being within such communities is broader than individual well-being. There have been some attempts to measure well-being in social communities such as sports clubs and workplaces. However, few studies and AI applications intervene in group well-being, apart from social network interventions, which intensify, delete, or transfer social ties to promote healthy behaviors.
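These social network interventions can be thought of as edit operations on a weighted social graph. The sketch below is a minimal, hypothetical illustration using networkx; the members, tie weights, and intervention rules are invented for demonstration and are not taken from the paper.

```python
# Hypothetical sketch: the three social-network-intervention operations
# (intensify, delete, transfer ties) as edits on a weighted social graph.
# Members, weights, and rules are illustrative only.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("ana", "ben", 0.8),   # strong tie
    ("ana", "caro", 0.2),  # weak tie
    ("ben", "caro", 0.5),
])

def intensify(graph, a, b, delta=0.1):
    """Strengthen an existing tie, e.g. by encouraging more interaction."""
    graph[a][b]["weight"] = min(1.0, graph[a][b]["weight"] + delta)

def delete(graph, a, b):
    """Remove a tie, e.g. discouraging a harmful connection."""
    graph.remove_edge(a, b)

def transfer(graph, a, old, new, weight=0.3):
    """Redirect a member's tie from one person to another."""
    graph.remove_edge(a, old)
    graph.add_edge(a, new, weight=weight)

intensify(g, "ana", "caro")  # promote a weak but potentially beneficial tie
print(nx.get_edge_attributes(g, "weight"))
```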

The paper suggests developing AI applications to enhance group well-being, since such applications could have a substantial impact on our societies.

How can AI agents support human group well-being?

Technologies for detecting and intervening in human-human interactions have advanced rapidly in recent years, although direct AI applications targeting group well-being remain understudied.

Automated group-level emotion recognition is one of the main relevant fields: group-level emotion is predicted from images, videos, and social media datasets. Another relevant field is human-agent interaction, in which artificial agents act as social mediators to promote human-human interactions. A few works focus on discussion facilitation, group chats, and the sharing of public goods with artificial agents. These agents target the holistic group dynamics of members to promote human-human interactions. The paper also proposes a different possible approach: mediating interactions based on the one-to-one social connections of members within a group. AI agents could analyze conversations between specific group members using natural language processing and apply social network interventions, thereby directly mediating human-human interactions. This approach requires dealing with the complexity of social relationships and the dynamic nature of conversations. The paper argues that these two approaches, targeting group dynamics and one-to-one social connections, allow AI agents to mediate human-human interactions in ways that promote the well-being of social groups.
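To make the two approaches more concrete, here is a minimal, hypothetical sketch of the ingredients involved: aggregating per-member affect estimates into a group-level signal (the group-dynamics view) and flagging the weakest one-to-one connection as a mediation candidate (the one-to-one view). The member names, scores, and selection rule are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: a group-level emotion estimate from individual affect
# scores, plus selection of the weakest pairwise tie for possible mediation.
from statistics import mean

# Per-member affect scores in [-1, 1], e.g. from a sentiment model over chat logs.
member_affect = {"ana": 0.6, "ben": -0.4, "caro": 0.3}

# Pairwise interaction strength in [0, 1], e.g. derived from message counts.
tie_strength = {("ana", "ben"): 0.7, ("ana", "caro"): 0.1, ("ben", "caro"): 0.5}

def group_emotion(scores):
    """Simple group-level estimate: the mean of individual affect scores."""
    return mean(scores.values())

def weakest_tie(ties):
    """Pick the pair with the weakest connection as a mediation candidate."""
    return min(ties, key=ties.get)

print(f"group emotion: {group_emotion(member_affect):+.2f}")
print(f"mediation candidate: {weakest_tie(tie_strength)}")
```

In practice, the affect scores would come from NLP over conversations and the tie strengths from interaction logs, which is where the privacy and fairness concerns discussed next arise.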

How acceptable are AI agents that mediate human interactions? A further surveillance society?

Mediating human-human interactions for group well-being could lead to further surveillance of human communication and to political inequality, since developers can bias AI agents to benefit certain groups.

The paper discusses three interlinked potential issues. The first is how to compute and handle fairness given differing cultural contexts, conflicts of interest, and structures of benefit; distinct perceptions of well-being shaped by culture, individual personality, and political positions complicate the formalization of fairness within a group. The second is how to protect the privacy of human-human interactions, in terms of ownership and autonomy of communication; such interactions often involve exchanges of private information that AI agents sometimes should not analyze or intervene in, and it is unclear how to reconcile ownership of private information with AI involvement. The third concerns usefulness for users, in terms of accessibility and safety: how mediation by AI agents aligns with other interests, such as public welfare, matters because introducing such agents may not be necessary or appropriate in some cases. Investigating these issues and designing ethical guidelines are necessary steps toward establishing AI agents that mediate human interactions to support well-being.

Between the lines

The article outlines prospective applications and potential issues of AI for human group well-being. The novel role of AI agents as social mediators may support human well-being more effectively than human mediators can, yet the current literature offers few actual applications and little analysis of the associated ethical problems. For real-world use, natural language processing (NLP) and computer vision will be critical fields for handling human-human interactions, which have both verbal and non-verbal aspects. Recent discussions on whether large language models like GPT-3 can understand ethics may shed light on the feasibility of AI understanding human conversations on the verbal side, while advances in image recognition with deep learning may help delineate human-human interactions on the non-verbal side. Meanwhile, how far AI systems should be allowed to be involved in human-human interactions remains an ethical question that has not been fully elucidated. How NLP and computer vision handle human group interactions and their ethics will drive further discussion and progress on group well-being.

