Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI agents for facilitating social interactions and wellbeing

May 28, 2023

🔬 Research Summary by Hiro Taiyo Hamada, a neuroscientist at Araya Inc., Japan.

[Original paper by Hiro Taiyo Hamada and Ryota Kanai]


Overview: AI agents increasingly serve human beings, but their current applications are largely limited to individual users. Their use as mediators of social interactions that support well-being remains relatively unexplored. This paper examines the possibilities of AI agents acting as social mediators for group well-being, and the social impact and new ethical issues that may emerge.


Introduction

Science fiction works like "Klara and the Sun" by Kazuo Ishiguro, a Nobel laureate, often depict social interactions between AI agents and human beings. Many studies have been conducted in this area, and real applications supporting human well-being have been introduced. However, current AI applications focus on individual subjects or on non-social domains such as automation, possibly due to the complexity of language. The paper explores the possibilities of AI agents mediating human social interactions in ways that support well-being. Human beings belong to multiple social groups, such as families, workplaces, and sports clubs, and our well-being is influenced by social connectedness. Drawing on the literature on human group dynamics, the paper summarizes two approaches to intervening in human interactions, each of which may raise ethical issues.

Key Insights

Human Well-being and AI

The COVID-19 pandemic endangered human well-being through loss of social connectedness, such as decreased belongingness and increased loneliness. Well-being has been intensively studied through individual affective, cognitive, and social evaluations. Many AI applications also target human well-being by analyzing individuals' emotional states and risk of mental disorders, and by intervening through apps and social media. For example, many chatbots based on psychological therapies provide feedback to users to intervene in individual mental states.
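To make the chatbot pattern concrete, here is a toy illustration of the kind of rule-based check-in bot described above. Real therapy-informed bots use validated clinical protocols and trained classifiers; the keyword lists, function name, and replies here are invented for the sketch.

```python
# Hypothetical keyword lists -- real systems use trained affect classifiers.
NEGATIVE = {"lonely", "sad", "anxious", "stressed"}
POSITIVE = {"happy", "grateful", "calm", "excited"}

def check_in_reply(message: str) -> str:
    """Return a supportive reply based on a crude keyword match."""
    words = set(message.lower().split())
    if words & NEGATIVE:
        return "That sounds hard. Would you like to try a short breathing exercise?"
    if words & POSITIVE:
        return "Glad to hear it! What contributed to that feeling today?"
    return "Thanks for sharing. How has your week been overall?"

print(check_in_reply("I feel stressed and lonely"))
```

Even this crude sketch shows why such tools stay individual-focused: the bot sees one user's words at a time, with no model of the group the user belongs to.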

On the other hand, people belong to multiple social communities, and the scope of well-being within such communities is broader. There have been some attempts to measure well-being in social communities such as sports clubs and workplaces. However, few works or AI applications intervene in group well-being, with the exception of social network interventions, which intensify, delete, or transfer social ties to promote healthy behaviors.
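The three social-network intervention primitives mentioned above can be sketched on a minimal weighted-tie representation. The graph encoding, function names, and toy group are illustrative assumptions, not from the paper.

```python
# Ties are undirected, so each tie is keyed by a frozenset of two members.

def intensify_tie(weights, a, b, boost=1.0):
    """Strengthen a tie (e.g. prompt more frequent contact)."""
    key = frozenset((a, b))
    weights[key] = weights.get(key, 0.0) + boost

def delete_tie(weights, a, b):
    """Remove a tie (e.g. discourage a harmful connection)."""
    weights.pop(frozenset((a, b)), None)

def transfer_tie(weights, a, old, new, strength=1.0):
    """Redirect one member's tie from an old contact to a new one."""
    delete_tie(weights, a, old)
    weights[frozenset((a, new))] = strength

# Toy group: Alice-Bob strong, Alice-Carol weak.
group = {frozenset(("alice", "bob")): 2.0,
         frozenset(("alice", "carol")): 0.5}
intensify_tie(group, "alice", "carol", boost=1.0)  # weak tie strengthened to 1.5
transfer_tie(group, "bob", "alice", "carol")       # Bob's tie redirected to Carol
```

A real intervention system would decide *which* primitive to apply from observed behavior; the point of the sketch is only that the intervention space itself is small and graph-structured.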

The paper suggests developing AI applications that enhance group well-being, since doing so may greatly impact our societies.

How can AI agents support human group well-being?

Although direct AI applications targeting group well-being are not well studied, relevant technology for detecting and intervening in human-human interactions has advanced rapidly in recent years.

Automated group-level emotion recognition is one of the main relevant fields: group-level emotion is predicted from images, videos, and social media datasets. Another relevant field is human-agent interaction, in which artificial agents act as social mediators to promote human-human interactions. A few works focus on discussion facilitation, group chats, and the sharing of public goods with artificial agents. These agents target the holistic group dynamics of members to promote human-human interactions. The paper also proposes a different approach: mediating interactions through the one-to-one social connections of members within a group. AI agents could intervene in the conversations of specific group members using natural language processing and social network intervention, enabling direct mediation of human-human interactions. This approach requires handling the complexity of social relationships and the dynamic changes of conversations. The paper argues that these two approaches, targeting group dynamics and one-to-one social connections, allow AI agents to mediate human-human interactions and promote the well-being of social groups.
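One way to picture the group-dynamics approach is a mediator agent that pools per-member affect into a single group signal and decides when to step in. Everything below is an illustrative assumption: the per-member valence scores would come from an upstream sentiment model (not implemented here), and the pooling rule and thresholds are invented for the sketch.

```python
from statistics import mean, pstdev

def group_affect(member_valence):
    """Pool per-member valence scores (-1..1) into group mean and dispersion."""
    scores = list(member_valence.values())
    return mean(scores), pstdev(scores)

def should_mediate(member_valence, low=-0.2, spread=0.5):
    """Trigger mediation when group mood is low or very uneven across members."""
    m, s = group_affect(member_valence)
    return m < low or s > spread

# Hypothetical chat: Bob's low valence makes the group signal uneven.
chat = {"alice": 0.6, "bob": -0.7, "carol": 0.1}
print(should_mediate(chat))
```

The dispersion term matters: a group whose average mood looks fine can still contain an isolated struggling member, which is exactly the case a group-level mediator is meant to catch and an individual-focused app would miss.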

How acceptable are AI agents that mediate human interactions? A further surveillance society?

Mediating human-human interactions for group well-being can lead to further surveillance of human communication and to political inequality, since developers can bias AI agents to benefit certain groups.

The paper discusses three interlinked issues. The first is how to compute and handle fairness across different cultural contexts, conflicts of interest, and structures of benefit. Distinct perceptions of well-being shaped by culture, individual personality, and political position complicate the formalization of fairness within a group. The second is how to protect the privacy of human-human interactions, in terms of the ownership and autonomy of communications. Human-human interactions often involve exchanges of private information that AI agents sometimes should not analyze or intervene in, and it is unclear how to reconcile ownership of private information with AI involvement. The third concerns usefulness from the user's perspective of accessibility and safety. How mediation by AI agents aligns with other interests, such as public welfare, matters because introducing such agents may not be necessary or appropriate in some cases. Investigating these issues and designing ethical guidelines will be necessary before AI agents can mediate human interactions in support of well-being.

Between the lines

The article surveys forthcoming applications and potential issues of AI for human group well-being. In their novel role as social mediators, AI agents may support human well-being more effectively than humans can, yet the current literature lacks both actual applications and treatment of the ethical problems. For real-world use, natural language processing (NLP) and computer vision will be critical for dealing with human-human interactions, which have both verbal and non-verbal aspects. Recent discussions on whether large language models like GPT-3 can understand ethics may shed light on the feasibility of AI understanding human conversations on the verbal side, while advances in image recognition with deep learning may capture human-human interactions on the non-verbal side. Meanwhile, how far AIs should be allowed into human-human interactions is an ethical question that is not fully resolved. How NLP and computer vision handle human group interactions and ethics will drive further discussion and advances in group well-being.
