🔬 Research Summary by Hiro Taiyo Hamada, a neuroscientist at Araya Inc., Japan.
[Original paper by Hiro Taiyo Hamada and Ryota Kanai]
Overview: AI agents increasingly serve human beings, but their current applications are largely limited to individual users. A relatively unexplored direction is for AI agents to mediate social interactions in ways that promote well-being. This paper discusses the possibilities of AI agents acting as social mediators for group well-being, along with the social impact and the new ethical issues that may emerge.
Introduction
Science fiction works like "Klara and the Sun" by Kazuo Ishiguro, a Nobel laureate, often depict social interactions between AI agents and human beings. Many studies have been actively pursued, and real applications to support human well-being have been introduced. However, current AI applications focus on individual subjects or on non-social domains such as automation, possibly because of the complexity of language. The paper explores the possibility of AI agents mediating human social interactions in ways that promote well-being. Human beings belong to multiple social groups, such as families, colleagues, and sports clubs, and our well-being is also influenced by social connectedness. Drawing on the literature on human group dynamics, the paper summarizes two approaches for intervening in human interactions, which may raise potential ethical issues.
Key Insights
Human Well-being and AI
The COVID-19 pandemic endangered human well-being through loss of social connectedness, such as decreased belongingness and increased loneliness. Well-being has been intensively studied through individual affective, cognitive, and social evaluations. Multiple AI applications also target human well-being by analyzing individuals' emotional conditions and risks of mental disorders and by intervening through apps and social media. For example, many chatbots based on psychological therapies provide feedback to users to intervene in individual mental states.
On the other hand, people belong to multiple social communities, and the scope of well-being should extend to such communities. There are some attempts to measure well-being in social communities such as sports clubs and workplaces. However, few works and AI applications have intervened in group well-being, except for social network interventions, which intensify, delete, or transfer social ties to promote healthy behaviors, as sketched below.
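To make the three operations concrete, here is a minimal, hypothetical sketch (not from the paper) that treats a social group as a weighted graph and the interventions as edge operations: intensifying, deleting, or transferring ties. The example graph, member names, tie weights, and boost value are all illustrative assumptions.

```python
# Illustrative sketch only: the three social network interventions described
# above (intensify, delete, transfer ties) expressed as graph operations.
# The toy graph, weights, and boost value are hypothetical placeholders.
import networkx as nx

# A toy social group: nodes are members, edge weights are tie strength in [0, 1].
G = nx.Graph()
G.add_weighted_edges_from([
    ("Ana", "Ben", 0.8),
    ("Ben", "Chika", 0.3),
    ("Chika", "Dai", 0.1),
])

def intensify(graph, u, v, boost=0.2):
    """Strengthen an existing tie, e.g. by prompting more interaction."""
    graph[u][v]["weight"] = min(1.0, graph[u][v]["weight"] + boost)

def delete(graph, u, v):
    """Remove a tie, e.g. one that spreads unhealthy behavior."""
    graph.remove_edge(u, v)

def transfer(graph, u, old, new, weight=0.3):
    """Redirect a member's attention from one contact to another."""
    graph.remove_edge(u, old)
    graph.add_edge(u, new, weight=weight)

intensify(G, "Ben", "Chika")        # reinforce a weak but useful tie
transfer(G, "Chika", "Dai", "Ana")  # connect an isolated member to a hub
print(list(G.edges(data=True)))
```

In a real intervention these operations would correspond to nudges delivered to members, not literal edits of a data structure; the graph only serves to reason about which ties to act on.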
The paper suggests developing AI applications that enhance group well-being, since such applications could have a great impact on our societies.
How can AI agents support human group well-being?
Technology to detect and intervene in human-human interactions has developed rapidly in recent years, although direct AI applications targeting group well-being remain understudied.
Automated group-level emotion recognition is one of the main relevant fields: group-level emotion is predicted from various image, video, and social media datasets. Another relevant field is human-agent interaction, in which artificial agents act as social mediators to promote human-human interactions. A few works focus on discussion facilitation, group chats, and sharing public goods with artificial agents. These agents target the holistic group dynamics of members to promote human-human interactions. The paper also proposes a different possible approach that mediates interactions based on the one-to-one social connections of members within a group. AI agents could engage with conversations between specific group members using natural language processing and social network intervention, enabling direct mediation of human-human interactions. This approach will require dealing with the complexity of social relationships and with dynamic changes in conversations. The paper argues that these two approaches, targeting group dynamics and one-to-one social connections, allow AI agents to mediate human-human interactions and promote the well-being of social groups. A combined sketch of the two is given below.
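As a rough, hypothetical sketch of how a mediator agent might combine the two approaches: per-message sentiment scores (assumed to come from an off-the-shelf NLP classifier, here hard-coded) are aggregated into a group-level affect signal, while pairwise averages point to the specific one-to-one connection most in need of support. The message data and the selection rule are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: combining group-level affect with one-to-one mediation.
# Sentiment scores per message are assumed to come from an NLP sentiment
# model; here they are hard-coded placeholders in [-1, 1].
from collections import defaultdict
from statistics import mean

# (speaker, addressee, sentiment) for recent messages in a group chat.
messages = [
    ("Ana", "Ben", 0.6), ("Ben", "Ana", 0.4),
    ("Ben", "Chika", -0.5), ("Chika", "Ben", -0.3),
    ("Ana", "Chika", 0.2),
]

# Approach 1: group-level emotion as the mean sentiment of all exchanges.
group_affect = mean(s for _, _, s in messages)

# Approach 2: one-to-one mediation target = the dyad with the lowest
# average sentiment, i.e. the connection most in need of support.
dyads = defaultdict(list)
for speaker, addressee, score in messages:
    dyads[frozenset((speaker, addressee))].append(score)
target = min(dyads, key=lambda d: mean(dyads[d]))

print(f"group affect: {group_affect:+.2f}")
print(f"mediate between: {sorted(target)}")  # e.g. nudge a positive exchange
```

A deployed agent would face the complexities the paper notes: sentiment alone is a crude proxy for well-being, and both the group signal and the dyad scores shift as conversations evolve.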
How acceptable are AI agents that mediate human interactions? A further surveillance society?
Mediating human-human interactions for group well-being could lead to further surveillance of human communication and to political inequality, since developers can bias AI agents to benefit certain groups.
The paper discusses three potential issues that are linked together. The first issue is how to compute and handle fairness given differing cultural contexts, conflicts of interest, and structures of benefit. Distinct perceptions of well-being arising from culture, individual personality, and political positions complicate the formalization of fairness within a group. The second issue is protecting the privacy of human-human interactions with respect to the ownership and autonomy of communications. Human-human interactions often involve exchanges of private information that AI agents sometimes should not analyze or intervene in, and it is unclear how to reconcile ownership of private information with AI involvement. The third issue concerns usefulness from the users' viewpoints of accessibility and safety: how mediation by AI agents aligns with other interests, such as public welfare, matters because introducing such agents may not be necessary or appropriate in some cases. Investigating these issues and designing ethical guidelines are necessary steps toward establishing AI agents that mediate human interactions to support well-being.
Between the lines
The article outlines forthcoming applications and potential issues of AI for human group well-being. In their novel role as social mediators, AI agents may support human well-being more effectively than human mediators, yet the current literature lacks actual applications, and the ethical problems remain open. For real-world usage, natural language processing (NLP) and computer vision will be critical fields for handling human-human interactions, since such interactions have both verbal and non-verbal aspects. Recent discussions on whether large-scale language models like GPT-3 can understand ethics may shed light on the feasibility of AI understanding human conversations on the verbal side, while advances in image recognition with deep learning may help delineate human-human interactions on the non-verbal side. Meanwhile, how much AI agents should be allowed to be involved in human-human interactions is an ethical issue that has not been fully elucidated. How NLP and computer vision handle human group interactions and their ethics will bring further discussion and advances in group well-being.