Summary contributed by our researcher Victoria Heath (@victoria_heath7), who’s also a Communications Manager at Creative Commons.
*Link to original paper + authors at the bottom.
Overview: This research places the perceptions of AI and robots in South Korea, China, and Japan along a spectrum ranging from “tool to partner,” and further examines the relationships between these perceptions and approaches to AI ethics. The author also identifies three interrelated AI and robotics-related ethical issues: 1) female objectification, 2) the Anthropomorphized Tools Paradox, and 3) “antisocial” development.
Although artificial intelligence (AI) and robots are tools, “their perception is increasingly that of partners,” writes author Danit Gal. In this research, Gal places the perceptions of AI and robots in South Korea, China, and Japan along a spectrum ranging from “tool to partner” by exploring a) policies and ethical principles, b) academic thought and local practices, and c) popular culture. Gal then examines the relationships between these perceptions and local approaches to AI ethics, identifying three AI and robotics-related ethical issues that arise: 1) female objectification, 2) the Anthropomorphized Tools Paradox, and 3) “antisocial” development.
According to Gal, South Korea sits in “the tool range” of the spectrum “due to its establishment of a clear human-over-machine hierarchy” and “demonstrates a clear preference for functional AI application and robots.” Several policies and ethical principles enshrine this idea, such as the Robots Ethics Charter (revised in 2016), from which the South Korean National Information Society Agency (NIA) built its Ethics Guidelines for the Intelligent Information Society (April 2018). These Guidelines outline four positions: 1) users are responsible for regulating use, 2) providers are responsible for assessing AI and robots’ negative social impact, 3) developers are responsible for eliminating bias and discriminatory characteristics in AI, and 4) AI and robots should be developed without “antisocial” characteristics. Broadly, most policies and ethical principles in South Korea emphasize balancing the protection of “human dignity” with “the common good,” and reaffirm the idea that AI and robots are “tools meant to protect human dignity and promote the common social good.”
In academic thought and local practices, Gal highlights the Korea Advanced Institute of Science and Technology (KAIST) Code of Ethics for Artificial Intelligence (2018), released in response to protests over the institute’s involvement in developing lethal autonomous weapons systems. The third principle in this Code of Ethics is the most distinctive, stating: “AI shall follow both explicit and implicit human intention,” with a note that the “AI should follow the person with the highest priority or closest relationship” if multiple people are involved. Gal critically examines the conflict between this idea, which could reinforce existing power structures and discriminatory practices, and the “developer’s mandate to act as eliminators of social bias and discrimination under NIA’s Ethical Guidelines.”
This hierarchical structure has been challenged by South Korean popular culture. Korean dramas, writes Gal, often cast AI and robots as “family members, friends, and love interests.” This leads some to wonder whether, as people become more reliant on or accustomed to social robots, they will lose their basic ethical values and devalue human relationships; hence the NIA’s emphasis on “avoiding the antisocial development of AI and robots.” For now, however, it appears South Koreans are more comfortable with “functional robots” because they retain more “control” over them than over more “biologically inspired” robots.
China’s overall perception of AI and robots, especially at the government and corporate level, is similar to South Korea’s, sitting closer to the “tool” range of Gal’s spectrum. The Chinese Association for Artificial Intelligence (CAAI), led by Professor Xiaoping Chen, is responsible for creating ethical guidelines for the development of AI and robots in China. There is evidence that the CAAI is examining the ethical challenges of creating technology that can be used as an “intelligent tool but designed with the characteristics of a desirable partner.” More often than not, these “tools” are designed with “feminine” characteristics, touching on the Anthropomorphized Tools Paradox and female objectification issues Gal identifies.
Robin Li Yanhong, the CEO of Baidu, emphasized at a government-run event in 2019 the importance of sharing “Chinese wisdom” globally to inform international AI ethics discourse. This includes the integration of the Chinese government’s twelve “core socialist values,” which are divided into three groups: 1) national values, 2) individual values, and 3) social values. The national Engineering Ethics textbook highlights four unique Chinese characteristics: “responsibility precedes freedom, obligation precedes rights, the group precedes the individual, and harmony precedes conflict.”
In academic thought, local practices, and popular culture, however, there is “strong interest in imbuing AI and robots with partner-like capabilities to help them realize their full positive potential.” The Harmonious Artificial Intelligence Principles (HAIP), led by Professor Yi Zeng, promote ideas that aim to achieve harmony between humans, AI, and robots through mutual respect, empathy, and altruism. For example, one idea is that AI should have privacy; another is that humans shouldn’t show bias against machines. There is even an idea, proposed by Hanniman Huang, that AI and robots should be considered a new species and eventually accepted as part of human society. This aligns with ideas in Chinese Buddhism that “everything can be cultivated toward enlightenment and become the Buddha.” One example of how this manifests in popular culture is the intelligent robot monk Xian’er, which has over one million followers on social media and engages with Buddhist scriptures.
AI and robots have been depicted as love interests in China since the 1990s, in movies like Funny Robot Talk (1996), Robot Boyfriend (2017), and Robot Maid from Heaven (2017). The social chatbot XiaoIce, created by Microsoft and modeled after a female teenager, raises the female objectification issue (among others); it has over 660 million users who, as Gal writes, “often perceive it as a friend or love interest.” There are even AI and robots replicating famous entertainers and music groups, like May Wei VIV, or acting as news anchors on state television. Even though these are developed as “tools” for entertainment purposes, people often engage with them as “friends and partners,” much as they do with human entertainers.
Gal places Japan on the “partner range” of the spectrum “due to its exceptionally strong mix of pro human-AI-robots partnership academic thought, local practices, and popular culture.” Interestingly, while the policy approach to AI in the country appears to be moving more toward the “tools” range of the spectrum in order to stay in line with international discourse, “the extent of its societal vision for coexistence and coevolution with AI and robots is distinct.” The 5th Science and Technology Basic Plan (2016) introduces Society 5.0: a future in which AI and robots enable a more “convenient and diverse” society by responding to the needs of humans and even anticipating those needs, potentially creating a “push rather than pull culture” (similar to the current world of online advertising). The Cabinet Office Council on the Social Principles of Human-centric AI has warned of the “overdependence on AI and robots,” emphasizing “the need to maintain human dignity” while still calling for an “AI-based human living environment.” In effect, this outlines a future where Japan’s social systems and “individual character” may need to be redesigned to accommodate the use of AI and robots as social tools.
In academic thought, local practices, and popular culture, there seems to be a divergence from the idea of AI and robots as merely tools for enabling social progress. This may be explained by Japan’s history of “robot-friendly” media, as well as the perception that robots can help solve many of the social problems Japan is facing, including an aging population (robots can offer elder care) and a slowing economy (AI offers automation). In Japanese popular culture, two cartoons have made a significant impact on perceptions of AI and robots, and have inspired would-be developers: Astro Boy, first introduced in 1952, and Doraemon, first introduced in 1969. Softbank’s Pepper, a conversational humanoid robot, has also had a profound influence. It functions as everything from an assistant to a Buddhist priest, and has been marketed by the company as a “friend, sibling, potential love interest, entertainer, and caretaker.” There’s also Aibo, Sony’s pet robot dog, which in some instances has been given religious rites when it breaks down. “This derives from the concept of animism,” writes Gal, which is found in the Shinto belief that the “spirits of otherworldly beings can dwell in animate and inanimate objects,” and in the Buddhist belief that “both animate and inanimate objects are a part of the natural world and possess the character of the Buddha.”
Finally, Japanese media has many stories of AI and robots acting as partners, especially love interests, such as Absolute Boyfriend (2008), Cyborg She (2008), and Ando Lloyd—A.I. Knows Love? (2013). This trend extends beyond television and movies: Vinclu Inc.’s Gatebox AI lab offers a holographic virtual wife and home assistant “modeled after a young female character named Hikari Azumu.” Although its popularity is a byproduct of the loneliness epidemic in Japan, it also constitutes a “rare edge case of intentional tool anthropomorphizing and female objectification, where a functional home assistant is specifically designed to act as a meaningful romantic partner.”
Although these three countries sit at different places on the “tool to partner” spectrum, all are shifting position as international and local discourse around AI and robots evolves, and as the tensions between the social benefits and harms of these technologies become clearer. In particular, the three AI and robotics-related ethical issues Gal discusses, 1) female objectification, 2) the Anthropomorphized Tools Paradox, and 3) “antisocial” development, will increasingly become tension points, not only for the countries studied here but everywhere.
Original paper by Danit Gal: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3400816#