Original article by Ravit Dotan, a researcher in AI ethics, impact investing, philosophy of science, feminist philosophy, and their intersections.
As AI applications become widespread, it is increasingly important to understand and manage their impact on people, society, and the environment. AI ethicists have been working hard toward this goal, producing research, policies, and tools to evaluate and improve the ethical dimensions of AI systems. However, a major gap in AI ethics has emerged: Western countries are massively over-represented. For example, studies have shown that the US and European countries dominate the production of AI ethics guidelines (e.g., this study and this study).
This Western dominance disadvantages those affiliated with other parts of the world and thereby limits the field of AI ethics. In this article, I review a few examples of work in AI ethics that center on non-Western issues, highlighting non-Western values, needs, circumstances, and perspectives on AI. In addition, I present a directory of experts in non-Western AI ethics, curated by myself and Dr. Emmanuel Goffi, co-director of the Global AI Ethics Institute. The directory includes the names and contact information of the experts, as well as links to samples of their work.
Values
Western values differ from those of other cultural groups, so Western guidelines for ethical AI sometimes conflict with non-Western value systems.
For example, Mary Carman and Benjamin Roseman identify a conflict with African values. One of the widespread themes in AI ethics today is that AI systems should protect human autonomy. According to this principle, in the influential formulation given by Luciano Floridi and his colleagues, "individuals have a right to make decisions for themselves about the treatment they do or not receive." Carman and Roseman argue that this principle conflicts with the communitarian values common in African cultures because of its emphasis on individual decision-making. In many African cultures, they point out, decision-making is a communal process that may include family members as well as authority figures. "The salience of community versus a strong individualism," they argue, "illustrates why we require…sensitivity in how we adopt and adapt the principles in different contexts, if we are to apply them."
Junaid Qadir and Muhammad Suleman highlight Islamic perspectives on AI ethics. They argue that resources on digital ethics, such as the IEEE and ACM guidelines, are based on secular ethics, whereas Islamic ethics is very different. They therefore developed a unique course, titled "Ethics, Value and Technology", for their students in Pakistan. This course, described in their paper, discusses ethical issues related to technology, including AI, from both Western and Islamic perspectives.
Needs and Circumstances
Some distinct needs of non-Western communities stem from their cultural traditions. Angie Abdilla and her colleagues, for example, describe how Western AI systems can overlook Aboriginal Australian traditions. Traditionally, it is very important that Aboriginal people avoid speaking or exchanging goods with their in-laws. However, if not designed carefully, AI applications may create these forbidden interactions; a smart fridge, for instance, could share a person's food with their mother-in-law.
Moreover, as Chinmayi Arun points out, AI can be more harmful to people in and from non-Western regions when its design is unsuitable for non-Western contexts. To illustrate, Arun uses an analogy: houses built for the cold climate of Northern Europe are unsuitable for warmer cities in developing countries. Similarly, AI systems built in Western contexts may be unsuitable for non-Western contexts.
For example, Arun points out that social media platforms, such as Facebook, were built in a Western context in which independent journalism flourishes. The availability of independent journalism provides social media users with ample sources for fact-checking. However, in countries where free journalism is scarce, it is harder to combat the disinformation and hate speech that spread so easily on social media.
Arun illustrates how devastating the results can be by pointing to the violence against Muslims in Myanmar, where military officials systematically used Facebook to spread misinformation and hatred against Muslims. Because the state heavily controls the press, residents had few resources for checking the veracity of the posts. The influence of the military's Facebook campaign was so extensive that former military officials, researchers, and civilian officials have argued that the military used Facebook as a tool for ethnic cleansing.
Perspectives on AI
An Ipsos survey found a correlation between people's opinions of AI and their country's level of economic development: people in developing countries, for example, are much more likely to trust AI and to have a positive outlook on AI services. Findings like these make it especially important to study perspectives on AI outside the Western world.
Arisa Ema and her colleagues studied perceptions of AI in Japan. For example, the team asked Japanese survey respondents which activities are most suitable to be performed by AI alone, without human supervision. The respondents favored driving, disaster prevention, and military activities; commonly stated reasons were that relying on AI for these activities would reduce mistakes and increase reliability. Another finding is that, on the whole, people in Japan have low confidence in the government's ability to prevent the misuse of AI, though this confidence is higher among the general public than among other groups.
The need for diversity in AI ethics
AI systems affect the entire planet. However, efforts to understand and manage AI's impact focus on Western countries, whose values, needs, circumstances, and perspectives do not generalize to other parts of the world. The result can be disadvantageous and oppressive to those affiliated with non-Western cultures.
We should work to change the landscape of AI ethics to include more perspectives. To make global perspectives more accessible, Dr. Emmanuel Goffi and I have created a curated directory of people with expertise in non-Western AI ethics.
We ask the reader to keep in mind that this directory is not exhaustive. First, we chose to include only experts whom we could contact and who explicitly consented to be included. Second, it is likely that we haven't identified all relevant experts. We therefore encourage the reader to treat this directory as a starting point for exploring non-Western perspectives on AI ethics.
With that in mind, we would also like to recommend some related resources:
The directory is as follows: