This event recap was co-written by Connor Wright (our Partnerships Manager) and Shannon Egan (our QRM Intern), who co-hosted our “AI Ethics in the APAC Region” virtual meetup in partnership with Women in AI and the University of New South Wales (UNSW).
Room 1
The APAC region is often left out of the AI discussion in the West, which made the power and utility of considering its perspectives all the more apparent. Ranging from trust in governments to the multiculturality of the region and the impact of COVID-19, AI is considered there in a refreshing light. So, here are the top 5 takeaways from Room 1's discussion.
- The multiculturality of the APAC region has to be considered
The APAC region contains around 70% of the world's population and economy, encompassing a myriad of different cultures and perspectives, especially in terms of views on what matters within the AI debate. We have seen how Europe has paid the price for taking a unilateral view of AI (such as in the UK government's use of a grading algorithm). Hence, the room questioned whether there should be, or even could be, a single policy to bring the APAC region under one umbrella.
- Trust in government
A prominent part of a multicultural approach is the faith that different cultures place in their different forms of government to resolve national crises, even when those governments are not democracies. For example, South Koreans were observed to hold higher trust in government, as seen in their willingness to allow the government to use credit card histories to track potential COVID-19 spread.
The room then noted how this bears a surprising resemblance to a 'collectivist' attitude towards handling the pandemic: even where a government does not boast a large pool of confidence from its population, the population still felt it necessary to trust its decisions.
- It's easy to build AIs, but what's there to stop them?
One benefit of a more centralised governance model is the ability and confidence it brings when confronting AI-oriented problems. For example, the government in Beijing has the confidence to shut down AI companies if it has to, whereas other countries in the APAC region, perhaps, do not. One worry is then a potential power dynamic being generated between countries like China and less centralised members of the APAC region.
In this case, controlling the funding of AI-oriented companies was proposed as a way to help empower less centralised countries. However, defining what the benefits of this would be in different contexts is extremely hard.
- AI is not quite established in some countries in the APAC region
The room observed how some APAC members (such as Australia) have more time to learn how best to introduce AI into the country. At this point in time, Australia may not have enough data for AI to be deployed appropriately in the Australian context. However, as necessity is the mother of invention, should more sophisticated data storage or new ways of sourcing such data be required, there will certainly be efforts to put them in place. The worst-case scenario would be countries like Australia becoming stuck in a tech war with the more advanced countries of the region.
- COVID-19 and its effects in the APAC region
One war that every country in the region is certainly fighting is the battle against the pandemic. One observation raised by the room is how, during wartime, liberties were taken away on a more temporary basis. However, it appears that the digital measures being enacted now are here to stay. In this sense, it may not be the end of the world that mountains of COVID-related health data have been collected, since such data decreases in value over time, but the question remains whether it expires quickly enough.
Some concluding thoughts
What the room's discussion clearly demonstrated is just how multi-faceted and rich the APAC experience of the AI landscape truly is. Different approaches to AI, the pandemic and even governance itself bring a refreshing awareness to the fore. AI is going to influence every corner of the globe, which in turn means every corner has an opportunity to have its say, with the APAC region proving a particularly fruitful perspective to listen to.
Room 2
Room 2's discussion flowed through a series of questions, all connected by the central theme: how will AI alter our relationship to government, to industry, and to one another? Specifically, we asked: how are companies adapting their approach as public consciousness of ethical issues in AI increases? Is it justified to have a national database used to surveil the population? Can cultural sensitivity be built into our AI systems to address regionally specific bias? All of these questions were tackled with an emphasis on the implications for human rights and social justice.
- The near future of autonomous vehicles
The advent of autonomous vehicles is both exciting and worrying. On one hand, they have the potential to transform our daily experience and our economy in the next few years. On the other, there are likely to be bugs in the system that cause harm to their drivers and others on the road. Despite this danger, the EU's recent proposal for harmonised rules on artificial intelligence contains very few mentions of self-driving cars. Failing to regulate the airline industry early resulted in many lives lost, so we should move quickly to establish autonomous vehicle safety standards. The good news is that flying is now one of the safest means of transportation, thanks in part to strong regulation.
- The business case for AI Ethics
As more people understand the role that AI plays in their lives, especially with respect to their personal data, tech companies have come under increased scrutiny. With the status quo being a lack of transparency or informed consent, many firms have tried to gain a competitive advantage by positioning themselves firmly in the opposite direction. The terms "Responsible AI" and "Explainable AI" now surface frequently on company websites. Does this represent progress, or simply the emergence of a new marketing tactic?
- Surveillance and facial recognition technology
Facial recognition technology (FRT) has become increasingly controversial due to its potential applications in national surveillance. This has pushed firms like Amazon and Microsoft to ban sales of FRT to police, while IBM has pulled out of the business entirely. But they cannot prevent clients from using technology that has already been purchased, and many firms still exchange data with governments around the globe. Public discomfort with FRT varies greatly by culture. In Singapore, GovTech's Lamppost as a Platform (LaaP) project has installed cameras equipped with FRT in many public spaces. The project has received little backlash from Singaporeans, which may be attributed to an exceptionally high trust in government.
- Addressing algorithmic bias across cultures
When we discuss combatting algorithmic bias, we must ask: whose bias? AI algorithms are often deployed without correcting for biases present in their training data. But even when they are, which biases get prioritised is heavily influenced by the cultural and professional context of the developer.
The APAC region comprises an extremely diverse set of cultures, ethnicities, and religions. As a result of geographic and cultural distance, as well as the complexity of interactions between groups, developers at powerful firms in America are unlikely to address the particular forms of discrimination present within APAC countries, such as caste-class convergence in India.
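To make the "whose bias?" point concrete, here is a minimal sketch of what a subgroup audit might look like, written in Python purely for illustration. It was not presented at the meetup, and the group labels, disparity threshold, and toy data are our own assumptions; its main lesson is that such an audit can only surface gaps for the groups a developer has already chosen to encode.

```python
# Minimal sketch (our illustration, not from the meetup): a subgroup audit
# that compares a model's positive-prediction rates across groups. The group
# labels, threshold, and toy data below are hypothetical placeholders.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of positive predictions (1s) for each subgroup."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(rates, max_ratio=1.25):
    """Return pairs of groups whose selection rates differ by more than max_ratio."""
    flagged = []
    names = sorted(rates)
    for i, g1 in enumerate(names):
        for g2 in names[i + 1:]:
            low, high = sorted((rates[g1], rates[g2]))
            if high > 0 and (low == 0 or high / low > max_ratio):
                flagged.append((g1, g2))
    return flagged

# Toy example: "group_a" is selected three times as often as "group_b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["group_a"] * 4 + ["group_b"] * 4
rates = selection_rates(preds, groups)
print(rates)                    # {'group_a': 0.75, 'group_b': 0.25}
print(flag_disparities(rates))  # [('group_a', 'group_b')]
```

The crucial design choice sits outside the code: deciding which group labels exist at all. A team unaware of, say, caste-class convergence would never encode it, and the audit would report nothing wrong.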
- Decolonizing AI
Colonization has had a marked impact in shaping the APAC region, in many cases forming inequitable power structures that still exist today. AI has the potential to reinforce this legacy by concentrating power among those with access to large datasets for AI training and to the resources required to run computationally expensive algorithms. Prosperity generated by the AI economy should also benefit indigenous cultures across the region, even those with reduced access to technology.
Some concluding thoughts
The specific issues discussed all point to an important conclusion: developers of AI must identify where their systems could cause harm in the real world, and take steps to mitigate the negative impacts. The political and cultural context in which the technology is used, as opposed to where it is developed, is the most important factor in these considerations. Furthermore, the diversity of cultures and political systems in the APAC region means that solutions cannot be one-size-fits-all. Just as ethics vary by culture, so too should the ethics of AI.