Summary contributed by Abhishek Gupta (@atg_abhishek), Founder and Principal Researcher of the Montreal AI Ethics Institute. His book Actionable AI Ethics will be published in 2021.
This piece is part of a series of paper summaries that you can also find on his website.
*Link to original paper + authors at the bottom
Overview: This paper takes a critical look at the relationships between countries with advanced AI capabilities and recommends grounding discussions on AI capabilities, limitations, and harms in traditional avenues of transnational negotiation and policy-making. Instead of framing AI development as an arms race, it advocates cooperation to ensure a more secure future as this technology becomes more widely deployed, especially in military applications.
What are some of the key problems?
- Unsurprisingly, given the popularity of AI, military leaders are often excited by the potential of deploying this technology without fully considering the risks that might arise from its use.
- Unexpected failures and emergent behaviour in a highly volatile environment like war present very real concerns.
- AI systems are vulnerable to new vectors of attack, and the emerging domain of machine learning security is important to include in these discussions.
My book Actionable AI Ethics (coming in 2021) will walk readers through these ideas in practice.
- The ability to use AI systems in warfare lends advanced capabilities to non-state actors who might not adhere to instruments like Article 36, which requires checking whether new weapons are consistent with the Geneva Conventions, posing an additional risk. State actors, by contrast, typically do follow these laws.
- From a policy perspective, a tradeoff that comes up frequently is that such coordination efforts can minimize miscalculation of each other’s capabilities and reduce inadvertent escalation, but they might also mean that we create more robust AI systems that are quicker and more effective in their deployment.
How do we create pragmatic engagement?
- Developing a shared vernacular: As I’ve pointed out in previous work with a colleague, published at the Oxford Internet Institute, there is a dire need for consistency in how we discuss the risks from AI systems. Without consensus and shared understanding, we risk talking past each other.
- Notably, the Chinese approach here has included societal impacts in addition to technical considerations in the military use of AI.
- Shared evaluation of each other’s work: Even in trying times of geopolitical tension, one can embark on carefully selected initiatives to translate and interpret work being done by others in order to develop a shared understanding. The US-USSR collaboration on the Apollo-Soyuz project during the Cold War stands as an example of how diplomacy can be advanced through scientific endeavors.
- In particular, this has implications for the kind of collaboration that might take place between China and the US, the two major forces in the use of AI in a national security context. Translating each other’s work will help avoid misunderstanding.
- Utilizing Track 2 and Track 1.5 mechanisms in addition to primary channels is an effective approach to defusing tensions and discussing policies and security considerations that might get mired among other issues in Track 1 discussions.
What are Track 1.5 and Track 2?
At major policy negotiations and conferences, these tracks are venues where supplemental agendas are discussed, often with domain experts and those assisting the official delegates present. They offer an avenue for advancing goals like the ones discussed here, which are sometimes nascent and not yet included in the primary agendas of the gathering.
Actions that people can take
- Creating shared standards for the testing, evaluation, verification, and validation (TEVV) of these systems, so that capabilities and limitations can be compared across deployments, is essential.
From an AI lifecycle perspective, TEVV is something that is covered in the Actionable AI Ethics book for those who are looking to gain a deeper understanding of this concept.
An example benefit of this would be in judging whether the systems can adequately distinguish military from civilian targets, and the degree to which they can assure confidence in their results.
- While the inclusion of AI in nuclear security can have benefits, such as higher precision in targeting, we must also be conscious of destabilization caused by the inherent uncertainty in the use of these systems. This further strengthens the case for effective TEVV approaches to be adopted across countries.
- A shared understanding of how different regimes weight false positives relative to false negatives will also help calibrate the abilities of these systems in their usage across different regions (see the sketch below).
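To make the last two bullets concrete, here is a minimal, purely illustrative Python sketch of how two regimes that weight false positives and false negatives differently would score the exact same system outputs. The function names, toy data, and weights are my own assumptions, not drawn from the paper; a shared TEVV standard would need to make such weightings explicit for reported performance to be comparable across countries.

```python
# Illustrative sketch only: toy data and weights are assumptions, not from the paper.
# Labels: 1 = system flags a military target, 0 = civilian / no target.

def confusion_counts(y_true, y_pred):
    """Count false positives and false negatives for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp, fn

def weighted_cost(y_true, y_pred, fp_weight, fn_weight):
    """A simple cost-sensitive score: lower is better."""
    fp, fn = confusion_counts(y_true, y_pred)
    return fp_weight * fp + fn_weight * fn

# Toy evaluation data: ground truth vs. the system's predictions
# (2 false positives, 1 false negative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 1, 0, 1, 0]

# Regime A treats flagging a civilian object as a target (false positive)
# as far costlier; regime B weights missed military targets (false negative) more.
print(weighted_cost(y_true, y_pred, fp_weight=10.0, fn_weight=1.0))  # Regime A: 21.0
print(weighted_cost(y_true, y_pred, fp_weight=1.0, fn_weight=10.0))  # Regime B: 12.0
```

The point is not the arithmetic but that the same system looks very different under the two weightings, which is why a shared understanding of those weights matters for cross-country comparison.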
An Open Skies revival for AI systems
The Open Skies Treaty has faced considerable flak from the policy community, and a recent announcement from the US represents an unfortunate development in this space. But in terms of soft enforcement and monitoring, it is an essential mechanism for accountability. It also serves to reinforce a more representative understanding of the capabilities and limitations of AI across different countries.
Better communication
In a field like AI that prides itself on open-source and open-access practices in research and development, we ought to extend this openness to policy as well. There is some risk that such an initiative might be one-sided, but an iterative approach to building trust can help assess its viability.
Lessons learned
- Bilateral sessions sometimes have points of friction that are hard to overcome; utilizing multilateral fora can help ease them.
- Starting with small, concrete, tractable issues will help to incrementally build trust to tackle larger issues later on.
- Gathering a diverse set of stakeholders appropriate for the stage of the conversation is important, rather than relying on a blanket set of people to approach and talk to.
- Having a high degree of transparency in the operation and goals of these initiatives, along with a firm expectation of reciprocity, will also contribute to their success.
- Mitigating counter-intelligence risks is also important, especially for those who are invited to these fora.
- Track 2 conversations should become routine rather than one-off events, and tracking their efficacy through metrics and outcomes can help justify their existence.
- Related to the above point, having tight feedback loops between Track 1 and the other tracks will help keep each side abreast of the relevant issues and their severity.
- The seriousness of certain issues, especially those that might not be raised at Track 1 in the service of achieving other goals, shouldn’t deter their discussion in the other tracks. This will be essential where there are human rights implications, for example the persecution of Uighurs in China aided by the use of facial recognition technology.
Conclusion
There are many shortcomings in the way AI safety is currently discussed at the international level. Without more coordinated efforts that build on existing policymaking and negotiation instruments, we risk creating a fragmented ecosystem, with unintended consequences for how countries assess each other’s AI capabilities and mitigate the risks that arise from their use.
What does this mean for Actionable AI Ethics?
- As a practitioner, this means that we have a responsibility to communicate more clearly the impacts of our work to those involved in policymaking, at both the domestic and international levels. Specifically, I envision working with others to create a shared commons, something like the Living Dictionary, that can help with this.
- If you are invited to be a part of some of these Track 1.5 and Track 2 conversations, please do engage as they are great ways of making an impact by sharing the real capabilities and limitations of AI systems.
Questions that I am exploring
If you have answers to any of these questions, please tweet and let me know!
- How can the policymaking community better tap into the network of practitioners to invite the appropriate stakeholders to these conversations?
- What are the barriers for technical folks to understand some of the policy implications of their work?
Potential further reading
A list of papers related to this one that I think might be interesting.
- Rising Powers, Responsibility, and International Society
- Understanding the Essence of Artificial Intelligence: Towards Ecological Safety of AI in Human Society
Please note that this is a wish list of sorts; I haven’t read through the papers listed here unless specified otherwise (if I have read them, the entry will link to the relevant page).
Original paper by Andrew Imbrie, Elsa B. Kania: https://cset.georgetown.edu/research/ai-safety-security-and-stability-among-great-powers-options-challenges-and-lessons-learned-for-pragmatic-engagement/