Summary contributed by Abhishek Gupta (@atg_abhishek), Founder and Principal Researcher of the Montreal AI Ethics Institute. His book Actionable AI Ethics will be published in 2021.
*Link to original paper + authors at the bottom
Overview: This research study seeks to determine whether there is indeed an adversarial dynamic between the tech industry and the Department of Defense (DoD) and other US government agencies. It finds wide variability in the tech industry's perceptions of the DoD, with willingness to work together depending on the area of work and on prior exposure to DoD funding and projects.
Key findings
- Most AI professionals are actually positive or neutral about working with the DoD. This stands in stark contrast to the common media portrayal of tech workers' attitudes.
- Doing good and working in interesting research areas were the most compelling reasons to engage with the DoD.
- Discomfort primarily arose from a lack of clarity about how the DoD might use their work and from the potential for that work to cause harm.
- Unsurprisingly, people were more willing to work on humanitarian projects than on war efforts or back-office applications (the reluctance around back-office work was surprising to me).
- When DoD funding was to be used solely for basic research, many people were willing to engage.
- Academia and respondents' own employers received the highest trust ratings on whether AI would be developed with the public interest at heart.
- Those who had prior exposure to DoD funding and work held more positive views of the DoD. I'll talk a bit more about this later, but I think there is an unaddressed bias that needs to be considered in this context.
Ultimately, the relationship between the tech industry and the DoD does not fit the binary framing commonly portrayed in the media; depending on the context, people's willingness to engage with this kind of work varies.
What is the key problem this study is trying to address?
There are many conflicting narratives about the tech industry's attitudes towards the DoD and its willingness to engage. Clarifying these narratives, and gaining a better understanding of what the reality is and which factors shape it, will help bridge the gaps between US government agencies and the tech industry.
The DoD in particular can drive large-scale change by virtue of its funding and market-making power: many scientific advances that academia and industry might be unwilling to pursue on their own can, when funded by the DoD and taken up by willing researchers, lead to large societal benefits. Essentially, a productive, open, and honest dialogue between the two is essential to leveraging these opportunities.
Survey questions
The questions were centred on eliciting people's attitudes towards the DoD, how they would respond to a hypothetical project (both in terms of engagement and their relationship with their employer), what factors might shift their perceptions, and finally their understanding and perception of the different US agencies and their trust in US political institutions.
- A caveat the authors point out is that the survey had a fairly low response rate (~4%), so they make no guarantees about representativeness of the general population and call for further research to build on the results of this study.
- The study also sourced respondents who self-identified online as AI practitioners and were mostly from major tech hubs like Boston, SF, and Seattle, so it is perhaps not fully representative of all the places where such collaborations might be taking place.
Discussion of results
- One of the most interesting findings for me was how much more positively people perceived the DoD when they had prior exposure to working with it.
- It might be that, through their interactions with the DoD, people genuinely found the work highly meaningful, bucking the narrative that most DoD applications of AI are war-related and potentially harmful.
- It might also be that participants normalized such work, even with some ethical consequences, through repeated exposure and working on it over time.
- The ability to work on big, cutting-edge research and the potential to do a lot of good through such engagements were among the most compelling reasons to engage with the DoD.
- Another consideration is that the DoD can provide access to otherwise inaccessible technology and data, which can help surface envelope-pushing insights.
- The most prominent concerns identified by respondents were the potential for misuse of the research, a lack of ethics, and the harm that advances made in such collaborations might inflict on people.
- Alleviating these concerns through transparent and robust governance can help both parties, especially by building researchers' trust in the DoD.
- Specifying outcomes up front and ensuring they are adhered to might help alleviate some of these concerns.
- Occasional restrictions on being able to publish results, and a lack of complete control over the direction of the research, were also cited as reasons for not engaging.
- People's general views of the DoD greatly shaped both the positive and the negative reasons they found compelling in a potential engagement. Thus, strong priors can shape the willingness of actors from the tech industry to engage.
- While one might imagine that framing the work in the context of threats to the US from foreign adversaries would increase willingness, survey respondents did not indicate this.
- Most respondents weren't aware of the DoD's published ethical AI principles.
- This is perhaps something that needs to be addressed so that we can have balanced discussions on the impact of these technologies and the measures being used to mitigate harmful consequences.
- The differing motivations among respondents were an important finding of this study.
- When people had a positive perception, the potential to mitigate foreign threats to the safety of the US was a strong motivating factor.
- When people had a negative perception, engaging in non-combat related work was a strong motivating factor.
- Unwillingness was also not the default among AI practitioners, contrary to what popular media most often communicates.
- In terms of the actions employees would take when presented with an opportunity to engage with the DoD: for the hypothetical humanitarian project, most would choose to engage; for the battlefield project, most would choose not to.
- An interesting insight here was that people proactively supported projects they believed in more frequently than they actively condemned projects they didn't believe in.
- In terms of the discussions around lethal autonomous weapons, most people were somewhat familiar with the issues in the larger ecosystem, but not specifically as they relate to the DoD.
- In choosing whether to work on something, many professionals took the social impact of their work into consideration, which is a good sign for the health of the ecosystem.
- In terms of trust, intergovernmental organizations and the EU ranked higher than US government agencies and tech companies. The Chinese government received the least amount of trust from the respondents.
- These perceptions are not without flaws: I think the way the media portrays each of these actors has a huge impact.
- Trust in national governments was high when it came to who should be entrusted with the responsibility to manage the consequences of AI.
Conclusion
My takeaway from the study is that we need more granular and informed discussions about the relationship between the tech industry and government agencies. Ill-informed characterizations propagated by media outlets, sometimes based on anecdotal evidence, have the potential to do tremendous harm by creating self-fulfilling prophecies that strain the relationship between the two.
What does this mean for Actionable AI Ethics?
- Straying away from research with government agencies based solely on perceptions formed from media discourse is an inadequate basis for a decision. Actively recognizing your own biases and seeking out information to gain a balanced understanding will be essential to supporting your case, within your organization, for engaging or not engaging.
Questions that I am exploring
If you have answers to any of these questions, please tweet and let me know!
- Why is there such a high level of miscommunication in popular media, which this study debunks to a certain extent?
- How can we do better at having balanced conversations around DoD-supported research that aren't polarized between two extremes?
Potential further reading
A list of papers related to this one that I think might be interesting.
- The AI-cyber nexus: implications for military escalation, deterrence and strategic stability
- Military uses of AI
Original paper by Catherine Aiken, Rebecca Kagan, Michael Page: https://cset.georgetown.edu/research/cool-projects-or-expanding-the-efficiency-of-the-murderous-american-war-machine/