Summary contributed by Connor Wright, who’s a 3rd year Philosophy student at the University of Exeter.
Link to full paper + authors listed at the bottom.
Mini-summary: Newman embarks on the lonely and brave journey of investigating how to put AI governance principles into action. To do this, 3 case studies are considered, covering ethics committees, publication norms, and intergovernmental agreement. While all 3 have their benefits, none of them is perfect, and Newman eloquently explains why. The challenges presented are numerous, but the way forward is visible, and that way is called practicality.
Full summary:
The inspiration behind Newman's paper lies in her observation that governance on AI principles focuses too much on the what, and not enough on the how. As a result, her paper aims to offer examples that suggest how best to operationalise the AI principles being discussed. To do this, she presents 3 different case studies, which I will now discuss in turn.
Case study 1: Can an AI Ethics advisory committee help advance responsible AI (Microsoft AETHER committee)?
A very current debate topic is whether a company's ethics boards can actually impact the work done by its engineers. Newman refers to Microsoft's AETHER committee as an attempt to do just that.
As a big company, Microsoft's moves in the AI world will have a significantly larger impact than those of smaller businesses, putting even more emphasis on making this known to key stakeholders. To act on this, Microsoft organised its principles around the engineering processes involved, including guidance on privacy and accountability. The committee (comprising 7 working groups, with about 23 members drawn from each major department) would then write reports on any AI concerns employees raised through the Ask-AETHER phone line. The line was made available to all departments within Microsoft, allowing the compiled reports to represent each concern raised. These reports would then be sent to senior management for review, keeping those at the top connected with what goes on elsewhere.
Qualms were nonetheless raised about the committee's impact when Microsoft won the $10 billion contract in 2019 to restructure the Department of Defense's cloud system. Microsoft's response was that nothing in the company's AI principles objected to involvement with the military, so long as the system was safe, reliable, and accountable. No official objection was ever published by AETHER, though the committee apparently did raise a policy concern at an executive retreat that same year.
Newman's takeaways accordingly centred on the welcome moves of establishing the Ask-AETHER line and involving the executives at the top. For the principles to be truly representative, all concerns must be taken into account and interdisciplinary departments must be involved. Microsoft did exactly that, but AETHER's true impact is still to be seen.
Case study 2: Does shifting AI publication norms reduce risk?
Here, Newman considers the staged-release publication process for AI systems, in complete contrast to the AI field's norm of an all-at-once release. The staged process has been examined as a possible way to prevent the use of AI software by malicious actors, as well as to give time to the policy-makers and human actors involved. Such a process gives policy-makers time to consider how best to approach the software and its societal effects, while human actors have time to reflect on their own usage of the product.
However, the process has been criticised for potentially stifling the speed and growth of the AI field through its delays. Admittedly, such a process can prevent potential harms, but it can also prevent potential benefits. Here, Newman uses OpenAI's GPT-2 language model as an example. OpenAI committed to releasing it in stages, and models with larger parameter counts and specs were released elsewhere before GPT-2 had been made fully available. Furthermore, once it was released, a doctor from Imperial College London repurposed GPT-2 to write accurate scientific abstracts in just 24 hours, something which could have occurred much earlier had the model been fully released.
Newman believes that open-source AI information is key to the field progressing, whether released in stages or all at once. Releasing in stages can help prevent certain harms, but it can also make it harder for independent researchers to properly evaluate a model without its full release. Altering publication norms can potentially help prevent malicious use of a product, but it can also prevent the product's proper evaluation in the first place.
Case study 3: Can a global focal point provide for international coordination on AI policy and implementation?
Newman takes advantage of the monumental OECD principles as her example of one of the only points of international agreement on AI principles. On May 22nd 2019, 42 countries spanning Asia, South America, Europe and Africa signed up to the OECD's intergovernmental principles on AI. The language in the principles that stands out to me includes words such as stewardship, plain easy-to-understand information, human-centeredness, and underrepresented. That 42 countries would agree on principles containing such strong and powerful language was never anticipated, and it proved an extremely positive step in the right direction.
Unfortunately, Newman acknowledges that the implementation of these principles will differ from country to country. Cultural considerations, the presence of infrastructure, and the economic situation will all affect which principles can be adopted and in what way. Bodies such as the OECD's AI Policy Observatory have been established to try to link practical instantiations of the principles with their desired goals, but how each country develops its AI strategy remains to be seen.
Newman's paper has provided us with real-life examples of how AI principles are being put into practice. Involving leaders at large corporations, as AETHER has done, can help move towards a greater cognizance of the implications of decisions made on AI. Such cognizance can then help shape publication norms to prevent malicious use of AI products, and help international governments do the same. While there are many challenges ahead, turning talk into action is certainly the way to overcome them.
Original paper by Jessica Cussins Newman: https://cltc.berkeley.edu/wp-content/uploads/2020/05/Decision_Points_AI_Governance.pdf