*Authors of original paper & link at the bottom
2019 has seen a sharp rise in interest surrounding AI Governance. This is a welcome addition to the lasting buzz surrounding AI and AI Ethics, especially if we are to collectively build AI that enriches people’s lives.
The AI Governance in 2019 report presents 44 short articles written by 50 international experts in the fields of AI, AI Ethics, and AI Policy. Each article highlights, from its author’s or authors’ point of view, the salient events in the field of AI Governance in 2019. Apart from the thought-provoking insights it contains, this report also offers a great way for individuals to familiarize themselves with the experts contributing to AI governance internationally, as well as with the numerous research centers, think tanks, and organizations involved.
Throughout the report, many experts mention the large number of AI ethics principles published in the past few years by organizations and governments attempting to frame how AI should be developed for good. Experts also highlight how, in 2019, governments slowly began moving from these previously established ethical principles towards more rigid policy measures. This, of course, is far from accomplished. Currently, many governments are holding consultations and partnering with organizations like MAIEI to help them develop their AI strategies. Authors of the articles featured in this report also suggest considerations they deem necessary for getting AI governance right. For one, Steve Hoffman (pp. 51-52) suggests policymakers take advantage of market forces in regulating AI. FU Ying (pp. 81-82) stresses the importance of a China-US partnership on AI, for which better relations between the two governments are necessary.
On another note, the release of progressively larger versions of OpenAI’s GPT-2 language model, and the risks surrounding its publication, are mentioned by many authors as a salient event of 2019. For many, this raised issues of responsible publishing in AI, as well as broader concerns about how AI may be used to do harm. The report even features an article written by four members of OpenAI discussing the event and its impact on the conversation around publishing norms in AI (pp. 43-44).
One expert, Prof. YANG Qiang, mentions new advances like federated learning, differential privacy, and homomorphic encryption, and their importance in ensuring that AI is used for the benefit of humanity (pp. 11-12). In his article, Prof. Colin Allen highlights a crucial but oft-forgotten element of good AI governance: strong AI journalism (pp. 29-30). He writes: “The most important progress related to AI governance during the year 2019 has been the result of increased attention by journalists to the issues surrounding AI” (p. 29). It is necessary for policymakers, politicians, business leaders, and the general public to have a proper understanding of the technical aspects of AI, and journalists play a large role in building public competence in this area.
It’s interesting to note that the report was released by the Shanghai Institute for Science of Science. Its editor-in-chief (Prof. SHI Qian) and one of its executive editors (Prof. Li Hui) are affiliated with this Institute, and the report features numerous Chinese AI experts. In light of this, it is particularly refreshing to see such a collaboration not only between Chinese and American or British experts, but also with other scholars from around the world. Efforts in AI governance can easily become siloed due to politics and national allegiances. This report, thankfully, sets these aside in favor of an international and collaborative approach.

In addition, twenty of the fifty experts featured are women, and many of them are at the beginning of their careers. This is commendable, considering the field of AI tends to be male-dominated. However, none of the fifty experts featured in the report are Black. This is unacceptable. There are numerous Black individuals doing innovative and crucial work in AI, and their voices are central to developing beneficial AI. I encourage our readers to engage with the work of Black AI experts. For one, start by listening to this playlist of interviews from the TWIML podcast, which features Black AI experts talking about their work. If a similar report on AI governance is put together next year, it must include the perspectives of Black AI experts.
Original paper by SHI Qian (Editor-in-Chief), Li Hui (Executive Editor), Brian Tse (Executive Editor): https://www.aigovernancereview.com/static/AI-Governance-in-2019-7795369fd451da49ae4471ce9d648a45.pdf