🔬 Summary by Angshuman Kaushik, Researcher in AI Policy, Governance and Ethics.
[Statements and Releases from The White House Briefing Room]
Overview: This write-up focuses on the conclusions reached at the inaugural meeting of the U.S.-EU Trade and Technology Council (“TTC”) held in Pittsburgh, Pennsylvania on September 29, 2021. It concerns only those aspects of the meeting that deal with the use of AI, its effects, and the areas of cooperation envisioned going forward.
The timing of the inaugural meeting of the TTC couldn’t have been better, with the Facebook saga unfolding before the world. I say this because the TTC’s Inaugural Joint Statement discusses outcomes in five key areas, one of them being the development and implementation of AI systems that are trustworthy and that respect universal human rights. Set in motion by President Joe Biden, European Commission President Ursula von der Leyen and European Council President Charles Michel at the US-EU Summit in June 2021, the TTC comprises 10 Working Groups, with AI falling within the Technology Standards Working Group. The importance of the TTC can be gauged from the fact that both the US and the EU have appointed senior officials to spearhead it. The US side is led by Secretary of State Antony Blinken, US Trade Representative Katherine Tai and Secretary of Commerce Gina Raimondo, while Commissioner for Competition Margrethe Vestager and Commissioner for Trade Valdis Dombrovskis represent Brussels.
Statement on AI
Turning to the material contents of the Joint Statement with regard to AI, it expresses both sides’ belief in the potential of AI to bring substantial benefits to their societies and to tackle various challenges. One significant aspect of the Joint Statement is the acknowledgement by both the US and the EU of the risks posed by AI-enabled technologies that are not developed and deployed responsibly, or that are misused. Further, both sides assert their willingness and intention to develop and implement trustworthy AI, and their commitment to a human-centered approach that buttresses shared democratic values and respects universal human rights. The key here is the choice of words: trustworthy AI and a human-centered approach. The EU has already shown the way, publishing the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (the “Artificial Intelligence Act”) on April 21, 2021. The Proposal delivers on President Ursula von der Leyen’s political commitment that the Commission would bring forward legislation for a coordinated European approach to the human and ethical implications of AI. It was preceded by the Commission’s White Paper on Artificial Intelligence – A European approach to excellence and trust, published on February 19, 2020, which set out policy options for achieving the twin goals of promoting the uptake of AI and addressing the risks associated with certain uses of the technology.
The Proposal aims to deliver the second of those goals, the development of an ecosystem of trust, by laying down a legal framework for trustworthy AI. Moreover, both the US and the EU are founding members of the Global Partnership on AI, which brings together like-minded partners seeking to support the responsible development of AI grounded in human rights and societal benefit. The Joint Statement also voiced significant concern regarding social scoring systems deployed by authoritarian governments (without naming any country in particular) with the aim of implementing social control at scale. Both sides reiterated that such systems pose threats to fundamental freedoms and the rule of law, including through the silencing of speech, the punishment of peaceful assembly and unlawful surveillance. The Statement also emphasized that policy and regulatory measures should be based on, and proportionate to, the risks posed by the different uses of AI. The US noted the European Commission’s proposal for a risk-based regulatory framework for AI, and the fact that the EU supports a number of research projects on trustworthy AI as part of its AI strategy. The EU, in turn, noted the US government’s development of an AI Risk Management Framework, as well as ongoing projects on trustworthy AI under the US National AI Initiative. Finally, the Joint Statement reiterated both sides’ commitment to work together to foster responsible stewardship of trustworthy AI and to provide research-based methods for advancing trustworthy approaches to AI that serve people in beneficial ways.
Areas of cooperation
The Statement also sets out areas of cooperation between the US and the EU, the objective being to translate shared values into tangible action for mutual benefit. The commitment to responsible stewardship of trustworthy AI sits at the top of the agenda for both sides, as they seek to develop a mutual understanding of the principles underlying ‘trustworthy and responsible AI’. To that end, they intend to discuss measurement and evaluation tools and activities for assessing the technical requirements of trustworthy AI, concerning, for example, accuracy and bias mitigation. They also expressed a desire to collaborate on projects furthering the development of ‘trustworthy and responsible AI’, exploring better use of machine learning and other AI techniques toward desirable impacts. This quite clearly indicates that both sides are concerned about the damaging effects of certain algorithms on society. The US and the EU further intend to explore cooperation on AI technologies designed to enhance privacy protections, in full compliance with their respective rules, with additional areas of cooperation to be defined through dedicated exchanges. They also stressed upholding and implementing the OECD Recommendation on AI, and intend to jointly undertake an economic study examining the impact of AI on the future of their workforces, with attention to outcomes in employment, wages and the dispersion of labor market opportunities. Finally, they expressed a willingness to inform approaches to AI consistent with an inclusive economic policy that ensures the benefits of technological gains are broadly shared by workers.
Between the lines
As stated above, this meeting assumes a great deal of significance given the developments taking place globally against the detrimental effects of AI on individuals in particular and society in general. It is undoubtedly a herculean task to root out the bias and discrimination creeping into AI systems, but someone has to make a start somewhere, and the problem is exacerbated by the interpretability and explainability issues associated with certain ‘black box’ algorithms. Governments around the world, whether individually or collectively (as is the case here), can only enact policies and laws to enforce responsible and trustworthy AI; self-regulation by the entities accountable for the development and deployment of AI remains crucial. The latest example of government stepping in is the non-binding resolution passed by the European Parliament on October 6, 2021 calling for a ban on police use of facial recognition technology in public places and on predictive policing. It also called for a ban on private facial recognition databases in law enforcement and supported the European Commission’s recommendation to put an end to social scoring systems. As for self-regulation, Frances Haugen (the Facebook whistleblower), in her testimony before the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security within the Committee on Commerce, Science and Transportation, stated that “they [Facebook] have 100% control over their algorithms”, a reminder that the companies deploying AI exercise complete control over the systems they build. It is therefore imperative that governments and corporations join hands to arrest the spread of bias, discrimination, hatred and societal disintegration caused by “toxic algorithms”.