🔬 Research Summary by Angshuman Kaushik, Researcher in AI Policy, Governance and Ethics.
[Original paper by Araz Taeihagh]
Overview: The various applications of AI offer opportunities for increasing economic efficiency and cutting costs, but they also present new forms of risk. Therefore, to maximize the benefits derived from AI while minimizing its threats, governments worldwide need to understand the scope and depth of the hazards it poses, and develop regulatory processes to address these challenges. This paper describes why the governance of AI should receive more attention, considering the myriad challenges it presents.
The internet is full of websites catering to the diverse needs of its users. Most of these websites use complex machine learning algorithms to make the browsing experience 'seamless' (as marketers love to call it). For example, content recommendation algorithms power many websites and play a considerable role in shaping the 'thought processes' of their users. Apart from predicting and evaluating human behaviour, these algorithms are also used for profiling and ranking people. However, there have been instances when content recommendation algorithms have been criticized for leading and exposing users to extreme content. Since the modus operandi of these algorithms is to engage users and keep them on the platform (a 'dollars for eyeballs' mentality), they create a 'feedback loop' by suggesting content that users have already expressed interest in. The consequence is that users migrate from milder to more extreme content. The situation becomes grave when, for example, the platform becomes fertile ground for an insurrectionist group to broadcast propaganda to young and impressionable minds, with devastating consequences. Hence, in such scenarios, governments need to step in and keep the system within bounds by formulating effective policies and regulations.

This paper starts off with an introduction to the all-pervading and omnipresent AI, replete with the value-laden decisions it makes for society, be it in clinical decision support systems, policing systems, or the provision of personalized content. It then enters the difficult territory of unexpected consequences and risks (in the form of bias, discrimination, etc.) associated with the use of AI systems, and proceeds to address the challenges encountered in its governance and the steps forward.
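The feedback loop described above can be illustrated with a deliberately simplified simulation. This is a hypothetical sketch, not the method of any real platform: it assumes each item has a single 'extremity' score, that engagement peaks on items slightly more extreme than the user's current taste, and that watching an item shifts the user's taste toward it.

```python
# Hypothetical illustration of a recommender feedback loop.
# Assumptions (not from any real system): content extremity is a
# single number in [0, 1], and engagement is highest for items
# slightly MORE extreme than the user's current preference.

catalog = [i / 100 for i in range(101)]  # items by extremity score

def recommend(preference):
    """Pick the item predicted to maximize engagement.

    Under our assumption, that is the item closest to a point a
    little above the user's current preference.
    """
    target = min(preference + 0.05, 1.0)
    return min(catalog, key=lambda item: abs(item - target))

preference = 0.1  # the user starts out with mild content
for step in range(30):
    item = recommend(preference)
    # Watching the recommended item drags the user's taste toward it.
    preference = 0.7 * preference + 0.3 * item

# After repeated rounds, the preference has drifted well above its
# starting value of 0.1 -- milder tastes ratchet toward the extreme.
print(round(preference, 2))
```

Each round nudges the preference upward by a small amount, and because the recommender always targets a point just beyond the current taste, there is no equilibrium short of the most extreme content, which is the dynamic the paragraph above describes.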
Conceptions of AI date back to earlier efforts to develop artificial neural networks that replicate human intelligence, which can be understood as the ability to interpret and learn from information. Present AI capabilities have expanded to include computer programs that can learn from massive amounts of data and make decisions without human guidance, commonly referred to as machine learning (ML) algorithms. Although these algorithms are fast and efficient, there is broad consensus that they still fall short of human cognitive abilities, and most AI systems that have been successful so far belong to the category of 'narrow' or 'weak' AI. As per the researcher, some of the incentives for deploying AI include increasing economic efficiency and quality of life, meeting labor shortages, and tackling aging populations.
Understanding the risks of AI
One of the biggest challenges faced by most AI systems is what is widely referred to as 'corner cases', i.e., unexpected situations that the system has not been trained to handle. Further, the decision-making autonomy of AI significantly reduces human control over its decisions, creating new challenges for ascribing liability for the harms it imposes. Moreover, given the value-laden nature of the outcomes reached by algorithms, AI systems can exhibit behaviours that conflict with societal norms and values, prompting concerns about the ethical issues that can arise from their adoption. The paper also highlights the hazards of data privacy violations, surveillance, unemployment and social instability arising from the deployment of AI applications.
Challenges to AI Governance
According to the paper, governments face innumerable difficulties in designing and implementing effective policies to govern AI because of its high degree of inherent opacity, uncertainty and complexity, which makes it challenging to ensure accountability, interpretability, transparency and explainability. Another key issue in the debate on AI governance is data governance, as multiple organizational and technical challenges impede effective control over data and the attribution of responsibility for data-driven decisions made by AI systems. In addition, existing regulatory and governance frameworks are ill-equipped to manage the unique and novel societal problems introduced by AI systems. Regulators, being generalists, struggle to comprehend the subtle nuances of the ever-evolving AI landscape. An information asymmetry and a chasm is thus created between tech companies and regulators, which proves to be a major hindrance for the latter in formulating policies and regulations specific to the issue at hand. Further, considering the issues associated with 'hard' regulatory frameworks, the discussion in the paper veers towards the adoption of self-regulatory or 'soft law' approaches, espoused by various industry bodies and governments to govern AI. 'Soft law' approaches refer to non-binding norms that create substantive expectations but are not directly enforceable. For example, industry bodies such as the IEEE have published their own ethical design standards, and the High-Level Expert Group on AI formed by the European Commission has released its Ethics Guidelines for Trustworthy AI. The paper also questions the efficacy of such self-regulatory initiatives and standards, given their voluntary nature.
Another challenge faced by governments is the significant influence exerted by big technology companies over the formulation and implementation of effective AI policies, through their lobbying efforts and their inclusion in government-formed AI expert groups. Studies have highlighted the risk of regulatory capture by AI developers, whose substantial informational advantages make their technological expertise particularly valuable to regulators. The paper also calls for more research in the field to ensure greater inclusivity and diversity in AI governance.
Steps forward for AI Governance
According to the author, as AI is still developing, with the potential to grow more salient and diverse, the complexity of its challenges suggests that AI decision-making needs to be carefully conceptualized according to its context of application, and these framing processes should be subject to public debate. In fact, there are increasing calls for the adoption of innovative governance approaches, such as adaptive governance and hybrid or 'de-centered' governance, to address the challenges posed by the complexity and uncertainty of AI systems. A defining characteristic of adaptive and hybrid governance is the diminished role of the government in controlling the distribution of resources in society. Another area of emphasis is flexibility, which is imperative to enable diverse groups of stakeholders to build consensus around the norms and trade-offs in designing AI systems, and for global AI governance to be applicable across different geographical, cultural and legal contexts while remaining aligned with existing standards of democracy and human rights. Further, the paper calls for learning from the experience of governing previous emerging technologies, such as the internet, nanotechnology, aviation safety and space law. Reference is also made to an emerging body of literature that proposes governing AI systems through their design, where social, legal and ethical rules can be enforced through code to regulate the behaviour of AI systems. According to the author, the trend common to recent studies and their proposed frameworks for AI governance is an emphasis on building broad societal consensus around AI ethical principles and ensuring accountability, but studies are needed that examine how these frameworks can be implemented in practice.
He goes on to refer to different frameworks, such as the society-in-the-loop framework, in which society is first responsible for finding consensus on the values that should shape AI and on the distribution of benefits and costs among different stakeholders. Another approach involves centralization and cross-cultural cooperation to improve coordination among national approaches. However, the various AI governance frameworks still require more concrete specifications on how they can be implemented in practice, and on identifying the parties in government responsible for leading different aspects of AI governance.
Between the lines
The paper quite methodically dissects the trials and tribulations of governing AI. The solutions it envisions for governments to tackle the perils associated with AI systems also seem workable, to a large extent. In fact, the paper lays out a feasible and pragmatic path for governments to follow while formulating their policies and regulations concerning AI. More importantly, its findings are crucial, considering the situation created by certain unbridled AI systems.