Top-level summary: This white paper from the World Economic Forum is a strong getting-started guide for anyone looking to implement governance and regulatory mechanisms for AI systems. While many of its recommendations are high-level, it sets out the landscape clearly and offers several mini-frameworks for reasoning about the tensions one encounters when implementing governance for AI systems. One of the report's overarching themes is that an AI system must be viewed as embedded in a larger socio-technical ecosystem, so effective interventions will involve technical approaches alongside legal and policy-making ones. When it comes to policy making, it is important to position policies within the local culture so that they align with what that community values. Because of the diversity of ethical principle sets, the report advocates an approach that draws on cross-sectoral expertise and leans on those who have context and knowledge of the domain. Finally, the report stresses the importance of balancing the tradeoff between regulating early enough to catch and mitigate harms, and regulating overzealously without understanding the technology, which risks stifling innovation while failing to create meaningful regulation.
The white paper starts by highlighting the existing tensions in defining AI, as many parties are advancing definitions that meet their own needs. One of the most commonly accepted framings describes AI systems as those able to adapt their behavior in response to interactions with the world, independent of human control. Another popular framing is that AI is something that mimics human intelligence, a constantly shifting goalpost: what was once perceived as AI becomes everyday technology once it is sufficiently integrated into and accepted by society.
One thing that really stands out in the definitions section is how ethics is defined, a departure from many similar documents. The authors describe ethics as a set of principles of morality, where morality is an assemblage of rules and values that guide human behavior together with principles for evaluating that behavior. They take a neutral stance on the definition, a far cry from framing ethics as an inherently positive inclination of human conduct, which allows for diversity in embedding ethics into AI systems in concordance with local context and culture.
AI systems present many advantages, with which most readers are already familiar given how widely AI's benefits are touted in everyday media. One of the risks of AI-enabled automation is the potential loss of jobs; the authors compare this with historical cases in which the elimination of some tasks and jobs created new ones, while others were permanently lost. Reports give varying estimates of the labor impacts, and there is not yet a clear consensus on the actual effects this might have on the economy.
From a liability perspective, there is still debate over how to account for the damage such systems might cause to human life, health, and property. A strict product liability regime like Europe's offers some guidance, but most regimes have no specific liability allocations for the independent actions and decisions of autonomous systems, meaning users face coverage gaps that can expose them to significant harm.
Because of the complexity of deep learning systems, their internal representations are not human-understandable, so the systems lack transparency; this is often called the black box effect. It is harmful because, among other negative impacts, it erodes users' trust.
Social relations are altered as more and more human interactions are mediated and governed by machines: consider how our newsfeeds are curated, the toys children play with, and robots caring for the elderly. This decreased human contact, along with the increasing capability of machine systems (visible, for example, in how disinformation spreads), will tax humans with constantly having to evaluate their interactions for authenticity, or worse, lead to a relegation of control to machines to the point of apathy.
Since the current dominant paradigm in machine learning is supervised learning, access to data is crucial to these systems' success, and where there are insufficient protections for personal data, this can lead to severe privacy abuses. Self-determination theory holds that human autonomy is important for proper functioning and fulfillment, so overreliance on AI systems to do our work can erode personal autonomy and produce a sense of digital helplessness. Digital dementia is the cognitive equivalent: relying on devices to store phone numbers, look up information, and so on will, over time, lead to a decline in cognitive abilities.
The echo chamber effect is fairly well studied, owing to the successful use of AI technologies to promulgate disinformation to the masses during the 2016 US presidential election. Because these systems scale easily, the negative effects are multiplicative and have the potential to become runaway problems.
Given that AI systems are built on top of existing software and hardware, errors in the underlying systems can still cause failures at the level of the AI system. Moreover, given their statistical nature, AI systems' behaviour is inherently stochastic, which introduces variability in responses that is difficult to account for; flash crashes in financial markets are one example. For critical systems that require safety and robustness, much remains to be done to ensure reliability.
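To make the stochasticity point concrete, here is a minimal sketch (not from the white paper) in which a toy "training" routine yields a different model on every unseeded run, while pinning the seed makes the behaviour reproducible:

```python
import random

def train_tiny_model(seed=None):
    # Stand-in for stochastic training: the "learned" threshold varies per run.
    rng = random.Random(seed)
    return rng.uniform(0.4, 0.6)

# Unseeded runs differ, so downstream behaviour varies from run to run.
print(train_tiny_model(), train_tiny_model())

# Pinning the seed makes the result reproducible, which helps in testing,
# though it does not remove stochasticity once the system is deployed.
print(train_tiny_model(seed=42) == train_tiny_model(seed=42))  # True
```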
Building ethics compliance by design can take a bottom-up or a top-down approach. The risk with a bottom-up approach is that, by observing examples of human behaviour and extracting ethical principles from them, you get what is common rather than what is good for people. Hence, the report advocates a top-down approach in which desired ethical behavior is directly programmed into the system.
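As a rough illustration of what "directly programmed" can mean in practice (the rules and attribute names here are hypothetical, not from the report), a top-down system might encode ethical constraints as explicit checks applied to every proposed action:

```python
from typing import Callable, Dict, List

# Each rule maps a proposed action (described as a dict of attributes) to
# True (permitted) or False (forbidden).
EthicsRule = Callable[[Dict[str, bool]], bool]

# Hand-written, top-down constraints; a bottom-up system would instead try
# to infer such rules from observed human behaviour.
RULES: List[EthicsRule] = [
    lambda action: not action.get("shares_personal_data", False),
    lambda action: action.get("human_override_available", False),
]

def permitted(action: Dict[str, bool]) -> bool:
    """An action is permitted only if every encoded rule allows it."""
    return all(rule(action) for rule in RULES)

candidate = {"shares_personal_data": False, "human_override_available": True}
print(permitted(candidate))  # True
```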
Casuistic approaches to embedding ethics into systems work well in simple scenarios, such as in healthcare when the patient has a clear do-not-resuscitate directive. But in cases where there is no directive and it is not possible to obtain one from the patient, such an approach can fail, requiring either that programmers embed rules in a top-down manner or that the system learn from examples. In a high-stakes domain like healthcare, however, relying on learning from examples may be ill-advised because the samples are skewed and limited in number.
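A minimal sketch of that casuistic logic, under the simplifying assumption of a single yes/no directive (the function and its fallback value are illustrative, not from the report):

```python
from enum import Enum
from typing import Optional

class Directive(Enum):
    RESUSCITATE = "resuscitate"
    DO_NOT_RESUSCITATE = "do_not_resuscitate"

def resuscitation_decision(directive: Optional[Directive]) -> str:
    # Simple scenario: an explicit directive resolves the case directly.
    if directive is not None:
        return directive.value
    # No directive and none obtainable: the casuistic approach fails here,
    # and the decision must come from top-down rules or learned examples.
    return "escalate_to_clinician"

print(resuscitation_decision(Directive.DO_NOT_RESUSCITATE))  # do_not_resuscitate
print(resuscitation_decision(None))                          # escalate_to_clinician
```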
A dogmatic approach would also be ill-advised: a system that slavishly follows a particular school of ethical beliefs may make decisions that are unethical in certain scenarios. Ethicists draw on several schools of thought when addressing a particular situation to arrive at a balanced decision, and it will be crucial to consult a diversity of stakeholders so that the nuances of different situations are captured well. The WEF is working with partners on an "ethical switch" that would give a system the flexibility to operate under different schools of thought depending on the demands of the situation. The report also proposes the potential of a guardian AI system that monitors other AI systems for compliance with different sets of AI principles.
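One way to picture the "ethical switch" is as a strategy pattern that routes the same decision through different ethical frameworks depending on context. This is a speculative sketch; the report does not specify a design, and all class names and scoring attributes here are assumptions:

```python
from abc import ABC, abstractmethod
from typing import Dict

class EthicalFramework(ABC):
    @abstractmethod
    def permits(self, action: Dict[str, float]) -> bool:
        ...

class Consequentialist(EthicalFramework):
    def permits(self, action: Dict[str, float]) -> bool:
        # Permit the action if its expected net benefit is positive.
        return action.get("expected_benefit", 0.0) > action.get("expected_harm", 0.0)

class Deontological(EthicalFramework):
    def permits(self, action: Dict[str, float]) -> bool:
        # Permit the action only if it violates no encoded duty.
        return action.get("duty_violations", 0.0) == 0.0

class EthicalSwitch:
    """Routes a decision through the school of thought the context demands."""
    def __init__(self, frameworks: Dict[str, EthicalFramework]) -> None:
        self.frameworks = frameworks

    def permits(self, context: str, action: Dict[str, float]) -> bool:
        return self.frameworks[context].permits(action)

switch = EthicalSwitch({"medical": Deontological(), "logistics": Consequentialist()})
print(switch.permits("medical", {"duty_violations": 0.0}))  # True
```

A guardian AI, as the report describes it, could sit one level above such a switch, auditing whether deployed systems' decisions stay consistent with whichever principle set governs them.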
Given that AI systems operate in a larger socio-technical ecosystem, we need to tap into fields like law and policy making to find effective ways of integrating ethics into AI systems, part of which can involve creating binding legal agreements that tie into economic incentives. While policy making and law are often seen as slow to adapt to fast-changing technology, they offer a variety of benefits; for example, services that adhere to stringent privacy and data protection regulations can earn higher customer trust. This can serve as a competitive advantage and counter some of the innovation barriers regulations impose. Another concern with these instruments is that they are limited by geography, leading to a patchwork of regulation applying to a product or service that spans several jurisdictions. Other instruments to consider include self-regulation, certification, bilateral investment treaties, contract law, soft law, and agile governance.
The report highlights initiatives by the IEEE and the WEF in creating standards documents. The public sector, through its enormous spending power, can drive widespread adoption of these standards, for example by requiring them in the procurement of AI systems used to interact with and serve citizens. The report also advocates creating an ethics board or a Chief Values Officer role as a way of promoting the adoption of ethical principles in the development of products and services.
For vulnerable segments of the population, for example children, there need to be higher standards of data protection and transparency, which can help parents make informed decisions about which AI toys to bring into their homes. Regulators might take on the added role of enforcing certain ethical principles as part of their responsibilities. There also needs to be broader education in AI ethics for people in technical roles.
The existence of many negative applications of AI should not preclude us from using AI systems for positive use cases; a risk assessment and prudent evaluation prior to use is a meaningful compromise. That said, there are certain scenarios where AI should not be used at all, and these can be surfaced through the risk or impact assessment process.
A diversity of ethical principle sets has been put forth by various organizations, most in some degree of accordance with local laws, regulations, and value sets, yet sharing certain universal principles. One concern the report highlights is that even widely accepted and stated principles of human rights can become controversial when translated into specific mandates for an AI system. Take AI-enabled toys as an example: while they raise many privacy and surveillance issues, in countries without adequate access to education they could serve as a medium for precision education and increase literacy rates. The regulator's job thus becomes harder in terms of balancing the positive and negative impacts of any AI product, much of which depends on the context and the surrounding socio-economic system.
Given the diversity in ethical values and needs across communities, one approach might be for these groups to develop and apply non-binding certifications indicating whether a product meets the ethical and value system of that community. Since there is no one-size-fits-all model, we should aim for a graded governance structure whose instruments are in line with the risk and severity profile of the application, as sketched below.
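A toy sketch of such a graded structure follows. The tiers and the assignment of instruments to them are assumptions for illustration; the instruments themselves are ones the report mentions:

```python
from typing import Dict, List

# Risk tiers mapped to progressively heavier governance instruments.
GOVERNANCE_TIERS: Dict[str, List[str]] = {
    "low": ["self-regulation", "soft law"],
    "medium": ["certification", "contract law"],
    "high": ["binding regulation", "agile governance with audits"],
}

def instruments_for(risk_level: str) -> List[str]:
    """Return the instruments matching an application's risk profile."""
    return GOVERNANCE_TIERS[risk_level]

print(instruments_for("medium"))  # ['certification', 'contract law']
```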
Regulation in the field of AI thus presents a tough challenge, especially given how interrelated the factors are and how the decisions must be made in light of competing, sometimes contradictory, fundamental values. Given the rapid pace of technological advances, the regulatory framework needs to be agile and integrate tightly with the product development lifecycle. The regulatory approach must balance speed, so that potential harms are caught and mitigated early, against overzealousness, which, absent a sound understanding of the technology in question, risks ineffective regulation that stifles innovation.
Original white paper from The World Economic Forum: https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai