Top-level summary: This paper by Nathalie A. Smuha explores how human rights can form a solid foundation for AI governance frameworks, while cautioning against over-relying on them to determine how such a framework should be structured and what its actual components should be. The author highlights how the EC Trustworthy AI guidelines successfully used a human rights foundation to advocate for AI systems that are legal, ethical, and robust. While moral objectivism might seem attractive as the basis for a universal framework, there remains value in a relativistic perspective, in which the nuances of culture and context are adequately represented so that the resulting framework better matches the expectations of the people living in a given jurisdiction. Arguments against grounding governance in human rights center on them being too Western, too individualistic, and too abstract; the author shows why these arguments are weak, and argues that the most frequently cited problem, their abstractness, is in fact a boon: human rights can be applied to novel circumstances without much modification, though they remain subject to interpretation. With sufficient exercise of those principles, they often become enshrined in law as rules, which, though they can be inflexible, offer concrete guidance that can serve as constituent parts of an AI governance framework. The paper also posits that people in both law and technology will need to understand the specificities of each other's domains better to build frameworks that are meaningful and practical.
After a couple of winters of disenchantment, AI is currently enjoying a summer of massive interest and investment from researchers, industry, and everyone else. There are many ways AI can be used to create societal benefits, but these are not without socio-ethical implications: AI systems are prone to bias, unfairness, and adversarial attacks on their robustness, among other real-world deployment concerns. Even when ethical AI systems are deployed to foster social good, there is a risk that they cater to one particular group to the detriment of others.
Moral relativism would argue for a diversity of definitions of what constitutes good AI, depending on time, context, culture, and more. This would be reflected in the market through consumers choosing products and services that align with their moral principles, but it poses a challenge for those trying to create public governance frameworks for these systems. This dilemma pushes regulators towards moral objectivism, which advocates a single, universal set of values, making it easier to arrive at a shared governance framework. The consensus-based approach used in crafting the EC Trustworthy AI guidelines settled on human rights as something everyone could get on board with.
Given the broad applicability of human rights, especially their legal enshrinement in various charters and constitutions, they serve as a foundation for creating legal, ethical, and robust AI, as highlighted in the EC Trustworthy AI guidelines. Stressing the importance of protecting human rights, the guidelines advocate a Trustworthy AI assessment whenever an AI system has the potential to negatively impact an individual's human rights, much like the better-established data protection impact assessment required under the GDPR. Additional requirements are imposed in terms of ex-ante oversight, traceability, auditability, stakeholder consultation, and mechanisms of redress in case of mistakes, harms, or other infringements.
The universal applicability of human rights and their legal enshrinement also bring the benefits of established institutions, such as courts, whose function is to monitor and enforce these rights without prejudice across the populace. Yet human rights do not stand uncontested as a basis for building good AI systems; they are often criticized as too Western, too individualistic, too narrow in scope, and too abstract to offer concrete guidance to the developers and designers of these systems. One such critique holds that human rights deny the plurality of value sets and act as a continuation of former imperialism, imposing a specific set of values in a hegemonic manner. This can be rebutted by noting that the original Universal Declaration of Human Rights was signed by nations across the world through an international diplomatic process. Moreover, even in the face of numerous infringements in practice, there is a normative justification that these rights ought to be universal and enforced.
Human rights might also be branded as too focused on the individual, creating a tension between protecting individual rights and serving the societal good. This is a weak argument, because stronger protection of individual rights has knock-on social benefits: free, healthy, and well-educated individuals (among other individual benefits) create a net positive for society, being better informed and more willing to concern themselves with the common good.
While there are some exceptions to the absolute nature of human rights, most are well balanced between providing for the good of society and of others and enforcing the protections of the rights themselves. Given the long history of enforcing and balancing these rights in legal instruments, there is a rich jurisprudence on which people can rely when assessing AI systems.
Human rights create a social contract between the individual and the state, placing obligations on the state towards the individual, and some argue that they therefore do not apply horizontally between individuals, or between an individual and a private corporation. Increasingly, however, that is not the case: there are many examples where the state intervenes and enforces these rights and obligations between an individual and a private corporation, as this falls within its mandate to protect rights in its jurisdiction.
The abstract nature of human rights, as with any set of principles rather than rules, allows them to be applied to a diversity of situations, including hitherto unseen ones. But enforcing them relies on case-by-case interpretation, which is subjective in nature and might lead to uneven enforcement across different cases. In the European context, this margin of appreciation is often criticized for weakening and twisting different principles, but this deference to those closer to the case actually allows for a nuance that would otherwise be lost.
Rules, on the other hand, are much more concrete formulations, with rigid definitions and limited applicability. This allows for uniformity but suffers from inflexibility in the face of novel scenarios.
Yet rules and principles are complementary approaches, and the exercise of principles over time often leads to their concretization into rules under existing and novel legal instruments.
While human rights can thus provide a normative, overarching direction for the governance of AI systems, they do not provide the actual constituents of an applicable AI governance framework. Those who come from a non-legal background, often the technical developers and designers of AI systems, must understand their legal and moral obligations to codify and protect these rights in the applications they build. The same argument cuts the other way: legal practitioners need a technical understanding of how AI systems work so that they can meaningfully identify when breaches might have occurred. This also matters for those looking to contest claimed breaches of their rights when interacting with AI systems.
This kind of enforcement requires wide public debate to ensure that it falls within the accepted democratic and cultural norms and values of its context. While human rights will continue to remain relevant in an environment of AI systems, breaches might occur in novel ways, and protecting against them requires a more thorough understanding of how AI systems work. Growing the powers of regulators will not suffice without an understanding of the intricacies of these systems and of where breaches can happen; hence there is a need to enshrine some of those responsibilities in law so that they are enforced by the developers and designers of the systems themselves.
Original paper by Nathalie A. Smuha: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3543112