
Research summary: From Rationality to Relationality: Ubuntu as an Ethical & Human Rights Framework for Artificial Intelligence Governance

July 18, 2020

Summary contributed by Connor Wright, a third-year Philosophy student at the University of Exeter.

Link to original source + author at the bottom.


Mini-summary: The paper aims to shift the dominant belief that a person is a person through being rational. Instead, Mhlambi presents the Ubuntu framework to argue that it is how a person endeavours to emphasise relationality between humans that marks them as a person. Personhood is no longer measured against the benchmark of rationality, but represented by the state of relationality. Mhlambi develops this through a historical account of how rationality became the locus of personhood in Western thought, and then demonstrates how this runs against the very essence of the Ubuntu framework.

Problems such as data coloniality and surveillance capitalism are explored, linking to AI’s problems concerning marginalised communities and how they are represented. These are wrapped up in his five critiques of AI, which he suggests are tackled by the realisation of Ubuntu. He then concludes with what an Ubuntu internet would look like, wrapping up a piece full of facts, realisations, and food for thought.


Full summary:

If one word were to describe this paper, it would be ‘relationality’. This word is pivotal to the Ubuntu ethical framework: we are no longer to recognise personhood in terms of rationality, but in terms of relationality. This move recognises humans as fundamentally communal and treats that as the point of departure when considering how to govern AI, no longer judging personhood (for both humans and machines) against the benchmark of rationality, but rather by their relations to other facets of existence. Within the Ubuntu framework (found in Sub-Saharan and Southern Africa), the mark of a person is the acknowledgement that personhood is not an individual milestone, but something prolonged and realised by how someone represents and interacts with their community. How this came about, how it fits together, and how it challenges the Western ethical strata we apply to our governance of AI is what I will endeavour to explain now.

Broken down etymologically, ‘ubu’ “indicates a state of being and becoming” (Mhlambi, 2020, p. 14), while ‘Ntu’ “evokes the idea of a continuous being or becoming a person” (Mhlambi, 2020, p. 14): an incomplete person becomes a complete person through serving their community. To clarify, personhood is a state that a person continually possesses through their commitment to relationality, rather than a quality possessed by a human, such as rationality. As a result, personhood can be taken away, as well as regained, determined against the three interdependent pillars of Ubuntu: social progress, social harmony, and human dignity. Crimes committed against these (such as algorithmic bias marginalising certain communities) render a person (or potentially a machine) “Akumu-Ntu lowo”, meaning ‘not a person’. The entity in question is still a person biologically (or still a machine mechanically), but no longer a possessor of the state of personhood, owing to crimes against the community that have harmed one or more of the three pillars.

Under these three pillars, AI’s ability to possess personhood would need to be considered in terms of its service to the community it is involved in. Its state of personhood would be lost when its digital surveillance strips away human dignity by eradicating the idea of privacy, and when its analysis reduces humans to data points, eliminating their relationality to other humans in the process. Prioritising the individual in this way harks back to the relentless framing of personhood around the individual, namely by treating rationality as a quality instead of a state. Instead, the Ubuntu framework would guide AI governance towards evaluating AI in terms of its relation to the inter-connectedness of society. Not only is AI to serve human dignity, but also social progress and social harmony. These pillars extend to the environment and those who reside in it, so treating AI as a person merely through being individually rational will only negate its reach to the environment and to those who reside in it, including marginalised groups. AI’s ability to influence nearly all facets of humanity points towards the need for a framework which can account for this inter-connectedness, and that is what Mhlambi sees in Ubuntu.

Nevertheless, the framework acknowledges that although humans rely on one another to be considered persons (and AI would need a similar affirmation), humans retain liberty in their choices. “Umu-Ntu ngumu-Ntu nga ba-Ntu” (Mhlambi, 2020, p. 19), meaning a person is a person through other persons, emphasises that both bad and good acts can be committed, which can then be reflected in the AI models deployed. Examples explored by Mhlambi include surveillance capitalism and data colonialism, both brought about by the human decision to focus on the individual. These will be my last point of focus.

Surveillance capitalism is the practice of observing as many aspects of human behaviour as possible in order to predict and then manipulate that behaviour. Examples such as predictive policing focus on the individual’s data points and how they behave, stripping them of their human dignity and treating them as separate from the whole (their community). Once this is done, both the individual without the whole, and the whole without its integral part, the individual, are weakened.

Data coloniality then seeks to alter society’s modes of perception, utilising data to define new social relations driven by the predatory, extractive, historical processes of colonialism, justified in the name of computing. The surplus of data created by the technological revolution opened up the need for new data markets, just as the industrial revolution did for Britain. Like the colonisers of southern Africa creating demand for technology on the continent in the name of progress, companies seek ever more data to collect and fuel their algorithms in the name of computing.

Ubuntu can then offer a defence against such problems. As previously mentioned, data surveillance removes the individual from the whole and strips their privacy away, rendering it unsustainable. Data coloniality is to be seen in the same harmful light as its predecessor, colonialism: if left unchecked, marginalised communities will suffer, namely through the misunderstanding of their experience that follows from their disproportionately low representation in big-tech companies. Focusing on the individual through these methodologies negates the inter-connectedness they possess with their community and, most importantly, their relationality with fellow humans. Not only are these methods invasive and presumptuous, they strip away the very essence of personhood through individualism. AI is then to be governed accordingly, prioritising the connectedness possessed by humans and their dependence on one another for their personhood, rather than treating them as seven billion individuals with data to be mined.

Overall, if our perception of personhood changes, so does AI. AI would no longer be held to the benchmark of rationality, but rather perceived as a person in terms of its duties to the community. For AI to best serve the people, it needs to be considered in connection with the people, involving all corners of society. It’s time to embrace the Ntu part of Ubuntu and immerse AI in the inter-connected human ecosystem, rather than allowing it to be fractured by the dangers of surveillance capitalism and data coloniality.


Original paper by Sabelo Mhlambi: https://carrcenter.hks.harvard.edu/files/cchr/files/ccdp_2020-009_sabelo_b.pdf

