
Rise of the machines: Prof Stuart Russell on the promises and perils of AI

June 20, 2022

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[A podcast from the World Economic Forum: Radio Davos]


Overview: Will the rise of the machines solve our problems or prove detrimental to our existence? A robot uprising is not really on the cards, but there are equally scary prospects taking place today.


Introduction

An uprising of artificial general intelligence (AGI) against humans has been a hot topic in the AI literature. Yet, in this podcast, Professor Russell says this is not on the cards: we have other issues to deal with first. To explain, I'll go through a definition of AI before covering three use cases: social media, facial recognition technology (FRT) and the economy. I'll then observe how AGI has always been the goal of AI design, before concluding that AI is not intrinsically good or evil, and that we're the ones who'll decide which way it goes.

Key Insights

To begin with, AI is treated as a spectrum. Rather than advocating for a definition along the lines of 'an AI system is one which has X number of rules', systems are placed on a continuum running from extremely simple agents to extremely complex ones, with the human at the most complex end.

Situated at the lower end are systems based in more rigid environments; algorithms that turn on your house lights at 6 pm and turn them off at 11 pm are a good example. In between, however, lies an interesting plethora of algorithms, some of which are pivotal in shaping the social media landscape.

How AI is affecting social media

Recommendation algorithms aim to maximise click-through. Initially, this goal was met by sending people the content they already liked. However, the way to maximise click-through fully is to supply content that molds the human into the ideal candidate: a person who spends time on the platform. Through hundreds of little nudges a day, the algorithms begin to alter people's beliefs. Those molded into ideal candidates then specialise in specific content streams, creating polarisation between those focussed on different subject areas, as the toy simulation below illustrates.
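To make the mechanism concrete, here is a minimal toy simulation of this feedback loop. It is my own sketch, not anything from the podcast: the topic count, the nudge size and the click model are all invented assumptions.

```python
# Toy sketch (not from the podcast): a greedy click-through maximiser
# whose recommendations gradually "mold" a simulated user. All names,
# numbers, and dynamics here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_TOPICS = 5
user = rng.dirichlet(np.ones(N_TOPICS))  # user's interest distribution
catalogue = np.eye(N_TOPICS)             # one archetypal item per topic
NUDGE = 0.05                             # per-item shift toward consumed content

def click_prob(user, item):
    """Probability of a click, modelled as interest overlap."""
    return float(user @ item)

for step in range(200):
    # Greedy engagement optimisation: always show the most clickable item.
    scores = [click_prob(user, item) for item in catalogue]
    item = catalogue[int(np.argmax(scores))]
    # Each consumed item nudges the user's interests toward itself,
    # so the optimiser is reshaping the very signal it optimises.
    user = (1 - NUDGE) * user + NUDGE * item
    user /= user.sum()

print("final interests:", np.round(user, 3))
```

Under these assumptions, the user's interest mass collapses onto a single topic: hundreds of small nudges produce a narrower, more 'ideal' user, which is the molding effect described above.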

As a result, Professor Russell calls for more visibility into how these algorithms work. He believes this molding effect is a byproduct of algorithmic design, giving researchers the benefit of the doubt that they never intended their algorithms to have it. But when an algorithm generates lots of money for the company, there's added pressure not to change it. Hence, even researchers who are aware of the effect may not be empowered to change it.

Hence, to attack this problem, we shouldn't think in terms of revenue. Rather, what users care about should be at the core of social media. A similar tension plays out in the realm of FRT.

AI and FRT

Mistakes made once FRT is deployed are not so much because the data is 'wrong' but rather because the dataset is non-representative. Given this reality, whether a perfectly representative dataset can even be created is a hot topic: what's representative in Namibia will differ from what's representative in Thailand. Hence, the question of how we deal with the dangers of this technology becomes socio-technical.
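To illustrate why representativeness, rather than label correctness, is the crux, here is a minimal sketch. The groups, counts and error rates are invented assumptions, not data from any real FRT system.

```python
# Minimal sketch (my illustration, not from the podcast): why overall
# accuracy can hide the harm of a non-representative FRT dataset.
# Groups, sample counts, and error rates below are made up.

# (group, correct?) outcomes from a hypothetical deployed face matcher
results = (
    [("group_a", True)] * 950 + [("group_a", False)] * 50
    + [("group_b", True)] * 40 + [("group_b", False)] * 10
)

overall = sum(ok for _, ok in results) / len(results)
by_group = {}
for group, ok in results:
    hits, total = by_group.get(group, (0, 0))
    by_group[group] = (hits + ok, total + 1)

print(f"overall accuracy: {overall:.1%}")  # looks fine in aggregate
for group, (hits, total) in by_group.items():
    print(f"{group}: {hits / total:.1%} on {total} samples")
# group_b is both underrepresented (50 vs 1000 samples) and worse served
# (80% vs 95%): the dataset's composition, not wrong labels, is the problem.
```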

What matters is how we respond to the issues and how we adapt the system once it gets deployed. For example, people may not want to release their data to be included in a new FRT system. Consequently, they do not consent to the use of FRT and cannot enter the spaces where it's established. There may even be scenarios where withholding consent is not a real option, with refusal carrying consequences such as losing your job.

How AGI would affect the economy

The British economist John Maynard Keynes raised the idea of technological unemployment in his research, predicting that, given the trajectory of technological innovation, we would eventually not need many workers at all. What rings especially true is the impact that even small steps in this innovation have on the economy. A robot able to pick any object out of a bin would put 3-4 million people's jobs at risk; an automated taxi would cost a quarter of the price of a regular taxi, affecting the job security of 25 million people.

Training everyone to be a data scientist or to have a job related to AI will not solve this problem and may not even be possible. Instead, asking questions about whether automation is necessary in the first place could be a promising initial step. Nevertheless, the march towards AGI goes on.

General-purpose AI has always been the goal

AGI systems are those that can carry out, or learn to carry out, any task humans can do, and do it better. The problem with creating such systems is that the AI must know the full ins and outs of a task to carry it out. For example, we can't instruct an AI to solve climate change, as we don't entirely know what that involves; even if we wanted an AI solely to construct electric vehicles, we don't know what that future would look like.

Professor Russell notes that if we create systems that are aware we don't know the entire outcome we want, they will ask for permission. This gives us more control over their behaviour: control comes through the machine's uncertainty about what the objective is.
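As a rough illustration of this idea (my own sketch, loosely in the spirit of Russell's proposal rather than his formal model), consider an agent that holds several hypotheses about the true objective and defers to a human whenever its hypotheses disagree. Every name and number below is an invented assumption.

```python
# Toy sketch of Russell's point (illustrative, not his formal model):
# an agent that maintains uncertainty over the true objective and
# defers to the human when its hypotheses disagree about an action.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    prob: float    # agent's belief that this is the true objective
    reward: dict   # action -> reward under this objective

hypotheses = [
    Hypothesis("minimise emissions", 0.6, {"build_ev": +10, "strip_mine": +8}),
    Hypothesis("protect ecosystems", 0.4, {"build_ev": +6, "strip_mine": -50}),
]

def decide(action: str, harm_threshold: float = 0.0) -> str:
    expected = sum(h.prob * h.reward[action] for h in hypotheses)
    worst = min(h.reward[action] for h in hypotheses)
    # If any plausible objective says the action is harmful, ask first:
    # the human's control comes precisely from this residual uncertainty.
    if worst < harm_threshold:
        return (f"ASK HUMAN before '{action}' "
                f"(expected {expected:+.1f}, worst case {worst:+.1f})")
    return f"PROCEED with '{action}' (expected {expected:+.1f})"

print(decide("build_ev"))    # safe under every hypothesis -> proceed
print(decide("strip_mine"))  # catastrophic under one hypothesis -> ask
```

The design choice worth noticing is that the agent never collapses its belief to a single objective; an agent certain of its objective would have no reason to ask.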

AGI won't arrive overnight. While Professor Russell sees the end of the century as a plausible horizon, there will be many other scenarios to deal with first, especially regarding the economy.

Between the lines

Uncertainty is part of our world, and it doesn't suit AI very well. Hence, humans need to prioritise building adaptive and flexible systems to deal with the inevitable. It is impossible to predict every scenario a system could produce, especially one that has never been released before. While this is daunting, it can also be exciting: AI is not intrinsically good or evil. Instead, it's up to us and, for better or worse, we are the ones making the decisions.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
