Montreal AI Ethics Institute

Democratizing AI ethics literacy

Governance by Algorithms (Research Summary)

February 1, 2021

šŸ”¬ Research summary contributed by Connor Wright, our Partnerships Manager.

[Link to original paper + authors at the bottom]


Overview: By exploring the worlds of e-commerce and search engines, this paper shows that algorithms can no longer be relegated to merely taking in input data and producing output data according to specific calculations. Their intangibility, their unquestionability, and their influence over what we believe agency to be are explored in this paper, giving us the low-down on the influence such algorithms can, and do, have in our lives.


As I’m sure is well known by many, the influence of algorithms is very real. Algorithms are no longer portrayed as confined to turning input data into output data according to specific calculations; their effects expand and exert influence well beyond those bounds. To demonstrate this, I’ll first draw on the paper’s journey through e-commerce and the questions of responsibility that come up there. I’ll then look at how the same dynamic can be seen in search engines. From there, it is worth noting the intangibility of algorithms and the taboo around questioning their decisions despite that intangibility. Having considered the above, the question of algorithmic governance by whom and over whom proves to be the next step in what is such an elusive topic.

The realm of e-commerce

The paper’s venture into the role of algorithms in e-commerce centres on how algorithms are used in our everyday commercial lives. Here, Amazon’s algorithm is observed to have become a key player in ā€œprescribingā€, sorting through the endless amounts of data made available by our conduct on its website. For example, whenever we receive an ā€˜other people were also interested in’ suggestion upon buying an item, we have Amazon’s algorithm to thank. Similarly, algorithms across the web are used to sift through the mountains of data available to them, tracking our engagement across different websites in order to ā€˜personalise’ our experience on each site (mainly through cookies). In this sense, governance by algorithms in the e-commerce world appears in how the sorting of data is left up to the algorithm itself, amounting to a double automation: the decision that results from the sorting also has to be made by the algorithm, since it was the algorithm that sorted the data in the first place.
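To make this ā€œdouble automationā€ concrete, here is a minimal, hypothetical sketch of a co-occurrence-based recommender in Python. It is not Amazon’s actual system; the purchase data and the recommend_similar function are illustrative assumptions, meant only to show how both the sorting of behavioural data and the resulting recommendation are left entirely to the algorithm.

```python
from collections import Counter

# Hypothetical purchase histories: each inner list is one customer's basket.
purchases = [
    ["kettle", "mug", "tea"],
    ["kettle", "mug"],
    ["mug", "tea", "biscuits"],
    ["kettle", "tea"],
]

def recommend_similar(item, baskets, top_n=2):
    """Return the items most often bought alongside `item`.

    Both steps of the 'double automation' happen here: the algorithm
    sorts the behavioural data (counting co-occurrences) and then makes
    the recommendation decision (picking the top-ranked items).
    """
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(other for other in basket if other != item)
    return [other for other, _ in co_counts.most_common(top_n)]

# "Other people were also interested in..."
print(recommend_similar("kettle", purchases))  # e.g. ['mug', 'tea']
```

No human weighs in at either step: what counts as ā€œsimilarā€ and what gets shown are both decided by the counting and ranking above, which is precisely where the questions of responsibility in the next paragraph arise.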

What then rises to the surface are questions over responsibility and agency. Given that the algorithm has sorted through the data and decided what to recommend to you, what happens if you find the recommendation offensive? Can the human who provided the data to the algorithm, or the designer of the algorithm, be held accountable for something they weren’t involved in? If not, this leads us to the odd thought of whether we ought to grant the algorithm agency, given the independence of its actions. Of course, since the algorithm doesn’t actually have the capacity to realise what it’s doing and is blissfully unaware of the consequences of its actions, this cannot be the case. Nevertheless, the potency of algorithms is still demonstrated by the mere need to consider their agency, especially since they cannot take the blame for their actions.

The realm of search engines

A similar situation is found when exploring algorithmic involvement in search engines. Here, the order of the results of our searches, whether on Google, Firefox or Yahoo, is determined through the sorting action of an algorithm. To give an example, Google’s PageRank algorithm has been labelled by Masnick as a ā€œbenevolent dictatorā€, benevolently sorting through the data and dictatorially prioritising what is being most engaged with on the internet. Such prioritisation stems from users bringing content to the algorithm’s attention by publishing it online, thus making the space co-authored between the public and the algorithm. Hence, the algorithms that form the basis of search engines can be swayed by the public just as easily as humans can. So, how can we be governed by algorithms in this space?
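Before turning to that question, a simplified sketch of the PageRank idea just mentioned may help. This is a toy power-iteration version of the original PageRank formulation, not Google’s current ranking system; the link graph below is an assumption for illustration, showing how the public co-authors the ranking by publishing and linking while the algorithm decides the final order.

```python
# Simplified PageRank via power iteration: pages are ranked by how often
# other pages link to them, so the public "co-authors" the ranking by
# publishing and linking, while the algorithm decides the final order.
links = {  # hypothetical toy web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Pages sorted from most to least "engaged with" in the link graph.
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```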

The main response to the question of how we are governed by algorithms in this space comes when talking about the public space. How can the digital space actually be a public space if certain digital information is displayed more than other information simply because it is more prominent? There will be some conversations that never achieve such coverage, yet they are to be viewed as no less important. In this way, we catch a glimpse of the invisible workings of algorithms and their ability to govern the digital space, captured eloquently in Masnick’s quote.

The intangibility of algorithms

What then interests me most is how, despite only catching a glimpse of the workings of an algorithm, it is almost taboo to question its outcomes or decision-making process. To flesh this out, the paper points to Gillespie’s six dimensions of the political valence of algorithms, two of which stand out here: the ā€œpromise of objectivityā€ and the ā€œentanglement with practiceā€. The assumption that algorithms guarantee objectivity thanks to their non-human touch, and the subsequent adjustment of human processes to accommodate them, cede even more control to the algorithmic governance process. Owing to this perceived objectivity, any allegation against the algorithm is quickly dismissed as stemming from our own personal biases and as distorting the truth the algorithm presents. As a result, such questioning subsides and processes are altered to centralise the algorithm and its truth-displaying ability. Step by step, the taboo around questioning the algorithm slips into increased governance of algorithms over human practice: governance over the material by the immaterial.

This intangibility of the algorithmic process, which gains such high repute, makes its influence harder and harder to see. The governance of algorithms takes on a ā€˜cloak of invisibility’, with its inner workings hidden behind an iron curtain thanks to the automation of the process (data sifting and the decisions that follow). As seen in the explorations of e-commerce and search engines, what makes these areas tick is increasingly the immaterial rather than the physical (such as a human agent). In this way, the influence of algorithms is there for us to observe, but we are not well placed to look for it.

As clearly shown in the arenas of e-commerce and search engines, governance by algorithms is widespread. Its elusiveness, both in its appearance and in its unquestionability, means that algorithms are slowly being deferred to without a second thought. For me, this is where the true power of governance by algorithms lies. The perceived objectivity of algorithms in all cases, and the subsequent reshaping of the processes surrounding any algorithmic interaction, is where governance by algorithms really takes form. Governance by whom and over whom is then a whole other story.


Original paper by Francesca Musiani: https://montrealethics.ai/wp-content/uploads/2021/02/Governance-by-algorithms-NEOACM-reading-2.pdf

