
To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide? (Research Summary)

March 1, 2021

šŸ”¬ Research summary contributed by Sarah P. Grant, a freelance writer dedicated to covering the implications of AI and big data analytics.

[Link to original paper + authors at the bottom]


Overview: Understanding how algorithms shape our experiences is arguably a prerequisite for an effective digital life. In this paper, Gran, Booth, and Bucher determine whether different degrees of algorithm awareness among internet users in Norway correspond to ā€œa new reinforced digital divide.ā€


Traditional digital divide research focuses on inequalities in access and skills. By exploring what separates the haves from the have-nots when it comes to algorithm awareness, Gran, Booth, and Bucher aim to take the concept of the digital divide in a new direction.

The authors assert that algorithm awareness is an issue of ā€œagency, public life, and democracy,ā€ emphasizing that algorithms don’t just facilitate the flow of content; they shape it. The paper also highlights how algorithms are changing the ways in which institutions (such as public safety agencies) make high-stakes decisions. Because algorithms have been found to produce outcomes that replicate historical biases, the authors argue, there is a need to understand whether an awareness gap exists among the general population.

Using data collected from a survey of internet users in Norway (where 98% of the population has internet access), the researchers analyze algorithm awareness, attitudes to specific algorithm-driven functions, and how varying degrees of awareness influence these attitudes. They compare these findings against key demographic variables and use cluster analysis to place the respondents into six distinct awareness categories: the unaware, the uncertain, the affirmative, the neutral, the sceptic, and the critical.
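
The summary above mentions cluster analysis without detailing the procedure the authors used. As a purely illustrative sketch, the snippet below groups hypothetical survey responses (a self-reported awareness score plus attitude ratings toward the three algorithm-driven functions) into six clusters with k-means; the data, variable names, and the choice of k-means are assumptions made for illustration, not the authors’ actual instrument or method.

```python
# Illustrative sketch only: hypothetical survey responses are grouped into six
# clusters, mirroring the six awareness categories described in the summary.
# The paper's actual data and clustering procedure are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Hypothetical respondents: a self-reported awareness score (0-4) and attitude
# ratings (1-5) toward content recommendations, targeted ads, and personalized news.
n_respondents = 500
responses = np.column_stack([
    rng.integers(0, 5, n_respondents),   # awareness
    rng.integers(1, 6, n_respondents),   # attitude: content recommendations
    rng.integers(1, 6, n_respondents),   # attitude: targeted advertising
    rng.integers(1, 6, n_respondents),   # attitude: personalized news
])

# Standardize so the awareness and attitude scales contribute comparably.
features = StandardScaler().fit_transform(responses)

# Six clusters, echoing the unaware / uncertain / affirmative / neutral /
# sceptic / critical typology (the labels come from interpreting each
# cluster's profile, not from the algorithm itself).
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(features)

for cluster_id in range(6):
    members = responses[kmeans.labels_ == cluster_id]
    print(f"Cluster {cluster_id}: n={len(members)}, "
          f"mean awareness={members[:, 0].mean():.2f}, "
          f"mean attitudes={members[:, 1:].mean(axis=0).round(2)}")
```

In practice, each cluster would then be profiled against demographic variables such as age, education, and gender, which is how the authors arrive at labels like ā€œthe unawareā€ and ā€œthe critical.ā€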

For this research, the focus is on three types of algorithm-driven functions: content recommendations (via platforms such as YouTube), targeted advertising, and personalized content such as news feeds. The authors note that past studies have examined algorithm awareness on specific platforms such as Facebook, Twitter, and Etsy; the aim of this study is to go beyond individual platforms and adopt a more exploratory approach.

Levels of Algorithm Awareness 

The findings suggest that a large share of the Norwegian population has little or no awareness of algorithms: 41% of respondents report no awareness, while a further 21% perceive that they have low awareness, meaning roughly 62% fall into one of these two groups.

No awareness is highest among older respondents, while the youngest age groups report the highest levels of awareness. Education is strongly linked to algorithm awareness, with low awareness most prevalent among the least educated group. Men report higher levels of perceived algorithm awareness than women.

Attitudes Towards Algorithms

Those who report higher levels of awareness also hold more distinctly positive or negative attitudes towards algorithm-driven content recommendations. ā€œNeutralā€ or ā€œI don’t knowā€ responses, by contrast, are more strongly associated with respondents who have low awareness of algorithms.

Types of Algorithm Awareness

The respondents fall into six categories based on demographics, attitudes, and level of awareness. The ā€œunawareā€ group, for example, has the oldest average age, is composed of 59% women, and has a significantly higher proportion of people with secondary school as their highest level of educational attainment. 

In contrast, the ā€œcriticalā€ group (which reports a high level of awareness) is younger on average, skews male, and has a much higher proportion of respondents with higher levels of educational attainment. It holds negative or very negative attitudes towards the different types of algorithm-driven content.

Implications for Digital Divide Research

The authors conclude that a general lack of awareness poses a democratic challenge and that the demographic differences in algorithm awareness correspond to a new level of the digital divide. They explore where algorithm awareness fits into the traditional digital divide framework and determine that it is best defined as a meta-skill that is necessary ā€œfor an enlightened and rewarding online life.ā€

A major implication covered in this paper is the potential for negative outcomes as algorithms become increasingly embedded in high-stakes decision-making in areas such as health, criminal justice, and the news media. Another issue, not emphasized in this research, is what happens when tech companies position themselves as champions of closing the internet access gap by providing free services, yet expose more people to the influence of algorithms in the process. The findings from this paper can be used to consider whether this is indeed a fair bargain when large segments of the population may lack algorithm awareness.


Original paper by Anne-Britt Gran, Peter Booth, and Taina Bucher: https://www.tandfonline.com/doi/full/10.1080/1369118X.2020.1736124

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
