Montreal AI Ethics Institute

Democratizing AI ethics literacy

Our Top-5 takeaways from our meetup “Protecting the Ecosystem: AI, Data and Algorithms”

September 20, 2021

🔬 Event summary by Connor Wright, our Partnerships Manager.


Overview: In our meetup with AI Policy Labs, we discussed AI’s involvement in climate change. From the need for corporate buy-in to data centres, AI’s role in the fight is often confused. Getting clear on what is factual, however, gives us the best chance of using it well.


Introduction

In partnership with AI Policy Labs, we discussed how AI is interconnected with the fight against climate change. The group quickly identified the role of misinformation and soon realised the need for a collective, not just individual, effort. How this would be achieved raised questions about governance, while the ever-present problem of tangibility continued to plague efforts to fight the crisis. What is important to note is that knowing what’s factual is the first step of many in confronting this challenge.

Key Insights

  1. Knowing what’s factual

Part of the problem of fighting climate change is combating those who deny there is any fight at all, with a worrying amount of counter-information on climate change in circulation. As a result, the role AI plays in this fight is often confused: the same technology can be used to identify pollution hotspots and to spread misinformation.

Therefore, part of the fight is understanding how to detect misinformation and how to know when something’s factual. Demystifying climate change and knowing what is factual can help identify the actual problems, allowing us to focus on each issue one by one. The fight can seem overwhelming at the best of times, so different people concentrating their efforts can help to make great strides in the areas they choose.

However, this can’t be done alone.

  2. Efforts at the individual level alone won’t cut it

Despite it being a global fight, only specific populations and sectors are buying in. Corporations are generally the most significant contributors to pollution, so without their involvement in altering their habits, individual actions will become meaningless. The combination of personal and corporate action (whether from a tech company or a restaurant) is a potentially winning formula.

However, while the corporate side has its challenges, so too does the individual.

  3. The problem of data collection

It must be acknowledged that even altering actions at the individual level is troublesome. Take, for example, Sidewalk Labs’ Smart City project in Toronto. The project strove to create a revolutionised city, but the data required to do so was deep and personal. Concerns about what this data would involve and how it would be stored were key in eventually stalling the project.

The kind of infrastructure such a project requires in the first place, whether physical or regulatory, is also noteworthy. Data centres may provide the answer.

  4. Data centres

Data centres could be a way to store and share data to facilitate a cooperative effort on the crisis, but this brings up governance problems. Any data that leaves a country’s soil involves relinquishing at least some control over how that data is accessed and used. Different countries have different privacy laws, and the type of data that one country might want to collect may not be collectable in another. Even then, 100% Wi-Fi reliability in both countries is needed to keep the collected data available.

Theoretical approaches and futuristic considerations feature strongly in discussions of climate change. Yet this sometimes generates a problem of tangibility.

  5. The tangibility problem

At times, individuals tend to see climate change as a theoretical issue rather than seeing it for its effects on us. As mentioned in the meetup from a developer’s view, the impacts of any non-climate-friendly policies feel far removed. Making the carbon footprints of particular technologies, like washing machines, visible could help solve this.

The next question, though, is whether this would actually influence a consumer’s decision. With so many choices in life to make, would a consumer want to take on another?

Between the lines

In answer to the previous question, individual choices are an essential component of the fight against climate change. They provide an opportunity to keep climate considerations from being put on the back burner, especially by influencing which products companies produce. To facilitate this choice, AI needs to be seen as the right solution, not just another technological fix deployed for its own sake. From my view, AI is still early enough in its development to absorb these kinds of considerations, and with the correct factual information shared, they can take a central role in the fight against climate change.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
