Montreal AI Ethics Institute

Democratizing AI ethics literacy


SECure: A Social and Environmental Certificate for AI Systems

July 19, 2020

Get the paper in PDF form (Download)

Art by Playthink

Abstract

In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEED-esque certificate.
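The first pillar, compute-efficient machine learning, rests on compressing networks so that inference needs less computation. A minimal sketch of one such compression technique, magnitude pruning, is shown below; the function name and sparsity level are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A crude form of network compression: weights below the
    sparsity-quantile threshold are dropped, shrinking the
    effective compute (and energy) needed at inference time.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

# Prune 90% of a random weight matrix.
w = rng.normal(size=(256, 256))
pruned = magnitude_prune(w, 0.9)
print(f"kept {np.mean(pruned != 0):.0%} of weights")
```

In practice, compression methods such as pruning, quantization, and distillation trade a small loss in accuracy for a large reduction in compute, which is the trade-off the pillar targets.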

Compute-efficient machine learning is the use of compressed network architectures that show only marginal decreases in accuracy. Federated learning augments the first pillar's impact by distributing computational loads across idle capacity on devices. It is paired with the third pillar, data sovereignty, which protects the privacy of user data via techniques such as use-based privacy and differential privacy. The final pillar ties these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchases with their values.
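The interplay of the second and third pillars can be sketched as a toy federated-averaging round: each client fits a model on data that never leaves the device, and the server aggregates only noised weight updates. The client setup, learning rate, and noise scale here are illustrative assumptions, and the Gaussian noise is a simplified stand-in for a calibrated differential-privacy mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data, labels, lr=0.1):
    """One gradient step of least-squares regression on a client's
    own data. Raw data never leaves the device; only weights do."""
    grad = data.T @ (data @ weights - labels) / len(data)
    return weights - lr * grad

def federated_round(global_w, clients, noise_scale=0.01):
    """FedAvg-style round: each client trains locally, then the
    server averages the returned weights with Gaussian noise added
    as a simplified differential-privacy-style protection."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    noisy = [u + rng.normal(0, noise_scale, u.shape) for u in updates]
    return np.mean(noisy, axis=0)

# Three hypothetical clients, each holding a private (X, y) shard.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
print(w)  # converges near true_w despite the injected noise
```

The design point is that aggregation plus noise limits what the server can infer about any single client's data while still recovering a useful global model.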




© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.