Montreal AI Ethics Institute


AI Framework for Healthy Built Environments

January 5, 2025

🔬 Research Summary by Ismael Kherroubi Garcia. Ismael is trained in business management and philosophy of the social sciences. He is the founder and CEO of Kairoi, the AI Ethics and Research Governance Consultancy, and founder and co-lead of the Responsible AI Network (RAIN).

[Original paper by Jodie Pimentel (International WELL Building Institute) and Ismael Kherroubi Garcia (Kairoi)]


Overview: How do we safeguard people’s health in built environments where AI is adopted? Research led by the International WELL Building Institute (IWBI) and Kairoi sets out a framework for built environment sectors to deploy and adopt AI in ways that are beneficial for people’s health and well-being.


Introduction

Most of us spend the vast majority of our lives indoors – inside offices, homes, schools, shops…1 The industries that service these spaces – the built environment sectors – are increasingly interested in the prospects of AI.2 This matters because those industries are vast and diverse, including architecture, engineering, construction, facilities operations and real estate.

The adoption of AI in built environments warrants careful thought. On the one hand, there may be undue excitement among built environment sectors that lack a deep understanding of AI technologies.3 On the other hand, built environments deal with critical infrastructure, possibly rendering some AI systems “high risk” – and, therefore, subject to robust requirements – according to the EU AI Act:

“[…] It is appropriate to classify as high-risk the AI systems intended to be used as safety components in […] the supply of water, gas, heating and electricity.” 4

It is therefore crucial that built environment sectors adopt and deploy AI thoughtfully, in ways that mitigate risks and strive for positive change in how organizations operate and how people lead their lives.

Key Insights

At the complex intersection of AI, built environments, and human well-being, the International WELL Building Institute asked: How can AI applications promote a people-first approach to buildings and organizations?

By identifying AI Champions with diverse perspectives, drawing on rigorous research, and applying Kairoi’s AI Ethics Canvas, the team identified authoritative principles and pathways to develop a framework considering “baseline behaviors” and “aspirational goals” when making AI-related decisions.

Start from the UN Global Compact

New AI tools should not distract built environment sectors from good business practices, and the UN Global Compact sets out industry standards to which IWBI and many of its stakeholders subscribe.5

The following table collates instances that demonstrate the relevance of the UN Global Compact when making AI-related decisions:

| Impact area | Principle | Example of relevant AI risk |
| --- | --- | --- |
| Human Rights | Businesses should support and respect the protection of internationally proclaimed human rights. | AI tools may be trained on enormous amounts of data regardless of privacy and intellectual property rights.6 |
| Human Rights | Businesses should make sure that they are not complicit in human rights abuses. | AI technology could contribute to digital violence against women and girls.7 |
| Labor | Businesses should uphold the freedom of association and the effective recognition of the right to collective bargaining. | AI tools may be used for surveillance and to counter unionization.8 |
| Labor | Businesses should uphold the elimination of all forms of forced and compulsory labor. | AI tools may be sustained through deeply questionable employment practices.9 |
| Labor | Businesses should uphold the effective abolition of child labor. | AI data-labeling services have hired young teens.10 |
| Labor | Businesses should uphold the elimination of discrimination in respect of employment and occupation. | AI tools have perpetuated discrimination in staff recruitment.11 |
| Environment | Businesses should support a precautionary approach to environmental challenges. | Organizations should be aware of methods for monitoring the energy consumption of the AI tools they use.12 |
| Environment | Businesses should undertake initiatives to promote greater environmental responsibility. | AI can be used to spread disinformation about climate change at scale.13 |
| Environment | Businesses should encourage the development and diffusion of environmentally friendly technologies. | Despite its potential for analyzing climate data, the infrastructure supporting AI has significant environmental impacts.14 |
| Anti-corruption | Businesses should work against corruption in all its forms, including extortion and bribery. | “Corrupt AI” refers to the “abuse of AI systems by (entrusted) power holders for their private gain”.15 |

Aspire for the UN Sustainable Development Goals (SDGs)

AI also means new possibilities for built environment sectors, which can promote the UN SDGs by deploying and adopting AI thoughtfully. To translate global aspirational goals into practical guidance, Kairoi’s four pillars of responsible AI serve as a bridge between principles and action:

  1. Better communications: Articulate AI-related decisions to diverse stakeholders – from investors and funders to policymakers and the broader public – in accurate and thoughtful ways. This promotes AI literacy and combats AI hype.
  2. Relevant technical solutions: Follow practices for the safe, secure and robust design, development and deployment of AI tools and research. Technical strategies help organizations meet industry standards and develop effective AI tools and systems.
  3. Meaningful public engagement: Enable diverse stakeholders to participate in AI-related decision-making processes. This engenders trust in industry actors developing and adopting AI, which is done in light of real societal needs.
  4. Robust governance: Ensure legal compliance, engage with AI-related policy-making processes and document decisions. This enables innovation while allocating responsibility and accountability transparently.

Evaluating relevant SDGs in conjunction with the four pillars enabled the research team to brainstorm relevant organizational interventions and solutions. On this basis, the report suggests 36 concrete activities that built environment sector organizations can implement to promote the UN SDGs when making AI-related decisions.

Between the lines

The AI Framework for Healthy Built Environments is grounded in real organizational practices and encompasses many topics that interest diverse built environment sectors. The framework was informed through an iterative process involving diverse staff from across IWBI, and an expert roundtable held in May 2024 at the WELL Conference. During the roundtable, industry leaders welcomed the document’s mention of the environmental footprint of AI technologies, keenly explored data management practices brought to life by case studies and readily discussed the role of AI in ESG disclosures.

The framework thus sets out pragmatic changes that many organizations may follow. It also sets out a roadmap for its own development. In its final section, the document outlines the ongoing aims of IWBI to explore how AI and equity intersect for built environment sectors and to seek industry-wide partnerships for the promotion of good AI practices. This is key for all AI-related frameworks: a project cannot simply conclude with a report; its findings must continue to be promoted to lead positive change.


Footnotes

  1. Roberts, T. (2016, December 15). We Spend 90% of Our Time Indoors. Says Who?, Building Green, online [accessed 02 June 2024] ↩︎
  2. Jll.co.uk (2023) How the construction industry is adopting AI, JLL, online [accessed 02 June 2024] ↩︎
  3. Pimentel, J. & Kherroubi Garcia, I. (2024) Moving Beyond the Hype: Advancing Common Principles for Responsible AI in the Built Environment, WELL, online ↩︎
  4. European Parliament (2024) European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM (2021) 0206 — C9-0146/2021 — 2021/0106 (COD)), online [accessed 02 June 2024] ↩︎
  5. United Nations Global Compact (n.d.) The Ten Principles of the UN Global Compact, online [accessed 24 March 2024] ↩︎
  6. United Nations Human Rights Office of the High Commissioner (2024) Taxonomy of Human Rights Risks Connected to Generative AI, B-tech, online [accessed 15 April 2024] ↩︎
  7. Cerise, S. et al. (2022) Accelerating Efforts to Tackle Technology Facilitated Violence Against Women and Girls (VAWG), UN Women, online [accessed 21 April 2024] ↩︎
  8. Del Rey, J. & Ghaffary, S. (2020) Leaked: Confidential Amazon memo reveals new software to track unions, Vox, online [accessed 15 April 2024] ↩︎
  9. Williams, A. et al. (2022) The Exploited Labor Behind Artificial Intelligence, Noema Magazine, online [accessed 15 April 2024] ↩︎
  10. Rowe, N. (2023) Underage Workers Are Training AI, Wired, online [accessed 15 April 2024] ↩︎
  11. Dastin, J. (2018) Insight – Amazon scraps secret AI recruiting tool that showed bias against women, Reuters, online [accessed 15 April 2024] ↩︎
  12. Strubell, E. et al. (2019) Energy and Policy Considerations for Deep Learning in NLP, In the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Florence, Italy, DOI: 10.48550/arXiv.1906.02243 ↩︎
  13. Climate Action Against Disinformation et al. (2024) Artificial Intelligence Threats to Climate Change, foe.org, online [accessed 15 April 2024] ↩︎
  14. Gonzalez Monserrate, S. (2022) The Staggering Ecological Impacts of Computation and the Cloud, The MIT Press Reader, online [accessed 15 April 2024] ↩︎
  15. Köbis, N.C. et al. (2022) The corruption risks of artificial intelligence, Transparency International, online [accessed 15 April 2024] ↩︎
