The State of AI Ethics

Volume 7

November 2025

In Memory of Abhishek Gupta, Founder and Principal Researcher (Dec 20, 1992 – Sep 30, 2024)



Table of Contents

INTRODUCTION

Opening Foreword 🔗

Renjie Butalid (MAIEI)

[Content here]

Philosophy, AI Ethics, and Practical Implementations 🔗

Author Name

[Content here]

Bridging Policy and Ethics: On the Launch of AI Policy Corner 🔗

Author Name

[Content here]

PART I: FOUNDATIONS & GOVERNANCE

Chapter 1: Global AI Governance at the Crossroads

1.1 XXX 🔗

Renjie Butalid (MAIEI)

[Content here]

1.2 Competing AI Action Plans: Regional Bloc Responses to the US and China 🔗

Jimmy Y. Huang (MAIEI and McGill University)

[Content here]

Chapter 2: Disentangling AI Safety, AI Alignment and AI Ethics

2.1 The Institutions Behind the Concepts 🔗

Renée Sieber (McGill University)

The year 2025 was pivotal in concepts related to AI ethics, as seen at the Paris AI Action Summit, where national leaders expressed major concern over China’s DeepSeek model. We saw ethical constraints on AI relaxed in the hard law of AI regulations. For instance, the US federal government issued an executive order revoking national and state-level regulations on AI, believed to serve as barriers to innovation. Concepts like “AI safety,” “alignment” and “ethics” were largely abandoned or reshaped by national governments in favour of new buzzwords like “AI security” and “digital sovereignty.” I will disentangle some of these foundational concepts, concluding with missing elements and a path forward.

AI Safety

Briefly, proponents of AI safety argue that we must understand the anticipated and unanticipated consequences of implementing AI. These consequences are often interpreted as risks, which tend to vary along a continuum of low to high risk. Some are anticipated and intended, such as AI that supports the military; some are unanticipated and unintended, such as wrongful arrests due to facial recognition technology. Some are existential risks, such as AI accelerating the risk of World War III via a computer imbued with some “artificial general intelligence” that decides that humans are irrelevant. Therefore, proponents advocate for the identification and assessment of those risks with remedies that are computational — e.g., ensuring system accuracy and robustness — and policy-based — e.g., development of algorithmic impact assessments. Globally, there were three major AI safety trends. Several national AI Safety Institutes were formed in late 2024 and operational in 2025. For example, the Canadian AI Safety Institute opened in November 2024 to “leverage Canada’s world-leading AI research ecosystem and talent base to advance the understanding of risks associated with advanced AI systems and to drive the development of measures to address those risks.” There was the establishment of an international network of safety institutes. Occurring simultaneously with these formations, third and somewhat ironically, was the partial retreat from “AI safety” to “AI security,” as with the UK AI Security Institute in February 2025.

AI Alignment

Simply put, AI alignment refers to ensuring that AI, its design and outputs conforms to human values. The underlying concern is that, because AI acts autonomously, it obviates the need of a human-in-the-loop. Therefore, how are human values represented and — thinking computationally — in what sequence are they valued? Although alignment and safety overlap significantly, reinforcement learning — rewarding the AI system in its development phase — is a prime method to ensure alignment. That requires understanding how the system learns, since a reward could violate the rules in unanticipated ways. Consider the example of creating an AI system to simulate finding a way to win a boat race. Instead of moving faster than the other boats, the AI might propose sinking the other boats to win the race. This leads to a larger fear that — in an unchecked intelligence — AI’s unanticipated “solutions” to societal ills could lead to catastrophe.

AI Ethics

AI ethics is often the catch-all term for all forms of responsible AI, although it is primarily associated with certain core tenets, such as fairness, accountability, transparency, trustworthiness, and explainability. In AI ethics, there continues to be an emphasis on remedies like AI literacy to increase trust, computational debiasing to lessen discrimination, participatory design or procedural fairness processes to increase fairness, audits and impact assessments to increase transparency and explainability, and soft law — norms and standards — and hard law to increase accountability. Despite these measures, ensuring AI ethics remains a challenge. In 2025, Canada announced a Minister of AI, the first of its kind in the country. Minister Solomon has suggested moving away from AI regulation — which he argues could stifle innovation — in favour of soft law or regulations to protect privacy. He has mentioned AI ethics as necessary to build a trustworthy AI ecosystem that competes globally while protecting Canadians. The government also launched a 30-day sprint to collect public input on AI. The timeline was too short for a country where surveys reveal a deep distrust of AI compared to other countries. Sprint questions heavily favoured industry and economic development and treated trust solely as AI literacy. There are flaws in focusing on trust, especially when AI literacy appears here to be reduced to understanding the benefits of AI. Recent research has also found that, the more scientists learn about AI, the less likely they trust it, suggesting that other members of the public will respond likewise.

Conclusion

In 2025, AI ethics showed more continuity than change. In practice, issues such as the concentration of wealth and power, the environmental costs of data centres, and the political economy influencing AI’s development are avoided. Local perspectives on AI governance remain absent, even as cities and communities confront AI’s impacts on the ground. As the public and private sectors adopt new directions such as AI security, digital sovereignty, and public-interest AI, these new approaches must be thoroughly examined to ensure they serve and engage society, rather than simply repackage existing inequalities.

2.2 The Contested Meanings of Responsible AI 🔗

Fabio Tollon (University of Edinburgh)

What should we make of the “responsible” in “responsible AI” (R-AI)? It seems like the best fit for this kind of responsibility is to understand it as a term of praise. When we talk about ‘responsible’ AI, we are envisioning the AI technology in question as having been developed or deployed in a way that was commendable and trustworthy. This could mean it meets certain fairness, safety and transparency criteria or is in the service of some public good. Of course, this is not the same meaning as when we say “she is a responsible parent,” but something similar is being tracked. Namely, that something was done in a way that we would think of as “good,” or at least aspiring towards that which is good. The below insights draw on the BRAID UK report on the R-AI ecosystem.

However, R-AI is not just a simple term of praise, but a contested idea that stands for a number of different and at times contradictory things. It sometimes refers to a growing interdisciplinary field of critical AI research, at other times a governance ambition, and at other times a process for producing desirable AI products. Yet R-AI research often criticizes the R-AI governance agenda; meanwhile, many proposed ways to build responsible AI products lack buy-in from AI industry leaders. Thus, the ambiguity comes from the ways the term has been differentially attached to various practices in the AI space, with stakeholders deploying it in different ways. By understanding how these different meanings hang together we can get a better handle on what we want R-AI to mean.

R-AI as an interdisciplinary research agenda

Research under the banner of R-AI has been carried out by industry since at least 2017, when Microsoft and others first began using the term to brand their new algorithmic fairness, privacy and transparency toolkits. Industry R-AI researchers quickly engaged with academics in the closely related field of AI ethics, as well as public sector efforts to develop more “trustworthy” AI, and nonprofit/civil society researchers studying AI-driven harms. To be engaged in “responsible” AI from this perspective, then, is to be engaged in research that cuts across disciplinary and sectoral boundaries in order to address ethical and societal concerns emerging from AI.

R-AI as a stated governance ambition

R-AI as a governance ambition originated from a desire (and sometimes a need) for tech companies to self-regulate. Since 2017, Meta, Google, Microsoft, IBM, PwC and Accenture have all produced internal R-AI documents which each offer a set of principles and/or core values. These Responsible AI principles are used to inform the development of various tools and practices, such as internal ethics reviews, risk assessments, and product testing, in the hopes that these will realise particular values within their AI business. Soon after, nations began to frame Responsible AI as a government ambition, part of their own innovation strategies. R-AI under this banner refers to a body of effective internal governance procedures, guidelines and guardrails for aligning AI innovation with the values and principles that corporations or governments want to signal to others that they stand for.

R-AI as a desired type of AI product

Outside of particular corporate, government, and research agendas, there is an interest in developing a single reliable system or comprehensive set of standards and techniques for ensuring that AI products and services are ‘socially benign’ or have responsible characteristics. Here, the target is the technology rather than the developer, user or organisation.

R-AI as an ecosystem of contested meanings

So, which is the “correct” meaning of R-AI? All of the above, one of them, or none? One way to approach this issue is to reframe it: What if we think about R-AI as a broad community or ecosystem of stakeholders. Instead of conceiving of R-AI in silos, we can tease out the ways these conceptualizations shape one another, we can aim towards better and more holistic perspective on R-AI. Understanding R-AI in this way allows us to see that the three different meanings outlined earlier hang together, and that there are better or worse ways for them to feed into one another. That is, they each pick out important parts of what a truly ‘responsible’ AI ecosystem might look like, but none by themselves are enough.

No singular part of an ecosystem is completely isolated from the rest, and so these composite ‘ecologies’ need to be mapped, managed, and supported in ways that enable flourishing across the whole. The ecological metaphor lets us formulate the holistic goal of Responsible AI, one that aims at a future state of affairs in which responsibility appropriately infuses and guides the complex interactions between the diverse and vast community of actors with a stake in AI and its societal and planetary impact.

The key insight from this initial survey of the different meanings of R-AI is that it is not a singular concept or effort with a fixed meaning and clear definitional boundaries, but a complex and dynamic ecosystem pervaded by tensions and interdependencies.

2.3 The Evolving AI Safety Conversation: Singapore’s Practical Path Forward 🔗

Wan Sie Lee (Infocomm Media Development Authority of Singapore)

A shift in global cooperation on AI safety

Looking back at 2025, the AI Action Summit, held in Paris in February, represented an inflection point in the global discourse on AI safety, marking a shift in the dynamics of international cooperation. The two preceding global summits — Bletchley Park (2023) and Seoul (2024) — established a consensus centered on mitigating catastrophic risks posed by frontier AI. However, the Paris summit’s expansion of a narrow safety agenda to a broader one that encompasses economic opportunity, global equity, and industrial strategy, as well as the US’s announcement on prioritising diffusion of US frontier AI capabilities, signaled a de-emphasis on AI safety.

While some momentum for global collaboration on AI safety is lost, balancing the narrative with innovation and adoption allows for more inclusive participation globally. For Singapore, collaboration and partnership continues to be important to address risks arising from the rapid advancement of AI. We focused on mechanisms to do this productively and practically. These are along three non-competitive and apolitical vectors: supporting technical cooperation within the expert community, facilitating the development of best practices and standards in testing and evaluation, and contributing to global capacity building.

Technical Safety and Research Consensus

The universal need for a shared, scientific foundation for AI safety allows for productive cooperation that transcends national political agendas. Building on the International AI Safety Report 2025, the Singapore Consensus on Global AI Safety Research Priorities convened more than 100 AI experts from around the world to exchange ideas and clarify urgent needs for technical AI safety research. The resulting research priorities are organised into three interlinked domains: risk assessment (evaluating risks before deployment); development of trustworthy, secure, reliable systems (during design and build phases); and control, monitoring and intervention (post-deployment). These form a shared agenda and provide a technical roadmap for collaboration within the scientific community that is essential, regardless of national political and regulatory philosophy.

Best Practices and Standards in Evaluation

Advancing measurement science and practices will provide the empirical foundation for evaluating risks. Current benchmarks and testing methods are fragmented. Developing shared and continuously updated evaluation frameworks will allow AI to be tested under consistent and robust conditions, supporting cross-border comparability and transparency. Establishing reliable methodologies and standardised metrics — analogous to those in aviation or pharmaceuticals — would make safety claims testable, enabling cooperation between governments, industry and researchers, and ultimately supporting AI adoption.

As part of the International Network of AI Safety Institutes, the Singapore AISI continued to lead joint testing efforts within the Network, working with the other AISIs to develop common evaluation methodologies for frontier models. The latest of these joint testing exercises focused on systemic safety behaviours of AI agents in areas like cybersecurity, data leakage, and fraud. It also included multi-lingual evaluation, drawing on the diverse capabilities within the Network.

To support the greater use of AI, evaluations of AI safety need to also address the reliability and trustworthiness of deployed AI applications and systems, tackling societal risks and tangible, near-term harms. To do this, the Global Assurance Pilot brought together AI deployers and testers to develop testing standards for AI systems. They looked at ways to address risks in AI deployment in healthcare, financial services and other contexts, setting the foundation for global standards in AI application testing.

Safety as Inclusion and Access

Post-Paris, inclusive access and capacity-building are now critical components of the safety agenda. AI safety cannot be achieved by a few advanced economies alone. Through shared testing facilities, open research tools, training programmes, and technical partnerships, capability-building enables all nations to adopt safe and reliable AI. At the UN level, digital cooperation in AI governance is gaining increasing prominence.

Singapore is doing its part to support this work; driving work within our region to develop AI safety frameworks and evaluation capacity that reflect diverse social contexts and languages, and reflect local realities. This also improves the quality and resilience of safety outcomes, as diverse perspectives help identify harms that might otherwise go undetected.

As part of the Forum of Small States, we cooperated with countries that are members of the Forum to put out materials and resource guides that could be useful for other countries; for example, working with Rwanda to create an AI Playbook for Small States. We also set up Singapore Digital Gateway as a platform to share our resources and experience in AI safety, such as culturally-relevant models like Sea-Lion and open-source testing tools like AI Verify.

Looking Ahead to 2026

In 2025, the global tone for AI safety has shifted. The upcoming India AI Impact Summit in early 2026 is likely to solidify this trend, shifting the conversation even further from "action" to measurable "impact," with themes centred on inclusive development, sustainability, and democratising resources. While the UN platforms, including the upcoming scientific expert panel, will continue to align AI innovation with tangible global development goals, the world is less likely to see a rapid global treaty on AI safety. With geopolitical fragmentation remaining a reality, the path forward for governing this transformative technology lies in scaling these vectors of collaboration: the shared, non-political commitment to technical standards and assurance, combined with a focus on capacity-building and AI for public good.

Chapter 3: From Principles to Practice: Implementing AI Ethics in Organizations

3.1 AI Governance in Practice: 2025 trends in understanding and implementation 🔗

Ismael Kherroubi Garcia (Kairoi, RAIN and MAIEI)

When we hear “governance,” we often think of regional, nationwide or international policies — after all, that is where governmental bodies operate. This section is not about that governance but the much more relatable policy structures we find in the workplace; across businesses, schools, hospitals and charities; organisations large and small. At this level, national and multinational initiatives may seem quite abstract; after all, why would the EU’s AI Act affect me if I simply use AI chatbots to write emails? And how could the UN’s independent international scientific panel on AI be relevant to, say, a bakery or a marketing agency? And yet, those multinational initiatives respond precisely to years of signals from the wider business ecosystem; years of entrepreneurs and organisational leaders calling for clarity as to how to best approach AI in an everchanging world. These are the signals that the present article attempts to tap into, seeking to understand not how policy-makers are responding to calls for clarity, but to understand how organisations are creating clarity for themselves in a world where policy seems to be lagging behind.

AI Governance is a Business Necessity

Regardless of the sector in which an organisation operates, it cannot avoid the AI conversation and its implications. AI chatbots have now been readily available to the public for three years. This means that employees may use such chatbots for work-related tasks. These tasks, in turn, are fundamental to work across sectors: AI chatbots may be used for writing, brainstorming, correspondence, summarising texts, and so on. So, how many people are using AI chatbots at work, and is it helpful? The evidence is unclear.

In February 2025, Pew Research Center reported that, in the US, “relatively small shares of workers say they have used AI chatbots for work: 9% say they use them every day or a few times a week, and 7% say they use them a few times a month. [...] Among workers who have used AI chatbots for work, 40% say these tools have been extremely or very helpful in allowing them to do things more quickly. A smaller share (29%) say they have been highly helpful in improving the quality of their work.” (Lin & Parker, 2025). Meanwhile, a report from the Danish Bureau of Economic Research concluded in May that “AI chatbots have had no significant impact on earnings or recorded hours in any occupation” (Humlum & Vestergaard, 2025). Against this stands a global study conducted by the University of Melbourne and KPMG, which suggests that over 50% of workers use AI chatbots, and that their use leads to efficiency gains in over 60% of cases. Notwithstanding, the Australia-led study also emphasizes the risks that come with this rapid adoption of AI in the workplace, evidencing that “almost half of employees admit to having used AI in ways that contravene organizational policies. This includes uploading sensitive company information into public AI tools” (Gillespie et al., 2025).

In this context, AI governance is a business necessity, as the risk of misusing AI tools or falling for the hype may become costly. As Ganapini & Butalid (2025) explain in Tech Policy Press, “AI systems introduce operational, reputational, and regulatory risks.” With this, risk management mechanisms become central to protecting business interests; they respond to “market incentives” and remain consistent with pressures from regulators and consumers or beneficiaries.

AI Governance is more than Compliance

The pressures rendering AI governance a business necessity — market incentives, regulations and public influence — help explain that it is a question that goes beyond compliance alone. During a panel discussion hosted by the Responsible Artificial Intelligence Network (RAIN) in London in October, the speakers pointed to the risk of AI governance backsliding into compliance. Leaning on the BRAID UK responsible AI ecosystem report from June (Tollon & Vallor, 2025), the speakers made the case that legislation may inhibit the otherwise holistic and reflective nature of responsible AI initiatives. In other words, rather than AI governance building on decades of responsible research and innovation literature and advocacy, its scope may be narrowed to a series of checklists that ensure legal compliance.

Returning for a moment to the higher-level governance activities mentioned at the start, both the US and the UK have shown in 2025 a retreat from “responsible AI” to compliance in 2025, best demonstrated by their refusal to sign the Paris summit declaration on inclusive AI. In this regard and for the foreseeable future, it will fall to organisations to design and implement AI governance strategies; to approach AI responsibly and with an eye to the societal impacts of their AI-related decisions; to seek independent advice and to promote AI literacy.

3.2 AI Governance in Practice: 2025 trends in understanding and implementation 🔗

Author Name

[Content here]

3.3 From Solidarity to Practice: Building Ethical AI Capacity in Africa 🔗

Shi Kang’ethe (AIVERSE)

[Content here]

PART II: SOCIAL JUSTICE & EQUITY

Chapter 4: Democracy and AI Disinformation

4.1 Legislating the Moving Digital Terrain 🔗

Rachel Adams (Global Center on AI Governance)

[Content here]

4.2 AI and the Body Politic 🔗

Linda Solomon Wood (Canada's National Observer)

[Content here]

Chapter 5: Algorithmic Justice in Practice

5.1 Algorithmic Justice vs. State Power 🔗

Blair Attard-Frost (Alberta Machine Intelligence Institute)

[Content here]

5.2 Beyond the Algorithm: Why Student Success is a Sociotechnical Challenge 🔗

Adnan Akbar (tekniti.ai)

[Content here]

5.3 XXX 🔗

Author Name

[Content here]

Chapter 6: AI Surveillance and Human Rights

6.1 AI, Surveillance, and the Public Good 🔗

Maria Lungu (University of Virginia)

[Content here]

6.2 Mandated AI in the Public Sector and Challenging Inevitability 🔗

Roxana Akhmetova (University of Oxford)

[Content here]

6.3 XXX 🔗

Jake Wildman-Sisk (Borden Ladner Gervais LLP)

[Content here]

Chapter 7: Environmental Impact of AI

7.1 The Subtle and Not-so-subtle Environmental Impacts of AI 🔗

Burkhard Mausberg (Small Change Fund)
Shay Kennedy (Small Change Fund)

[Content here]

7.2 Measuring the Environmental Impact of the AI Supply Chain 🔗

Trisha Ray (Atlantic Council)

[Content here]

7.3 Policies Centring AI’s Resource Consumptions 🔗

Priscila Chaves Martínez (Independent researcher and consultant)

[Content here]

PART III: SECTORAL APPLICATIONS

Chapter 8: Healthcare AI: When Algorithms Meet Patient Care

8.1 XXX 🔗

Author Name

[Content here]

8.2 AI-enabled Errors at the Doctors, and How to Solve Them 🔗

Sahaj Vaidya (Koita Centre for Digital Health, Ashoka University)

[Content here]

8.3 Medical Trade Unions and Professional Bodies are Taking Back Control and Oversight of AI in Healthcare 🔗

Zoya Yasmine (University of Oxford)

[Content here]

Chapter 9: AI in Education: Tools, Policies, and Institutional Change

9.1 XXX 🔗

Author Name

[Content here]

9.2 Building Confidence for Class Participation 🔗

Ivy Seow (Singapore Management University)
Tamas Makanay (Singapore Management University)

[Content here]

9.3 Generative AI at Universities: Accounts from the front-line 🔗

Encode Canada (Aimee Li, Anna Zhou, Chelsea Sun, Kanika Singh Pundir, Roberto Concepcion, Rose Simon, and Tao Liu)

[Content here]

Chapter 10: AI and Labour Justice

10.1 AI in Oil and Gas: The case of Alberta 🔗

Ryan Burns (University of Washington Bothell)
Eliot Tretter (University of Calgary)

[Content here]

10.2 Restoring Employee Trust in AI 🔗

Dr. Elizabeth M. Adams (Minnesota Responsible AI Institute)

[Content here]

Ch 11: AI in Arts, Culture, and Media

11.1 2025 Marks a New Era for Canadian Performers: The first collective agreements with AI protections 🔗

Anna Sikorski (ACTRA Montreal)
Kent Sikstrom (ACTRA National)

[Content here]

11.2 Media Jobs are Canaries in the AI Automation Coal Mine 🔗

Katrina Ingram (Ethically Aligned AI)

[Content here]

11.3 The Ursula Exchange 🔗

Amanda Silvera (AI Ethics in art and entertainment)

[Content here]

PART IV: EMERGING TECH

Ch 12: Military AI and Autonomous Weapons

12.1 A Minute Before Escalation: Algorithmic power and the new military-industrial complex 🔗

Ayaz Syed (The Dais)

[Content here]

12.2 Civil Society’s Responses to the Militarization of AI 🔗

Kirthi Jayakumar (civitatem resolutions)

[Content here]

Ch 13: AI Agents and Agentic Systems

13.1 XXX 🔗

Author Name

[Content here]

13.2 XX 🔗

Author Name

[Content here]

Ch 14: Democratic AI — Community Control and Open Models

14.1 Learnings for Canada: Community-led AI in an age of democratic decay 🔗

Jonathan van Geuns

[Content here]

14.2 From accessible models to democratic AI 🔗

David Atkinson (Georgetown University)

[Content here]

14.3 Open Science Practices for Democratic AI 🔗

Ismael Kherroubi Garcia (Kairoi, RAIN and MAIEI)

[Content here]

PART V: COLLECTIVE ACTION

Ch 15: AI Literacy — Building Civic Competence for Democratic AI

15.1 AI Literacy: A right, not a luxury 🔗

Kate Arthur

[Content here]

15.2 AI Literacy: Building civic competence for democratic AI 🔗

Tania Duarte (We and AI)

[Content here]

15.3 From Co-Creation to Co-Production: How communities are building AI literacy beyond schools 🔗

Jae-Seong Lee (Electronics and Telecommunications Research Institute)

[Content here]

Ch 16: Civil Society and AI — Nonprofits, Philanthropy, and Movement Building

16.1 From Proximity to Practice: Civil Society’s Role in Shaping AI Together 🔗

Michelle Baldwin (Equity Cubed)
Alex Tveit (Sustainable Impact Foundation)

[Content here]

16.2 Indigenous approaches to AI governance: data sovereignty, seven-generation thinking, and long-term stewardship 🔗

Denise Williams

[Content here]

Ch 17: AI in Government — Public Sector Leadership and Implementation

17.1 Unions, Lawsuits and Whistleblowers: Public sector leadership from below 🔗

Ana Brandusescu (McGill University)

[Content here]

17.2 AI in Government: Accessibility, trust, and sovereignty 🔗

Tariq Khan (London Borough of Camden County Council)

[Content here]

17.3 The Hard Work of AI in Government 🔗

Jennifer Laplante (Government of Nova Scotia)

[Content here]