

✍️ Op-Ed by Charlie Pownall and Maki Kanayama
Charlie Pownall is the founder and managing editor of AIAAIC. Maki Kanayama is a contributor to AIAAIC and a privacy counsel.
Introduction
Distinguishing between risks and harms seems simple and obvious: risks are negative impacts that may occur in the future, while harms are forms of damage or loss that have already occurred. However, research AIAAIC has conducted into selected AI and algorithmic harm and risk taxonomies reveals that industry and academia regularly conflate the two terms.
For example, a Microsoft taxonomy classifies ‘inadequate fail-safes’ as an AI harm. A 2023 paper by Stuart Russell and Andrew Critch on societal-scale risks of AI lists ‘security breaches’, which are not inherently harmful but can lead to harm, as a type of harm alongside monetary loss and reputational damage. Meanwhile, a Google DeepMind paper on the risks of large language models uses the terms “risk” and “harm” interchangeably.
What’s more, the International AI Safety Report produced for the Paris AI Action Summit mischaracterised (p. 183, ref. 1129) an AIAAIC paper proposing a taxonomy of harms as a taxonomy of hazards.
A hazard is usually defined as a potential source of harm: a future threat rather than damage that has already occurred.
These conflations are not merely semantic issues but may have real-world implications, leading to confused and frustrated users and citizens, misguided legislation, and companies neglecting actual, present harms.
They also raise important questions about why this is happening to the extent that it is, and what can be done to address the problem.
Why does the confusion persist?
1. Lack of standardised definitions
One of the reasons for confusion may lie in the lack of standardised terminology used to define categories and types of risks and harms, with different jurisdictions using different definitions.
Additionally, multiple definitions of risk and harm exist: industries, regulatory bodies and disciplines each approach the concepts from their own angle, producing specialised definitions that suit their particular needs.
For instance, while both the EU AI Act and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework take a risk-based approach, their definitions and applications vary.
2. Multiple interpretations
Adding to the complexity, risks and harms can encompass multiple aspects—physical, psychological, environmental and economic—making it challenging to create a universal definition.
More broadly, terms like “AI safety,” “AI ethics,” and “AI governance” are often used interchangeably, further muddying discourse. Without clear distinctions, decision-makers struggle to determine whether to focus on preventing discriminatory hiring algorithms or ensuring AI doesn’t develop into an autonomous rogue entity.
3. Avoidance of the word “harm”
Focusing on risk rather than harm allows corporations to deflect accountability for the negative consequences of their AI systems. Corporations can shift attention away from real-world damage by framing AI concerns as speculative risks or hazards rather than existing harms. This enables companies to engage in public discussions about AI safety without committing to concrete changes that would address current societal and ethical challenges.
4. Hype and sensationalism
Media hype also plays a role: the spectre of superintelligent AI wiping out humanity captures public attention far more effectively than a discussion of biased algorithms in hiring software. Sensational headlines drive engagement, leading to an overemphasis on existential risks, while immediate, everyday harms have effectively been pushed aside in the broader debate about where and how governments should focus their regulatory attention.
Some researchers and AI labs tend to focus on the risks of AI because they are intellectually and mathematically compelling, allowing them to draw on fields such as control theory and game theory. Meanwhile, businesses may steer discussion toward existential risks rather than address the more immediate harms caused by their products.
As technology evolves, new harms of AI will continue to emerge, making it difficult to define and mitigate associated risks.
How to improve clarity and consistency
Adopting clearer and more consistent terminology across different cultures, industries and domains is never straightforward.
Nevertheless, here are four recommendations that can help:
1. Separate risks from harms in policy discussions
Regulations should clearly distinguish between AI risks and harms. This means enacting policies that mitigate current harms, such as data privacy violations and algorithmic bias, while also supporting long-term research on existential risks without allowing it to overshadow urgent concerns.
For example, policymakers should prioritise laws that mandate transparency in AI decision-making, protect against automated discrimination, and enforce accountability in high-impact sectors such as finance, healthcare, and criminal justice.
2. Hold business accountable for present-day harms
AI developers and deployers must take responsibility for the real-world impacts of their systems. This means conducting audits for biases, ensuring explainability in decision-making, and mitigating risks associated with misinformation and automation-driven job displacement.
Instead of framing their ethical commitments around abstract future risks, businesses should implement robust governance frameworks that address the current privacy, societal, and environmental implications of AI deployments.
3. Improve public awareness and media accuracy
Journalists and media organisations should exercise greater caution in reporting the dangers of AI by distinguishing between speculative risks and immediate harms. Balanced reporting can help prevent unnecessary fear-mongering while still addressing AI technology’s ethical and policy challenges today.
Public awareness initiatives should empower people with knowledge about how AI affects their daily lives—whether through data collection, automated decision-making, or surveillance—so they can advocate for more equitable policies.
4. Adopt holistic AI governance
An effective AI governance framework should incorporate a dual focus: addressing present harms while preparing for future risks.
Independent oversight bodies should be established to assess and monitor AI harms in real time, while governments and international bodies collaborate to ensure a balanced approach to innovation, safety and ethical considerations.
Conclusion
The confusion between risk and harm is not just an academic debate—it has profound, tangible implications for AI regulation, corporate responsibility and public understanding.
If policymakers, businesses, and the media fail to separate these concepts, we risk focusing on distant, speculative threats while ignoring the immediate ethical and societal challenges AI poses today.
To build a responsible future, we must acknowledge and address present-day harms while maintaining a measured, evidence-based approach to long-term risks.
With clear and consistent language, AI policies and governance structures can genuinely serve society rather than be driven by hype or misplaced fears.