
The Logic of Strategic Assets: From Oil to AI

July 21, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Jeffrey Ding and Allan Dafoe]


Overview: Does AI qualify as a strategic good? What does a strategic good even look like? The paper provides a framework for answering both questions. One thing’s for sure: AI is not as strategic as you may think.


Introduction

Is AI a strategic good for countries? What counts as strategic nowadays? The theory proposed aims to help policymakers and those at the highest levels of the state identify strategic goods and accurately interpret the situations they face. This summary discusses what a strategic good involves, both in terms of the importance of externalities and whether AI qualifies.

Key Insights

What is a strategic good?

The crux of the paper centres on the problem of accurately identifying a strategic good. The paper suggests that such goods “require attention from the highest levels of the state to secure national welfare against interstate competition”. While this definition is wide-reaching, the authors offer the following formula:

“Strategic level of asset = Importance x Externality x Nationalization”

The importance of an asset is assessed in both military and economic terms. Compare, for example, oil that fuels a country’s naval fleet with cotton used to manufacture high-end fashion.

The externality part concerns positive externalities: the more positive externalities a good produces, the more strategic it is. Private actors are discouraged from investing in such goods because they cannot capture all of the positive externalities exclusively. For example, wind turbines produce positive externalities in the form of clean energy, but no private actor can exclusively own that benefit.

Nationalisation then focuses on how localised the externalities are: the good becomes less strategic if the externalities it produces can spread to other countries.
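
To make the formula concrete, here is a minimal sketch in Python. The assets and their 0-to-1 scores on each factor are hypothetical, chosen only to mirror the examples above; they are not values from the paper.

```python
# A minimal sketch of the paper's formula:
#   strategic level of asset = importance x externality x nationalisation
# All assets and 0-to-1 scores below are hypothetical, invented to
# illustrate the examples in this summary; they are not from the paper.

def strategic_level(importance: float, externality: float, nationalisation: float) -> float:
    """Multiplicative form: a near-zero score on any factor drags the asset down."""
    return importance * externality * nationalisation

assets = {
    # name: (importance, externality, nationalisation)
    "oil (1941)":    (0.9, 0.8, 0.8),  # military + economic importance, cuttable supply
    "wind turbines": (0.5, 0.8, 0.3),  # strong externalities, but they diffuse across borders
    "cotton":        (0.3, 0.2, 0.2),  # economically useful, little strategic pull
}

for name, scores in sorted(assets.items(), key=lambda kv: -strategic_level(*kv[1])):
    print(f"{name:14s} -> {strategic_level(*scores):.2f}")
```

The multiplicative form captures why all three conditions must hold at once: an asset that scores near zero on any single factor ends up with a low strategic level overall, no matter how high the others are.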

Strategic goods in terms of externalities

The externalities brought by strategic goods fall into three classes: cumulative-strategic logics, infrastructure-strategic logics, and dependency-strategic logics.

  • Cumulative-strategic logics describe strategic goods with high barriers to entry. These barriers lead to low market investment and mean government consent is needed for the product to be purchased (as with aircraft engines). Uranium, by contrast, does not follow a cumulative-strategic logic: one country’s purchase of uranium does not raise barriers to entry for others.
  • Infrastructure-strategic logics note how strategic goods in the form of fundamental technologies tend to upgrade society. The diffuse positive externalities they produce echo throughout the community and the military, as the steam train did during the Industrial Revolution.
  • Dependency-strategic logics focus on whether the supply of a good is governed by extra-market forces and has few substitutes. A good becomes more strategic if one nation can cut off its supply to other countries (as with lithium).

As a result, whether a good is strategic depends both on the good itself and on the country’s strategy for it. For example, in 1941 the US supplied 80% of Japan’s oil, so when the US cut off that supply as part of the war effort, the effect on the Japanese military was devastating.

It’s important to note that a good’s positive externalities must be both important and strategic, as this case shows. Oil, for example, produced positive externalities by modernising travel. Standard-issue military rifles, by contrast, can be necessary for a country’s military without being strategic: they are easy to manufacture (and so cannot produce a dependency-strategic logic), have no high barriers to entry, and do not change society much. Hence, the more logics a good engages at the same time, the more strategic it is.
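
As a rough way to picture the “more logics, more strategic” claim, here is a hypothetical sketch that tags the goods mentioned above with the logics they engage. The tags are my reading of this summary’s examples, not classifications taken from the paper.

```python
# Hypothetical tagging of goods by the strategic logics they engage.
# Tags reflect the examples discussed above, not the paper's own analysis.

goods = {
    "aircraft engines":      {"cumulative"},
    "oil (1941)":            {"infrastructure", "dependency"},
    "steam train":           {"infrastructure"},
    "lithium":               {"dependency"},
    "standard-issue rifles": set(),  # necessary, but easy to make and substitute
}

# "The more logics employed at the same time, the more strategic the good."
for name, logics in sorted(goods.items(), key=lambda kv: -len(kv[1])):
    print(f"{name:21s} engages {len(logics)} logic(s): {', '.join(sorted(logics)) or '-'}")
```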

What this theory means for strategic goods

A strategic asset is then one where “there is an externality that is both important and rivalrous [(strategic)]”. Strategic goods are no longer defined purely by military significance, under which any good usable in the war effort would count as strategic; under this framework, such goods do not require attention from the highest levels of the state, so they are not classed as strategic. A technology that reduces CO2 emissions solely within the country that deploys it, by contrast, yields an externality that is both important and rivalrous, and so can be tagged as strategic.

The strategic aspect of the development of AI

AI then becomes an interesting case for the framework. It shows weak cumulative-strategic logics, since there are no high barriers to entry to AI, while possessing strong infrastructure-strategic logics through its potential to modernise society. A dependency-strategic logic may also be emerging between the US and China, and only time will tell whether US computing power can be restricted from reaching China. If it can, that dependency logic can be exploited; if not, China can continue its surge up the AI power rankings.

Between the lines

In my book, AI can certainly be classed as a strategic good, but I expected it to score more strongly under the formula at hand. The relatively low barrier to entry for gaining a foothold in the AI arena is often overlooked. That sobering realisation reinforces something I believe in strongly: seeing AI for what it is.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
