
The Ethical AI Startup Ecosystem 01: An Overview of Ethical AI Startups

June 5, 2022

🔬 Original article by Abhinav Raghunathan, the creator of EAIDB, who publishes content related to ethical ML / AI from both theoretical and practical perspectives.


This article is part of our Ethical AI Startups series, which focuses on the landscape of companies working on various aspects of building ethical, safe, and inclusive AI systems.


There’s no questioning the ubiquity of artificial intelligence (AI). It sits at the core of Western society, woven into the fabric of our everyday lives.

Over the past three decades, the emphasis has been on growth, on scale, and on the positive potential of AI, which is now applied in nearly every context as a solution to nearly every problem. But times have changed. Only recently have thought leaders in the space shifted their attention away from unchecked growth and toward controlling AI risk: the potential for AI to perpetuate bias, generate disinformation, and much more. When AI fails, it fails explosively.

Solutions came in droves as soon as it became clear to investors, governments, and business owners that AI failures can be dangerous and costly, and that consumer trust (which directly translates to profit) is increasingly hard-earned in a world with so many cases of AI run rampant. Many ethical AI vendors are still in their infancy: startups attempting to combat the wide world of irresponsible AI. The ethical AI space itself remains a relatively small blip on the funding world’s radar: underrepresented, underfunded, and underrated.

This column is a comprehensive guide to the five different categories of “Ethical AI” startups and the dynamics between them. We will analyze trends, make predictions, and identify the strengths and weaknesses of each category of this fascinating and critical startup ecosystem.

The Ethical AI Database (EAIDB)

Everyone throughout the company lifecycle (from founders to investors to end users) is gradually becoming more aware of, and more receptive to, transforming AI into a more ethical version of itself. To accelerate the conversation, the industry must be made more transparent: the companies recognized, the founders and investors alerted, the policymakers informed. This is where EAIDB comes into play.

EAIDB is a live database of startups that either provide tools to make existing AI systems ethical or build products that remediate elements of bias, unfairness, or “unethicalness” in society. EAIDB also publishes quarterly market maps / reports and spotlights constituent companies.

Preview of the EAIDB Market Map for Q1 2022.

Startups that dedicate their services and products to enabling responsible technology are broken into five categories: 

  1. Data for AI
  2. ModelOps, Monitoring, & Observability
  3. AI Audits, Governance, Risk, & Compliance
  4. Targeted AI Solutions & Technologies
  5. Open-Sourced Solutions

In subsequent pieces, each of these five categories will be explored in detail.

Growth + Trends

The data EAIDB has collected on its 140+ ethical AI startups shows that both investor and founder interest is growing. As of 2016, only 28 of EAIDB’s constituents were active. In 2022, we recorded 153 active companies (a total growth of about 446% and a CAGR of about 32.7%). Clearly, this space is attracting both capital and founders, and the motivation to make technology responsible exists.
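
For readers who want to reproduce these figures, here is a minimal Python sketch of the arithmetic behind the total-growth and CAGR numbers; treating 2016–2022 as six compounding periods between the two counts is an assumption based on the years quoted above.

```python
# Sanity check of the growth figures above, assuming the 2016-2022 window
# (six compounding periods between the two company counts).
active_2016 = 28
active_2022 = 153
years = 2022 - 2016  # 6

total_growth = (active_2022 - active_2016) / active_2016  # ~4.46 -> ~446%
cagr = (active_2022 / active_2016) ** (1 / years) - 1      # ~0.327 -> ~32.7%

print(f"Total growth: {total_growth:.0%}")  # Total growth: 446%
print(f"CAGR: {cagr:.1%}")                  # CAGR: 32.7%
```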

Total active Ethical AI startups by year.

A category-wise breakdown reveals that, in recent years, the buzz has been mostly around data-related operations (“Data for AI”) and GRC (“AI Audits, Governance, Risk, & Compliance”).

Growth in categories of EAIDB by founding year.

Most thought leaders in this space agree that motivation, whether born of fear or of willingness, is only increasing. Policy changes by governments such as those of New Zealand and Scotland, and by states such as New York and California, will surely drive a stronger business need to mitigate AI risk. There is sound logic behind the claim that the “ethical AI” sector might follow growth curves similar to the privacy boom of the mid-2010s or the cybersecurity boom of the late 2000s. Only time will tell.

The next issue of Ethical AI Startups will cover the first of our categories: Data for AI.

To learn more about EAIDB, visit the dedicated website at https://eaidb.org.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
