Montreal AI Ethics Institute

Democratizing AI ethics literacy


Abhishek Gupta on AI Ethics at the HBS Tech Conference (Keynote Summary)

March 15, 2021

Keynote summary by Connor Wright, our Partnerships Manager.



Overview: The AI space has been dominated by controversy in recent months, especially surrounding moves made by Google. In his talk at the Harvard Business School Tech Conference, our founder Abhishek Gupta explains how the space can go about effecting the necessary change. From business executives to rank-and-file employees, there are things everyone can do to take part in this change, but they must be done sooner rather than later.


Introduction

In an AI space dominated by big ethical headlines, such as the unfortunate events surrounding Timnit Gebru and Margaret Mitchell and the ACM Conference on Fairness, Accountability, and Transparency (FAccT) suspending its Google sponsorship, Abhishek Gupta offers his view on how the space can move forward. Centering on community involvement, open-source models, and how such changes can be incentivised, Gupta acknowledges that the required changes will take time. However, his recommended actions are relevant to everyone right now, from company executives to rank-and-file workers. Everyone has a part to play, and what follows is how we can play it.

Community

The crux of Gupta’s talk, and the inspiration behind it, is his desire to more closely involve the community in which an AI product will be deployed. Even when this consideration is taken into account, more attention needs to be paid to how we rely on second-hand sources to represent community experiences, which often fail to provide an accurate picture of the community involved. In this way, false gatekeepers gather momentum as representatives of the community’s thoughts, even when their accounts are not accurate at all.

To resolve this, the short run offers only rigid options, such as sending out more surveys or delaying the deployment of the product. Nonetheless, Gupta points out that these are stop-gap measures we can apply in the short term, chiefly by inviting stakeholders into the early stages of the design cycle so that we are directly informed by their views. Actively working with the community to co-design such projects will require hard work, but the dividends will pay out when the AI products eventually produced actually aid the communities in which they are deployed.

Open source: a free lunch, or are there problems?

While community-oriented efforts can contribute to an open-source model (like the one we adopt here at MAIEI) making strides towards democratising AI, Gupta acknowledges that maintaining and keeping open-source packages up to date requires a lot of investment and effort. That burden tends to fall on a core group of maintainers, who often suffer from burnout as they carry out the task on a volunteer basis. Support from big companies is therefore needed to lighten the workload for these maintainers.

Such necessary help from big companies then raises questions about where those companies have accrued their funding. Receiving support for open-source projects would be a major boost, but more harm than good would be done if the funding came from companies that are not aligned with the goals of the open-source model. And even if such funding were available, what would it actually look like? Meanwhile, private cloud ownership has become the norm, with companies focusing on building their own cloud infrastructures, which openly designed and openly sourced initiatives may never be able to match.

So as not to laden his talk completely with dismay, Gupta does point to organisations such as Compute Canada that provide the cloud infrastructure necessary for research to take place in an openly sourced manner. From there, federated learning is helping to democratise the space by allowing large-scale models to be trained collaboratively without centralising data, while TinyML distils larger ML models into smaller ones that researchers can use. So while the current private cloud infrastructure may be better, the hacker-collective movement is a nudge in the right direction which, given time, can help to democratise the AI space.
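To make the federated learning idea mentioned above concrete, here is a minimal, hypothetical sketch (not from Gupta's talk) of federated averaging: each client trains on its own private data and shares only model weights, which a server averages. The datasets, function names, and hyperparameters are illustrative assumptions.

```python
# Toy federated averaging (FedAvg) sketch: each client fits a 1-parameter
# model y = w * x on its own local data; only the learned weight (never
# the raw data) is shared with the server, which averages the updates.

def local_train(data, w, lr=0.01, epochs=100):
    """Plain gradient descent on one client's private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(client_datasets, rounds=10):
    """Server loop: broadcast w, collect locally trained weights, average."""
    w = 0.0
    for _ in range(rounds):
        local_ws = [local_train(data, w) for data in client_datasets]
        # Weight each client's contribution by its dataset size.
        total = sum(len(d) for d in client_datasets)
        w = sum(lw * len(d) for lw, d in zip(local_ws, client_datasets)) / total
    return w

# Both clients' private data follow y = 3x, so the global model learns w ≈ 3
# without either dataset ever leaving its owner.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],                 # client A's private data
    [(3.0, 9.0), (4.0, 12.0), (5.0, 15.0)],   # client B's private data
]
w = fed_avg(clients)
print(round(w, 2))  # → 3.0
```

Real systems (e.g. cross-device training on phones) add secure aggregation, client sampling, and communication compression on top of this basic loop, but the privacy intuition, sharing weights rather than data, is the same.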

Why haven’t we seen change?

Despite the presence of initiatives such as open-source models, Gupta considered why so little change has been seen in the business world. His response places the locus of the problem in companies’ current top-down initiatives: some leaders have initiated top-down approaches and implemented them in their businesses, but many companies have not. At the moment, Gupta acknowledges, some company initiatives are clouded by the fact that the issues AI generates are two steps removed from those making the decisions, meaning less weight is given to solving them due to their intangibility. In this sense, the immediacy of the effects of AI-oriented decisions counts, and the harms they cause need to be made clear.

A further problem can be seen in instances of failing to practise what you preach within the AI space. For example, BJ Fogg and his team at their Stanford lab gave rise to addictive technologies despite talking about ethics since 2000. So why is this the case, and how can we prevent future repetitions?

In response, Gupta alludes to an absence of policy, which leaves two paths: companies continue to gain dominance in the space and evade ethical commitments, or social movements arise to correct this. The concern generated by a lack of policy is that the dominance being accrued by these organisations may not be adequately checked by any social movement.

How do we incentivise this change?

As real a threat as this is, Gupta offers different ways that companies, rank-and-file workers, and engineers can move towards effecting the required change in the space. For example, a chief ethics officer with the power to actually implement change (rather than holding a token position) is a great step forward for holding companies adequately to account for their actions. Such a top-down approach, as previously mentioned, can then make it easier for rank-and-file employees and engineers to make the changes being called for. Rank-and-file employees reading this piece may ask how they could have the power to effect such big changes, which Gupta meets with his call for authenticity. Rank-and-file employees have more power than they may think, and bringing their uncompromised selves to work will go a long way towards cultivating a responsible AI culture.

Between the lines

For me, the importance of involving the community is the key takeaway from the talk. As I mentioned in my TEDx Youth talk, bridging the gap between AI and the public is the way to create truly helpful AI products that prioritise the community, rather than products the community must adjust to. The incentives to actually enact such change, whether top-down or through open-source adoption, are long-term goals in the AI space, but Gupta’s talk is one of many that push us in the right direction.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.



About Us

Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.