Montreal AI Ethics Institute

Democratizing AI ethics literacy


Abhishek Gupta on AI Ethics at the HBS Tech Conference (Keynote Summary)

March 15, 2021

Keynote summary by Connor Wright, our Partnerships Manager.



Overview: The AI space has been dominated by controversy in recent months, especially surrounding moves made by Google. In his talk at the Harvard Business School Tech Conference, our founder Abhishek Gupta explains how the field can go about effecting the necessary change. From business executives to rank-and-file employees, there are things everyone can do to take part in this change, but they must be done sooner rather than later.


Introduction

In an AI space dominated by big ethical headlines, such as the unfortunate events surrounding Timnit Gebru and Margaret Mitchell and the ACM Conference on Fairness, Accountability, and Transparency (FAccT) suspending its Google sponsorship, Abhishek Gupta offers his view on how the space can move forward. Centering on community involvement, open source models, and how such changes can be incentivised, Gupta acknowledges that the required changes will take time. However, his recommended actions are relevant right now for everyone, from company executives to rank-and-file workers. Everyone has a part to play, and what follows is how we can play it.

Community

At the heart of Gupta’s talk is his desire to more closely involve the community in which an AI product will be deployed. Even when this consideration is taken into account, more attention needs to be paid to our reliance on second-hand sources to represent community experiences, which do not provide an accurate picture of the community involved at all. In this way, false gatekeepers gather momentum as supposed representatives of the community’s views, however inaccurate their account may be.

To resolve this, the short run offers only rigid options, such as sending out more surveys or delaying the deployment of the product. Nonetheless, Gupta points out that there are stop-gap measures we can apply in the short term, chief among them inviting stakeholders into the early stages of the design cycle so that we are directly informed by their views. Actively working with the community to co-design such projects will require hard work, but the dividends will pay out when the AI products eventually produced actually aid the communities in which they are deployed.

Open source: a free lunch, or are there problems?

While community-directed efforts can contribute to an open source model (like the one we adopt here at MAIEI) making strides towards democratising AI, Gupta acknowledges that maintaining open source packages and keeping them up to date takes a great deal of investment and effort. That burden ends up falling on a core group of maintainers, who tend to suffer from burnout because they carry out the task on a volunteer basis. Support from big companies is therefore needed to lighten the workload for these maintainers.

Such necessary help from big companies then raises questions about where those companies have accrued their funding. Receiving help for open source projects would be a major boost, but more harm than good would be done if the funding comes from companies that are not aligned with the goals of the open source model. And even if such funding were available, what would it actually look like? In the meantime, private cloud ownership has become the norm, with companies focusing on producing their own cloud infrastructure, which openly designed, open source initiatives may never be able to match.

So as not to load his talk entirely with dismay, Gupta does refer to how organisations such as Compute Canada provide the cloud infrastructure necessary for research to take place in an open manner. From there, federated learning is helping to democratise the space by enabling large-scale models to be trained collaboratively, and TinyML is distilling larger ML models into forms more researchers can use. In this way, while the current private cloud infrastructure may be better resourced, the hacker collective movement is a nudge in the right direction which, given time, can help to democratise the AI space.
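
To make the federated learning point concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical technique behind the kind of collaborative training Gupta gestures at: each participant trains on its own private data, and only model weights are pooled, never the data itself. The linear model, learning rate, and two-client setup are illustrative assumptions, not details from the talk.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's training pass, run entirely on its own private data.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=10, dim=3):
    # Server loop: broadcast weights, collect local updates, average them.
    global_w = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in client_data]
        global_w = np.mean(local_ws, axis=0)  # server sees weights only
    return global_w

# Two hypothetical clients, each holding a private dataset drawn from the
# same underlying relationship.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

print(fed_avg(clients))  # approaches the shared weights [1.0, -2.0, 0.5]

The design point is that the server in fed_avg only ever handles model weights, which is what lets institutions collaborate on a shared model without surrendering their data.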

Why haven’t we seen change?

Despite the presence of initiatives such as open source models, Gupta considers why so little change has been seen in the business world. His response places the locus of the problem in companies’ current top-down initiatives: some leaders have initiated top-down approaches and implemented them in their businesses, but many companies have not done so. Gupta acknowledges that some company initiatives are clouded by the fact that the issues AI generates are two steps removed from those making the decisions, meaning less weight is given to solving them because of their intangibility. In this sense, the immediacy of the effects of AI-oriented decisions counts: the harms involved need to be made clear and tangible to decision-makers.

A further problem can be seen in a failure, in some instances, to practise what is preached within the AI space. For example, BJ Fogg and his team at the Stanford Persuasive Technology Lab gave rise to addictive technologies despite having talked about ethics since 2000. So why is this the case, and how can we prevent future repetitions?

In response, Gupta alludes to an absence of policy, which leaves two possibilities: companies continue to gain dominance in the space and evade ethical commitments, or social movements arise to try to correct this. The concern generated by a lack of policy is that the dominance being accrued by these organisations cannot be adequately halted by any social movement.

How do we incentivise this change?

As real as this threat is, Gupta offers different ways that companies, rank-and-file workers, and engineers can move towards effecting the required change in the space. For example, a chief ethics officer position with the power to actually implement change (rather than being a token position) is a great step towards companies being adequately held to account for their actions. Such a top-down approach, as previously mentioned, can then make it easier for rank-and-file employees and engineers to make the changes being asked of them. Employees reading this piece may ask how they could have the power to bring about such big changes, a question Gupta meets with his call for authenticity. Rank-and-file employees have more power than they may think, and bringing their uncompromised selves to work will go a long way towards cultivating a responsible AI culture.

Between the lines

For me, the importance of involving the community is the key takeaway from the talk. As I mentioned in my TEDx Youth talk, bridging the gap between AI and the public is the way to create truly helpful AI products that prioritise the community, rather than something the community must adjust to. The incentives needed to actually enact such change, whether top-down or through open source adoption, are long-term goals in the AI space, but Gupta’s talk is one of many that push us in the right direction.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
