Abhishek Gupta on AI Ethics at the HBS Tech Conference (Keynote Summary)

March 15, 2021

Keynote summary by Connor Wright, our Partnerships Manager.

Overview: The AI space has been dominated by controversy in recent months, especially surrounding moves made by Google. In his talk at the Harvard Business School Tech Conference, our founder Abhishek Gupta aims to explain how the space can go about effecting the necessary change. From business executives to rank-and-file employees, there are things everyone can do to take part in this change, but they must be done sooner rather than later.


Introduction

In an AI space dominated by big ethical headlines, such as the unfortunate events surrounding Timnit Gebru and Margaret Mitchell and the ACM Conference on Fairness, Accountability, and Transparency (FAccT) suspending its Google sponsorship, Abhishek Gupta offers his view on how the space can move forward. Centering on community involvement, open source models, and how such changes can be incentivised, Gupta acknowledges that the required changes will take time. However, his recommended actions are relevant for everyone, from company executives to rank-and-file workers. Everyone has a part to play now, and the following is how we can play it.

Community

The crux of Gupta’s talk, and the inspiration behind it, is his desire to more closely involve the community in which an AI product will be deployed. Even when this consideration is taken into account, more attention needs to be paid to how we rely on second-hand sources to represent community experiences, sources which do not provide an accurate picture of the community involved. In this way, false gatekeepers gather momentum as supposed representatives of the community’s views, even though their account may not be accurate at all.

In the short run, the options for resolving this are fairly rigid: sending out more surveys, delaying the deployment of the product, and so on. Gupta points out that these are stop-gap measures; what they gesture towards is inviting stakeholders into the early stages of the design cycle so that we are directly informed by their views. Actively working with the community to co-design and cooperate on such projects will require hard work, but the dividends will pay out when the AI products eventually produced actually aid the communities in which they are deployed.

Open source: a free lunch, or are there problems?

While such community-focused efforts can contribute to an open source model (like the one we adopt here at MAIEI) making strides towards democratising AI, Gupta acknowledges that maintaining open source packages and keeping them up to date requires a lot of investment and effort. That burden ends up falling on a core group of maintainers, who tend to suffer from burnout as they carry out the task on a volunteer basis. Support from big companies is therefore needed to lighten the workload for these maintainers.

Such necessary help from big companies, however, brings questions about where those companies have accrued their funding. Receiving help for open source projects would be a major boost, but it would do more harm than good if the funding came from companies that are not aligned with the goals of the open source model. And even if such funding were available, what would it actually look like? In the meantime, private cloud ownership has become the norm, with companies focusing on producing their own cloud infrastructures, which openly designed and openly sourced initiatives may never be able to match.

So as not to fill his talk completely with dismay, Gupta refers to organisations such as Compute Canada that provide the cloud infrastructure necessary for research to take place in an openly sourced manner. From there, federated learning is helping to democratise the space by enabling large-scale models to be built collaboratively, while TinyML distils larger ML models into smaller ones for researchers to use. In this way, while the current private cloud infrastructure may be better resourced, the hacker collective movement is a nudge in the right direction which, given time, can help to democratise the AI space.

Why haven’t we seen change?

Despite the presence of initiatives such as open source models, Gupta considers why so little change has been seen in the business world. His response places the locus of the problem in companies’ current top-down initiatives: some leaders have initiated top-down approaches and implemented them in their businesses, but many companies have not done so. Gupta also acknowledges that some company initiatives are clouded by the fact that the issues generated by AI are two steps removed from those making the decisions, meaning less weight is given to solving them due to their intangibility. In this sense, the immediacy of the effects of AI-oriented decisions counts: the effects of such harms need to be made clear to decision-makers.

A further problem can be seen in instances of failing to practise what you preach within the AI space. For example, BJ Fogg and his team at Stanford’s Persuasive Technology Lab gave rise to addictive technologies despite having discussed ethics since 2000. So why is this the case, and how can we prevent future renditions?

In response, Gupta points to an absence of policy, which leaves two possibilities: companies continue to gain dominance in the space and evade ethical commitments, or social movements arise and strive to correct this. The concern generated by a lack of policy is thus that the dominance being accrued by different organisations cannot be adequately halted by any social movement alone.

How do we incentivise changing this?

As real a threat as this is, Gupta offers different ways that companies, rank-and-file workers, and engineers can move towards effecting the required change in the space. For example, a chief ethics officer with the power to actually implement change (rather than holding a merely token position) is a great step towards companies being adequately held to account for their actions. Such a top-down approach, as previously mentioned, can then make it easier for rank-and-file employees and engineers to make the changes being desired. Employees reading this piece may ask how they can have the power to bring about such big changes, which Gupta meets with his call for authenticity: rank-and-file employees have more power than they may think, and bringing their uncompromised selves to work will go a long way towards cultivating a responsible AI culture.

Between the lines

For me, the importance of involving the community is the key takeaway from the talk. As I mentioned in my TEDx youth talk, bridging the gap between AI and the public is the way to create truly helpful AI products that prioritise the community, rather than products the community must adjust to. The incentives to actually enact such change, whether top-down or through open source adoption, are long-term goals in the AI space, but Gupta’s talk is one of many that push us in the right direction.
