Montreal AI Ethics Institute

Democratizing AI ethics literacy


3 activism lessons from Jane Goodall you can apply in AI Ethics

October 8, 2019

Jane Goodall, one of the world’s most influential and beloved advocates for nature conservation, delivered the 2019 Beatty Lecture at McGill University on Thursday, September 26. Dr. Goodall delivered her first Beatty Lecture in 1979, where she shared stories about her groundbreaking research on chimpanzee behaviour in Gombe, Tanzania. To celebrate the 65th anniversary of the Beatty Lecture, Dr. Goodall returned to McGill forty years later to talk about the critical need for environmental stewardship and the power each individual has to bring about change. She is the first repeat lecturer in the Beatty’s history.


AI Ethics as a field needs more activists. It isn’t enough to know what’s wrong and what needs to be done — we need people campaigning to bring about the change they want to see. This is an uphill battle, and activism has never been glamorous. But by drawing inspiration from activists in other fields, we can make our jobs a little easier. To that end, let’s look at three principles we can extract from Jane Goodall’s environmental activism and apply in AI Ethics.

1) The first step is to build career capital: a set of rare and valuable skills. A snowball effect will ensue.

Jane didn’t start by going to college and studying biology or the environment — in fact, she couldn’t afford college, so she went to secretarial school instead. Once she had acquired that set of skills, it was by sheer chance that Louis Leakey, the paleoanthropologist who would take a chance on Jane and change the course of her life, needed a secretary for a research project he was about to pursue in Africa. She saw it as an apprenticeship opportunity that would let her learn something she was interested in.

She pursued her passion for animals and Africa to Gombe, Tanzania, at the age of 26, where she began her pioneering research into the behaviour of wild chimpanzees. Her discovery in 1960 that chimpanzees make and use tools rocked the scientific world and redefined the relationship between humans and animals. In 1961, she entered Cambridge University as a Ph.D. candidate, one of the few people in history to be admitted there without a university degree. She earned her Ph.D. in ethology in 1966.

If you want to learn how to build career capital in AI Ethics, this 80,000 Hours article is a comprehensive guide on how to do that: Guide to working in AI policy and strategy.

2) Not being an “insider” with regard to domain expertise can be an advantage.

One of the criticisms that Jane received in her early days working with chimpanzees was that she was breaking all the traditional rules. For example, she was giving the chimps names. She also talked about them having their own minds, emotions, and culture. Her peers were explicitly trained to suppress these types of behaviours. But it was exactly this “weakness” of Jane’s that later turned out to be a strength: it allowed her to bond with and truly understand the chimps at a deeper level than anybody else, which would lay the groundwork for her ensuing groundbreaking discoveries. In fact, her supervisor later told Jane that he’d learned more about animal behaviour from her in Gombe than he had in the entirety of his career.

In the AI Ethics community, we often hear from people who aren’t AI technical specialists that they feel they’re at a disadvantage. But what seems like a disadvantage may turn out to be a strength in disguise: we need more people who aren’t already loaded with the exact same mental models as everyone else in the field. We need social scientists, including anthropologists, economists, and sociologists. In fact, OpenAI wrote a paper called AI Safety Needs Social Scientists. Here’s an excerpt:

“Properly aligning advanced AI systems with human values will require resolving many uncertainties related to the psychology of human rationality, emotion, and biases. These can only be resolved empirically through experimentation — if we want to train AI to do what humans want, we need to study humans.”

3) You may have to stop doing the work you love to do higher-impact work.

Although she loved doing research with the chimpanzees in Africa, Jane realized there was a greater problem of chimpanzee and general ecosystem preservation across the world. It was bittersweet, but after attending a conference on environmental preservation, she felt it was her duty. “When I went to that conference, I was a scientist. When I left, I was an activist,” she says.

In 1977, she established the Jane Goodall Institute (JGI) to advance her work around the world and for generations to come. JGI continues the field research at Gombe and is a global leader in the effort to protect chimpanzees and their habitat. JGI is also widely recognized for building on Dr. Goodall’s work in community-centred conservation, which recognizes the central role that people play in the well-being of animals and the environment. 

In 1991, she founded Roots & Shoots, a global program that guides young people in more than 50 countries in becoming conservation activists and leaders in their daily lives. Today, Dr. Goodall travels the world, speaking about the threats facing chimpanzees, environmental crises, and her reasons for hope. Dr. Goodall is a UN Messenger of Peace and Dame of the British Empire.

In the AI Ethics community, we’re seeing a lot of software engineers, data scientists, and product managers who love their jobs moving away from their work to make an impact by working on the ethics side. One example would be Tristan Harris, who left his job at Google to build The Center for Humane Technology, whose mission is to “reverse human downgrading by realigning technology with our humanity.”

If you want to learn more about conservation and activism, take Jane’s Master Class here. The broad principles can certainly be applied in other fields, including AI Ethics. 

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.




  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.