
Tech Futures: Co-opting Research and Education

February 3, 2026

[Image description: A large, white, traditional old building. The top half represents the humanities, symbolised by faint text from classic literature overlaid on the facade; the bottom section is embossed with mathematical formulas to represent the sciences. The middle layer of the image is heavily pixelated. On the front steps, a group of scholars in formal suit-and-tie attire stands talking at the entrance, some sitting on the steps. Two stone, statue-like hands stretch the building apart from the left side. In the foreground, a mix of eight students, seen only from the back and wearing graduation gowns with bright blue hoods, walk towards the building in the distance.]

✍️ By Ismael Kherroubi Garcia.

Ismael is Founder & Co-lead of the Responsible Artificial Intelligence Network (RAIN), and Founder & CEO of Kairoi.


📌 Editor’s Note: This article marks the launch of Tech Futures, a collaborative series between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN). The series challenges mainstream AI narratives by centering rigorous research over industry claims. In this first instalment, RAIN examines anti-science currents running through Big Tech.


I guess, some day, we will have ‘God AI.’

This is what Nvidia CEO Jensen Huang had to say on the No Priors podcast just a few weeks ago. The claim was that ‘God AI’ will eventually come about, but it will take much, much longer than a few months, years, or even decades. That “galactic” timeframe, Huang believes, means that concerns about whatever “God AI” will be shouldn’t be front and centre in discussions about AI.

With this claim, Huang charged directly at the “Effective Altruism” movement, which holds that the well-being of future generations should guide our actions today. In Effective Altruism circles, Huang’s “God AI” is often referred to as “superintelligence,” following the book of the same name by now-disgraced philosopher Nick Bostrom. What’s more, in the long term, Effective Altruists see superintelligence as capable of causing existential risk.

This speculative threat is what inspired the 2023 open letter calling for a pause on “giant AI experiments.”

Huang’s charge against Effective Altruism culminated in the following statement during the interview: “When PhDs of this and CEOs of that go to government [describing] end-of-the-world scenarios and extremely dystopian futures, you have to ask yourself, ‘what is the purpose of that narrative?’”

Huang says he does not know the answer, but one of the interviewers infers that the narrative may help large corporations promote regulations that make it impossible for new startups to pose any major threat; that is, the narrative supports regulatory capture.

In claiming ignorance about the motivations of CEOs who peddle certain narratives, Huang overlooks that he, too, is a CEO peddling a narrative of his own. And there is a subtle but important component in his remark: a retort against “PhDs.” Indeed, what have PhD candidates, often working under precarious conditions, done to warrant Huang’s frustration? The threat Huang perceives is science.

Science has become a problem for Big Tech CEOs in the AI space.

“Artificial intelligence” was coined in 1955 to name a new academic field of research that sought to encode human capabilities in machines. Some AI research today still pursues that founding question; this might be termed fundamental research. But AI research now also encompasses a wide range of practices and techniques that are valuable to many other domains, such as biology, quantum mechanics and materials science.

Moreover, academia has been a space for numerous studies that critique the ongoing proliferation of commercial AI products. Scientific conferences such as FAccT and academic publications such as AI and Ethics instigate and host important reflections that often counter the narratives that Big Tech CEOs and investors want the general public to believe.

Grounding AI in rigorous research threatens the narratives that have driven the soaring stock values of the Magnificent 7 (Alphabet, Amazon, Apple, Tesla, Meta, Microsoft, and Nvidia) in recent years. It is the new ambiguity surrounding the term “AI” that allows Big Tech to exploit public confusion.

Framing the issue as a fight that Big Tech is taking to science helps explain why Anthropic’s CEO spoke of a “powerful AI” that is “smarter than a Nobel Prize winner across most relevant fields” in 2024, and why the CEOs behind AI chatbots “Grok” and “ChatGPT” spoke of their products operating at a “PhD level” in 2025.

That same framing explains Big Tech’s obsession with education, a sector it has flooded with shiny AI tools that undermine the learning process. The results are not good.

As the OECD reported on January 19th:

When AI removes the productive struggle essential for learning, students may complete tasks faster and achieve better immediate results, but their understanding may be less deeply consolidated. This can diminish cognitive stamina, deep reading, sustained attention and perseverance. Without a clear pedagogical purpose, GenAI can foster what researchers call ‘metacognitive laziness’ and disengagement.

In a similar vein, some Big Tech firms have produced AI training courses that further reinforce their own interests and perspectives on AI. This is true even of the UK government’s AI Skills Hub, where 60% of the free content is produced by tech companies.

Ultimately, what stands in the way of Big Tech’s financial gains is a well-informed consumer. Knowledge is power, and taking control of what consumers know and think about AI is the path Big Tech has chosen to concentrate its own.

MAIEI has long stood at the forefront of making AI knowledge accessible. Through Tech Futures, this collaborative series with RAIN, we will endeavour to further bridge the gap between researchers at the cutting edge of AI and the diverse publics affected by AI policies and products.

Photo credit: Zoya Yasmine / Better Images of AI / CC BY 4.0

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
