
✍️ By Ismael Kherroubi Garcia.
Ismael is Founder & Co-lead of the Responsible Artificial Intelligence Network (RAIN), and Founder & CEO of Kairoi.
📌 Editor’s Note: This article marks the launch of Tech Futures, a collaborative series between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN). The series challenges mainstream AI narratives by centering rigorous research over industry claims. In this first instalment, RAIN examines anti-science currents running through Big Tech.
I guess, some day, we will have ‘God AI.’
This is what Nvidia CEO Jensen Huang had to say on the No Priors podcast just a few weeks ago. His claim was that ‘God AI’ will eventually come about, but that it will take much, much longer than a few months, years, or even decades. That “galactic” timeframe, Huang believes, means that concerns about whatever “God AI” turns out to be should not be front and centre in discussions about AI.
With this claim, Huang charged directly at the “Effective Altruism” movement, which holds that the well-being of future generations should guide our actions today. In Effective Altruism circles, Huang’s “God AI” is often referred to as “superintelligence,” following the book of the same name by now-disgraced philosopher Nick Bostrom. In the long term, Effective Altruists see superintelligence as capable of posing an existential risk.
This speculative threat is what inspired the 2023 open letter calling for a pause on “giant AI experiments.”
Huang’s charge against Effective Altruism culminated in the following statement during the interview: “When PhDs of this and CEOs of that go to government [describing] end-of-the-world scenarios and extremely dystopian futures, you have to ask yourself, ‘what is the purpose of that narrative?’”
Huang says he does not know the answer, but one of the interviewers infers that the narrative may help large corporations promote regulations that make it impossible for new startups to pose any major threat; that is, the narrative supports regulatory capture.
In claiming ignorance about CEOs’ motivations for peddling certain narratives, Huang overlooks that he, too, is a CEO peddling a narrative of his own. And there is a subtle but important component in his remark: a retort against “PhDs.” Indeed, what have PhD candidates, who often work under precarious conditions, done to warrant Huang’s frustration? The threat Huang perceives is science.
Science has become a problem for Big Tech CEOs in the AI space.
“Artificial intelligence” was coined in 1955 to name a new academic field of research that sought to encode human capabilities in machines. Today, some AI research continues to pursue that original question; this might be termed fundamental research. But AI research now also encompasses a wide range of practices and techniques that are valuable to many other domains, such as biology, quantum mechanics and materials science.
Moreover, academia has been a space for numerous studies that critique the ongoing proliferation of commercial AI products. Scientific conferences such as FAccT and academic publications such as AI and Ethics instigate and host important reflections that often counter the narratives that Big Tech CEOs and investors want the general public to believe.
Grounding AI in rigorous research threatens the narratives that have driven the soaring stock valuations of the Magnificent 7 (Alphabet, Amazon, Apple, Tesla, Meta, Microsoft, and Nvidia) in recent years. It is this new ambiguity surrounding the term “AI” that allows Big Tech to exploit confusion.
Framing the issue as a fight that Big Tech is taking to science helps explain why Anthropic’s CEO spoke of a “powerful AI” that is “smarter than a Nobel Prize winner across most relevant fields” in 2024, and why the CEOs behind AI chatbots “Grok” and “ChatGPT” spoke of their products operating at a “PhD level” in 2025.
That same framing explains Big Tech’s obsession with education, a sector it has flooded with shiny AI tools that undermine the learning process. The results are not encouraging.
As the OECD reported on January 19th:
When AI removes the productive struggle essential for learning, students may complete tasks faster and achieve better immediate results, but their understanding may be less deeply consolidated. This can diminish cognitive stamina, deep reading, sustained attention and perseverance. Without a clear pedagogical purpose, GenAI can foster what researchers call ‘metacognitive laziness’ and disengagement.
In a similar vein, some Big Tech firms have produced AI training courses that further reinforce their own interests and perspectives on AI. This is true even of the UK government’s AI Skills Hub, where 60% of the free content is produced by tech companies.
All of this points to what stands in the way of Big Tech’s financial gains: a well-informed consumer. Knowledge is power, and taking control of what consumers know and think about AI is the path Big Tech has chosen to concentrate its own.
MAIEI has long stood at the forefront of making AI knowledge accessible. Through this segment, Tech Futures, a collaboration with RAIN, we will endeavour to further bridge the gap between researchers at the cutting edge of AI and the diverse publics affected by AI policies and products.
Photo credit:
Zoya Yasmine / Better Images of AI / CC BY 4.0
