Research Summary by Anuja Jaiswal, a human rights consultant with a special focus on tech accountability, transitional justice, and gender equality.
[Original paper by Timnit Gebru and Émile P. Torres]
Overview: Many organizations in the Artificial Intelligence (AI) field aim to develop Artificial General Intelligence (AGI), envisioning a "safe" system with unprecedented intelligence that is "beneficial for all of humanity." This paper argues that a system with "undefined" applications cannot be built safely and situates the push to develop AGI in the Anglo-American eugenics tradition of the twentieth century.
Introduction
Imagine a world where we could defer to a superintelligent entity designed to solve all our problems. Could it achieve world peace? End world hunger? Stop climate change? In reality, these issues do not persist simply because of a lack of intelligence or ability. Solving them requires collaboration and systemic change on an unprecedented level.
Nevertheless, there has been a recent increase in organizations aiming to develop Artificial General Intelligence (AGI). Some claim that their products are close to achieving this goal. The authors point out that previous attempts to develop AGI have been largely unsuccessful and, more importantly, have resulted in real harm.
Rather than assuming the continued push emerges from misguided optimism, they ask: "What ideologies are driving the race to attempt to build AGI?" They investigate these ideologies in three ways: (1) analyzing primary sources by leading figures in the movement to build AGI, (2) garnering information from investigative reporting on projects and financial connections between organizations, and (3) analyzing secondary literature on the history of eugenics, transhumanism, and other social phenomena.
Ultimately, the authors draw a disturbing link between the goal of building AGI and the Anglo-American eugenics movement via transhumanism. In doing so, they coin the phrase "TESCREAL bundle," which includes Transhumanism, Extropianism, Singularitarianism, (modern) Cosmism, Rationalism, Effective Altruism, and Longtermism. They conclude by encouraging researchers to prioritize safety and concentrate on building well-defined and well-scoped systems.
Methodology
The authors begin by tracing the historical background underpinning modern eugenics, partitioning it into two "waves." First-wave eugenics originated in the post-Darwinian work of Francis Galton, eventually declining in the 1970s. These eugenicists sought to improve the "human stock" by controlling reproductive patterns in two ways: (1) increasing the frequency of desirable traits (positive eugenics) and (2) preventing "unfit" individuals from passing their genes on (negative eugenics). Second-wave eugenicists emerged in the 1990s after advancements in genetic engineering and biotechnology made human "improvements" theoretically possible within a single generation.
Properties of the TESCREAL bundle
Although second-wave eugenicists affirm that their beliefs are not related to the discriminatory attitudes underpinning first-wave eugenics, the authors suggest this claim is dubious. They do so by introducing the TESCREAL bundle, which "exemplifies" second-wave eugenics. After describing the development of each constituent ideology – Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism – the authors highlight four properties shared by the TESCREAL bundle.
- Historical roots and contemporary communities
TESCREAL's constituent ideologies are intimately connected to transhumanism, initially developed by twentieth-century eugenicists. Therefore, they "have a common genealogy going back to first-wave eugenics." The authors also highlight "significant overlap" in contemporary communities, with many members, such as Sam Altman and Elon Musk, falling into multiple categories within the bundle.
- Eschatology – the part of theology concerned with death, judgment, and the ultimate fate of humanity.
Like religion, the TESCREAL bundle shares both utopian and apocalyptic convictions about humanity. The authors point to how the transhumanist project has been associated with both "paradise-engineering" and "a clear and future danger" to humanity. The latter is specifically connected to AGI. The authors outline the tension between two popular viewpoints: a "value-aligned" AGI could solve all our problems… but if it isn't properly aligned, the "default outcome" is "doom." However, the same prominent figures warning of existential risks contend that the potential benefits are worth the risk.
- Discriminatory attitudes
The authors state that "the same discriminatory attitudes that animated first-wave eugenics are pervasive within the TESCREAL community." They quote problematic statements by key figures and point out support for the work of Charles Murray, known for his scientific racism. They also note the pervasive focus on "intelligence" or IQ throughout the TESCREAL community, since the "obsession with IQ can be traced back to first-wave eugenicists."
- Influence and variants
The TESCREAL bundle of ideologies wields immense influence, especially within the tech industry. Figures who subscribe to these ideologies and/or their techno-utopian vision of the future include Elon Musk, Peter Thiel, Jaan Tallinn, Sam Altman, Dustin Moskovitz, Vitalik Buterin, Sam Bankman-Fried, and Marc Andreessen. Collectively, these current and former billionaires have contributed tens of billions of dollars to TESCREAL-aligned causes, funding that, the authors argue, has driven much of the research centred on creating AGI.
The AGI utopia and apocalypse: Two sides of the same coin
In the next section, the authors describe major figures and organizations associated with the TESCREAL movement, establishing a link between transhumanism and AGI. Although they acknowledge that people working on AGI may not see their proximity to TESCREAL views and communities, they argue that "TESCREAList ideologies drive the AGI race even though not everyone associated with the goal of building AGI subscribes to these worldviews."
The development of AGI is frequently associated with utopia and apocalypse. The authors frame these discussions as "two sides of the same coin." They describe the adverse impacts of efforts both to build AGI and to prevent the hypothetical AGI apocalypse.
- Building unscoped systems
The authors read the push to build AGI as emanating from earlier eugenicist ideals, affirming that the "quest to create a superior being akin to a machine-god has resulted in current (real, non-AGI) systems that are unscoped and thus unsafe." The race to create systems that purport to perform any task compromises safety since "one cannot design appropriate tests to determine what systems should and should not be used for."
- Building resource-intensive systems
A number of researchers have documented the staggering environmental costs of developing models for systems advertised as stepping stones toward AGI. Furthermore, the size of the datasets required for such models worsens the consequences of unscoped systems, since model-builders are less likely to document – or understand – their datasets at that scale. Both the environmental impacts and the unsafe outputs of these systems disproportionately affect marginalized groups – in other words, "the AGI race not only perpetuates these harms to marginalized groups, but it does so while depleting resources from these same groups to pursue the race."
- Evading accountability
Framing AI systems as akin to humans allows organizations to avoid accountability for the exploitative and deceptive practices that actively fuel the race to build AGI. The authors highlight several examples of harm caused to both workers and users.
- Co-opting safety
Emphasizing the "existential" risks of advanced AI systems allows those attempting to build AGI to evade accountability for the harms already caused by their attempts. The authors point out that "framing the AGI agenda as a safety issue allows companies working toward it to describe themselves as 'AI safety' organizations safeguarding humanity's future, while simultaneously creating unsafe products, centralizing power, and evading accountability."
Although the race to build AGI is described as a scientific and engineering endeavour, the authors contend that it is neither, functioning instead as a vehicle for eugenic ideals and diverting resources and attention away from useful research directions. Ultimately, they assert that attempting to build an "everything system" is an inherently unsafe practice, urging researchers and practitioners to focus instead on building "well-defined, well-scoped systems that prioritize people's safety."
Between the lines
Several leading figures in the AI field have considered how the development of Artificial General Intelligence may affect the future of humanity. This paper unravels the ideologies underpinning these discussions, showing how attempts to build AGI and address its "existential risks" both cause real, documented harm to people in the present. The authors' analysis of primary and secondary literature on the history of eugenics allows them to draw a link between this movement and contemporary ideologies espoused by leading figures in the AI field. Their use of such research methods speaks to the interdisciplinary scope of this paper, as it draws connections between the concerns of people in multiple fields, including computer science, human rights, public policy, and history.