🔬 Research summary by Andrea Owe, environmental, space, and AI ethicist, and Research Associate at the Global Catastrophic Risk Institute.
[Original paper by Andrea Owe and Seth D. Baum]
Overview: AI can have significant effects on domains associated with sustainability, such as aspects of the natural environment. However, sustainability work to date, including work on AI and sustainability, lacks clarity on the ethical details, such as what is to be sustained, why, and for how long. Differences in these details have important implications for what should be done, including for AI. This paper provides a foundational ethical analysis of sustainability for AI and calls for work on AI to adopt a concept of sustainability that is non-anthropocentric, long-term oriented, and morally ambitious.
Introduction
Sustainability is widely considered a good thing, especially in relation to environment-society interactions. It is in this spirit that recent initiatives on AI and sustainability have emerged, such as the conference AI for People: Towards Sustainable AI, of which this paper is part. But what exactly should be sustained, and why? Should, for example, the natural environment be sustained only to the extent that it supports sustaining human populations, or should natural ecosystems and nonhuman populations be sustained for their own sake? Is it enough to sustain something for a few generations, or should sustainability endure into the distant future? Is sustainability even enough, or should we strive toward loftier aspirations? These are important ethical questions whose answers carry diverging implications for AI.
This paper surveys existing work on AI sustainability, finding that it lacks clarity on its ethical dimensions. This is shown through quantitative analysis of AI ethics principles and research on AI and sustainability. The paper then makes a case for a concept of sustainability for AI that is long-term oriented, including time scales in the astronomically distant future, and non-anthropocentric, meaning that humans should not be the only entities sustained for their own sake. The paper additionally suggests the more ambitious goal of optimization rather than sustainability.
Key Insights
The ethical dimensions of sustainability
To understand the ethics of sustainability for AI, it is essential to first understand the ethics of sustainability itself. In its essence, “sustainability” simply refers to the ability of something to continue over time; the thing to be sustained can be good, bad, or neutral. However, common usage of the term assumes that the thing to be sustained is some combination of social and ecological systems. The most prominent definition is that of the 1987 Brundtland Report, which defines sustainable development as “meeting the needs of the present without compromising the ability of future generations to meet their own needs.” Since then, “sustainability” has been widely applied, often in ways that are imprecise or inconsistent with the basic idea of the ability to sustain something. This paper argues that usage of the term should be sharpened, and specifically that it should address three ethical questions:
- What should be able to be sustained, and why? For example, common conceptions of sustainability are anthropocentric in that they only aim to sustain humans for their own sake, with the natural environment or other nonhumans sustained only for the benefit of humans. In contrast, a wide range of moral philosophy calls for non-anthropocentric ethics that value both humans and nonhumans for their own sake.
- For how long should it be able to be sustained? There is a big difference between sustaining something for a few days or indefinitely into the distant future. For example, the Brundtland Report’s emphasis on future generations implies a time scale of at least decades, but how many future generations? The limits of known physics suggest that it may be possible to sustain morally valuable entities for millions, billions, or trillions of years into the future, or even longer.
- How much effort should be made for sustainability? Should a person or an organization give “everything they’ve got” to advance sustainability or is just a little effort enough? How much should sustainability be emphasized relative to other competing values? The Brundtland definition was specifically crafted to acknowledge the competing values of present and future generations.
The paper additionally compares sustainability to the ethics concept of optimization. Sustainability means enabling something to be sustained in at least some minimal form, whereas optimization means making something the best that it can be. For example, the Brundtland Report calls for the present generation to act “without compromising the ability of future generations to meet their own needs”. Arguably, the present generation should act to enable future generations to do much better than meeting their basic needs. Likewise, if human civilization has to focus on sustaining itself rather than on loftier goals like optimization, then it is in a very bad situation.
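The contrast can be put in simple formal terms (a formalization added here for illustration; the paper itself states the distinction in prose). Writing $v(t)$ for the morally valued state of the world at time $t$:

$$ \text{sustainability:}\quad v(t) \ge v_{\min}\ \text{for all } t \qquad \text{vs.} \qquad \text{optimization:}\quad \max \sum_{t} v(t) $$

Sustainability is a constraint: it says nothing about how far above the minimum $v(t)$ rises. Optimization is an objective: it asks for the best achievable trajectory, of which mere sustainability is a lower bound.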
Empirical findings: AI and sustainability
Based on these ethical dimensions, the paper presents a quantitative analysis of published sets of AI ethics principles and academic research on AI and sustainability. It finds that most work on AI and sustainability focuses on common conceptions of socio-environmental sustainability, with smaller amounts of work on the sustainability of AI systems and other miscellaneous things. Further, most work is oriented toward sustaining human populations, with AI and the environment having value only insofar as they support human populations. Most work does not specify the timescale of sustainability or the degree of effort to be made, and overall lacks clarity on the ethical dimensions presented above.
The case for long-term, non-anthropocentric sustainability
Following these findings, the paper gives its own answers to the ethical questions. First, sustainability should be non-anthropocentric, meaning that both humans and nonhumans should be sustained for their own sake. This is motivated by the scientific observation that humans are members of the animal kingdom and part of nature, and by the fact that nonhumans often possess attributes considered morally significant, such as the ability to experience pleasure and pain or to have a life worth living. Second, sustainability should focus on long timescales, including the astronomically distant future. This is motivated by a principle of equality across time: everything should be valued equally regardless of the time period in which it exists. Third, a large amount of effort should be made toward sustainability, and optimization should be emphasized over sustainability where the two diverge. Long-term sustainability of any Earth-originating entities will eventually require expansion into space, making it necessary to first address major threats on Earth, such as global warming and nuclear warfare. Additionally, the astronomically distant future offers astronomically large opportunities for advancing moral value, making an objective of optimizing moral value diverge significantly from an objective of merely sustaining it.
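To make the equality-across-time principle concrete, it can be written in the standard notation of temporal discounting (again a formalization added for illustration, using the same $v(t)$ as above). Total value is

$$ V = \sum_{t=0}^{T} d(t)\, v(t), \qquad d(t) = (1+\delta)^{-t}, $$

where $d(t)$ is the weight given to value at time $t$ and $\delta$ is the discount rate. Equality across time corresponds to $\delta = 0$, so $d(t) = 1$ for all $t$: value in the astronomically distant future counts exactly as much as value today, which is why long timescales carry so much weight in the paper's analysis.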
Implications for AI
Finally, the paper presents implications of the above for AI.
- First, AI should be used to improve long-term sustainability and optimization. For current and near-term forms of AI, this includes addressing immediate threats to the sustainability of global civilization, such as global warming and pandemics.
- Second, attention should be paid to long-term forms of AI, which could be particularly consequential for long-term sustainability and optimization. Long-term AI is seldom discussed in relation to sustainability, but the paper argues that it deserves a more central place in work on AI and sustainability. Long-term AI could bolster efforts to address threats such as global warming, and it could also pose threats of its own, especially in runaway AI scenarios. Furthermore, it could play an important role in space expansion, which is central to the long-term sustainability and optimization of moral value.
Between the lines
In sum, this paper calls for work on AI and sustainability to be specific about its ethical basis and to adopt non-anthropocentric, long-term oriented concepts of sustainability or optimization. In practice, that entails focusing on applying AI to address major global threats and on improving the design of long-term AI, in order to ensure the long-term sustainability of civilization and to pursue opportunities to expand civilization into outer space. Actions involving AI are among the most significant ways to affect the distant future. The field of AI therefore has special opportunities to make an astronomically large positive difference.