Top-level summary: When pushing for the adoption of ethical values and guidelines in the development and deployment of AI systems, we often face resistance from various parts of an organization, with rationalizations ranging from claims that it might negatively impact the business to a deferment strategy that makes largely hollow promises to address those concerns at some indeterminate time in the future. This paper by Anton Korinek presents some of the conflicts that arise between the economic and ethical perspectives, and shows how taking a single-minded approach leads to skewed solutions that can negatively impact people. While the paper covers a wide variety of economic concerns, what caught our eye was its elucidation of how economic values are usually expressed as single-dimensional metrics that are easily quantifiable and amenable to human decision making and reasoning.
This is contrasted with the multi-dimensional nature of ethical values, which are subjective and abstract, require significant effort to reason about, and hence aren't very amenable to justifiable decision making; somewhat like the black-box nature of deep learning systems, they are encoded in our brains and aren't easily explained. The paper also highlights how traditional economic theory and ethical theories, taken in isolation, stand at loggerheads, and how squaring them within the cultural context of a society is crucial to arriving at a framework that helps steer progress in AI so that it brings as much societal benefit as possible. It concludes with a look at a far future in which we might have superintelligence, and at how humans might be displaced much as oxen were during the Industrial Revolution, outcompeted by machines once the costs of maintaining them exceeded the benefits they brought. To avoid such a fate, in which we find ourselves snared in a Malthusian trap once again, we have to take an active role in steering the progress of AI rather than succumbing to technological fatalism.
The rise of AI systems leads to an unintended conflict between economic pursuits, which seek to generate profits and value resources appropriately, and the moral imperatives of promoting human flourishing and creating societal benefits from the deployment of these systems. This raises a central question: what are the impacts of creating AI systems that might surpass humans in a general sense and leave them behind?
Technological progress doesn't happen on its own; it is driven by conscious human choices that are influenced by the surrounding social and economic institutions. We are collectively responsible for how these institutions take shape and thus for how they impact the development of technology; submitting to technological fatalism isn't a productive way to align our ethical values with this development. We need to play an active role in shaping this most consequential piece of technology. While the economic system relies on market prices to gauge what people place value on, that is by no means a comprehensive evaluation. For example, it misses the impact of externalities, which can be factored in by treating ethical values as a complement in guiding our decisions on what to build and how to value it.
When thinking about losses from AI-enabled automation, a straightforward argument that economists might make is that if replacing labor lowers the costs of production, then it might be market-efficient to invest in technology that achieves that. From an ethicist's perspective, job loss carries severe negative externalities, and thus it might be unethical to impose labor-saving automation on people.
Unpacking the economic perspective further, we find that the cost of job loss isn't correctly captured by wages as the price of labor. There are associated social benefits, like the company of workplace colleagues, a sense of meaning, and other values of social structure, that can't be purchased separately in the market. Thus, a purely economic perspective on automation adoption decisions is incomplete, and it needs to be supplemented by the ethical perspective.
Market price signals provide useful information, up to a point, about the goods and services that society values. Suppose that people start to demand more eggs from chickens that are raised humanely; suppliers will then shift their production to respond to that market signal. But consumers can only send such price signals about things they can observe. Many unethical actions are hidden and hence can't be factored into market prices. Additionally, several things, like social relations, aren't tradable in a market, so their value can't be determined from the market viewpoint alone.
Thus, both economists and ethicists would agree that there is value in steering the development of AI systems with both kinds of considerations in mind. Purely market-driven innovation will ignore societal benefits in the interest of generating economic value, while labor will be forced into unwilling sacrifices in the interest of long-run economic efficiency. Economic market forces shape society significantly, whether we like it or not. Both disciplines carry professional biases, rooted in selection and cognition, that lead each to argue that its perspective should dominate based on its perceived importance. The point is that bridging the gap between the disciplines is crucial to arriving at decisions that are grounded in evidence and that benefit society holistically.
There are also fundamental differences between the economic and the ethical perspective: economic indicators are usually unidimensional and have clear quantitative values that make them easy to compare. Ethical indicators, on the other hand, are inherently multi-dimensional and subjective, which not only makes comparison hard but also limits our ability to explain how we arrive at them. They are encoded deep within our biological systems and suffer from the same lack of explainability as decisions made by artificial neural networks, the so-called black-box problem.
Why is it, then, despite the arguments above, that the economic perspective dominates the ethical one? Largely because economic values provide clear, unambiguous signals, which our ambiguity-averse brains prefer, while ethical values are subtler, more hidden, and more ambiguous indicators that complicate decision making. Secondly, humans are prosocial only up to a point: they are able to weigh economic against ethical considerations at the micro level because the effects are immediate and observable, for example polluting a neighbor's lawn and seeing the direct impact of that activity. For things like climate change, where the effects are delayed and not directly observable as a consequence of one's own actions, individuals prioritize economic values over ethical ones.
Cynical economists will argue that there is a comparative advantage in being immoral that leads to gains in exchange, but this triggers a race to the bottom in terms of ethics. Externalities are an embodiment of the conflict between economic and ethical values. Welfare economics deals with externalities via mechanisms like permits and taxes that curb the impacts of negative externalities, and via incentives that promote positive ones. But this rich economic theory needs to be supplemented by political, social, and ethical values to arrive at something that benefits society at large.
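To make the textbook mechanism concrete, here is a minimal sketch in Python of a Pigouvian tax on hypothetical linear demand and cost curves (none of the numbers come from the paper): the market overproduces when producers ignore the harm to third parties, and a per-unit tax equal to the marginal external cost realigns the private optimum with the social one.

```python
# A minimal sketch (not from the paper) of how a Pigouvian tax can
# internalize a negative externality, using hypothetical linear curves.
# Demand: p = 100 - q; private marginal cost: p = 20 + q;
# each unit also imposes an external cost of 10 on third parties.

DEMAND_INTERCEPT = 100.0
PRIVATE_MC_INTERCEPT = 20.0
EXTERNAL_COST_PER_UNIT = 10.0  # harm the producer ignores

def equilibrium_quantity(mc_intercept: float) -> float:
    """Quantity where demand (100 - q) meets marginal cost (mc_intercept + q)."""
    return (DEMAND_INTERCEPT - mc_intercept) / 2.0

# The market equates demand with *private* marginal cost and overproduces.
market_q = equilibrium_quantity(PRIVATE_MC_INTERCEPT)

# Society should equate demand with *social* marginal cost
# (private cost plus the externality).
social_q = equilibrium_quantity(PRIVATE_MC_INTERCEPT + EXTERNAL_COST_PER_UNIT)

# A per-unit tax equal to the marginal external cost shifts the producer's
# cost curve up, so the market equilibrium coincides with the social one.
pigouvian_tax = EXTERNAL_COST_PER_UNIT

print(f"Market quantity (externality ignored): {market_q}")  # 40.0
print(f"Socially optimal quantity:             {social_q}")  # 35.0
print(f"Corrective per-unit tax:               {pigouvian_tax}")
```

The same logic underlies permits and subsidies: each instrument changes the private cost or benefit until it matches the social one.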
From an economic standpoint, technological progress is cast as expanding the production possibilities frontier, which means that it raises output and presumably standards of living. Yet this view ignores how those benefits are distributed; it considers only material gains and leaves everything else out.
Prior to the Industrial Revolution, people were stuck in a Malthusian trap: technological advances created material gains, but these were quickly consumed by population growth that kept standards of living stubbornly low. This changed after the revolution; as technological improvement outpaced population growth, quality of life rose. The last four decades have been a mixed experience, though: automation has eroded lower-skilled jobs, forcing displaced people to keep looking for work, and the lower demand for unskilled labor coupled with the inelastic supply of labor has led to lower wages rather than unemployment. High-skilled workers, on the other hand, have been able to leverage technological progress to enhance their output considerably, and as a consequence the income and wealth gaps between low- and high-skilled workers have widened tremendously.
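The trap can be illustrated with a toy simulation (the functional forms and parameters below are entirely our own assumptions, not the paper's): when technology improves slowly, population growth soaks up the gains and income per capita stays pinned near subsistence; when it improves fast enough, income escapes.

```python
# A toy Malthusian dynamic (all functional forms and parameters are our own
# assumptions, not from the paper). Income per capita rises with technology,
# but population grows whenever income exceeds subsistence, pulling income
# back down -- unless technology outpaces the cap on population growth.

SUBSISTENCE = 1.0
MAX_POP_GROWTH = 0.02   # populations can't grow faster than a biological cap
POP_RESPONSE = 0.5      # how strongly population reacts to surplus income

def income_after(periods: int, tech_growth: float) -> float:
    """Simulate income per capita (= technology / population) over time."""
    technology, population = 1.0, 1.0
    for _ in range(periods):
        income = technology / population
        # Population expands with income above subsistence, up to the cap.
        growth = min(POP_RESPONSE * (income - SUBSISTENCE), MAX_POP_GROWTH)
        population *= 1 + growth
        technology *= 1 + tech_growth
    return technology / population

# Slow progress: population growth absorbs the gains and income per capita
# stays near subsistence -- the Malthusian trap.
print(income_after(200, tech_growth=0.005))   # ~1.01

# Fast progress (post-Industrial Revolution): technology outpaces the
# maximum population response and income per capita escapes subsistence.
print(income_after(200, tech_growth=0.05))    # grows without bound
```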
Typical economic theory predicts income and wealth redistribution whenever there is technological innovation: the more significant the innovation, the larger the redistribution. Something as significant as AI crowns new winners, those who own the new factors of production, while also creating losers who face negative pecuniary externalities. These are externalities because the people affected never explicitly consent to the impacts on their capital, their labor, and the other factors of production they hold.
The distribution can be analyzed from the perspective of strict utilitarianism (different from utilitarianism in ethics, where, for example, Bentham describes it as the greatest amount of good for the greatest number of people). Here it means tolerating any income redistribution in which it is acceptable for all but one person to lose income, as long as the single person's gain exceeds the sum of the losses. This view is clearly unrealistic because it would further exacerbate inequities in society. The other idea is that of lump-sum transfers, the idealized scenario in which redistribution, for example compensating the losers from technological innovation, occurs without causing any other market distortions. But that is also unrealistic, because such redistribution never happens without market distortions, and hence it is not an effective way to think about economic policy.
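To see how permissive this criterion is, consider a toy calculation in Python (the numbers are hypothetical, chosen by us): the test blesses a change in which nine out of ten people lose, as long as the tenth gains more than the others lose combined.

```python
# A toy illustration (hypothetical numbers, not from the paper) of the
# strict-utilitarian criterion described above: a change is "acceptable"
# whenever the sum of gains exceeds the sum of losses, regardless of who
# gains and who loses.

def passes_strict_utilitarian_test(income_changes: list[float]) -> bool:
    """Accept any change whose net effect on total income is positive."""
    return sum(income_changes) > 0

# Nine people each lose 10; one person gains 100.
changes = [-10.0] * 9 + [100.0]

print(sum(changes))                             # 10.0: a net gain overall
print(passes_strict_utilitarian_test(changes))  # True, despite 9 of 10 losing
```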
From an ethics perspective, we must make value judgments about how we weigh dollar losses for a person of lower socio-economic standing against dollar gains for a person of higher socio-economic standing, and whether that squares with the culture and values of the society in question. We can think about the tradeoff between economic efficiency and equality, where the tolerance for inequality varies with the societal structures already in place. One would also have to reason about how redistribution creates more-than-proportional distortions as it rises, and how much economic efficiency we would be willing to sacrifice for gains in how equitably income is distributed.
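One common way economists encode such a value judgment is a concave social welfare function; we sketch it here under our own assumption of logarithmic utility (the paper does not commit to a specific welfare function). Concavity mechanically assigns more weight to a dollar at the bottom of the distribution than at the top.

```python
# A minimal sketch of the value judgment above (log utility is our own
# assumed welfare function, not the paper's): under a concave social
# welfare function, a dollar lost by a low-income person weighs more
# than a dollar gained by a high-income person.

import math

def marginal_value(income: float) -> float:
    """Marginal social utility of income under log utility: d/dy log(y) = 1/y."""
    return 1.0 / income

low, high = 20_000.0, 200_000.0

# A marginal dollar matters 10x more to the poorer person.
print(marginal_value(low) / marginal_value(high))  # 10.0

# Consequently a $1,000 transfer from rich to poor raises total welfare
# even though total income is unchanged.
before = math.log(low) + math.log(high)
after = math.log(low + 1_000) + math.log(high - 1_000)
print(after > before)  # True
```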
Thus, one way to steer progress in AI is to ask whether we want to pursue innovation that we know will have negative labor impacts, knowing full well that economic policy will not offer those affected any reasonable compensation.
Given the pervasiveness of AI, and by virtue of its being a general-purpose technology, entrepreneurs and others powering innovation need to take into account that their work will shape larger societal changes and have impacts on labor. At the moment, economic incentives steer progress towards labor-saving automation because labor is one of the most highly taxed factors of production. Shifting the tax burden to other factors of production, including automation capital, would help steer innovation in other directions. Government, as one of the largest employers and an entity with huge spending power, can also help steer innovation by setting policies that encourage enhancing productivity without necessarily replacing labor.
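As a back-of-the-envelope illustration (all rates and costs below are hypothetical, not from the paper), differential taxation can make automation the cheaper option even when it is the more expensive one before tax.

```python
# A back-of-the-envelope sketch (hypothetical rates and costs, not from the
# paper) of how differential taxation can tilt firms toward automation even
# when labor and machines are equally productive before tax.

PAYROLL_TAX = 0.30       # assumed tax wedge on labor
CAPITAL_TAX = 0.05       # assumed effective tax on automation capital

labor_cost = 50_000.0    # pre-tax annual wage for a task
machine_cost = 55_000.0  # pre-tax annualized cost of automating the same task

effective_labor = labor_cost * (1 + PAYROLL_TAX)      # 65,000
effective_machine = machine_cost * (1 + CAPITAL_TAX)  # 57,750

# The machine is more expensive pre-tax, yet cheaper after tax, so the
# tax system itself steers the firm toward labor-saving automation.
print(effective_labor, effective_machine)
```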
There are novel ethical implications and externalities that arise from the use of AI systems. An analogue from the Industrial Revolution: a factory might be economically efficient in terms of production, yet generate so much pollution that the social cost outweighs the economic gain.
Biases can be deeply entrenched in AI systems, for example through unrepresentative datasets, as with hiring decisions made on historical data. But even if a dataset is well represented and minimally biased, and the system is never exposed to protected attributes like race and gender, a variety of proxies, such as zip code, can effectively reconstruct those protected attributes and lead to discrimination against minorities.
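Here is a contrived sketch of that proxy problem on synthetic data (the segregation rates and the decision rule below are our own assumptions): a model that never sees the protected attribute can still recover it from zip code alone.

```python
# A contrived sketch (synthetic data, hypothetical numbers) of the proxy
# problem described above: even when the protected attribute is withheld,
# a correlated feature like zip code lets a model reconstruct it.

import random

random.seed(0)

# Hypothetical city where zip codes are residentially segregated:
# zip "A" is 90% group 1, zip "B" is 90% group 0.
def sample_person():
    zipcode = random.choice(["A", "B"])
    group = (random.random() < 0.9) if zipcode == "A" else (random.random() < 0.1)
    return zipcode, int(group)

people = [sample_person() for _ in range(10_000)]

# A "model" that never sees the protected attribute, only the zip code,
# can still guess group membership far better than chance.
def guess_group(zipcode: str) -> int:
    return 1 if zipcode == "A" else 0

accuracy = sum(guess_group(z) == g for z, g in people) / len(people)
print(f"Protected attribute recovered from zip code alone: {accuracy:.0%}")
# ~90%: any decision rule that uses zip code implicitly uses group membership.
```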
AI systems can trigger maladaptive behaviors in humans by deeply personalizing the targeting of ads and other media, nudging us towards whatever serves profit-making: binge-watching videos, shopping on e-commerce platforms, news cycles on social media, and so on. Conversely, they can also be used to encourage better behaviors, for example fitness trackers that give us quantitative measurements of how well we're taking care of our health.
An economics equivalent of Bostrom's paperclip optimizer is the way human autonomy can be eroded over time as economic inequality rises: those who are displaced lose control over economic resources and thus, at least from an economic standpoint, over their own destinies. This will only be exacerbated as AI pervades more and more aspects of our lives.
Labor markets have built-in features that help tide workers over unemployment with as little harm as possible, via quick hiring in other parts of the economy when innovation creates parallel demand for labor in adjacent sectors. But when there is large-scale disruption, it is not possible to accommodate everyone, and this leads to large economic losses via a fall in aggregate demand that can't be restored with monetary or fiscal policy actions. The result is wasted economic potential and welfare losses for the workers who are displaced.
Whenever there is a discrepancy between ethical and economic incentives, we have an opportunity to steer progress in the right direction. We've discussed before how market incentives can trigger a race to the bottom in terms of morality. This needs to be preempted via instruments like Technological Impact Assessments, akin to Environmental Impact Assessments; but often the impacts are unknown prior to the deployment of the technology, at which point we need a multi-stakeholder process that allows us to combat harms dynamically. Political and regulatory entities typically lag technological innovation and can't be relied upon alone to take on this mantle.
The author raises a few questions about the role of humans and how we might be treated by machines should superintelligence arise (estimates of when it might be realized still differ widely, from the next decade to the second half of this century). What is clear is that the abilities of narrow AI systems are expanding, and it behooves us to give some thought to the implications of the rise of superintelligence.
In this superintelligence scenario, the potential for labor replacement would, from an economic perspective, have significant existential implications for humans. Beyond inequality, questions of human survival arise if the wages paid to labor fall widely below subsistence levels. It would be akin to the fate of oxen used to plough fields: once agriculture was mechanized, the cost of maintaining them outweighed the benefits they brought. In an ouroboros-like turn, we could find ourselves caught again in the Malthusian trap that held before the Industrial Revolution, no longer able to grow beyond basic subsistence even if that were technologically possible.
Original piece by Anton Korinek: https://www.nber.org/papers/w26130.pdf