By Volha Litvinets, Ph.D. student at Sorbonne University in Paris, philosopher & tech ethicist with a digital marketing background. Writing a thesis on the ethics of artificial intelligence.
We are living in an interesting time: information has never been more accessible, and the quantity of data produced is growing at an incredible speed. With the help of artificial intelligence, we have learned to process this data better, building models and making predictions. Technological innovations affect all of us by simplifying our daily actions, both intellectual and routine. But along with its many benefits, artificial intelligence and its technical ecosystem also bring fears, challenges and ethical risks that should not be ignored.
The paradox of privacy: beyond the GDPR
61% of users prefer to place their voice assistant in the kitchen, and they find it very useful: voice assistants can search for information, order pizza and even tell stories. But what kinds of information do Alexa, Siri, Cortana, Google Home, Yandex Alice and other “connected home” tools really collect? Our voices, our children’s voices, our faces, our search histories, all of our connected objects, our habits, our friends and an impressive amount of other private data are only a few examples of what these “assistants” know about us. The question is: what will be done with all this data? It is not AI that we must fear, but the uses it is put to.
Despite the undeniable progress brought about by the GDPR, the ethics of massive data collection and use runs into the paradox of privacy: even without deliberately exposing our personal lives online, cookies and trackers still record the keywords of our searches, the contents of our mobile phones and computers, and even data about our phone calls and emails. In the U.S., for example, ethics and politics are in open conflict: one side warns against the government’s potential intrusion into citizens’ private lives, while the other, no less democratic, defends such measures in the name of security.
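To make the mechanism concrete, here is a minimal sketch, in Python, of how a third-party tracker can stitch together a cross-site profile from a single cookie identifier. All names here (TrackerStore, visitor_id, the example URLs) are hypothetical and invented for illustration; real trackers are far more elaborate.

```python
# Illustrative sketch only: a toy model of third-party cookie tracking.
# All names (TrackerStore, visitor_id, example URLs) are hypothetical.
import uuid
from collections import defaultdict

class TrackerStore:
    """Accumulates browsing events keyed by a persistent cookie ID."""
    def __init__(self):
        self.profiles = defaultdict(list)

    def get_or_set_cookie(self, cookies: dict) -> str:
        # A first visit plants a long-lived identifier; every later page
        # that embeds the same tracker sends the identifier back.
        if "visitor_id" not in cookies:
            cookies["visitor_id"] = uuid.uuid4().hex
        return cookies["visitor_id"]

    def record(self, cookies: dict, page_url: str, search_terms: list[str]):
        visitor = self.get_or_set_cookie(cookies)
        self.profiles[visitor].append({"url": page_url, "terms": search_terms})

tracker = TrackerStore()
browser_cookies = {}  # shared across every site embedding this tracker

tracker.record(browser_cookies, "https://shop.example/pizza", ["pizza", "delivery"])
tracker.record(browser_cookies, "https://news.example/health", ["insomnia"])

# The tracker now holds a cross-site profile under a single identifier:
print(tracker.profiles[browser_cookies["visitor_id"]])
```

The point of the sketch is that no single site needs to “expose” anything: the profile emerges simply because many unrelated pages report back to the same identifier.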
These issues go beyond simple technical errors caused by negligence or by the difficulty of foreseeing every decision an AI will make. Such cases occurred when Google’s systems tagged African Americans as gorillas and auto-completed texts with anti-Semitic and sexist phrases, or when Microsoft’s chatbot Tay was taken offline after posting racist tweets. While these cases are frustrating and offensive, they can be corrected relatively easily.
Social rating
In contrast, cases like the spread of fake news on Facebook and the subsequent manipulation of public opinion are not easy to solve. The heart of the problem does not necessarily lie with large internet companies, whose main purpose is not to monitor people but to sell their services and make a profit. The real political and ethical difficulty arises when a government, as in China, deliberately uses AI to build a social rating system: all available information about a person – their behavior, their purchases, their credit history, their movements in the city, their social environment and more – is combined to assign them a score. The television series Black Mirror famously portrayed such a system and seems to have anticipated reality. Total surveillance worries the very people it is supposed to protect, raising questions about democracy, privacy and security. Given these challenges, should we rethink the role of humans in the technological ecosystem?
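As a thought experiment, the following sketch shows what the core aggregation of such a scoring system might look like: disparate signals about a person reduced to a single number. Every category, weight and value here is invented for illustration; none of it describes any real system.

```python
# Toy illustration of a "social rating" aggregator.
# Categories, weights and values are invented for this sketch;
# they do not describe any real system.
WEIGHTS = {
    "credit_history": 0.35,
    "purchases": 0.15,
    "online_behavior": 0.25,
    "social_circle": 0.25,
}

def social_score(signals: dict[str, float]) -> int:
    """Combine per-category signals (each normalized to 0..1)
    into a single score between 0 and 1000."""
    raw = sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return round(raw * 1000)

citizen = {
    "credit_history": 0.9,   # pays bills on time
    "purchases": 0.7,
    "online_behavior": 0.4,  # flagged posts lower this signal
    "social_circle": 0.6,    # friends' scores feed back in
}
print(social_score(citizen))  # 670: one number standing in for a person
```

Even this toy version makes the ethical problem visible: the weights encode a political judgment about which behaviors matter, and the “social_circle” signal means a person’s score can fall because of what their friends do.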
How do we define humans?
Let’s talk philosophy. According to Hegel, human beings, much like animals, have natural needs and desires: food, drink, shelter and, above all, self-preservation. But humans differ fundamentally from other animals in that they want to be recognized. This need for recognition pushes human beings to produce ever more, which favors the development of technology but can also lead to hyper-consumption that goes far beyond meeting human needs and can ultimately cause harm. It should be added that this essential desire for recognition has driven the rapid growth of social networks of every kind, which, as a side effect, has fed AI technologies with “big data”: an exhaustive compilation of people’s personal information.
According to Lewis Mumford, the human is the “one who produces tools.” But do humans still produce tools, or do the tools now produce themselves? Technology, in fact, seems to have become an end in itself. We live in technologies, for technologies and because of technologies. Günther Anders brilliantly defended this view in The Obsolescence of Humankind, showing how humans end up being excluded from the technical process. At the same time, humans become a sort of appendage to the omnipresent advertising of the world of mass consumption. Machines produce machines in a technical runaway that Heidegger, in his time, characterized as a process without end. Francis Fukuyama’s idea of the end of history and the last man, developed more recently, underlines the dangers of the prosperous era brought about by the triumph of technology: according to him, widespread consumption could lead to the destruction of humanity through the development of biotechnology. Far from perfecting human nature, biotechnology would instead create a frightening “post-humanity”.
Data as a product
AI technologies are deeply embedded in marketing and advertising, becoming ever more ubiquitous and inconspicuous. As a consequence, we are less and less aware of the extent to which our choices are manipulated and of how little freedom of choice we have left.
Does this not highlight the terrible paradox of our increasingly AI-dominated world? Modern Western societies were built on an ideal of freedom of choice. This freedom is referred to as the “autonomy” of the individual, which, through democratic political institutions, is considered the foundation of the sovereignty of the people. Yet our contemporary societies seem to be handing much of the individual’s control over to AI, to the point where the very autonomy granted to individuals ends up being denied.
Many technologies challenge personal freedom because they are developed by companies whose business models rely on collecting and processing users’ personal data for sale. The product for sale is therefore completely transformed: it is no longer a good or a service, but the data itself. Our hobbies, interests and personalities have become a currency, because when everything is free, the user becomes the product.
What is the connection between artificial intelligence, marketing, and philosophy?
Philosophers certainly help us ask the right questions, pointing out that AI technologies in the service of marketing and advertising should be developed according to ethical and social principles. Digital marketing companies are increasingly aware of the importance of sustainability and are now looking to create products and offer services that respect human ethical values. In Europe, the subject of sustainability is widely discussed. The negative impacts of recent technological and scientific progress force us to think about our responsibility as individuals and as a society. These impacts include industrial risks, especially in the areas of information technology, network development and information security. The concept of sustainable development, inspired by Hans Jonas’ philosophy and popularized by the Brundtland Report in 1987, is a fundamental ethical benchmark in this field. It puts forward a simple criterion: all development must meet our present needs without compromising the ability of future generations to respond as freely as possible to their own needs. Our technological progress should therefore not seriously compromise the freedom of choice, and of course the survival, of future generations.
Are the companies engaged in the rise of AI able to integrate these principles into the development of their technologies? What solutions do the key players adopt in this area? Big tech companies are already consulting and recruiting philosophers to develop processes for the ethical evaluation of research.
The approach to marketing itself needs to change. By working together, philosophers, technicians, data engineers, marketers and programmers can create value and develop industries while respecting the principle of sustainable development. The only way to escape the risk of human obsolescence is to act responsibly and overcome our ignorance, so as not to lose our humanity.
Bibliography:
GĂĽnther Anders, The Obsolescence of Humankind
Hans Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age
Francis Fukuyama, The End of History and the Last Man
Lewis Mumford, The Myth of the Machine
The French version of this article is available at Vuiz.com.