AI research, development, and deployment suffer from a “social sciences deficit.” AI systems are built predominantly by technologists, using datasets that are often divorced from their collection contexts. Only after these systems have been developed and deployed in society do social scientists evaluate their harms and benefits. We believe social science perspectives should be introduced earlier in the AI workflow: from conception through development and deployment to maintenance. The authors of this column will bring sociological perspectives to the AI and AI ethics communities.
Our Project
The motivating goal of our column is to bring together sociology and AI. Specifically, we want to show how we can use sociological theory, methods, and research to inform the building of ethical AI. We believe that AI is a set of socio-technical systems that can benefit or harm humanity depending on how we approach them. In other words, our sociological perspective informs our understanding of AI as a social practice as much as a constellation of hardware and software. The way we build AI, from research to design to deployment, matters.
Each week, this column will summarize an article, book, or piece of sociological research and explore how it can be applied to the field of ethical AI. By demystifying sociological concepts and eschewing disciplinary jargon as much as we can, we hope to bring actionable sociological insights to a technical audience that is actively developing AI and wants to do it responsibly.
Who Are We?
Iga Kozlowska is a sociologist on Microsoft’s Ethics & Society team, where she guides responsible AI innovation. She received her PhD in sociology from Northwestern University in 2017. Her dissertation used a variety of qualitative methods to explore how collective memories of communism have shaped European integration and cultural identity. Her passion now is bringing sociology to tech to make technology, and thereby society, better for all.
Nga Than is a doctoral candidate in sociology at the City University of New York – The Graduate Center. She is a Mellon Digital Publics Fellow at the Center for the Humanities at the Graduate Center and a research affiliate at the University at Buffalo. Her research interests include computational social science, social media, the sociology of work, and entrepreneurship. As a mixed-methods scholar, she has conducted qualitative research using interviews and participant observation, and has applied machine learning methods to analyze text and administrative data.
Abhishek Gupta is the Founder and Principal Researcher at the Montreal AI Ethics Institute and a Machine Learning Engineer and CSE Responsible AI Board Member at Microsoft. His work focuses on applied technical and policy measures to build ethical, safe, and inclusive AI systems. He has built the largest public consultation group on AI ethics, and he frequently advises national governments and public institutions on AI ethics and national AI strategies. His forthcoming book Actionable AI Ethics provides practical guidance to engineers in the field to put AI ethics principles into practice.
Why Sociology?
We center sociology because we believe it has tremendous untapped potential to make AI more ethical, and it happens to be the field we know best. We remain committed to an interdisciplinary approach, however, and will frequently borrow from other disciplines investigating AI, such as anthropology, philosophy, media and communication studies, and legal studies.
We acknowledge that sociology is, in fact, a little late to the party: other fields, including psychology, cognitive science, human-computer interaction, and anthropology, not to mention computer science, have been analyzing the ethics of AI for a long time. For too long, sociologists marginalized the study of technology on the premise that machines and computers are not social beings and therefore fall outside our scope of inquiry. This is rapidly changing, however, particularly in science and technology studies (STS), where there is growing recognition that technological artifacts do shape human interaction and behavior, from the micro level of the individual to the macro level of whole societies.
We also want to recognize that sociology itself is not a discretely bounded discipline, so we will not split hairs over what kind of work is in or out. We’re most interested in inviting a multitude of diverse and new perspectives that will help technologists build better AI, whatever field they come from. We acknowledge that our readers, who come from all walks of life, may not be familiar with the social sciences in general. With this column, we hope to bridge that disciplinary gap, and we expect other social scientists working on AI to help us along the way.
What Can Sociology Do For Me?
Computer scientists working on AI are likely familiar with legal, behavioral, or psychological perspectives that center the individual “user.” They may get this exposure from their design and UX/UI researcher colleagues or even privacy or legal compliance professionals. Sociology complements that individualized perspective by taking a more holistic view.
A sociological perspective prompts us to zoom out and consider not only how the direct user of a technology will be affected but also how entire groups of people who share demographic characteristics might be affected. Pushing further, sociology asks how whole institutions, nations, and cultures are shaped by technology, and, vice versa, how technological outcomes are shaped by those institutions and actors. By expanding our horizon, sociology helps us uncover social patterns we may not have noticed before. For example, if facial recognition fails for a Black woman testing the tool, the sociological imagination prompts us to ask whether this is a systematic error affecting a whole population, and it guides us toward an intersectional analysis that considers race and gender together rather than one at a time, as the sketch below illustrates.
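To make this concrete, here is a minimal sketch of what such a disaggregated, intersectional error analysis might look like in practice. The file name, column names, and group labels are illustrative assumptions, not a real benchmark or dataset.

```python
# A minimal sketch of disaggregated error analysis for a classifier,
# assuming a hypothetical evaluation set ("predictions.csv") with
# per-example demographic labels. All names here are illustrative.
import pandas as pd

# Assumed columns: y_true, y_pred, race, gender
df = pd.read_csv("predictions.csv")
df["error"] = (df["y_true"] != df["y_pred"]).astype(int)

# The overall error rate can hide subgroup disparities.
print("Overall error rate:", df["error"].mean())

# Disaggregate by single attributes first...
print(df.groupby("race")["error"].mean())
print(df.groupby("gender")["error"].mean())

# ...then intersectionally: a system can look acceptable on race and
# gender separately while failing badly for one intersectional group,
# e.g., Black women, which is exactly what single-axis checks miss.
print(df.groupby(["race", "gender"])["error"].agg(["mean", "count"]))
```

Reporting the group sizes (`count`) alongside the error rates matters: a subgroup with few test examples can make a disparity look smaller, or larger, than it really is.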
This socially aware perspective should prompt AI developers to ask different questions. What does the cost-benefit analysis of this AI tool look like when I consider a broader set of stakeholders or overlapping identities? How will my technology shift the balance of power between different groups (e.g., employers and employees), for better or worse? Sociology also sensitizes us to the complexities of social life, helping us better understand the nuances of concepts like race, class, and gender that AI systems measure, aggregate, and score every day.
By contextualizing these social concepts, sociology helps us escape the trap of technological determinism: the belief that technology follows its own course and inevitably brings social progress. It enables us to approach problem solving in a socially conscious way that centers human experience rather than technology, and it frees us to ask whether an AI technology should be built at all. Asking these questions matters because it helps technologists build AI solutions that benefit people and that people can trust, and it keeps us from building AI that harms people or diminishes our humanity.
To conclude, sociologists will be among the first to point out that AI developers are not solely responsible for building AI ethically. Many deeply entrenched, historical social problems cannot be solved with better AI or any particular technology; they will require the engagement of many stakeholders and the expertise of researchers from many disciplines. That’s why we need all hands on deck. Please join us in this conversation to explore how sociology can be part of the solution. We hope you’ll enjoy our column!