Montreal AI Ethics Institute

Democratizing AI ethics literacy
The Sociology of Race and Digital Society

May 24, 2021

🔬 Research summary by Dr. Iga Kozlowska (@kozlowska_iga), a sociologist working on Microsoft’s Ethics & Society team where she guides responsible AI innovation.

✍️ This is part 11 of the ongoing Sociology of AI Ethics series; read previous entries here.


[Original paper by Tressie McMillan Cottom]


Overview: Tressie McMillan Cottom brings together the concepts of platform capitalism and racial capitalism to study how modern-day economic changes wrought by digital technology are reshaping ethnicity, race, and racism. She explores how ideas of race, racial relationships, and racial inequalities are produced and reproduced as more and more of our social lives are mediated online. She argues that by privatizing these interactions, the Internet obscures many of these racialized relationships between producers and consumers, and that the most vulnerable in society are brought into the fold, but usually on exploitative terms.


Introduction

Is the Internet racist? That’s certainly not how Tressie McMillan Cottom would formulate her research question, but in short, the intersections of race/racism and digital society are her key research areas. In this paper, McMillan Cottom argues that the sociology of race has largely ignored the digital, and where the Internet is studied, it is often without a coherent theoretical underpinning of race, ethnicity, and racism. She proposes exploring this space through platform capitalism and racial capitalism and where the two meet. More specifically, she sees racial capitalism through two emergent phenomena: obfuscation as privatization and exclusion by inclusion. Let’s explore these concepts first and then apply them to the design of AI.  

Platform capitalism tends to obfuscate the relationships between producers and consumers behind the digital screen. It hides the large amounts of data that it collects and locks them within walled gardens, making it difficult for consumers, the public, and researchers to access. By privatizing more and more social interactions through digital technologies, opaque commercial interests increasingly structure our relationships. Trade secrets and security are often reasons given for a lack of transparency. 

Platform capitalism excludes through “predatory inclusion” which is the “logic, organization, and technique of including marginalized consumer-citizens into ostensibly democratizing mobility schemes on extractive terms.” For example, online degrees, in theory, expand access to higher education but they also prey on predominantly lower-income African-American women to take out predatory loans. This results in huge costs to the student, particularly if they default, and big profit for the for-profit educational institution and the private loan lenders. We see similar exploitation in the “gig economy” (more from McMillan Cottom on The Hustle Economy). 

Thus given these recent phenomena, McMillan Cottom argues that “the study of race and racism in the digital society should theorize networked scale, the logics of obfuscation, and the mechanisms of predatory inclusion.” Here the theories of racial capitalism – how networked capitalism reshapes global racial hierarchies and desires –  come in handy to better understand how our online and offline lives are shaped and reshaped in racialized ways. So how can the concept of racial capitalism help inform the work of those who design and build platform services? 

Designing Racial Capitalism

As McMillan Cottom describes it, the availability of Internet communications in the last couple of decades has reshaped the economy, producing an informal economy of part-time gig workers, consultants, freelancers, and entrepreneurs who find and get paid for work online rather than through a traditional full-time employer-employee relationship with a state or firm. This is enabled through platforms that bring together service providers and buyers, like TaskRabbit, Upwork, Instacart, Uber, Lyft, and Amazon. This ecosystem of digital employment and services gives those who are unemployed, underemployed, or simply unable to make ends meet on a regular full-time job an opportunity to make extra cash on a one-off basis, without benefits and usually in extractive conditions (little control over scheduling, limited recourse against abuse on the job, digital surveillance, etc.). This informal economy relies on the most precariously situated workers in the formal economy, often women, people of colour, and immigrants. This racialized capitalist structure, rather than providing economic opportunity, serves to exacerbate racial and economic inequalities and to shift the burden and risks of work from employers onto workers, furthering the divide between capital and labour.

Knowing this, how can technology designers avoid contributing to these processes? Particularly in the space of AI? While many of the solutions will be on a macro-structural scale requiring public policy interventions, there are some things that the technology itself and those that build it can change. Let’s consider some AI design examples at all points of the machine learning development lifecycle.

Model Training: When designing facial recognition technologies for ride-sharing apps, for example, the algorithm needs to be assessed for racial impact to ensure it is not biased against people of colour, since misidentification can lead to job loss or lost pay and aggravate racial economic inequality. Preventing such harms may require retraining the model on better data, which may mean collecting a new dataset.
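One way to carry out such an assessment is to compare error rates across demographic groups before deployment. The sketch below is a minimal, hypothetical example, assuming a face-verification setting where a false rejection (a genuine driver wrongly rejected) can cost someone a shift; the group labels and data format are illustrative assumptions, not any real API.

```python
from collections import defaultdict

def false_rejection_rates(results):
    """Compute the per-group false rejection rate for a face-verification model.

    results: list of (group, genuine_match, accepted) tuples, where
    genuine_match is True when the photo really belongs to the driver,
    and accepted is the model's decision.
    """
    rejected = defaultdict(int)
    genuine = defaultdict(int)
    for group, is_genuine, accepted in results:
        if is_genuine:
            genuine[group] += 1
            if not accepted:
                rejected[group] += 1  # genuine driver wrongly rejected
    return {g: rejected[g] / genuine[g] for g in genuine}

# Toy audit sample: group "B" drivers are rejected twice as often as group "A".
audit = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", True, True),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, True),
]
rates = false_rejection_rates(audit)  # {"A": 0.25, "B": 0.5}
```

A large gap between groups is a signal to retrain on more representative data before the system is allowed to affect anyone's pay.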

Data Collection: When collecting data to improve AI algorithmic accuracy, care must be taken to ensure that the data is racially representative of the problem being solved by the technology. The data collection must match the purpose for which the algorithm trained on that data will be used. The process of data collection must also be culturally sensitive and non-exploitative. This means issues like transparency, meaningful consent, data subject rights, and appropriate remuneration given the cultural and economic context must be considered. While the inclusion of people of colour into training datasets is important so that models can be trained to avoid racial bias, this inclusion must not be predatory, for example taking someone’s image without their consent. 
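Representativeness can be checked mechanically before training by comparing the demographic make-up of the collected dataset against the population the model will actually serve. The sketch below is a hypothetical illustration; the group names, target shares, and the 10% tolerance are assumptions chosen for the example.

```python
from collections import Counter

def representation_gaps(sample_groups, target_shares, tolerance=0.10):
    """Return groups whose share of the sample deviates from the target
    population share by more than `tolerance` (absolute difference).

    sample_groups: list of group labels, one per data point.
    target_shares: dict mapping group label -> expected population share.
    """
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, target in target_shares.items():
        share = counts.get(group, 0) / total
        if abs(share - target) > tolerance:
            gaps[group] = round(share - target, 3)
    return gaps

# Toy dataset: group "A" is over-represented relative to the target population.
sample = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
target = {"A": 0.60, "B": 0.25, "C": 0.15}
gaps = representation_gaps(sample, target)  # {"A": 0.2}
```

A check like this only addresses statistical representativeness; the consent, remuneration, and cultural-sensitivity concerns above still have to be handled in the collection process itself.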

Model Deployment: Finally, any algorithms used for performance evaluation or hiring/firing decisions must, at a minimum, not be racially biased. Because of the sensitivity and impactful consequences of such algorithmically-based decision-making, a human-in-the-loop approach must be considered to avoid automated actions without human review. Additionally, workplace conditions should not be degraded through the use of technology (e.g. surveillance mechanisms) that diminishes workers’ freedoms, privacy, and dignity. For example, driver monitoring systems or warehouse worker tracking systems should consider issues around notice and consent, minimization of data collection, the time and place of personal data storage, the right to object to automated processing, and the right to contest automated decision-making. Technology designers and builders should speak up when there is no way to design a system that is not racially and/or economically exploitative given the socioeconomic context in which that technology will be deployed.
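A human-in-the-loop approach can be made concrete as a routing rule: no adverse action is ever applied automatically, and borderline favourable decisions also go to a reviewer. The sketch below is a minimal illustration under assumed thresholds; the function name, score semantics, and cut-offs are hypothetical, not a reference to any real system.

```python
def route_decision(score, adverse_threshold=0.5, confidence_margin=0.1):
    """Decide whether a model's call on a worker can be auto-applied.

    score: the model's estimated probability that the worker should be
    retained (higher is more favourable).
    """
    if score < adverse_threshold:
        return "human_review"  # adverse outcome: a human always decides
    if score < adverse_threshold + confidence_margin:
        return "human_review"  # favourable but low-confidence: still reviewed
    return "auto_approve"      # clearly favourable: safe to automate

route_decision(0.3)   # "human_review" (adverse)
route_decision(0.55)  # "human_review" (borderline)
route_decision(0.9)   # "auto_approve"
```

The key design choice is the asymmetry: automation is permitted only in the direction that cannot cost a worker their job or pay.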

Between the lines

Just as sociologists of digital society must consider race and racism, so race scholars must no longer relegate the Internet to the theoretical periphery. The same goes for practitioners. AI/ML researchers, data scientists and engineers, and UX designers can no longer set questions of race, racism, and economic inequality aside. These questions cannot remain peripheral in the age of digital transformation and platform capitalism, because the very social institutions they concern are shaped and reshaped by the technology we build. The story doesn’t end at “build it and they will come.” Tech builders must ask the inevitable next question: “and then what?”

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.