🔬 Research summary by Sarah P. Grant, a freelance writer dedicated to covering the implications of AI and big data analytics.
[Original paper by Gerald C. Kane, Amber Young, Ann Majchrzak, and Sam Ransbotham]
Overview: Broad adoption of machine learning systems could usher in an era of ubiquitous data collection and behaviour control. However, this is only one potential path for the technology, argue Gerald C. Kane et al. Drawing on emancipatory pedagogy, this paper presents design principles for a new type of machine learning system that acts on behalf of individuals within an oppressive environment.
Introduction
“It is capitalism that assigns the price tag of subjugation and helplessness, not the technology,”
asserts Shoshana Zuboff in her bestselling book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.
In contrast, some academics argue that technology itself can be inherently oppressive. In their paper about emancipatory assistants, Kane et al. demonstrate that machine learning systems have several oppressive features, including the tendency to optimize “on outcomes for large samples at the expense of [individual users].”
These oppressive characteristics mean that as systems fuelled by machine learning seep into every aspect of life–from job searching to reading the news–individuals face more limits on their freedoms. Addressing this problem, argue Kane et al., requires innovative approaches to machine learning system design.
Using the emancipatory pedagogy of Brazilian educator and philosopher Paulo Freire as a foundation, the paper presents design principles for a new type of machine learning system called the “emancipatory assistant”–an agent that “would help individuals express and enact their preferences” in a world of pervasive data extraction and behaviour manipulation.
The rise of Informania
To illustrate how an emancipatory machine learning system would work, the authors paint a picture of a dystopian future called Informania. In Informania, machine learning systems “optimize on outcomes for millions (or billions) of users, with little regard for individual rights within the collective.”
Such a future is becoming more likely, the authors state, pointing to China’s social credit system and the COMPAS algorithm used in the US criminal justice system. The authors also describe how, in a system of unchecked free-market capitalism, “multiple organizations could develop [machine learning] infrastructures…resulting in a massive [behaviour] control infrastructure.”
While the authors note that such an outcome represents “the logical conclusion of our current trajectory,” they also emphasize that Informania’s oppression “need not necessarily arise from malicious intent.” Acting on behalf of the individual, an emancipatory assistant would help redress power imbalances within Informania.
Machine Learning Systems: Oppressive Features
Before describing in detail what an emancipatory system would look like, the authors demonstrate how machine learning systems are “inherently oppressive” by applying theoretical constructs of emancipation and oppression. For example, many algorithms use past behaviours to filter the information that appears on newsfeeds and product recommendations. This impacts a person’s “freedom to think” by controlling the amount and type of information available for making decisions.
A significant share of the paper is devoted to specifying how the basic machine learning model, in contrast to conventional code-based systems, is oppressive by nature. The authors state that machine learning systems are oppressive because they:
- optimize on outcome variables, which typically benefit the platform above individual users;
- are based on training data that may reflect historical biases;
- are opaque and difficult to understand;
- typically don’t incorporate user feedback.
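To make the pattern concrete, here is a minimal, purely hypothetical sketch of a recommender exhibiting the features listed above. None of this code comes from the paper; the function and data names are my own illustration.

```python
# Hypothetical sketch: a toy engagement-maximizing recommender that
# exhibits the oppressive pattern the authors describe.

def rank_items(items, historical_clicks):
    """Rank items purely by aggregate historical click counts.

    - Optimizes an outcome variable (total clicks) that serves the platform.
    - Inherits whatever bias is baked into the historical click data.
    - Gives the user no insight into, or feedback channel over, the ranking.
    """
    return sorted(items,
                  key=lambda item: historical_clicks.get(item, 0),
                  reverse=True)

# Aggregate clicks reflect past (possibly biased) collective behaviour.
clicks = {"celebrity_gossip": 9000, "local_news": 1200, "science": 300}
ranking = rank_items(["science", "local_news", "celebrity_gossip"], clicks)
# Whatever the collective clicked most always surfaces first,
# regardless of what this individual user actually wants.
```

However a given user's preferences differ from the aggregate, the ranking never changes: the individual has no lever on the objective.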
Later in the paper, the authors describe four modifications to the basic machine learning model that will “yield distinctive design features” of an emancipatory assistant.
The role of emancipatory assistants
Referring to past research, the paper describes emancipation as “a theoretical state in which power dynamics between agents are neutral or equal.” Within an oppressive machine learning environment, an emancipatory assistant could act as an intermediary that helps individual users achieve more power.
The authors argue that critical social theory is well-suited for the development of new machine learning design principles. Freire’s emancipatory pedagogy in particular “provides ready-made pedagogical steps to foster concrete gains of emancipation.”
For example, Freire did not push for the oppressed to overthrow the oppressors, but rather that they work together in a new type of co-education. In a similar way, the emancipatory assistant could facilitate a process of mutual inquiry, “first by helping an individual uncover his or her authentic preferences and desires and then by providing Informania with a mechanism to factor those desires into its optimization function.”
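One simple way to picture such a mechanism is a score that blends the platform’s objective with the user’s declared preferences. This is a hypothetical sketch under my own assumptions, not the paper’s design; the weighting scheme and every name in it are illustrative.

```python
# Hypothetical sketch of "a mechanism to factor those desires into its
# optimization function": score each item as a weighted blend of the
# platform's engagement objective and the user's declared preferences.

def blended_score(item, platform_score, user_preference, alpha=0.5):
    """Blend the platform objective with the individual's preference.

    alpha = 1.0 reproduces Informania's pure engagement ranking;
    alpha = 0.0 ranks purely on the individual's stated preferences.
    """
    return alpha * platform_score[item] + (1 - alpha) * user_preference[item]

# Illustrative data: an engagement-friendly item vs. one the user wants.
platform_score = {"ads_heavy": 0.9, "career_change_guide": 0.2}
user_preference = {"ads_heavy": 0.1, "career_change_guide": 0.95}

# With an even blend, the user's authentic preference can outweigh
# the platform's engagement-only score.
scores = {item: blended_score(item, platform_score, user_preference)
          for item in platform_score}
```

The design choice here is the blending weight: who sets alpha, and how transparently, is exactly the kind of power question the paper raises.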
Key Design Principles
The authors identify key design principles for emancipatory assistants, which optimize for:
- #1. Richness of Preferences: Emancipatory assistants can help users provide Informania with more detail about an individual’s interests. For example, the assistant could help an individual who wants to change careers overcome Informania’s assumption that job history indicates future job preferences.
- #2. Recognizing Conflict: An emancipatory assistant can help users recognize when their goals conflict with the goals of Informania. For example, when searching for the best loan options for purchasing a home, the assistant could help users present attributes to Informania that lead to better pricing.
- #3. Personalized Storytelling: Emancipatory assistants can help users manage information sharing based on different contexts. According to the authors, “users might be more comfortable with complete information to a spouse but might restrict slightly to children and restrict even further to potential employers.”
- #4. Alternative Perspectives: Rather than encouraging people to click on the same types of news articles they have read in the past, the assistant “can provide a richer article landscape and indirectly encourage critical consciousness.” Emancipatory assistants can help individuals “develop the robust rationality needed to think critically about the world around them.”
While the authors predict that Informania will dominate in the short term, they envision a longer-term future with a more balanced power dynamic between emancipatory assistants and Informania. This would necessitate establishing a certification body, as well as audit committees, to promote compliance with standards for these newer types of machine learning systems.
Between the lines
This is an important paper because it encourages more expansive thinking within the field of machine learning. By drawing on established theories from multiple domains, it could also foster more interdisciplinary collaboration.
While the authors do touch on algorithmic literacy in this paper, further research could investigate the implications of divisions in algorithmic awareness. For example, one survey of internet users in Norway (where 98% of the population has internet access) found that education is strongly linked to algorithm awareness, with awareness lowest among the least educated group. Groups with low algorithm awareness were also more likely to hold neutral attitudes towards algorithms.
It could be argued, then, that many individuals who would benefit from emancipatory assistants may not be motivated or may not have the resources to use such systems. Future research could address how new types of machine learning systems could yield emancipatory outcomes for all users of Internet-based platforms–and not just a privileged few.