*Link to original paper + authors at the bottom
Overview: This paper lays out the implications of India’s prevalent informal labor markets and the associated social hierarchies, which impose a double precarity on workers’ lives: they are marginalized both by digital platforms and by reinforced societal inequities.
- Informal work : This includes atypical, non-standard, self-generated, and home-based work that is unregistered (or too small to be registered) and is typically not accounted for, taxed, or regulated because there is no clear employer-employee relationship.
This is particularly prevalent in emerging economies like India, where a lot of “hidden” labor faces a disproportionate burden of labor harms because it is not folded into labor protections from a legal standpoint.
- Artificial Intelligence : This report uses AI as shorthand for algorithmic platforms, examining how they alter work arrangements for those who demand and supply services and products on those platforms.
- Heteromation : An interesting term! It refers to positioning human labor alongside a machine rather than having a machine replace humans. The thing to note here is that such a positioning makes a natural segue into human-in-the-loop (HITL) conversations, but see some of my work on why HITL is insufficient as a lens for assessing these automated systems.
The Indian context
Unsurprisingly, the most common uses of algorithmic labor assignment are in the domains of ride-hailing (Ola, Uber), food delivery (Zomato, Swiggy), and in logistics and retail (Flipkart, Amazon).
The particular issue of double marginalization, as we will explore further in this summary, is that these platforms are typically staffed, on the supply side, by those who are already marginalized, and that they often serve as testing grounds for new automation technologies. Specifically, this refers to the use of workplace surveillance and productivity-monitoring techniques that might violate labor laws; but because this experimentation happens in the informal sector with little to no regulation, it is where these platforms fine-tune their algorithmic approaches.
Societal context in India
Traditionally, because of strong hierarchical norms and caste systems in India, certain occupations were the domain of people from certain backgrounds, and even geographic mobility was constrained by whom you knew in the place you were trying to migrate to and whether they had the right connections to get you set up. This meant there were ceilings on social and financial mobility that continued to reinforce social hierarchies. So, in a sense, such platforms have allowed workers to bypass the gatekeepers that prevented the inclusion of workers from different backgrounds in various subsegments of the labor market. Thus, they offer a pathway to reintermediation.
Another positive outcome is the platforms’ framing of the work, which offers the labor a higher degree of dignity compared to the very strong biases in the Indian context against some occupations as undignified. This rebranding has allowed workers to experience higher levels of dignity, along with the stated benefits of flexibility in work schedules and more. This is not, however, without the many harms that come from an always-on, always-available workforce.
The participants that were surveyed as a part of this report indicated that they were doing such on-demand work as a temporary measure to meet short-term financial needs like paying off loans, gathering money for a wedding, etc.
But, as we said, this is not without consequences. Most of these platforms cater to India’s rising middle class, which skews toward a certain sociodemographic; hence, those who were hitherto viewed as “risky subjects” are now closely policed through these platforms when providing services to this mobile, urban, nouveau-riche class. That is, venture capital dollars (or rupees) are funding the reinforcement of social hierarchies: platforms place some people above others and justify workplace surveillance and other unethical uses of automated technologies to pander to this demographic in the interest of turning a profit.
As is the case with any new technology that has the potential to disrupt incumbents, there are unexamined risks. We need to think critically about the second-order effects of deploying such technologies and ask whether we are preying on those who are already marginalized in service of large corporate interests.
What does this mean for Actionable AI Ethics?
- As a practitioner, it is more important than ever to have someone on your team who understands the local context where the system will be deployed. Without that, you risk unwittingly entrenching societal inequities.
- In the creation of such automated systems, it is important to give those on the supply side of the platform meaningful controls so that they are not exploited.
- In the design process, I also believe it is important to present the workers in a humane way, so that the demand side understands that there are real humans on the other side fulfilling their requests, rather than abstracting them away as cogs in a giant wheel.
Questions that I am exploring
*If you have answers to any of these questions, please tweet and let me know!
- It seems that the Global South often receives a homogenous treatment; this paper does a good job of at least breaking out India and discussing it separately. Are there other studies that do the same?
- What are some other areas in AI ethics where because of the lack of geographic sensitivity, we might be imposing Western ethical norms?
- How do we better engage with scholars and researchers from different places to have a more holistic view of some of these concerns?
My piece in the MIT Technology Review, “AI ethics groups are repeating one of society’s classic mistakes,” touches on some of these ideas.
Original paper by Noopur Raval: https://www.giswatch.org/node/6202