Montreal AI Ethics Institute


Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles

November 14, 2022

🔬 Research Summary by Thomas Krendl Gilbert, a Postdoctoral Fellow at Cornell Tech's Digital Life Initiative, who holds a Ph.D. in Machine Ethics and Epistemology from the University of California, Berkeley.

[Original paper by Thomas Krendl Gilbert, Aaron J. Snoswell, Michael Dennis, Rowan McAllister, Cathy Wu]

NOTE: The cover image of this article was generated by Gilbert in DALL-E 2. The prompt was "A pencil and watercolor drawing of a fleet of self-driving cars and buses disrupting the transportation system in a futuristic city. Beautiful green trees and pedestrians are beside the road."


Overview: As automated vehicle (AV) fleets increase in size and capability, they will fundamentally transform how people interact with transportation systems. These effects may include regulating traffic flow, affecting ease of access to different locations, and reshaping the relationship between pedestrians, cyclists, and cars on the roadway. This paper highlights the technical and social problems that arise as more and more features of the transportation system are placed under the control of AV designers.


Introduction

In the long term, automated vehicles (AVs) will remake public roads in their own image. For example, even if AVs are electric, they could still worsen greenhouse gas emissions by making traffic so efficient that roads could support twice as many cars as today. Alternatively, AVs could even alter the housing market over time by making it more or less affordable to live in certain neighborhoods, physically reshaping our cities as housing is built to match AV capabilities. Indeed, China’s Ministry of Transportation recently released draft regulations to allow AVs to help operate and manage public transport services. 

This paper argues that designers should steer AVs' transformative effects toward pro-social outcomes. We propose a framework for understanding the broader impacts of AVs on transportation systems, give examples of how AV designers might contend with these issues, and highlight issues that may need broader societal engagement before they can be subjected to quantitative analysis. Based on our proposed use cases, we argue that AV fleet behaviors could not only support the public interest but even advance public policy in previously impossible ways, by controlling and improving traffic flow in real time.

Key Insights

How will AVs remake transportation systems?

The present discussion of improving transportation systems typically rests on making AVs safer, but this focus ignores potential new and powerful societal affordances enabled by AVs. For example, the computer vision software of an AV could be refined to recognize pedestrians more accurately, or more and more data could be labeled by human operators so that the software better handles edge cases. In the interest of public accountability, AV companies must report data on the operation of their fleets (in particular, crashes and vehicle disengagements) to federal and state regulators.

However, this enterprise ignores a deeper opportunity: how the flow of traffic itself is affected by fleets of AVs that act more like mobile traffic bottlenecks than single human drivers. This situates AVs as a powerful new affordance for reshaping the aggregate behavior of the transportation system. At present, AV designers tend to ignore the social implications of this problem, viewing it as "out of scope" for how AV systems are built. Yet the emergence of highly capable AV fleets, and the affordances they offer, may be a critical means to advance public policy in ways that transcend physical safety. Road access could be made cheaper or more equitable; residents could obtain easier access to nutritious food and grocery options; road repair could be less costly or frequent thanks to more predictable traffic flows. Given these clear public benefits, our view is that such changes ought to be intentionally designed and considered "in scope" for AV development.

What is the sociotechnical specification?

The proactive design of AV systems for the general welfare does not mean that private companies should commandeer roads or dictate public policy unilaterally. On the contrary, AV designers need to better judge which elements of the transportation system can reasonably be placed in or out of scope for AV development. This requires an interdisciplinary awareness of how experts in other fields have approached features of public concern (e.g., housing or environmental policy) and what it would take to simulate those features in order to achieve optimal control over them. We define sociotechnical specification as the act of deliberating about the design of AV fleets in relation to the political and economic context of their deployment. Sociotechnical specification comprises problem areas that are:

  • Technical: Transportation elements that 'merely' need to be implemented and controlled for,
  • Sociotechnical: Elements whose control must be further elaborated or justified, and
  • Social: Elements without clear definition due to a lack of prior consensus.

How can sociotechnical specification help address these challenges?

While the long-term public policy landscape of AV fleets is vast and in many ways uncharted, cataloging open questions as technical, sociotechnical, or social at least makes them tractable with various methods. In the paper, we demonstrate how various domains adjacent to transportation have components that could be worked on now if designers wanted to. For example, concerning the environmental impacts of AVs, it is possible to distinguish AVs' direct effects on air quality from the effects of extracting critical minerals like cobalt in the developing world for AV battery production. As we show in the paper, once these problems' respective technical and social stakes are acknowledged, it is possible to formulate well-defined metrics for the former and re-envision supply chains to make the latter more politically and legally defensible. It is further possible to approach the aggregate effects of AVs on induced demand by delineating ecological, fiscal, and social factors for which metrics could be defined pending further research. The point is that rather than allowing each of these open questions to fester and become tragic consequences of AVs' development, designers could deliberately align them with distinct research methods and then take them up as appropriate for AVs to manage within the transportation system in question.

Between the lines

Our goal in this research is to outline how AV designers might take an active, rather than passive, role in how AVs will automate and transform transportation components. Sociotechnical specification is an ongoing conversation between the ethical and technical aspects of AI development, and we welcome participation from researchers and activists to help work on its many open problems. A secondary goal is for other stakeholder groups to leverage the framework of sociotechnical specification to advance their own interests. Our taxonomy of technical, sociotechnical, and social issues is not a firm statement of what AV designers should strive to control but a roadmap for how other political interests could be represented and empowered within AV design as desired.

Many open questions remain. For example, it is not clear how to coordinate sociotechnical commitments between private companies and governments. It seems likely that AVs will either have to be made public utilities or else accountable to public officials such that their specification demonstrably does not violate clear commitments. Either way, it is clear that for AV fleets to succeed in the long term, their operation must successfully cover a much wider range of affordances than today.

