Research summary: AI Mediated Exchange Theory by Xiao Ma and Taylor W. Brown

March 9, 2020

This paper by Xiao Ma and Taylor W. Brown puts forth a framework that extends the well-studied Social Exchange Theory (SET) to human-AI interactions via mediation mechanisms. The authors argue that research in this space needs more interdisciplinary collaboration between technical and social-science scholars, and that the lack of a shared taxonomy places work on similar questions on separate footings. They propose two axes, human/AI and micro/macro perspectives, to visualize how researchers might better collaborate with one another. They also make the case that AI agents can mediate transactions between humans, and that such mediated transactions can create social value as an emergent property.
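The two proposed axes lend themselves to a simple tagging scheme. The sketch below is a minimal, hypothetical illustration (not taken from the paper) of how research areas could be positioned along the human/AI and micro/macro axes to make gaps and overlaps visible; the example topics and class names are our own illustrative assumptions.

```python
# Illustrative sketch only: the two axes (human/AI, micro/macro) come from the
# paper, but the example placements below are hypothetical, not the authors' own.
from dataclasses import dataclass
from enum import Enum


class Agent(Enum):
    HUMAN = "human"
    AI = "AI"


class Scale(Enum):
    MICRO = "micro"   # individual-level interactions
    MACRO = "macro"   # population- or society-level dynamics


@dataclass
class ResearchArea:
    topic: str
    agent: Agent
    scale: Scale


# Hypothetical placements of research topics on the two axes.
areas = [
    ResearchArea("dyadic human-AI trust studies", Agent.AI, Scale.MICRO),
    ResearchArea("platform-level recommendation effects", Agent.AI, Scale.MACRO),
    ResearchArea("classic social exchange experiments", Agent.HUMAN, Scale.MICRO),
]

for area in areas:
    print(f"{area.topic}: {area.agent.value} / {area.scale.value}")
```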

As the pace of research quickens and more people from different fields engage in work on the societal impacts of AI, it is essential that we build on each other's work rather than duplicate efforts. Because of conventional differences in how research is published and publicized in the social sciences and in technical domains, awareness of the latest work at the intersection of the two is often shallow. We therefore need a shared taxonomy that positions research so that gaps can be discovered and areas of collaboration identified. The two-axes structure proposed in the paper goes some way toward bridging this gap.

AI systems are becoming ever more pervasive in our everyday lives, and many transactions between humans are now mediated by automated agents. In some scenarios this leads to a net positive for society, as when it speeds the discovery of research content, for example in medical research aimed at combating COVID-19. But there can be negative externalities as well: mediating agents can create echo chambers on social media platforms, walling off content from parts of one's network and polarizing discussions and viewpoints. A better understanding of how these interactions can be engineered to skew positive will be crucial as AI agents are inserted into ever more consequential aspects of our lives.

We also foresee tighter interdisciplinary collaboration emerging to shed light on these inherently socio-technical issues, which have no unidimensional solutions. With rising awareness and interest from both the social and technical sciences, the emerging work will be timely and relevant to addressing the societal impacts of AI head-on. As part of the work being done at MAIEI, we push for each of our undertakings to start with an interdisciplinary team as a step toward achieving this mandate.
