Machines as teammates: A research agenda on AI in team collaboration

August 23, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Isabella Seeber, Eva Bittner, Robert O. Briggs, Triparna de Vreede, Gert-Jan de Vreede, Aaron Elkins, Ronald Maier, Alexander B. Merz, Sarah Oeste-Reiß, Nils Randrup, Gerhard Schwabe, Matthias Söllner]


Overview: Collaboration in the AI space matters not only between humans but also between humanity and AI. Working with an AI teammate may soon be more than a thought experiment, and understanding how this will affect collaboration will be essential. Doing so highlights the importance of the human cog in the human-AI machine.


Introduction

Have you ever imagined consulting an AI co-worker? What would you want it to be like? This piece considers the implications of AI as a teammate, stretching from how such a teammate might look to how it could upset human team dynamics. While the benefits of this collaboration deserve consideration, the human element of the process must remain, especially where human development itself is concerned.

Key Insights

A different kind of teammate

Whether the AI takes the form of a physical robot or an algorithm, it cannot be compared to a regular human teammate. One key difference is its ability to assess millions of alternatives and situations at a time, something impossible for humans. Useful as this is, the form in which that assessment is communicated still needs to be determined: it could arrive as speech or text, with or without facial expressions for visual feedback. Questions like these lead us to ask what we would prefer in an AI teammate over a human one.

What do we want in an AI team member?

The paper adopts the classic Alan Turing definition that “AI refers to the capability of a machine or computer to imitate intelligent human behaviour or thought”. In this sense, should our thinking about AI collaborators be centred in human terms? As with chatbots, similar considerations come into play, such as whether the AI should have a gender and whether it can differentiate between serious and social chatter. Our decisions on these questions will then shape how the team dynamic plays out.

The effect on collaboration

Here, it’s essential to differentiate between AI as a teammate and AI as an assistant. Collaboration with AI as a mere tool is not as thought-provoking as holding it in the ‘higher’ regard of a counterpart.

Collaborating with such an entity could enhance or harm the team dynamic. The AI could become the group’s leader on the specific issues it handles best, yet depending too much on the machine could erode human competencies. Furthermore, the AI teammate could prove excellent at drawing insights from data, but its lack of out-of-the-box thinking could reinforce views the team already holds. Hence, while introducing AI undoubtedly affects collaboration, the right balance still needs to be struck to make the most of it.

Considerations when collaborating

Given the novelty of this practice and of AI in general, why an AI would suggest a particular course of action becomes a critical question. The extent to which we recognise the AI’s involvement can also have far-reaching impacts: should the AI become a leader on a topic, should it be credited with its work? Much of this turns on whether AI can be creative at all, a debate that has already played out in poetry, fashion and music.

Between the lines

While collaboration with AI teammates may become essential practice in the future, I would caution against applying such collaboration to every possible problem. Using AI’s analytical capabilities will nearly always be helpful, but that pertains more to AI as an assistant than as a counterpart. Problems such as solving world hunger, I believe, would not benefit from an AI-as-teammate intervention, mainly because AI can never actually feel or understand what hunger is like. What is certain is that while AI collaboration can reap benefits, human involvement remains paramount.

