Montreal AI Ethics Institute


Machines as teammates: A research agenda on AI in team collaboration

August 23, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Isabella Seeber, Eva Bittner, Robert O. Briggs, Triparna de Vreede, Gert-Jan de Vreede, Aaron Elkins, Ronald Maier, Alexander B. Merz, Sarah Oeste-Reiß, Nils Randrup, Gerhard Schwabe, and Matthias Söllner]


Overview: Collaboration in the AI space matters not only between humans but also between humanity and AI. Working alongside an AI teammate may soon be more than a thought experiment, and understanding how this will affect collaboration will be essential. Doing so highlights the importance of the human cog in the human-AI machine.


Introduction

Have you ever imagined consulting an AI co-worker? What would you like them to be like? This piece considers the implications of AI as a teammate, stretching from how it looks to how it could upset human team dynamics. While we must consider the benefits of this collaboration, the human element of the process must remain, especially in terms of human development itself.

Key Insights

A different kind of teammate

Whether the AI takes the form of a physical robot or an algorithm, it cannot be compared to a regular human teammate. One key difference is its ability to assess millions of different alternatives and situations at a time, something impossible for humans. While useful, the form in which this assessment is communicated would still need to be determined: it could arrive as speech or text, with or without facial expressions for visual feedback. Questions like these lead us to ask what we would prefer in an AI teammate over a human one.

What do we want in an AI team member?

The paper holds the classic Alan Turing definition that “AI refers to the capability of a machine or computer to imitate intelligent human behaviour or thought”. In this sense, should our thinking about AI collaborators be centred in human terms? As with chatbots, similar considerations come into play, such as whether the AI should have a gender and whether it can differentiate between serious and social chatter. Our decisions on these questions will certainly affect how the team dynamic plays out.

The effect on collaboration

In this regard, it’s essential to differentiate between AI as a teammate and AI as an assistant. Collaboration with AI as a tool is not as thought-provoking as holding it in the ‘higher’ regard of a counterpart.

In this way, collaborating with such an entity could enhance or damage the team dynamic. The AI could become the group’s leader on the specific issues it handles best, yet depending too much on the machine could erode human competencies. Furthermore, an AI teammate could prove excellent at drawing insights from data, but its lack of out-of-the-box thinking could reinforce already present views. Hence, while introducing AI undoubtedly affects collaboration, the right balance still needs to be struck to make the most of it.

Considerations when collaborating

Given the novelty of this practice and of AI in general, why an AI would suggest a particular course of action becomes a critical question. In addition, the extent to which we recognise the AI’s involvement can have far-reaching impacts. Should the AI become a leader on a topic, should it be credited for its work? Much of this stems from whether AI can be creative, a debate already playing out in poetry, fashion and music.

Between the lines

While collaboration with AI teammates may become essential practice in the future, I would caution against throwing such collaboration at every possible problem. Sure, drawing on AI’s analytical capabilities will nearly always be helpful, but that pertains more to AI as an assistant than as a counterpart. Hence, problems such as trying to solve world hunger, I believe, would not benefit from an AI-teammate intervention, mainly because an AI can never actually feel or understand what being hungry is like. What’s for sure is that while AI collaboration can reap benefits, human involvement remains paramount.
