
Machines as teammates: A research agenda on AI in team collaboration

August 23, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Isabella Seeber, Eva Bittner, Robert O. Briggs, Triparna de Vreede, Gert-Jan de Vreede, Aaron Elkins, Ronald Maier, Alexander B. Merz, Sarah Oeste-Reiß, Nils Randrup, Gerhard Schwabe, Matthias Söllner]


Overview: Collaboration in the AI space matters not only between humans but also between humans and AI. Working alongside an AI teammate may soon be more than a thought experiment, and understanding how this will affect collaboration will be essential. That understanding highlights how important the human cog remains to the human-AI machine.


Introduction

Have you ever imagined consulting an AI co-worker? What would you like them to be like? This piece considers the implications of AI as a teammate, stretching from how it might look to how it could upset human team dynamics. While we must consider the benefits of this collaboration, the human element of the process must remain, especially in terms of human development itself.

Key Insights

A different kind of teammate

Whether the AI takes the form of a physical robot or an algorithm, it cannot be compared to a regular human teammate. One key difference is its ability to assess millions of alternatives and situations at a time, something impossible for humans. While useful, the form in which this assessment is communicated would still need to be determined: it could arrive as speech or text, with or without facial expressions for visual feedback. Questions like these lead us to ask what we would prefer in an AI teammate over a human one.

What do we want in an AI team member?

The paper holds to the classic Alan Turing definition that "AI refers to the capability of a machine or computer to imitate intelligent human behaviour or thought". In this sense, should our thinking about AI collaborators be centred in human terms? As with chatbots, similar considerations come into play: should the AI have a gender, can it differentiate between serious and social chatter, and so on. Our decisions on these questions will certainly affect how the team dynamic plays out.

The effect on collaboration

In this regard, it's essential to differentiate between AI as a teammate and AI as an assistant. Collaborating with AI as a tool is not as thought-provoking as holding it in the 'higher' regard of a counterpart.

In this way, collaborating with such an entity could enhance or harm the team dynamic. The AI could become the group's leader on specific issues it handles best, yet depending too heavily on the machine could erode human competencies. Furthermore, the AI teammate could prove excellent at drawing insights from data, but its lack of out-of-the-box thinking could reinforce already-present views. Hence, while collaboration is undoubtedly affected by introducing AI, the right balance still needs to be struck to make the most of it.

Considerations when collaborating

Given the novelty of this practice and of AI in general, why an AI would suggest a particular course of action becomes a critical question. In addition, the extent to which we recognise the AI's involvement can have far-reaching impacts. Should the AI become a leader on a topic, should it be credited for its work? Much of this stems from whether AI can be creative at all, a debate already playing out in poetry, fashion and music.

Between the lines

While collaboration with AI teammates may become essential practice in the future, I would caution against throwing such collaboration at every possible problem. Sure, drawing on AI's analytical capabilities will nearly always be helpful, but that pertains more to AI as an assistant than as a counterpart. Hence, problems such as solving world hunger, I believe, would not benefit from an AI-as-teammate intervention, mainly because an AI can never actually feel or understand what being hungry is like. What is certain is that while AI collaboration can reap benefits, human involvement remains paramount.

