
Machines as teammates: A research agenda on AI in team collaboration

August 23, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Isabella Seeber, Eva Bittner, Robert O. Briggs, Triparna de Vreede, Gert-Jan de Vreede, Aaron Elkins, Ronald Maier, Alexander B. Merz, Sarah Oeste-Reiß, Nils Randrup, Gerhard Schwabe, Matthias Söllner]


Overview: Collaboration in the AI space is not only between humans but also between humanity and AI. Working with an AI teammate may soon be more than a thought experiment, and understanding how this will affect collaboration will be essential. That understanding, in turn, highlights the importance of the human cog in the human-AI machine.


Introduction

Have you ever imagined consulting an AI co-worker? What would you want it to be like? This piece considers the implications of AI as a teammate, stretching from how such a teammate might look to how it could upset human team dynamics. While we should weigh the benefits of this collaboration, the human element of the process must remain, especially where human development itself is concerned.

Key Insights

A different kind of teammate

Whether the AI takes the form of a physical robot or exists only as an algorithm, it cannot be compared to a regular human teammate. One key difference is its ability to assess millions of alternatives and situations at a time, something impossible for humans. While useful, the form in which this assessment is communicated still needs to be determined: it could arrive as speech or text, with or without facial expressions for visual feedback. Questions like these lead us to ask what we would prefer in an AI teammate over a human one.

What do we want in an AI team member?

The paper adopts the classic Alan Turing definition that "AI refers to the capability of a machine or computer to imitate intelligent human behaviour or thought". In this sense, should our thinking about AI collaborators be centred in human terms? As with chatbots, similar considerations come into play, such as whether the AI should have a gender and whether it can differentiate between serious and social chatter. Our answers to these questions will certainly shape how the team dynamic plays out.

The effect on collaboration

In this regard, it’s essential to differentiate between AI as a teammate and AI as an assistant. Collaborating with AI as a tool is not as thought-provoking as holding it in the ‘higher’ regard of a counterpart.

Collaborating with such an entity could enhance or damage the team dynamic. The AI could become the group’s leader on the specific issues it handles best, yet depending too heavily on the machine could erode human competencies. Furthermore, the AI teammate could prove excellent at drawing insights from data, but its lack of out-of-the-box thinking could reinforce views already present in the team. Hence, while introducing AI undoubtedly affects collaboration, the right balance still needs to be struck to make the most of it.

Considerations when collaborating

Given the novelty of this practice and of AI in general, why an AI would suggest a particular course of action becomes a critical question. In addition, the extent to which we recognise the AI’s involvement can have far-reaching impacts: should the AI become a leader on a topic, should it be credited for its work? Much of this stems from the debate over whether AI can be creative at all, a debate already playing out in poetry, fashion and music.

Between the lines

While collaborating with AI teammates may become essential practice in the future, I would caution against throwing such collaboration at every possible problem. Using AI’s analytical capabilities will nearly always be helpful, but that pertains more to AI as an assistant than as a counterpart. Problems such as world hunger, I believe, would not benefit from an AI-as-teammate intervention, mainly because AI can never actually feel or understand what being hungry is like. What is certain is that while AI collaboration can reap benefits, human involvement remains paramount.

