Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research summary: Using Multimodal Sensing to Improve Awareness in Human-AI Interaction

April 26, 2020

Top-level summary: With the increasing capabilities of AI systems, and established research showing that human-machine teams outperform either working in isolation, this paper offers a timely discussion of how to craft better coordination between human and machine agents, with the aim of arriving at the best possible mutual understanding. Such understanding enhances trust between the agents, and it starts with effective communication. Joshua Newn argues that framing the problem from a human-computer interaction (HCI) perspective is the way to achieve this goal, with intention-, context-, and cognition-awareness as the critical elements responsible for effective communication between human and machine agents.

In line with the recommendations we shared in our response to the OPCC consultation on amending privacy legislation to accommodate the impacts of AI, we see that augmenting human capacity through AI-enabled solutions will play an important role in making these systems better partners.

Transferring knowledge between human and artificial agents is important for building trust and for becoming better partners with machines. This can happen when machines can better predict the actions and responses of human agents, and when human agents can explain the actions of machines well enough to make informed decisions.

Effective communication in this setting happens when it is as “natural” as possible, even if that comes at a high cost. Those costs can arise from the more computationally expensive methods that may need to be deployed, and from explanation mechanisms that are taxing in both resources and time.

In the paper, the author explores projects that will enable us to build better human-aware AI systems, for example using eye-tracking, with future potential to use other sensing modalities. Gaze can be used to infer the intentions of different agents, and it serves as a great starting point for AI agents to use non-verbal cues from humans to predict actions and function as more capable partners. But once this works, it is also important to communicate why and how a particular decision was made, and so the author designed a study to determine how humans form predictions and explain them, using the game Ticket to Ride as a testbed. The system was able to abstract complex multimodal information into situation awareness and to provide a natural-language explanation for it, which is presented as a visual example in the paper.
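To make the gaze-as-intention idea concrete, here is a minimal sketch of how dwell time over regions of interest (e.g. routes on a Ticket to Ride board) could be turned into a ranked guess about a player's intent. All names, coordinates, and thresholds here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: inferring likely intent from gaze dwell times
# over regions of interest. Thresholds and region names are assumed
# for illustration only.
from collections import defaultdict

def predict_intention(gaze_samples, regions, min_dwell_ms=300):
    """Accumulate gaze dwell time per region and rank candidate
    intentions by attention received.

    gaze_samples: list of (x, y, duration_ms) fixation tuples.
    regions: dict mapping region name -> (x_min, y_min, x_max, y_max).
    """
    dwell = defaultdict(float)
    for x, y, duration in gaze_samples:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += duration
    # Keep only regions fixated long enough to signal deliberate attention.
    candidates = {n: t for n, t in dwell.items() if t >= min_dwell_ms}
    # The most-fixated region is the best guess at current intent.
    return sorted(candidates.items(), key=lambda kv: -kv[1])

regions = {"route_A": (0, 0, 100, 50), "route_B": (0, 60, 100, 110)}
fixations = [(20, 30, 250), (40, 20, 400), (10, 80, 150)]
print(predict_intention(fixations, regions))  # route_A ranked first
```

A real system would of course use a trained model over richer multimodal features, but even this dwell-time heuristic illustrates how non-verbal signals can be abstracted into a prediction an agent can act on and explain.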

The effort showcases how insights from HCI can enhance two-way communication between AI and human agents, for example by finding the right amount of information to communicate given what has already been shared. Additionally, some of this research aims to detect intentions unobtrusively, so that the system can choose the right time to intervene, or whether to intervene at all, based on the user's cognitive load and attention levels.

The HCI approach in the paper is centred on three key ideas: 

  1. Intention-awareness: figuring out what the human is planning to do, so as to allow maximum autonomy while providing crucial information before an action is performed. 
  2. Context-awareness: analyzing the context within which the human is operating to provide them with context-relevant information, reflective of how humans communicate and interact with one another.
  3. Cognition-awareness: based on the cognitive load of the human agent, figuring out when to deliver information to optimize the effectiveness of communication.
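One way to see how these three ideas fit together is as inputs to a single intervention decision: the system only speaks up when it knows what the user intends, has something contextually relevant to say, and the user has the cognitive capacity to receive it. The sketch below is an illustrative assumption about how such a policy could look; the signal names and thresholds are not taken from the paper.

```python
# Illustrative sketch: combining intention-, context-, and
# cognition-awareness into a simple intervention policy.
# Thresholds are arbitrary placeholders for illustration.
from dataclasses import dataclass

@dataclass
class AwarenessState:
    intention_confidence: float  # how sure we are about the user's plan (0-1)
    context_relevance: float     # how relevant our information is right now (0-1)
    cognitive_load: float        # estimated mental load on the user (0-1)

def decide_intervention(state: AwarenessState) -> str:
    # Unsure of intent, or nothing relevant to say: preserve autonomy.
    if state.intention_confidence < 0.5 or state.context_relevance < 0.5:
        return "no_intervention"
    # Useful information, but the user is overloaded: wait for a lull.
    if state.cognitive_load > 0.7:
        return "defer"
    # Useful, relevant, and the user has spare capacity: intervene now.
    return "intervene_now"

print(decide_intervention(AwarenessState(0.9, 0.8, 0.3)))  # intervene_now
```

The point of the sketch is the structure, not the numbers: intention- and context-awareness gate *whether* to communicate, while cognition-awareness gates *when*.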

Original piece by Joshua Newn: https://drive.google.com/file/d/1_9u7-p0E6YfV0y4xeYv9xEMzgEdKixFl/view

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.