Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: Using Multimodal Sensing to Improve Awareness in Human-AI Interaction

April 26, 2020

Top-level summary: Given the increasing capabilities of AI systems, and established research showing that human-machine combinations perform better than either in isolation, this paper offers a timely discussion of how to craft better coordination between human and machine agents, with the aim of arriving at the best possible mutual understanding. Better understanding enhances trust between the agents, and it starts with effective communication. Joshua Newn argues that framing the problem from a human-computer interaction (HCI) perspective is the way to achieve this goal, with intention-, context-, and cognition-awareness as the critical elements on which effective communication between human and machine agents depends.

In line with the recommendations we shared in our response to the OPCC consultation on amending privacy legislation to account for the impacts of AI, we see that augmenting human capacity via AI-enabled solutions will play an important role in machines becoming better partners.

The transfer of knowledge between human and artificial agents is important for building trust and partnership. This happens when machines can better predict the actions and responses of human agents, and when human agents can explain the machines' actions well enough to make informed decisions.

Communication in this setting is most effective when it is as “natural” as possible, even if that comes at a high cost: more computationally expensive methods may need to be deployed, and explanation mechanisms are taxing in both resources and time.

In the paper, the author explores projects that can help us build more human-aware AI systems, for example using eye tracking, with future potential to incorporate other sensory modalities. Gaze can be used to infer the intentions of different agents, and it is a natural starting point for AI agents to use non-verbal human cues to predict actions and act as more capable partners. Once such prediction works, it is equally important to communicate why and how a particular decision was made; to that end, the author designed a study of how humans form predictions and explain them, using the board game Ticket to Ride as a testbed. The resulting system abstracted complex multimodal information into situation awareness and generated a natural language explanation for it, illustrated with a visual example in the paper.
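The paper does not spell out a prediction algorithm, but a common baseline for gaze-based intention inference is dwell-time scoring: the object a user fixates on longest is taken as the most likely target of their next action. A minimal sketch of that idea (all names, data, and thresholds below are illustrative assumptions, not from the paper):

```python
from collections import defaultdict

def predict_intended_target(fixations, dwell_threshold_ms=300):
    """Naive dwell-time baseline: accumulate fixation time per object and
    predict the object with the longest total dwell, provided it exceeds a
    minimum threshold. `fixations` is a list of (object_id, duration_ms).
    Returns None when no object attracts enough sustained attention."""
    dwell = defaultdict(float)
    for obj, duration_ms in fixations:
        dwell[obj] += duration_ms
    if not dwell:
        return None
    best, total = max(dwell.items(), key=lambda kv: kv[1])
    return best if total >= dwell_threshold_ms else None

# Hypothetical gaze trace during a Ticket to Ride turn
fixations = [("route_A", 120), ("route_B", 450), ("route_A", 90)]
print(predict_intended_target(fixations))  # -> route_B
```

Real systems would of course use richer features (scan paths, saccades, game state) and a learned model, but the sketch conveys how raw gaze data can be abstracted into an intention estimate.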

The effort showcases how insights from HCI can enhance two-way communication between AI and human agents, for example by finding the right amount of information to communicate given what has already been communicated. Some of this research also aims to detect intentions unobtrusively, so that the system can decide whether to intervene at all, and when, based on the user's cognitive load and attention levels.

The HCI approach in the paper is centred on three key ideas:

  1. Intention-awareness: figuring out what the human is planning to do, so as to allow maximum autonomy while surfacing crucial information before an action is performed.
  2. Context-awareness: analyzing the context in which the human is operating to provide them with context-relevant information, mirroring how humans communicate and interact with one another.
  3. Cognition-awareness: deciding when to deliver information, based on the human agent's cognitive load, to optimize the effectiveness of communication.
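The paper frames these three ideas as design requirements rather than an algorithm, but their interaction can be caricatured as a simple gating rule for whether an assistant should intervene at a given moment. In this sketch, every signal name and the load threshold are illustrative assumptions, not from the paper:

```python
def should_intervene(intention_known, info_is_context_relevant, cognitive_load):
    """Toy decision rule combining the three awareness signals:
    - intention-awareness: only act when the human's plan has been inferred;
    - context-awareness: only deliver information relevant to that context;
    - cognition-awareness: hold back when the human is already overloaded.
    `cognitive_load` is assumed normalized to [0, 1]; 0.7 is an arbitrary
    cutoff chosen for illustration."""
    if not intention_known or not info_is_context_relevant:
        return False
    return cognitive_load < 0.7

# A relevant tip for a known plan is delivered only when load permits
print(should_intervene(True, True, 0.3))   # -> True
print(should_intervene(True, True, 0.9))   # -> False
```

A deployed system would estimate each of these signals from sensor data (gaze, task state, physiological measures) rather than take them as booleans, but the gating logic captures how the three kinds of awareness jointly shape the timing of communication.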

Original piece by Joshua Newn: https://drive.google.com/file/d/1_9u7-p0E6YfV0y4xeYv9xEMzgEdKixFl/view

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.