Montreal AI Ethics Institute
Research summary: Using Multimodal Sensing to Improve Awareness in Human-AI Interaction

April 26, 2020

Top-level summary: Given the increasing capabilities of AI systems, and established research showing that human-machine teams outperform either working in isolation, this paper offers a timely discussion of how to craft better coordination between human and machine agents, with the aim of reaching the best possible mutual understanding. Better understanding enhances trust between the agents, and it starts with effective communication. Joshua Newn argues that framing the problem from a human-computer interaction (HCI) perspective is the way to achieve this goal, with intention-, context-, and cognition-awareness as the critical elements on which effective communication between human and machine agents depends.

In line with the recommendations we shared in our response to the OPCC consultation on amending privacy legislation to account for the impacts of AI, we see that augmenting human capacity with AI-enabled solutions will play an important role in making these systems better partners.

The transfer of knowledge between human and artificial agents is important for building trust and becoming better partners with machines. This happens when machines can better predict the actions and responses of human agents, and human agents can explain the actions of the machines well enough to make informed decisions.

Effective communication in this setting means communication that is as “natural” as possible, even if it comes at a high cost. Those costs can arise from the more computationally expensive methods that may need to be deployed, and from explanation mechanisms that are taxing in both resources and time.

In the paper, the author explores projects aimed at building better human-aware AI systems, for example using eye-tracking software, with future potential to incorporate other sensing modalities. Gaze can be used to infer the intentions of different agents, and it serves as a strong starting point for AI agents to use non-verbal cues from humans in predicting actions and functioning as more capable partners. But once this works, it also becomes important to communicate why and how a particular decision was made, so the author designed a study of how humans form predictions and explain them, using the board game Ticket to Ride as a testbed. The resulting system was able to abstract complex multimodal information into situation-awareness and also provide a natural language explanation for it, illustrated with a visual example in the paper.
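The idea of turning gaze into an intention prediction with an accompanying natural language explanation can be illustrated with a minimal sketch. This is not the paper's actual model; it is a hypothetical toy in which the predicted target is simply the most-fixated object, and the explanation reports the share of gaze samples supporting that prediction.

```python
from collections import Counter

def predict_intent(gaze_samples):
    """Toy gaze-based intention prediction.

    gaze_samples: a list of object labels, one per gaze sample
    (e.g. which board element the player was looking at).
    Returns (predicted_target, natural-language explanation).
    """
    if not gaze_samples:
        return None, "No gaze data available."
    counts = Counter(gaze_samples)
    target, hits = counts.most_common(1)[0]  # most-fixated object
    share = hits / len(gaze_samples)
    explanation = (f"Predicted '{target}' because {share:.0%} "
                   f"of gaze samples fixated on it.")
    return target, explanation

target, why = predict_intent(["route_A", "route_A", "route_B", "route_A"])
```

A real system would fuse richer multimodal signals and a task model, but the shape is the same: a prediction paired with a human-readable account of the evidence behind it.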

The effort showcases how insights from the field of HCI can enhance two-way communication between AI and human agents, for example by finding the right amount of information to communicate given what has already been communicated. Some of this research also aims to detect intentions unobtrusively, so that the system can decide when to intervene, or whether to intervene at all, based on the user's cognitive load and attention levels.

The HCI approach in the paper is centred on three key ideas: 

  1. Intention-awareness: figuring out what the human is planning to do, allowing for maximum autonomy while providing crucial information before an action is performed. 
  2. Context-awareness: analyzing the context within which the human is operating to provide them with context-relevant information, reflective of how humans communicate and interact with one another.
  3. Cognition-awareness: based on the cognitive load of the human agent, figuring out when to deliver information to optimize for effectiveness of communication.
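The three kinds of awareness above can be sketched as inputs to a single intervention decision. The following is a hypothetical illustration, not the paper's implementation: the `AgentState` fields and the `load_threshold` parameter are assumptions chosen to show how intention, context, and cognitive load might jointly gate when a system speaks up.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentState:
    predicted_intent: Optional[str]  # intention-awareness: what the human seems to be doing
    context_relevant: bool           # context-awareness: does the info fit the current context?
    cognitive_load: float            # cognition-awareness: estimated load, 0.0 (idle) to 1.0 (overloaded)

def should_intervene(state: AgentState, load_threshold: float = 0.7) -> bool:
    """Intervene only when the human's intent is known, the
    information is relevant to their context, and their cognitive
    load leaves room to absorb it."""
    return (state.predicted_intent is not None
            and state.context_relevant
            and state.cognitive_load < load_threshold)
```

For example, a relevant hint offered while the user is lightly loaded would pass the gate, while the same hint during a high-load moment would be deferred.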

Original piece by Joshua Newn: https://drive.google.com/file/d/1_9u7-p0E6YfV0y4xeYv9xEMzgEdKixFl/view

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
