Montreal AI Ethics Institute

Democratizing AI ethics literacy

Conversational AI Systems for Social Good: Opportunities and Challenges

January 26, 2022

🔬 Research summary by Julia Anderson, a writer and conversational UX designer exploring how technology can make us better humans.

[Original paper by Peng Qi, Jing Huang, Youzheng Wu, Xiaodong He, Bowen Zhou]


Overview: Conversational artificial intelligence, or ConvAI, is already deployed across many industries and populations, giving the technology the potential to advance the United Nations’ Sustainable Development Goals (SDGs). This paper analyzes the challenges that existing and exploratory ConvAI systems may face while advancing social good initiatives.


Introduction

Despite increasing consumer adoption, little research has focused on ConvAI’s application towards the Sustainable Development Goals. “What properties make ConvAI more appealing to, say, their human counterparts?” the paper asks. Since language may be our most natural interface, interacting with computers via natural language provides an accessible, personalized, and scalable way to gather or distribute information.

However, conversational artificial intelligence is subject to similar data gathering constraints and biases as other forms of machine learning. User experience also plays a vital role in the technology’s adoption. Whether it can overcome certain technical and social hurdles will determine how much the technology improves society.

Opportunities for Social Good

Common Uses: Voice assistants and chatbots are commonplace to those with smart devices and internet access. In ConvAI’s early days, the technology often offered users a limited set of conversational topics using predefined dialog. Now, user-initiative systems, ones that focus on responding to a user request, are more common. Asking a system to set a timer or tell a joke showcases typical task and entertainment uses. The rise of embodied technologies such as augmented and virtual reality offers one avenue where ConvAI can be combined with other interfaces to create more interactive systems.
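A user-initiative system of this kind can be sketched as a request-to-intent matcher: the system stays idle until the user asks for something, then maps the request to a known task. The intent names and patterns below are hypothetical illustrations, not drawn from the paper.

```python
# Minimal sketch of a user-initiative dialog turn: match a user request
# to an intent, then respond. Intents and rules are illustrative only.
import re

INTENT_RULES = {
    "set_timer": re.compile(r"\bset (?:a )?timer for (\d+) (second|minute|hour)s?\b"),
    "tell_joke": re.compile(r"\b(tell|know) .*joke\b"),
}

def classify(utterance: str):
    """Return (intent, match) for the first rule that fires, else ('fallback', None)."""
    text = utterance.lower()
    for intent, pattern in INTENT_RULES.items():
        match = pattern.search(text)
        if match:
            return intent, match
    return "fallback", None

def respond(utterance: str) -> str:
    intent, match = classify(utterance)
    if intent == "set_timer":
        return f"Timer set for {match.group(1)} {match.group(2)}(s)."
    if intent == "tell_joke":
        return "Why did the chatbot cross the road? To get to the other site."
    return "Sorry, I didn't catch that."
```

Production systems replace the regex rules with statistical intent classifiers, but the control flow (request in, intent out, response back) is the same.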

Good Health and Well-being (SDG #3): ConvAI systems in healthcare present new and accessible ways to inform the public. Home smart assistants are answering common COVID-19 questions while automated phone surveys gather important public health information. Because respondents need only a telephone to participate, this data-gathering practice is more equitable. However, automatic speech recognition technology must continuously adjust to calls with background noise, disruptions, or underrepresented accents to obtain high-quality data.

Quality Education (SDG #4): Virtual learning increased dramatically during the pandemic as ConvAI systems strove to improve the allocation of teacher resources. Not only can virtual assistants answer questions after regular hours, but they may provide personalized recommendations to cater to an individual’s learning goals. While pairing physical learning, such as science experiments for chemistry class, with virtual solutions is important, ConvAI could adopt teaching strategies from human instructors to act as a tutor when physical options are not available.

Reducing Inequalities (SDG #10): “Equitable policies begin with equitable access to governments,” state Qi et al. By using ConvAI over the phone, citizens could provide policy feedback while maintaining privacy. Transcripts could then be triaged and processed further, streamlining accounts of what is on the public’s mind. ConvAI agents are also being explored for tailored approaches to charitable donations. For example, if the AI detects that someone is more motivated by one cause than another, it could invite that person to learn about a particular organization. However, without enough transparency, the line between helpful and persuasive technology remains thin.
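The triage step described above can be sketched as a simple topic count over transcripts, so staff see the most common concerns first. The topic labels and keywords here are invented for illustration; a real pipeline would use a trained topic model.

```python
# Hedged sketch of triaging phone-feedback transcripts by topic.
# Topics and keywords are hypothetical examples, not from the paper.
from collections import Counter

TOPIC_KEYWORDS = {
    "housing": {"rent", "housing", "eviction"},
    "transit": {"bus", "subway", "transit"},
    "health": {"clinic", "hospital", "vaccine"},
}

def triage(transcripts):
    """Count how many transcripts mention each topic at least once."""
    counts = Counter()
    for text in transcripts:
        words = set(text.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            if words & keywords:
                counts[topic] += 1
    return counts

calls = [
    "my rent went up again and eviction notices are everywhere",
    "the bus never arrives on time",
    "the clinic near me closed down",
]
print(triage(calls).most_common())
```

Counting mentions rather than storing raw transcripts also supports the privacy goal: aggregate topic tallies can be shared without exposing any individual caller’s words.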

Limitations of ConvAI

Technological: When designing ConvAI systems, it is vital to collect linguistically diverse data so systems avoid amplifying bias. The paper also reminds us that language interfaces may not always be the best solution, especially when graphical interfaces are available. On the technical side, natural language processing, the means by which the system interprets and responds to language, is rarely as fast or as accurate as a human, which can lead to a poor user experience. In reality, conversations typically happen between more than two people, but ConvAI is still perfecting communication with just one other person. As technology advances, it will become easier to impersonate voices through deepfakes, further jeopardizing society’s trust in the technology.

Abuse: Aggression is often rife in anonymous online communities, and similar frustrations surface when people talk to non-human systems. Not only does this skew data (i.e., does the average user communicate like this?), but it calls into question what traits these systems should have. For example, should the system be upfront that it is a bot, or should it assume people know? Since ConvAI is trained, and ultimately personified, based on the data it receives, data gathering should be as transparent and representative as possible. If people are abusive to these systems because of certain traits, then the technology must be redesigned with its users’ motivations in mind.

Human Escalation: A common question for AI is at what point a human should take over the decision-making process. For example, if a person indicates to an automated phone survey that they have severe COVID-19 symptoms, should a healthcare worker then reach out? Similarly, while ConvAI may be great at gathering information about one’s mental state, a licensed professional is required to create a personalized care plan. A related concern applies to detecting learning disabilities in the classroom.
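The escalation question above often reduces to an explicit rule deciding when a case leaves the automated path. The field names, symptom list, and threshold below are assumptions for illustration, not from the paper.

```python
# Minimal sketch of a human-escalation rule for an automated health survey:
# flag a case for a healthcare worker when a severe symptom is reported or
# self-rated severity crosses a threshold. All values are illustrative.
SEVERE_SYMPTOMS = {"difficulty breathing", "chest pain"}
SEVERITY_THRESHOLD = 8  # hypothetical cutoff on a 0-10 self-rating

def needs_human_followup(response: dict) -> bool:
    """Return True when the survey response should be escalated to a human."""
    reported = {s.lower() for s in response.get("symptoms", [])}
    if reported & SEVERE_SYMPTOMS:
        return True
    return response.get("severity", 0) >= SEVERITY_THRESHOLD

# Example: a mild case stays automated; a severe symptom escalates.
print(needs_human_followup({"symptoms": ["cough"], "severity": 3}))
print(needs_human_followup({"symptoms": ["Chest pain"], "severity": 2}))
```

Making the rule this explicit is itself a design choice: auditors and clinicians can read and adjust the threshold, rather than trusting an opaque model to decide when a human steps in.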

Between the lines

ConvAI’s widespread consumer adoption indicates a promising future. When successfully combined with other interfaces in the appropriate context, the AI becomes more accessible and enjoyable. To ensure the technology is a compelling solution towards SDGs, ethical data gathering practices and standardized evaluation methods must be shared across academic research and industry development. Ultimately, user trust is at the core of success. If talking to a machine is perceived as valuable and useful, then the sky’s the limit.

