Montreal AI Ethics Institute


Towards Sustainable Conversational AI

May 15, 2022

🔬 Column by Julia Anderson, a writer and conversational UX designer exploring how technology can make us better humans.

Part of the ongoing Like Talking to a Person series


Product launches are complex. Conversational AI, like many AI products, relies on large datasets and diverse teams to reach the finish line. However, these product launches are often iterative, requiring future optimization to satisfy customers’ needs.

Sustainable AI is the quest to build AI systems that work as well for the companies deploying them as for the people they serve. This type of sustainability means rethinking the lifecycle of AI products so that the technology can scale while minimizing costs to consumers, society and the environment.

Without the proper mechanisms in place, today’s big product launch can become tomorrow’s embarrassing mistake. Beyond maintenance costs, concerns around integration, secure data gathering and ethical design must be addressed. Creating AI that is compatible with scarce resources requires proper AI governance, collaborative research and strategic deployment.

Responsibility as a Cornerstone

Adopting artificial intelligence is appealing to businesses that want technology to guide decision-making, automate processes and save time. While the promise of AI is immense, widespread adoption is much slower in practice. To ensure seamless and scalable integration, various stakeholders must have a seat at the table to determine AI’s vision and governance.

The principles of Responsible AI may serve as a starting point for businesses that are serious about developing an AI strategy. While governance varies within organizations, it helps when the CEO or a senior leader accepts responsibility for AI leadership. As responsibilities are assigned, decision-makers can be held accountable. The challenge then becomes how to measure success. Oftentimes, data scientists and engineers must be involved early on to clear up misunderstandings about how AI can grow alongside a company to solve its problems.

Typically, a formidable culture shift is necessary to implement a new vision. If AI isn’t trusted or understood within the organization, then its customers will likely hold similar perceptions. This is the basis of a social license. In AI, a social license is the perception that the technology uses fair and transparent algorithms to benefit the markets it operates in. For conversational AI such as in-car assistants, this could mean incorporating the objectives of not only automobile drivers, but manufacturers and regulators as well.

Considering the perspectives of various stakeholders also means constant iteration. Rather than see AI software as “plug-and-play technology with immediate returns,” companies should embrace cross-functional teams, user feedback and the reality that the first deployment won’t be the last. Before technology can scale within a business, the company must understand customer complexities as well as what data are required to create a successful experience.

Gathering the Goods

A common, and costly, pitfall is not gathering enough data. For chatbots or voice assistants, this limits not only how the technology understands customer problems but also how it can solve them. Choosing the right learning model for the problem, along with using a representative training data set, can reduce biases early on and yield better long-term results. Without the right model, disasters like Microsoft’s Tay, a bot that adopted the personality of a racist Internet troll, can spring up.

In conversational AI, there are usually several pieces of a platform that create the end experience. For enterprise solutions, this incorporates NLP performance, backend data integration, security, data storage and more. The question of building vs. buying a platform is a crucial one, due to the high costs of switching systems later on. When an organization uses one service for speech recognition and another for dialog management, for example, another question arises: “How well can we secure our data?”

Obtaining high-quality datasets requires time and money. One way to make this easier is to use human-in-the-loop solutions: humans create and label datasets, the AI system learns from them, and humans review and refine the system’s outputs. Biases are inevitable but can be reduced if the humans in the loop work under responsible AI leadership.
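One common pattern behind human-in-the-loop labeling can be sketched in a few lines: the model labels what it is confident about, and everything below a confidence threshold is routed to a human annotator whose corrected labels rejoin the dataset. The function and parameter names below are illustrative, not from any specific labeling platform.

```python
def human_in_the_loop_labeling(items, model, ask_human, threshold=0.8):
    """Route low-confidence predictions to a human reviewer.

    model(item) -> (label, confidence); ask_human(item) -> label.
    All names here are hypothetical, for illustration only.
    """
    labeled = []
    review_queue = []
    for item in items:
        label, confidence = model(item)
        if confidence >= threshold:
            labeled.append((item, label))   # machine label accepted
        else:
            review_queue.append(item)       # defer to a human
    # Humans resolve the uncertain cases; results rejoin the dataset.
    for item in review_queue:
        labeled.append((item, ask_human(item)))
    return labeled
```

Tuning the threshold trades annotation cost against label quality, which is exactly the kind of decision that benefits from responsible AI leadership rather than being left to a default.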

Data privacy remains a contentious subject for users who may not understand how a company’s system shares information to deliver customer experiences. Sometimes, companies don’t understand how different elements of the system work together. While many vendors claim to have an “enterprise-ready platform” to create conversational AI solutions, this is often not the case. The secure transfer of data is one of many challenges that vendors must overcome. Once a solution is deployed, companies may feel pressured to stay with a vendor due to the complexities of switching to an alternative.

Environmental Expectations

The misallocation of computing power is a cost to the public. For one, the money necessary to train large models is a barrier to entry for smaller firms, stifling competition and research projects. With less equitable access comes fewer opportunities to tackle community challenges.

Customer perceptions of AI products are always a factor for businesses trying to scale. Depending on the use case, people may prefer to solve a problem by talking to humans rather than to conversational AI. When this finding isn’t clear to the designers or developers, a company may have invested in the wrong solution. The inability of organizations to connect AI investments to business value hinders innovation.

Due to AI’s evolving nature, it is difficult to manage expectations of what success looks like. However, carbon efficiency is one metric being explored. Green Algorithms is one tool striving to quantify carbon footprints at a time when there are no universal standards for measuring carbon impact. The Information and Communications Technology (ICT) industry, where machine learning and AI are fast-growing disciplines, generates about 2% of global CO₂ emissions, a share that is still growing.
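To make the carbon-efficiency idea concrete, the sketch below follows the general shape of estimators like Green Algorithms: energy is runtime times hardware power draw times a data-centre overhead factor (PUE), and carbon is energy times the grid’s carbon intensity. The default constants are illustrative placeholders, not official Green Algorithms figures, and real estimates depend heavily on the hardware and region.

```python
def training_carbon_kg(runtime_hours, n_cores, core_watts=12.0,
                       memory_gb=64, mem_watts_per_gb=0.37,
                       pue=1.67, grid_gco2_per_kwh=475.0):
    """Rough CO2 estimate (kg) for a compute job.

    Simplified from the Green Algorithms methodology:
      energy (kWh) = runtime * (core power + memory power) * PUE
      carbon (kg)  = energy * grid carbon intensity
    All default constants are illustrative assumptions.
    """
    power_kw = (n_cores * core_watts + memory_gb * mem_watts_per_gb) / 1000.0
    energy_kwh = runtime_hours * power_kw * pue
    return energy_kwh * grid_gco2_per_kwh / 1000.0
```

Even a back-of-the-envelope number like this lets teams compare model choices (a 24-hour job on 8 cores versus a week on 64) before universal reporting standards arrive.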

New solutions such as LightOn are finding ways to use less energy for complex computations using novel hardware. Crystal, a conversational AI platform, auto-classifies business requests with half the computing power of similar products. Thoughtful design paired with such innovation can bring sustainability to the forefront of AI solutions.

Conclusion

A narrow AI vision can set the stage for larger operational and ethical issues down the line. By realizing there is no one-size-fits-all approach to emerging technology, sustainable AI can be a nimble solution in a once rigid landscape.

Businesses like growth. When speed is prioritized over collaboration, however, AI deployments may not live up to expectations.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.


© Montreal AI Ethics Institute 2024. This work is licensed under a Creative Commons Attribution 4.0 International License.