Montreal AI Ethics Institute


Aging with AI: Another Source of Bias?

January 3, 2023

✍️ Column by Marianna Ganapini and Myriam Bergamaschi.

Dr. Marianna Ganapini is our Faculty Director and Assistant Professor in Philosophy at Union College.

Myriam Bergamaschi is a feminist and trade union activist, author of numerous books and articles about Italian trade unions, history, feminism, and aging.


Reflections on aging open up philosophical questions about the meaning of life and death: where do we want to be when we age, what kind of existence do we want to live, and what does it mean to age well? One thing we know for sure, though, is that aging often means becoming dependent on others for care and support. It is also becoming increasingly clear that artificial intelligence (AI) and robots can be used to assist older adults in a variety of ways. For example, AI-powered technology can help older adults with simple daily tasks such as remembering their medications or keeping track of meetings and appointments. In addition, AI-powered systems can monitor older adults’ health and well-being, alerting caregivers or medical professionals in case of an emergency. In theory, AI could also provide some form of social support and companionship to older adults living alone or far from their families. Hence, the use of AI could help older adults live more independently and safely while providing both assistance and (hopefully) some peace of mind to their caregivers and loved ones.

Unfortunately, the use of recent technology to assist aging adults is also fraught with ethical challenges. A recent NYT article offers a depressing and disconcerting picture of how ‘robots’ have recently been employed to ‘assist’ patients in hospitals when nurses were unavailable. As the article explains, “patients, many of them already disoriented, were confused by the disembodied voices coming from” the machines, which were ill-suited to assist them.

This result is evidence of a more significant trend. A recent paper by the scholar Dafna Burema (2022) analyzed 96 academic publications to examine how older adults are represented in human–robot interaction (HRI). The paper’s central question is: how are older adults represented by those who design and build robots aimed at elderly care? Through her analysis, the author unveils an “essentialist view” in which older folks are portrayed as unable to participate socially, with fragile health, little mobility, and impaired cognitive and mental function. The author points out that research and design in HRI treat the old as having these inherent features, as if this were part of their ‘essence’ as old people. Given the way older adults are portrayed in the field, the goal with which AI-powered robots are designed is to allow the elderly to “improve” themselves. Thus, it looks as if the technology is there to alleviate the “burdensome” aspects of aging and decline by increasing the independence of elderly users, making them less reliant on others. As the NYT article documents at length, this is also a way for hospitals to cut costs, in ways that hardly benefit the patients.

In contrast, we want to caution against an incorrect, stereotypical vision of old age, portrayed only as an inherently difficult phase of our lives, characterized by increasing fragility and the need for care and support. Though it is true that many elderly people face difficulties and often need help to carry on with their lives, those designing technology for them frequently seem to lack the relevant knowledge about who older adults are and what they need or want. When we call for more inclusive and open technology, we often need to remember that a lot of this new tech targets older adults. Yet they are excluded from the conversation about the technology’s goals and values; their voices are rarely heard, and their input is often not considered relevant. To stop this trend, those of us working toward a more ethical technology should say loud and clear that ageism and lack of knowledge are bound to cause harm, and that in-depth research is needed to understand the actual needs, desires, and abilities of those affected by the tech we build.

References

Burema, D. (2022). A critical analysis of the representations of older adults in the field of human–robot interaction. AI & Society, 37, 455–465. https://doi.org/10.1007/s00146-021-01205-0

Stypinska, J. (2022). AI ageism: A critical roadmap for studying age discrimination and exclusion in digitalized societies. AI & Society. https://doi.org/10.1007/s00146-022-01553-5

Robbins, R., & Thomas, K. (2022, December 15). How a sprawling hospital chain ignited its own staffing crisis. The New York Times. https://www.nytimes.com/2022/12/15/business/hospital-staffing-ascension.html


© Montreal AI Ethics Institute, 2024. This work is licensed under a Creative Commons Attribution 4.0 International License.