
How the TAII Framework Could Influence Amazon’s Astro Home Robot Development

May 15, 2022

šŸ”¬ Research summary by Elena Guerrero PeƱo, a graduate in International Relations from the Complutense University of Madrid and currently a volunteer researcher at SocialTechLab.eu.

[Original paper by Elena Guerrero PeƱo]


Overview: In September 2021, Amazon announced its new Astro robot at its product event: a home robot designed to be much more than a home security and safety device. Concerns about privacy and transparency have been raised by experts and users since the prototype was announced. This paper discusses the risks and responsibilities Amazon faces with this new product and how Trustworthy Artificial Intelligence Implementation (TAII) could influence its development and the social impact that might result from its use, taking the TAII Framework as a reference. The framework could help the company achieve a product development grounded in ethics and a trustworthy AI system.


Introduction

Amazon’s Astro home robot is now a reality. A robotic home device based on Artificial Intelligence (AI) technology has never been closer, now that the multinational company has brought it to market this year. Amazon presents Astro as a smart robot system designed to be much more than a home security and safety device: it is meant to be friendly and comfortable, ā€œembodying a unique persona… by adding eyes to the display or a whole host of soundsā€, as David Limp put it during the official presentation of the Astro bot at Amazon’s product event in September 2021 [1]. It combines the company’s best-known products, such as Alexa, Ring cameras, and smart home integration. Among its main functions, it can move autonomously around the house and check specific rooms, objects, people, or even pets thanks to its simultaneous localization and mapping (SLAM) system and facial recognition. It can recognize its environment, detect people or intruders as well as unfamiliar objects and sounds, follow the user during a video call, set up reminders, bring things to a specific registered person, or automatically save videos to the user’s cloud storage.

However, Amazon has already faced privacy issues with previous devices. For example, customers have raised concerns about the Ring cameras and the partnerships with police for surveillance purposes [2], as well as the contract with the National Security Agency. The trust that customers place in this kind of product is essential if they are to delegate operations, decisions, and everyday home activities to AI systems. Furthermore, providers must commit to ensuring that the product is trustworthy and safe. One of the challenges ahead is to build an innovative and friendly environment while at the same time offering a high level of protection for the user [3].

Therefore, it is crucial to implement Trustworthy Artificial Intelligence (TAI) systems. A TAI system is essentially an AI technology that reflects, and possibly adapts, its design for the common good and sustainability; it is not opaque about how it makes certain decisions and, for these reasons, it earns public trust, shows clear responsibility, and enables a ā€˜dual advantage’ [6]. To accomplish TAI systems, the company would need a multidisciplinary approach in which technology, education, economics, business, ethics, and law are combined.

Trustworthy Artificial Intelligence Implementation

While many experts affirm that AI technology potentially brings many advantages for the future and can contribute to immense social good [4,5,7-9,14], they also recognize numerous caveats that have to be considered, such as legal, economic, and ethical concerns mostly related to freedoms and human rights [4], which companies usually tend to address only once their products are already in motion [5]. The implementation of Trustworthy AI needs to be put into effect within organizations, taking into account their own values and organizational ethics, their business models, and the common good [7], in order to close the existing gap between technological innovation and the protection of privacy [8]. Ideating and developing AI technology within these ethical and philanthropic values thereby generates additional sustainability and common good [9].

In the case of Amazon’s Astro home robot, as an AI product, we can appreciate its potential to empower and enable human self-realization, but it should not devalue human abilities, remove human responsibility, or reduce human control [6]. The development of this product must be human-centric and trustworthy in order to maximize benefits and minimize risks [10]. According to the TAII Framework, it is necessary to start with an analysis of ethical inconsistencies and dependencies [7]. The TAII Framework, published in 2021 by Josef Baker-Brunnbauer, founder of SocialTechLab.eu, helps organizations initiate the implementation of AI ethics. It is oriented on the European Commission’s ethical guidelines for TAI, human rights, and the Sustainable Development Goals (SDGs), and it follows a non-technical, holistic approach that includes perspectives on social impact. It consists of twelve steps, running from defining the company values, the business model, and the stakeholders to the merging, execution, and certification of the AI system. Along these steps, it is necessary to account for existing regulations and standards, define the risks, and stay in accordance with the common good and with ethical requirements and guidelines.
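To make the framework’s iterative, checklist-like structure concrete, a minimal sketch in Python follows. It is purely illustrative and not part of the paper or of the official TAII Framework: only the steps named above are listed, the remaining steps are omitted, and the class names, fields, and the pending helper are assumptions made for this example.

    # Illustrative only: a hypothetical checklist for tracking TAII-style steps
    # across iterations. Step names are limited to those mentioned in the text;
    # the full twelve steps are defined in Baker-Brunnbauer's TAII paper [7].
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TAIIStep:
        name: str
        completed: bool = False
        notes: str = ""  # record answers at each iteration for transparency

    @dataclass
    class TAIIAssessment:
        product: str
        steps: List[TAIIStep] = field(default_factory=list)

        def pending(self) -> List[TAIIStep]:
            """Return the steps still open in the current iteration."""
            return [step for step in self.steps if not step.completed]

    assessment = TAIIAssessment(
        product="Astro home robot",
        steps=[
            TAIIStep("Define company values"),
            TAIIStep("Define business model"),
            TAIIStep("Define stakeholders"),
            # ... further TAII steps (regulations, risks, common good) omitted; see [7] ...
            TAIIStep("Merge, execute, and certify the AI system"),
        ],
    )
    print([step.name for step in assessment.pending()])

In this reading, filling in the notes field at every pass would mirror the framework’s emphasis on documenting all answers and iterations to improve transparency [7].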

In light of the previously stated concerns, Amazon and its various stakeholders should analyze the dilemmas that may arise. The company faces a range of challenges and risks: from privacy, transparency, personal injury, and property damage to manipulation, loss of control, energy consumption, and job replacement. The company needs to be aware of the extended notion of risk that the use of this kind of AI technology involves, as the European Commission proposes [11], and act accordingly, taking into consideration the need to fulfill principles such as human rights (including the right to freedom and security, the protection of personal data, and non-discrimination), the SDGs (such as decent work and economic growth, reduced inequalities, and responsible consumption and production), the OECD values-based principles on AI (such as inclusive growth, sustainable development and well-being, and human-centered values and fairness), and the aims of the European General Data Protection Regulation (GDPR), among other principles.

Product Development Implication

Amazon affirms that facial recognition and mapping data are stored mostly locally, although a small part of the mapping data is saved securely in the cloud for remote access via the phone app [12]. Although the company has applied some privacy measures, such as the possibility to set up ā€˜out-of-bounds zones’, a ā€˜do not disturb’ mode, and the ability to delete configured maps in the app, according to Astro’s privacy page, concerns about hacking, data storage, and relationships with official security organizations continue to worry customers. Moreover, the facial recognition function has also started to concern official authorities: the European Parliament has already called for a ban on the use of facial recognition by law enforcement in public spaces, in line with Union data protection law [13]. However, this concern and the upcoming legislation do not cover private spaces, leave consumer products out of scope, and therefore have no direct influence on large companies such as Amazon.

During the development process and with future inputs, Amazon must prioritize AI ethics, plan and allocate resources, and secure the commitment of its stakeholders, along with strong governance controls, process management, audit procedures, and the additional costs and resources that may conflict with commercial interests, for the TAII to succeed [7]. In this respect, it is key to document all answers and future iterations of the TAII Framework to improve transparency. This also means extending the explainability of the product beyond technical language [14], which requires collaboration between engineers and internal and external stakeholders on how ethical issues should be implemented [7] to avoid negative impact and social rejection, and to achieve social acceptability or preferability, two principles that should guide all data science systems that could have an impact on social life [15], as the Astro robot does.

Influence on the Astro Home Robot Development

The company needs constant assessment throughout the Astro system’s whole life cycle, which creates continuous input for analysis so that the process can start again and transparency, safety, trust, and the product itself can improve, taking into account the evolution of society, technology, the market, and so on [7]. The TAII Framework should be adapted from the beginning to the value chain, legal requirements, risk consequences, and the common good [7], and at the same time evolve and be revised in order to keep up with the evolution of these factors, taking into consideration the expected commercial interests but also the valuable lessons the company can learn in the process [16].

Between the lines

Although Amazon’s current measures seem to take some of the stated concerns into account, the company should carry out continuous assessment, prioritizing AI ethics and considering the social impact this product could have beyond its software and data engineering setting. At the same time, privacy rights and ethical dependencies within the company have to be taken into account at all times. In this way, the product would have to be evaluated continuously from step one to step twelve of the TAII Framework. An update of the AI system’s brief overview will be necessary, and some of its parameters may have to be changed or updated, even though Amazon’s Astro robot is already on the market.

References

[1] Amazon: Amazon Devices & Services news. September 2021 https://www.aboutamazon.com/news/devices/amazon-devices-services-news-september-2021 (2021). Accessed 26 October 2021

[2] Ng, A.: Ring’s work with police lacks solid evidence of reducing crime. CNET https://www.cnet.com/features/rings-work-with-police-lacks-solid-evidence-of-reducing-crime/ (2020). Accessed 20 October 2021

[3] Wendehorst, C.: Trustworthy AI – the role of law and regulation. From Ambition to Action. A High Level Conference on AI. Keynote Speech, September 2021. https://ai-from-ambition-to-action.com Accessed 19 December 2021.

[4] Jobin, A., Ienca, M. & Vayena, E. The global landscape of AI ethics guidelines. Nat Mach Intell 1, 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

[5] Royer, A.: The Short Anthropological Guide to the Study of Ethical AI, Montreal AI Ethics Institute https://arxiv.org/ftp/arxiv/papers/2010/2010.03362.pdf (2020). Accessed 19 October 2021 

[6] Floridi, L., Cowls, J., Beltrametti, M. et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 (2018). https://doi.org/10.1007/s11023-018-9482-5

[7] Baker-Brunnbauer, J.: TAII Framework for Trustworthy AI Systems. ROBONOMICS: The Journal of the Automated Economy, 2, 17. (2021). Retrieved from https://journal.robonomics.science/index.php/rj/article/view/17

[8] Wachter, S.: Data protection in the age of big data. Nature Electronics, 2, 6-7 (2019). https://doi.org/10.1038/s41928-018-0193-y

[9] Kurzweil, R.: The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking, New York (1999)

[10] European Commission: Ethics Guidelines for Trustworthy AI. High-Level Expert Group on Artificial Intelligence https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419 (2019). Accessed 19 October 2021

[11] European Commission: Second European AI Alliance Assembly. https://digitalstrategy.ec.europa.eu/en/events/second-european-ai-alliance-assembly (2020) Accessed 26 October 2021

[12] Seifert, D.: Say hello to Astro, Alexa on wheels. The Verge. https://www.theverge.com/2021/9/28/22697244/amazon-astro-home-robot-hands-on-features-price (2021) Accessed 26 October 2021.

[13] European Parliament: European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (2020/2016(INI)) https://www.europarl.europa.eu/doceo/document/TA-9-2021-0405_EN.html (2021) Accessed 18 December 2021.

[14] Baker-Brunnbauer, J.: Management perspective of ethics in artificial intelligence. AI Ethics 1, 173–181 (2021). https://doi.org/10.1007/s43681-020-00022-3

[15] Floridi, L., Taddeo, M.: What is data ethics? Phil. Trans. R. Soc. A. 374: 20160360 (2016). http://doi.org/10.1098/rsta.2016.0360

[16] MIT Technology Review Insights: In unpredictable times, a data strategy is key. https://wp.technologyreview.com/wp-content/uploads/2021/10/In-unpredictable-times-a-datastrategy-is-key.pdf (2021). Accessed 20 October 2021

