Original Article by Agustin Polo Friz, volunteer researcher at SocialTechLab.eu, who joined after completing his graduate studies at the University of Vienna, Austria.
[In this article, Agustin uses the Tesla Bot as an application in combination with the original paper by Josef Baker-Brunnbauer]
Overview: Could Tesla implement trustworthy AI? This article discusses the possible implementation of the TAII Framework for Tesla Bot, the humanoid robot prototype that Elon Musk announced for 2022. The intent is to merge technological innovation with trustworthy AI guidelines so that ethical principles are incorporated into AI systems.
Introduction
On the 20th of August 2021, Tesla's Artificial Intelligence Day was held in Palo Alto, and Elon Musk took the opportunity to announce that in 2022 Tesla will present a new prototype of a humanoid robot called Tesla Bot. The robot is meant to resemble a human body in shape and is designed to "navigate through a world built for humans" by performing tasks that are "dangerous, repetitive, or boring" [6]. Even though Musk shares many concerns about both the risks of AI systems and their impacts on human society, he has decided that Tesla should be a leading company in merging ethical AI and robotics, rather than leaving the floor to parties he considers less responsible. In this context, the Trustworthy Artificial Intelligence Implementation (TAII) Framework [1], through its holistic perspective, could help in developing trustworthy AI systems for Tesla Bot.
By centering Tesla Bot on the TAII Framework's twelve steps, the new prototype will be able to incorporate the company values, morals, and ethical principles essential to Tesla's culture. Bridging the gap between AI functionalities and Musk's broader societal vision could help avoid internal contradictions in the development of the product, so that high standards of innovation, accountability, and transparency can be preserved. Testing the feasibility of the TAII Framework could strengthen Tesla's credibility and its commitment to providing a systematic assessment that guarantees compliance with the common good, as articulated by both the Sustainable Development Goals (SDGs) and the Universal Declaration of Human Rights (UDHR).
From Theory to Facts
Following the guidance proposed in the TAII Framework [1], this section seeks to show in which practical ways Tesla Bot might benefit from a trustworthy AI system implementation. Firstly, Tesla could create a brief overview of the AI system that transparently specifies the purpose of Tesla Bot. In this case, it would be crucial to mention how the robot should focus on performing repetitive and boring tasks, such as buying groceries at the shop. Stating its main goal will require gathering information on both the business model and the company values, thus providing the right incentives to align all these elements with the final vision. Moreover, it would be recommended to carefully select input data that allows Tesla Bot to perform its tasks effectively. Adopting biased data would be pointless: it might cause inconsistencies with the company values and also lead to negative external implications [3]. To connect, monitor, and guide the constant interaction between ethical principles and AI implementation, the TAII Framework advocates establishing an internal ethics board capable of supervising all AI systems within Tesla, as sketched below.
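To make this first step more concrete, the minimal Python sketch below shows one way such a brief overview could be captured as a structured record. The field names and the Tesla Bot values are illustrative assumptions, not an official TAII Framework template.

```python
# Illustrative sketch only: the fields and example values are assumptions,
# not an official TAII Framework template or actual Tesla documentation.
from dataclasses import dataclass


@dataclass
class AISystemOverview:
    """A brief, transparent description of an AI system and its context."""
    name: str
    purpose: str                   # what the system is for, in plain language
    intended_tasks: list[str]      # e.g. repetitive, boring, or dangerous tasks
    input_data_sources: list[str]  # where training and operational data come from
    company_values: list[str]      # values the system must stay aligned with
    ethics_board_contact: str      # who supervises the AI system internally


overview = AISystemOverview(
    name="Tesla Bot (prototype)",
    purpose="Perform dangerous, repetitive, or boring tasks for humans",
    intended_tasks=["buying groceries", "simple household chores"],
    input_data_sources=["curated task demonstrations", "simulation data"],
    company_values=["safety", "transparency", "accountability"],
    ethics_board_contact="internal-ethics-board@example.com",
)
print(overview.purpose)
```

Keeping this overview in a single, versioned record would make it easier for an internal ethics board to check that purpose, data sources, and company values stay aligned as the product evolves.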
Stakeholders and Legal Considerations
The next step is to categorize stakeholders by role and to identify group leaders capable of representing specific interests within the company. The intention should be to include everyone involved at Tesla, raising awareness not only of the benefits of AI but also of its related risks. Furthermore, legal requirements should be taken into consideration, given how easily borders are crossed when humanoid prototypes enter the legal scene. In the case of Tesla Bot, promoting friendly artificial intelligence could have benign effects on humanity. Friendly AI here refers to machines programmed with the intent of mimicking friendly behavior based on human virtues, as opposed to hostile behavior that might harm society [8]. Such AI systems could improve quality of life by automating laborious and time-consuming tasks in areas such as healthcare, agriculture, logistics and supply chains, and industrial production, especially when aligned with the common good embedded in the 17 SDGs. On the other hand, they will also demand legal acumen in designing AI limitations [5]. When it comes to the law, eliminating opaque algorithms is not easy, and it hides insidious hurdles: the technical difficulty of unveiling "black box" algorithms, the legal constraints that can inhibit algorithmic transparency when commercial secrets are involved, and the data privacy legislation that can complicate efforts to disclose information on the training data [4]. The TAII Framework advises assessing the risks associated with AI systems by measuring social impacts and potential threats. In other words, Tesla should find an evaluation scheme that mitigates the risk of unintended outcomes produced by its new prototype [9]. In parallel, a thorough inspection might offer the possibility to assess how Tesla Bot is intended to support the achievement of the common good summarized by the 17 SDGs.
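As one illustration of what such an evaluation scheme could look like, the sketch below scores hypothetical risks by likelihood and impact and tags each with a related SDG. The scoring formula, the escalation threshold, and the example risks are assumptions made for illustration; the TAII Framework does not prescribe this particular scheme.

```python
# Hedged sketch of one possible risk-scoring scheme (likelihood x impact).
# All risks, scores, and SDG labels below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe societal harm)
    related_sdg: str  # which Sustainable Development Goal it touches

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


risks = [
    Risk("Biased task data leads to unfair behavior", 3, 4, "SDG 10: Reduced Inequalities"),
    Risk("Opaque decision-making cannot be explained", 4, 3, "SDG 16: Strong Institutions"),
    Risk("Physical harm during household tasks", 2, 5, "SDG 3: Good Health and Well-being"),
]

# Anything above an agreed threshold is escalated to the internal ethics board.
THRESHOLD = 10
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    flag = "ESCALATE" if risk.score >= THRESHOLD else "monitor"
    print(f"[{flag}] {risk.score:2d}  {risk.description}  ({risk.related_sdg})")
```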
Ethical Principles and Conclusive Steps
Even though the prototype will probably be available first in the U.S. and only later in Europe, the ethical requirements proposed in the TAII Framework, which build on the European Commission's guidelines for trustworthy AI, should be incorporated to ensure compliance with international regulations [7]. After listing the key ethical principles, Tesla should translate those standards correctly across the entire AI system ecosystem. The company could profit from merging all previous steps conducted on the Tesla Bot AI system and visualizing its current state. By implementing and executing the results, it would be possible to retrieve a full picture of the process. Moreover, it is recommended to document all the steps, which, together with the brief overview of the AI system, will increase transparency throughout the process by checking every single action and its associated outcome. Optional certifications could validate the safety and trustworthiness of the Tesla Bot AI system and foster confidence among stakeholders. Additionally, further iterations could be necessary if updates take place, as the TAII Framework anticipates, and this might easily be the case given the complexity of bringing a robot to market.
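One lightweight way to document these iterations could be an audit log that records, for each iteration, how every requirement from the Ethics Guidelines for Trustworthy AI [7] was checked. The sketch below is an assumption about how such a log could be structured; only the seven requirement names come from the guidelines themselves, and the record format is not defined by the TAII Framework.

```python
# Sketch of a lightweight audit log keyed to the seven requirements of the
# EU Ethics Guidelines for Trustworthy AI [7]. The record structure is an
# illustrative assumption, not part of the guidelines or the TAII Framework.
import json
from datetime import date

HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]


def log_iteration(iteration: int, findings: dict[str, str]) -> str:
    """Return a JSON audit entry; unchecked requirements default to 'not assessed'."""
    entry = {
        "date": date.today().isoformat(),
        "iteration": iteration,
        "checks": {req: findings.get(req, "not assessed") for req in HLEG_REQUIREMENTS},
    }
    return json.dumps(entry, indent=2)


print(log_iteration(1, {"Transparency": "AI system brief overview published internally"}))
```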
Between the lines
Whether Tesla Bot succeeds under the TAII Framework depends on how much priority Tesla gives to the ethics of artificial intelligence for its humanoid robot, and on how much commitment is provided by the people, groups, and institutions involved with this new form of technology [1]. Many challenges will surely be met along the way. For instance, given Tesla's great innovative power, there could be even more impediments in moderating the inconsistencies between ethical concerns and AI development [2]. Lack of robust governance, missing quality assurance, improper use of input data, and stakeholder negligence during the Tesla Bot life cycle might be only some of the issues related to the implementation of trustworthy AI systems. Another drawback concerns the additional resources that deploying the TAII Framework would require. Assembling an ethics team that brings interdisciplinary approaches to designing AI solutions might be worthwhile, but it will simultaneously add new expenditures to Tesla's budget. Moreover, the comprehensive effort of building effective AI systems within Tesla will miss the point if the company is not able to clearly show and explain all the necessary steps and iterations followed throughout the Tesla Bot life cycle. Finally, Tesla should convincingly illustrate the linkage between the engineering and technical work intrinsic to any AI system and the practical impacts of Tesla Bot on society.
To sum up, the fundamental challenge for Tesla would be to define ethical and societal objectives and then move from a broad, non-technical perspective to a more specific, algorithm-based formulation that includes both values and principles. Nonetheless, all these potential threats hampering the commercialization of Tesla Bot might be examined and seen as an opportunity to deploy an even better product and to enlarge the community of companies adopting trustworthy AI systems.
References
[1] Baker-Brunnbauer, J.: TAII Framework for Trustworthy AI Systems. ROBONOMICS: The Journal of the Automated Economy, 2, 17 (2021). Retrieved from https://journal.robonomics.science/index.php/rj/article/view/17
[2] Borenstein, J., Howard, A.: Emerging challenges in AI and the need for AI ethics education. AI and Ethics 1, 1, 61-65 (2021). https://doi.org/10.1007/s43681-020-00002-7
[3] Brandon, J.: Using unethical data to build a more ethical world. AI and Ethics 1, 101-108 (2021). https://doi.org/10.1007/s43681-020-00006-3
[4] Burrell, J.: How the machine "thinks": Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016). https://doi.org/10.1177/2053951715622512
[5] Chesterman, S.: Artificial intelligence and the limits of legal personality. International and Comparative Law Quarterly 69, 4, 819-844 (2020). https://doi.org/10.1017/s0020589320000366
[6] Daws, R.: AI Day: Elon Musk unveils "friendly" humanoid robot Tesla Bot. AI News. https://artificialintelligence-news.com/2021/06/08/razer-clearbot-using-ai-robotics-clean-oceans (2021). Accessed 20 October 2021.
[7] European Commission AI HLEG: Ethics Guidelines for Trustworthy AI (2019). Retrieved from https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
[8] Fröding, B., Peterson, M.: Friendly AI. Ethics and Information Technology 23, 207-214 (2021). https://doi.org/10.1007/s10676-020-09556-w
[9] Ormond, E.: The Ghost in the Machine: The Ethical Risks of AI. The Thinker 83, 1, 4-11 (2020). Retrieved from https://journals.uj.ac.za/index.php/The_Thinker/article/view/220/178