🔬 Research summary by Dr. Marianna Ganapini* (@MariannaBergama), our Faculty Director.
[Original paper by World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University]
* Conflict of interest: Marianna is currently collaborating with a research team at IBM led by Francesca Rossi
Overview: In recent summaries, we have stressed that private companies have at times taken the lead in providing guidelines for the responsible use and development of AI technologies. The World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University are collaborating to survey the work of these companies, and they have recently focused on IBM. In this summary, we go through the main points of their most recent white paper, discussing the importance and novelty of the approach taken by IBM.
Introduction
Some tech companies have taken the lead in providing guidelines for the responsible use and development of AI technologies, especially where governments and public institutions are failing to establish clear guidelines and regulations. The World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University are collaborating to survey the work of these companies, and in this paper, they have focused on IBM. In what follows, we go through the main points of their recent white paper, discussing the importance and novelty of the approach taken by IBM.
Key Insights
One of the key moments in the development of IBM’s AI ethics strategy was the publication of its five key commitments to “accountability, compliance and ethics in the age of smart machines”:
- Creating an IBM AI Ethics Board to “discuss, advise and guide (eventually govern) the ethical development and deployment of AI systems (by IBM and its clients)”; since 2019 it has been co-chaired by Christina Montgomery (IBM Chief Privacy Officer) and Francesca Rossi (IBM Global Leader of AI Ethics)
- Designing a “company-wide educational curriculum on the ethical development of AI”
- Creating the IBM “Artificial Intelligence, Ethics and Society program”: “a multidisciplinary research programme for the ongoing exploration of responsible development of AI systems aligned with the organization’s values”
- Establishing an ongoing “participation in cross-industry, government and scientific initiatives and events on AI and ethics”
- Establishing a “regular, ongoing IBM-hosted engagement with a robust ecosystem of academics, researchers, policy-makers, non-governmental organizations (NGOs) and business leaders on the ethical implications of AI”
How are these commitments being implemented in practice? To understand some of the recent key decisions of the IBM AI Ethics Board, we first need to zoom in on the fact that trust and trustworthiness are central to the current IBM strategy, and that they emerge out of five “pillars of trust”: Explainability, Fairness, Robustness, Transparency, and Privacy.
These are the key values that IBM pledges to follow in its design strategies, starting with the creation of ethics-sensitive technologies followed by close monitoring of downstream effects of the use of these technologies.
IBM has tackled these pillars by first developing technical tools to ensure trust for its clients and for the public at large. Let’s see what they are:
- Explainability: when AI is involved in a decision-making process, the reasons for its decisions should be made available. The IBM AI Explainability 360 toolkit tackles some of the technical challenges of ensuring explainability.
- Fairness: together with IBM Cloud Pak for Data, the IBM AI Fairness 360 toolkit for detecting biases in AI can help avoid discrimination and unequal treatment in the design of AI technologies.
- Robustness: to shield models from adversarial attacks, the IBM Adversarial Robustness 360 toolbox provides a valuable set of defense tools.
- Transparency: the IBM AI FactSheets 360 and the Uncertainty Quantification 360 toolkits let AI developers document key aspects of their models to ensure transparency.
- Privacy: IBM pledges that “[o]nly necessary data should be collected, and consumers should have clear access to controls over how their data is being used.”
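To make the fairness pillar above more concrete, here is a minimal sketch of two standard group-fairness metrics of the kind that bias-detection toolkits such as AI Fairness 360 report. This is plain Python illustrating the underlying arithmetic, not the AIF360 API, and the loan-decision data is invented for illustration:

```python
# Two standard group-fairness metrics, computed by hand on toy data.
# (Illustrative sketch only; not the AI Fairness 360 API.)

def selection_rate(outcomes):
    """Fraction of favorable (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact(unprivileged, privileged):
    """Ratio of selection rates; values below ~0.8 are often flagged
    as a concern (the informal 'four-fifths rule')."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical decisions: 1 = favorable (e.g., loan approved), 0 = not.
privileged_group   = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.75
unprivileged_group = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

print(statistical_parity_difference(unprivileged_group, privileged_group))  # -0.375
print(disparate_impact(unprivileged_group, privileged_group))               # 0.5
```

A disparate impact of 0.5 here would flag the decision process for review; the toolkit versions of these checks additionally handle dataset loading, multiple protected attributes, and bias-mitigation algorithms.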
Going beyond the technical tools, to operationalize those 5 pillars and ensure trust, IBM adopts an “ethics by design” approach. In our understanding, that should mean that the above values or pillars are embedded in the design of AI technology not only in the initial design phase but also in considering the downstream consequences and potential misuse of that technology. In some cases, that may require a company to re-design or change the technology altogether.
IBM seems committed to embedding values in this way, as shown, for instance, by their willingness to re-think the use and production of their facial recognition software. More specifically, according to the report, IBM is taking company-wide practical steps to implement its “ethics by design” approach. Some of these important steps are:
- Internal curriculum development and repeated training activities to promote ethics-sensitive design (IBM Garage)
- Fostering diversity, inclusion and equality in the workplace and at the HR level
- Stakeholder engagement with the goal of bringing together “AI corporations with civil society groups for conversations on the best practices for beneficial AI” (e.g., the collaboration with PAI)
- Stakeholder engagement through partnerships with universities (e.g., the Notre Dame-IBM Tech Ethics Lab)
- Involvement in governmental discussion on AI (e.g., Francesca Rossi’s involvement in the European Commission’s High-Level Expert Group on AI)
- Promoting AI for social good (e.g. the Science for Social Good initiative; IBM signed the Vatican’s Rome Call for AI Ethics in 2020).
These are some of the concrete initiatives taken by IBM to drive the company toward a more ethical and trustworthy design and use of AI. IBM cannot do it alone, though: private companies can be fully trustworthy only if they are part of a broader value-sensitive environment that includes independent oversight organizations, a clear legislative framework, and an engaged and informed public.
Between The Lines
IBM has taken the lead in setting the standards for private corporations’ involvement in promoting AI ethics, trying to learn from its past mistakes while looking for new ways to ensure trustworthy AI for its clients and society at large. We hope to see more of that kind of engagement and commitment from the private sector going forward. More broadly, we believe that to achieve trustworthy AI we need to put more effort into the following:
- Precise and targeted government AI regulations
- A private sector that is genuinely committed to trustworthy AI
- Independent oversight organizations & frameworks (e.g., Independent audit systems)
- Civic competence-promoting initiatives and organizations