Context: Our founder Abhishek Gupta recently spoke at Ignite | The Tour conference in Toronto – Microsoft’s largest technical training conference, attracting more than 4,500 developers and IT pros. There, he shared some of the current challenges in AI ethics, Microsoft’s approach to tackling them, and thoughts on putting these principles into everyday practice. This article is a summary of those thoughts.
On my background in AI ethics, and the importance of creating ethical AI systems
I joined Microsoft recently as a Software Engineer doing Machine Learning in the Commercial Software Engineering group, where we work with Microsoft’s partners to co-develop solutions to some of the toughest technical challenges they face. The colleagues I get to work with every day embody the principles of ethics, safety and inclusion down to their core and exhibit them in their everyday actions. I’m proud and fortunate to have the opportunity to build and collaborate on practical, actionable insights in AI ethics at Microsoft.
I founded the Montreal AI Ethics Institute with the vision of “defining humanity’s place in a world increasingly driven by algorithms”. The Institute started off as an experiment to see if we could bring together people from a diverse set of backgrounds with varying degrees of expertise in AI and ethics and have them build up their competence to a point where they could start making meaningful contributions to technical and policy measures in the development of ethical, safe and inclusive AI systems.
It is crucial that we consider issues of ethics, safety and inclusion in AI systems, especially when these systems are used to make consequential decisions about a person. When there isn’t a human in the loop, automated systems can struggle to handle nuanced situations, and biases from historical data can negatively influence important aspects of people’s lives such as finance and immigration. These considerations also matter for moving towards a more equitable distribution of the wealth and opportunities that will be created as part of the economic progress from deploying AI systems.
That’s why at Microsoft, we believe that AI should augment human ingenuity rather than replace it. We need to reframe our conversations from humans versus machines to humans with machines – especially because there are many mundane aspects of our lives that can be automated. This could then free us up to perform higher-order, value-adding tasks, whatever those might be in a person’s specific context.
Has there been an evolution of education or skills development within AI? How are tech companies like Microsoft addressing that challenge?
Academic institutions continue to work with tech companies to get advice and insights for their curricula. A few universities have launched ethics courses as part of their CS degrees, but by and large, higher education is still in the early days of addressing ethics in AI.
The importance of ethics, safety and inclusion in AI systems is now discussed much more widely compared to when I started working on this. At Microsoft, we’re doing a lot of work in this area. We actually just introduced guidelines for developing responsible conversational AI.
They’re based on what we have learned both through our own cross-company work focused on responsible AI and by listening to our customers and partners. They are a great resource to leverage when starting to develop AI projects within your organization.
We believe that we are all responsible for ensuring there are regulations in place that protect citizens globally. This is a joint effort, and we’re working with governments, researchers, academics and partners to ensure that AI is applied ethically for the benefit of society. General public competence is also crucial in this regard, as it helps to surface a diversity of ideas from various domains that may have faced these challenges, albeit in different contexts.
The Microsoft Research team has developed the FATE program, which works on collaborative research projects that address the need for transparency, accountability, and fairness in AI and machine learning systems. As part of this research, the program has identified six ethical values – fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence.
Microsoft has also recently launched a series of training programs, including the Microsoft Professional Program for AI, which features a class called Ethics and Law in Data Analytics. Here are some more details on the program and steps on how to sign up for it.
AI Ethics in 2019
What I’d love to see in 2019 is that we expand the scope of these conversations to include people who are not directly connected to the field. An example of that is my mission of building public competence in Montreal, where we have a community of more than 1,400 people who are now empowered to take these ideas back to their work, research, and community and apply them.
Ethics should not be an afterthought – it shouldn’t be a box that you tick off at the end of your project. It needs to be integrated from the very genesis of the design process all the way through to the end, with guidelines established early on that emphasize the development of responsible and trustworthy AI.
The Conversational AI Guidelines that I mentioned earlier encourage companies and organizations to stop and think about how their bot will be used and take the steps necessary to prevent abuse. At the end of the day, the guidelines are all about trust, because if people don’t trust the technology, they aren’t going to use it.
Would you suggest having engineers trained in ethics, or having that role stand alone and work as more of a partnership?
A combined approach is best, and likely what will happen based on historical patterns in similar fields.
Ethics should definitely be integrated into the roles of the people who directly and indirectly shape these systems. What we’re going through today is a bit like where cybersecurity was a few years ago, when the security role was separate and applied at the end of the development cycle. Gradually, we moved it earlier and earlier into the design and development phase, such that security became something everyone considered as they carried out their work duties.
Just as we still have deep experts in security, we will see the same thing here: ethics, safety and inclusion will become core priorities for all stakeholders involved in the development and deployment of these systems, and we will also have experts in ethics who serve to strengthen the work done in the domain.
What sort of learning path would you recommend to ensure that there is sufficient training on bias?
Start with reading introductory texts to familiarize yourself with the ideas, then start applying them to case studies. Why? Because those of us with technical backgrounds assume that ethics are something that everyone understands. In reality, there are obvious gaps in our knowledge, and biases that go unchecked if we just start making those decisions without any familiarity with the insights from the field of philosophy. I recommend Peter Singer’s Practical Ethics as a starting point, and case studies from the Markkula Center for Applied Ethics.
I’d also recommend reading domain-specific case studies from my website. Participating in community discussions around this is also very crucial – find a local meetup related to AI Ethics, or start your own.
Here are some other resources from Microsoft that make these ideas accessible:
- November 22, 2018 – Microsoft’s Team Data Science Process (TDSP)
- November 29, 2018 – Integrating AI in your solutions with Microsoft AI platform
- December 6, 2018 – Considerations for developing using BOT Technology
What is the business value of building a responsible AI system – why is this so important?
When we look at Microsoft’s mission statement and performance over the past half decade, it is evident that success should be measured holistically. Our focus on inclusion, and on building products that bring the greatest benefits to the largest number of people, has helped us usher in a new era.
From a business perspective, for anyone trying to assess why it is useful to invest in building responsible AI systems, a simple example illustrates the point: imagine a facial recognition system used to unlock your phone that works well for certain types of faces and poorly for others.
For someone for whom it doesn’t perform well, the experience with your device will be so poor that they won’t want to use it, and they may switch to a competitor whose system has fewer biases and therefore offers a better user experience.
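To make that concrete: a first-pass bias audit is often just a matter of breaking an evaluation metric down by group instead of reporting a single aggregate number. The sketch below, in Python, computes per-group false rejection rates for a hypothetical face-unlock model; the group labels and numbers are made up for illustration and don’t come from any real system.

```python
from collections import defaultdict

def false_rejection_rates(records):
    """Per-group false rejection rate of a face-unlock model.

    `records` is an iterable of (group, accepted) pairs for genuine
    (authorized) users, so every rejection here is a false rejection.
    """
    attempts = defaultdict(int)
    rejections = defaultdict(int)
    for group, accepted in records:
        attempts[group] += 1
        if not accepted:
            rejections[group] += 1
    return {g: rejections[g] / attempts[g] for g in attempts}

# Hypothetical evaluation results: 100 genuine unlock attempts per group.
results = ([("group_a", True)] * 97 + [("group_a", False)] * 3
           + [("group_b", True)] * 80 + [("group_b", False)] * 20)

for group, frr in false_rejection_rates(results).items():
    print(f"{group}: false rejection rate = {frr:.1%}")
```

In this made-up data, the aggregate false rejection rate is 11.5%, which might look tolerable, while the per-group breakdown shows one group being falsely rejected more than six times as often as the other – exactly the kind of disparity that drives users to a competitor.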
Ultimately, building responsible AI systems will make your products and services accessible and useful to a larger number of people, and that translates into greater business value.
Outside of bias itself, what other ethical considerations would you say are critical with AI development?
Reproducibility, accountability, transparency, explicability, ethics, safety and inclusion are all equally important.
As discussions around building responsible AI systems have broadened, most of the focus has rested on bias, because it is the most obvious point where harm can be inflicted on users of those systems. But these other concerns, while at times more subtle, also have significant impacts on how people interact with AI systems.
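Reproducibility is a good illustration of a subtler concern: a result that can’t be independently re-derived can’t really be audited. A practical first step is simply pinning every source of randomness in a training run. Here is a minimal sketch, assuming a NumPy-based workflow; deep learning frameworks keep their own RNG state that would need seeding too.

```python
import random

import numpy as np

def set_seeds(seed: int = 42) -> None:
    """Pin the common sources of randomness so a run can be repeated exactly.

    Frameworks such as PyTorch or TensorFlow hold separate RNG state
    (e.g. torch.manual_seed) that would also need to be pinned.
    """
    random.seed(seed)     # Python's built-in RNG
    np.random.seed(seed)  # NumPy's global RNG

set_seeds(42)
print(np.random.rand(3))  # identical output on every run with the same seed
```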
Traceability is another. In building AI systems that will be used in a medical context, it is important that we can trace decisions and offer some degree of explicability as to why certain diagnoses and recommendations are being made to the patient. Additionally, to encourage uptake of these systems and to secure approvals from medical associations and doctors, it is crucial that we meet these requirements and evoke trust; otherwise, we are asking clinicians to delegate away their decision-making abilities without transparency.
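One lightweight way to get that traceability is to record, for every recommendation the system produces, enough context to reconstruct later why it was made. The sketch below shows one possible shape for such an audit record; the field names, model version and patient features are hypothetical assumptions for illustration, not a prescribed schema.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """A single auditable entry: enough to trace a recommendation later."""
    model_version: str  # the exact model that produced the output
    input_digest: str   # hash of the input, so no raw patient data is stored
    prediction: str     # the recommendation shown to the clinician
    top_factors: list   # features that most influenced the output
    timestamp: str

def log_decision(model_version, patient_features, prediction, top_factors):
    digest = hashlib.sha256(
        json.dumps(patient_features, sort_keys=True).encode()
    ).hexdigest()
    record = DecisionRecord(
        model_version=model_version,
        input_digest=digest,
        prediction=prediction,
        top_factors=top_factors,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice this would be appended to a tamper-evident store.
    print(json.dumps(asdict(record)))

# Hypothetical usage with made-up features and a made-up model version.
log_decision(
    model_version="cardio-risk-2.3.1",
    patient_features={"age": 54, "bp_systolic": 150, "smoker": True},
    prediction="elevated cardiovascular risk; recommend follow-up",
    top_factors=["bp_systolic", "smoker"],
)
```

Hashing the input rather than storing it keeps the log auditable without retaining raw patient data, while the model version and top factors give a reviewer a starting point for explaining the recommendation.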
How is Microsoft addressing responsible AI?
Microsoft has taken a very active approach to applying these principles. I serve on the AI Ethics Review Board for my department, which acts as an advisory body applying well-defined principles to the building of responsible AI systems. These principles are applied even before the system is designed, allowing us to be proactive in mitigating concerns. We have a well-established pipeline for submitting projects that use AI in any capacity, and they are reviewed by a committee of experts before work starts on them.
We continue to see more people who understand the importance of these principles and actively work with us, not only to bring up possible concerns but also to address them throughout their projects. This works because we have clear guidelines, structure, processes and alignment with Microsoft’s mission statement, which make the integration smooth and allow us to apply this process as an integral part of the design, development and deployment cycle.
I’m proud to be working for a company that believes that regulating AI is a joint effort between government, researchers, academics and partners. Our aim is to develop computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies.