Montreal AI Ethics Institute

Democratizing AI ethics literacy.


Real talk: What is Responsible AI?

May 30, 2022 by MAIEI

🔬 Original article by Layla Li, full-stack developer, data scientist, and data-driven global business strategist. Layla is a co-founder and CEO of KOSA AI.


It is now clear that two forces will largely define our future: the advancement of intelligent technologies and society's response to that advancement. Many critical business, economic, and social decisions now rely on AI, so it is essential that these systems adhere to ethical frameworks. Nevertheless, recent breakthroughs have prompted questions about the direction of the AI revolution. Over the past few years, many algorithms encoding historic prejudice have contributed to the perpetuation of bias and inequality – primarily affecting those underrepresented in our society.

In 2019, a study published in Science revealed that US healthcare organizations were using a faulty algorithm to assign patients to levels of care. The algorithm placed severely ill black patients in the same lower-risk tier as much healthier white patients, basing its decisions on a biased proxy embedded in the data: annual healthcare spending. Because black patients, often coming from lower-income backgrounds, spend less on medical care per year than white patients, the algorithm denied additional care to up to 46% of qualifying black patients, on the inaccurate assumption that those who incur the highest costs need crucial care the most.

Another 2019 study – involving Kentucky’s justice system – showed that the algorithms supporting judges in bail rulings significantly benefited white individuals over African Americans. 

Also, since the late 2000s, there have been many incidents involving predictive policing algorithms, which triangulate location data, events, and historical crime rates to forecast criminal activity. These algorithms, too, have affected black communities the most, leading to a substantial increase in police patrols dispatched to predominantly African American neighborhoods.

These examples clearly illustrate the racial bias embedded in AI systems. The resulting injustices are amplified further when other individual attributes – gender, sexual orientation, education, ethnicity, or social status – and the ways they overlap are taken into account. Building Responsible AI offers a response to this problem: a holistic answer to the ethical challenges posed by the advancement of modern intelligent systems.

Responsible AI is not a goal, but a way of doing and working

The Responsible AI approach is fascinating to analyze because it can be viewed from several perspectives. The most significant is how we will deal with the current interlinked crises and build a better future for everyone, while respecting different experiences and forms of local knowledge.

Spending on AI technologies is estimated to reach 97.9 billion dollars by 2023 – more than two and a half times the level of only a couple of years earlier. We might as well use the evolution of AI to create new opportunities to improve people's lives around the world, from business to healthcare and education. This is where responsibility enters AI systems.

As a first step, we need to detect and mitigate potential bias in algorithms, but this quickly becomes an ongoing process that must be supported by an ethics-by-design approach: the systematic inclusion of ethical values, principles, requirements, and procedures in AI design, development, and deployment – throughout the entire AI pipeline. The approach spans the whole AI lifecycle and everyone connected to it, technical and non-technical stakeholders alike, in order to minimize risks and maximize fairness. "Ethics by design" encourages asking ethical questions at every stage, which rarely happens today: most AI projects consider social dilemmas only at the beginning, or after an issue has already surfaced. Achieving responsible AI systems is therefore an ongoing process that starts with an organization's true values and goals and a willingness to keep working to understand different individual attributes and experiences and to create best practices.
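To make the detection step concrete, here is a minimal, hypothetical sketch of the kind of bias check an organization might run repeatedly as part of such a pipeline. The function names, the toy outcome data, and the four-fifths threshold are illustrative assumptions, not something prescribed by this article:

```python
# Illustrative sketch of a disparate-impact check, one possible building block
# of ongoing bias detection in an ethics-by-design pipeline.

def selection_rate(outcomes):
    """Fraction of favorable outcomes (1s) a group received."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of selection rates between two groups.

    A ratio far below 1.0 suggests group_a receives favorable outcomes
    much less often; the common "four-fifths rule" treats ratios under
    0.8 as a warning sign worth investigating.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical model decisions (1 = approved for additional care, 0 = not)
black_patients = [1, 0, 0, 1, 0, 0, 0, 1]
white_patients = [1, 1, 0, 1, 1, 0, 1, 1]

ratio = disparate_impact_ratio(black_patients, white_patients)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: potential adverse impact - review the model and its data.")
```

A single metric like this cannot certify a system as fair; the point is that checks of this kind belong in the pipeline itself and run continuously, not only once before launch.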

Dataset diversity ethics

Another important aspect of the Responsible AI approach is changing the narrative by introducing diverse datasets into AI decision-making.

A recently published article captures this point: by "introducing a diverse group of people into an environment with biased systems, they can often identify those biases and remove them earlier in the process before they become problematic." Because most AI systems are shaped by the decision-makers involved in designing the algorithms, a noticeable lack of diverse datasets has emerged, contributing to the risks and injustices described above. A 2018 study assessing racial disparities in AI diagnosis of bipolar disorder found that people of African descent are more often misdiagnosed with a condition other than bipolar disorder than people of other ancestries. It also points to disparities in recruiting patients of African descent for important genomic research. Greater inclusion and diversity in the development of AI decision-making models, especially in a sector as important as healthcare, could reduce both racial disparities and the economic losses that accompany them.
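One simple way to act on this is to audit how well each subgroup is represented in a dataset relative to a reference population before training on it. The sketch below is purely illustrative: the group labels, cohort sizes, and reference shares are invented for the example, not drawn from the studies cited above:

```python
# Illustrative sketch: comparing subgroup shares in a dataset against a
# reference population to surface under-representation before training.
from collections import Counter

def representation_gap(samples, reference_shares):
    """For each group, dataset share minus reference share.

    Negative values mean the group is under-represented in the dataset
    relative to the reference population.
    """
    counts = Counter(samples)
    total = len(samples)
    return {group: counts.get(group, 0) / total - share
            for group, share in reference_shares.items()}

# Hypothetical ancestry labels in a study cohort of 100 participants
cohort = ["european"] * 85 + ["african"] * 5 + ["asian"] * 10
# Hypothetical reference population shares
reference = {"european": 0.60, "african": 0.13, "asian": 0.06}

gaps = representation_gap(cohort, reference)
for group, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{group:>9}: {gap:+.2f}")  # negative = under-represented
```

An audit like this only flags the gap; closing it still requires the deliberate recruitment and inclusion efforts the article describes.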

There are excellent initiatives and organizations that specialize in responsible AI practices, helping organizations gain insight into their AI systems through an inclusive approach and unique, diverse databases. This also contributes to the overall evolution of the AI field and the creation of bias-free technology.

Conclusion

AI has the power to change the system for better or worse, but how we direct that change depends on the culture we create – one that ensures ethical standards, values, and principles are embedded in every step of the AI maturation process. If you create or use AI systems, embracing this culture – starting with dataset diversity and building a Responsible AI practice – is a great way to contribute to making AI technologies work for the global population.

Category: Columns

  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2021.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.