Montreal AI Ethics Institute

Democratizing AI ethics literacy
The Epistemological View: Data Ethics, Privacy & Trust on Digital Platform

March 29, 2021

šŸ”¬ Research summary by Muriam Fancy, our Network Engagement Manager.

[Original paper by Rajeshwari Harsh, Gaurav Acharya, Sunita Chaudhary]


Overview: Understanding the implications of employing data ethics in the design and practice of algorithms is one mechanism for tackling privacy issues. This paper addresses privacy, or the lack thereof, as a breach of consumer trust. The authors discuss how data ethics can be applied and understood depending on whom an application serves, and how it can build different forms of trust.


Introduction

The role of data ethics is to take the concerns that humans have (privacy, trust, rights, and social norms) and examine how they manifest in technology (ML algorithms, sensor data, and statistical analysis). Data ethics is meant to work in between, refining the approach of ethics for the type of technology it is applied to. What makes data ethics so important, especially for privacy concerns, is that it developed from macroethics, so it can be tailored to focus on specific problems and issues, such as privacy and trust.

Data ethics’ two moral duties

The concern for data privacy is rooted in human psychology. Data such as our name, address, community, and education are essential features of the information that identifies us as individuals. However, there is also a concern for group privacy. The article calls on data ethics to balance ā€œtwo moral dutiesā€: protecting human rights and improving human welfare. This can be done by weighing three variables regarding data protection: (1) individuals, (2) the society the individual identifies with or belongs to, and (3) groups and group privacy.

To effectively address the moral duties presented above, it is necessary to understand the data ethics frameworks being applied. There are three specific ethical challenges that data ethics has a role in addressing. The first is the ethics of data, which concerns research issues such as the identification of a person or group, and the de-identification of those people/groups, through mechanisms such as data mining. The issues at stake here are group privacy, group discrimination, trust, transparency of data, and a lack of public awareness, which fuels public concern. The second is the ethics of algorithms: understanding the complexity and autonomy of algorithms in machine learning applications. The ethical considerations are moral responsibility and accountability, the ethical design and auditing of algorithms, and assessing for ā€œundesirable outcomes.ā€ Data scientists and algorithm designers are well placed to address these issues. Finally, there is the ethics of practice, which concerns the responsibilities of the people and organizations leading data processes and policies. The concern areas here are professional codes and protecting user privacy. To truly address this issue, the data scientists and developers in these organizations need to be among the first to raise the concern.

What we can do

These ethical challenges are also present in artificial intelligence (AI). To effectively address the concerns raised above, this paper proposes that AI be developed and introduced with attention to trust, ethics, and civil rights. To do so, AI needs to be designed using ethics, and the paper proposes three modules for doing so: ethics by design, ethics in design, and ethics for design. Ultimately, by understanding how data ethics bears on privacy and, therefore, on user/group trust, opportunities to improve society emerge. Technologies such as the Internet of Things, robotics, biometrics, facial recognition, and online platforms all require data ethics.

The paper concludes by addressing how trust is built into technology, and more specifically into digital environments. The authors propose that ethics and trust work hand in hand; if one is not present, the other cannot have a meaningful effect. Working together, the two can establish trust in digital environments, which can occur through three situations:

  1. The Frequency of Trust in Digital Environments: online trust is quantified by how often an individual communicates within the environment. There are also two types of online trust: (1) general trust and (2) familiar trust. 
  2. The Nature of Trust in Tech: trust in technology must be differentiated from interpersonal trust. 
  3. Trust as ā€˜Technology and Design’: trust is built into technology by humans; if the product/service fails to deliver trust, that is a human fault. 

The biggest challenge for data ethics in creating trust is distributed morality, which examines the moral interactions between agents in a multi-agent system. Distributed morality gives rise to ā€œinfraethicsā€: the conditions, such as privacy, freedom of expression, and openness, that enable the morally good action of an entire group of agents.

In short, this article addresses the key challenges and normative ethical frameworks that data ethics harnesses to address trust and privacy. Understanding how trust and privacy are built into data and data processes is one way to build ethical technology for individual and group use.

Between the lines

I believe that the perspective the authors take is important, and it does, to a degree, map out parts of the lifecycle in which data ethics should be considered. However, I would push the paper to discuss how data is scraped, as that is an important privacy concern, as well as the issue of consent, which may be a manifestation of moral action taken to build trust. Finally, I would push readers to consider the human element of data ethics: ā€œwhoā€ is in the room choosing data sets, and, a step further, which groups are valued when considering data privacy.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
