🔬 Research summary by Muriam Fancy, our Network Engagement Manager.
[Original paper by Rajeshwari Harsh, Gaurav Acharya, Sunita Chaudhary]
Overview: Understanding the implications of employing data ethics in the design and practice of algorithms is one mechanism for tackling privacy issues. This paper frames privacy, or a lack thereof, as a breach of trust for consumers. The authors examine how data ethics can be applied and understood depending on whom an application serves, and how that application can build different variations of trust.
The role of data ethics is to take the concerns that humans value (privacy, trust, rights, and social norms) and examine how they manifest in technology (ML algorithms, sensor data, and statistical analysis). Data ethics works in between the two, refining the approach of ethics to the type of technology in use. What makes data ethics so important, especially for privacy concerns, is that it was developed from macroethics, so it can be tailored to focus on specific problems, such as privacy and trust.
Data ethics’ two moral duties
The concern for data privacy is rooted in human psychology. Data such as name, address, community, and education are essential pieces of information that identify us as individuals. However, there is also a concern for group privacy. The article calls on data ethics to balance “two moral duties”: protecting human rights and improving human welfare. We can do so by weighing three variables regarding data protection: (1) individuals, (2) the society the individual identifies with or belongs to, and (3) groups and group privacy.
To effectively address the moral duties presented above, it is necessary to understand the frameworks that data ethics applies. The paper identifies three specific ethical challenges that data ethics has a role in addressing:
- The ethics of data concerns research issues such as the identification of a person or group, and the de-identification of those people/groups through mechanisms such as data mining. The issues at stake are group privacy, group discrimination, trust, transparency of data, and a lack of public awareness, which causes public concern.
- The ethics of algorithms concerns the complexity and autonomy of algorithms in machine learning applications. The ethical considerations are moral responsibility and accountability, the ethical design and auditing of algorithms, and assessing for “undesirable outcomes.” Data scientists and algorithm designers are well placed to address these issues.
- The ethics of practice concerns the responsibilities of the people and organizations leading data processes and policies. The concern areas here are professional codes and protecting user privacy. To truly address this issue, the data scientists and developers in these organizations need to be among the first to raise the concern.
What we can do
These ethical challenges are also present in artificial intelligence (AI). To effectively address the concerns raised above, this paper proposes that AI be developed and introduced with attention to trust, ethics, and civil rights. To do so, AI needs to be designed using ethics, and the paper proposes three modules for doing so: ethics by design, ethics in design, and ethics for design. Ultimately, by understanding how data ethics bears on privacy and, therefore, on user/group trust, opportunities to improve society emerge. Technologies such as the Internet of Things, robotics, biometrics, facial recognition, and online platforms all require data ethics.
The paper concludes by addressing how trust is built into technology, and more specifically into digital environments. The authors propose that ethics and trust work hand in hand; if one is not present, the other cannot have a meaningful effect. The two working together is how trust in digital environments can arise, which can occur through three situations:
- The Frequency of Trust in Digital Environments: online trust is quantified by how often an individual communicates within the environment. There are also two types of online trust: (1) general trust and (2) familiar trust.
- The Nature of Trust in Tech: trust in technology must be differentiated from interpersonal trust.
- Trust as ‘Technology and Design’: trust built into technology is designed there by humans; if the product/service fails to deliver an iteration of trust, that is a human fault.
The biggest challenge for data ethics in creating trust is distributed morality, which concerns the moral interactions between agents in a multi-agent system. Distributed morality depends on an “infraethics”: the conditions (privacy, freedom of expression, and openness) that enable morally good action by an entire group of agents.
In short, this article addresses the key challenges and normative ethical frameworks that data ethics harnesses to address trust and privacy. Understanding how trust and privacy are built into data and data processes is one way to build ethical technology for individual and group use.
Between the lines
I believe that the perspective the authors take is important, and it does, to a degree, map out parts of the lifecycle in which data ethics should be considered. However, I would push the paper to discuss how data is scraped, as that is an important privacy concern, as is the issue of consent, which may be a manifestation of moral action taken to build trust. Finally, I would push readers to consider the human element of data ethics: “who” is in the room choosing data sets, and, a step further, which groups are valued when considering data privacy.