*Link to original paper + authors at the bottom
Overview: This project provides empirical evidence, from a design-thinking perspective, on how privacy policies and the bills behind privacy legislation ought to be designed so that they best communicate their message to their intended audiences.
One of the things I really liked about how the study was conducted and how its results were communicated is that the authors chose to avoid the term "user", which, as they explain, carries a connotation of a research subject to be studied, whereas we are talking about real people with rich lives who face real impacts from these systems.
Recommendations for policy makers
If you are a policymaker reading this, my key takeaway, and something I have also been discussing with my peers at the Montreal AI Ethics Institute, is the importance of obtaining first-hand accounts from stakeholders. Too often in policymaking we relegate our insight gathering to second- or third-hand voices that may not be truly representative of the community's concerns, instead of gathering first-hand responses.
Collaborating directly with people in the field helps sandbox and test policies iteratively, arriving at something that works for the community's actual needs rather than what we imagine those needs to be.
As with all laws, a balance needs to be struck between language specific enough to describe particular pieces of technology and language open enough that future evolutions of that technology still fit meaningfully within the framework the legislation defines.
Communication by policymakers should make their message perspicuous to the audience. Too often this communication takes the form of speeches and press releases that require deep familiarity with policy vernacular to make any sense of.
This doesn't mean simplifying to the point that we lose nuance; it just means having a framework in place that emphasizes the need for such communication practices.
Premortem for the policy?
Analyzing all the ways a policy can go wrong before it goes into effect is an engaging and fruitful way to surface pitfalls. In particular, concerns that are otherwise not covered in the theoretical policymaking space can come alive through this speculative process and lead to more robust policies.
Recommendations for design practitioners and technology organizations
- An interesting point brought up here is to be upfront about the rights that people have, and then map those rights to appropriate controls in a transparent way so that those rights can actually be exercised.
- Having a shared vocabulary to explicate data governance will be important across activist, technical, and policy stakeholders.
- Another notable point is that we can't place the onus of upholding rights on those who are impacted by their violation; that mantle needs to be picked up by those who hold power, which is often companies and governments.
- Reducing click fatigue and creating empowerment that people can really act on is a tangible goal that industry and regulators should work towards.
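The first bullet above, mapping rights to controls so they can actually be exercised, can be made concrete with a small sketch. Everything here is illustrative: the right names, the settings paths, and the `controls_for` helper are assumptions for the example, not features described in the paper.

```python
# Hypothetical sketch: map the rights a person holds to concrete,
# discoverable controls so the rights can actually be exercised.
# Right names and settings paths are illustrative, not from the paper.

RIGHTS_TO_CONTROLS = {
    "access": "Settings > Privacy > Download my data",
    "deletion": "Settings > Privacy > Delete my account and data",
    "correction": "Settings > Profile > Edit my information",
    "opt_out_of_sale": "Settings > Privacy > Do not sell my data",
}


def controls_for(rights):
    """Return the control each granted right maps to.

    A right with no matching control is flagged, surfacing the
    transparency gap instead of silently hiding it.
    """
    return {
        right: RIGHTS_TO_CONTROLS.get(right, "MISSING CONTROL")
        for right in rights
    }
```

The point of the flagged "MISSING CONTROL" value is that any right without a corresponding, findable control is itself a design defect worth surfacing.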
Relation to existing work
A lot of work has already been done in this space on dark patterns, which seek to subvert people's autonomy by nudging them towards behaviours that benefit the platform more than the individual. One example mentioned in the paper: privacy controls may be hidden in obscure menus, discouraging their use while still meeting compliance requirements for offering the option.
Privacy bills covered
The researchers looked at the Consumer Online Privacy Rights Act (COPRA), the Online Privacy Act (OPA), and the Social Media Addiction Reduction Technology (SMART) Act to meet the goals of their research.
They picked bills that focus on strengthening privacy controls and advocating for platform design changes as the beachhead for this study.
What was common to all the bills?
- Having a multi-layered approach, akin to the defense in depth principle from cybersecurity where you have multiple checks and balances so that you don’t have one single point of failure.
- My personal preference among consent mechanisms is progressive disclosure: revealing requisite concepts and implications just-in-time (JIT) so that you don't overwhelm people into making an uninformed choice.
- Related to the above point, we need to make sure that we don’t desensitize the user to privacy concerns by bombarding them with unnecessary notifications and controls.
- The need for more clarity on the terms used within the bills.
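The progressive-disclosure and desensitization points above can be sketched as a small consent manager. This is a minimal illustration under my own assumptions (the feature names, notice strings, and `ConsentManager` class are all hypothetical), not an implementation from the paper: a notice is surfaced only when the feature that needs it is about to be used, and never re-shown once a decision has been recorded, which addresses both the "just-in-time" and the "don't bombard" concerns.

```python
# Hypothetical sketch of progressive (just-in-time) consent disclosure.
# Notices are shown only when the relevant feature is first used,
# instead of being front-loaded at signup. All names are illustrative.

NOTICES = {
    "location_tagging": "We use your location to tag posts. Allow?",
    "contact_sync": "We upload your contacts to find friends. Allow?",
}


class ConsentManager:
    def __init__(self):
        # feature -> True/False, recorded once the person has decided
        self.decisions = {}

    def notice_for(self, feature):
        """Return the notice to show now, or None.

        None means either no consent is needed for this feature, or a
        decision was already recorded (avoiding notification fatigue).
        """
        if feature in self.decisions or feature not in NOTICES:
            return None
        return NOTICES[feature]

    def record(self, feature, allowed):
        """Persist the person's choice so they are not asked again."""
        self.decisions[feature] = allowed
```

The design choice worth noting is that the manager is deliberately quiet: features without a registered notice trigger nothing, and answered notices never resurface, so the only prompts a person sees are the ones that are relevant at that moment.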
They prototyped the bills to look at how clauses within each bill could actually map to features on platforms, how people perceive those features, and how effective they are. In its qualitative aspect, the study also paid attention to proof quotes to bolster arguments and provocative quotes to illustrate points with examples.
What was unique to each of the bills?
- While heavy on notions of control and notice, an interesting tension to highlight here was the tradeoff between maintaining user agency and coming across as too paternalistic.
- There were some highly specific recommendations, for example a 30-minute timer on browsing a feed, that felt too prescriptive and could lead to clashes as the platform and people's preferences evolve in the future.
- Enforcement of individuals' rights, and consideration of marginalized populations in case they are left out of the discussions, was an important facet of this bill.
- In advocating for these rights in the first place, the proposal of creating a new agency might be too much of an ask; perhaps expanding the powers of an existing body like the FTC is a better approach.
- If we take the view that privacy is a fundamental right, then we must embark on removing as many of the cost barriers as possible in making privacy attainable for everyone.
- The notion of Duty of Loyalty was ambiguous at best: it has many interpretations and requires more clarity to be actionable.
Artifacts from the study
One of the outputs from the study was a website that presents the bills in easy-to-understand language and in a visually appealing manner.
What I found particularly interesting is that this might be a great method for communicating difficult-to-parse, disengaging legalese in a manner that solicits participation and critique from the people who will be impacted by it.
What is privacy anyways?
One of the things that particularly caught my eye was the expression of privacy in four forms: solitude, reserve, anonymity, and intimacy. Conversations that trivialize privacy as an artifact of the pre-digital era, or that rest on the "I have nothing to hide" argument, can benefit from this framing.
While the researchers surveyed people from many walks of life and experiences, they did find some commonalities: control over your data, privacy as a mechanism to achieve fairness, and the risks versus benefits of privacy in terms of the products and services we have now and could have in the future.
Such a multi-dimensional approach to privacy requires that the definition also have multiple dimensions, and the related work and background discussed in the paper do a good job of familiarizing the reader with them. Such a formulation also has the benefit of meeting different people where they are in terms of their needs and what matters to them when it comes to privacy.
Some of the people surveyed as part of this study also framed privacy in the context of human rights, which I think is a powerful approach since it can use all the levers that come with the enforcement of human rights to push the agenda for meaningful privacy.
Generational differences in the perception of privacy
We often hear that young people don't care about privacy because they share the details of their lives so freely on the internet. That is a one-dimensional view; the paper discusses how different generations have different perceptions of what privacy means. Keeping that in mind will help us make better decisions about how we communicate policies and their impacts on platform design.
Where does well-meaning legislation fail?
Reiterated in the report, and something I have experienced in my own work helping policymakers at the Montreal AI Ethics Institute, is the lack of interdisciplinary collaboration, which leads to conflicting definitions and recommendations that ultimately miss the mark from a technical perspective. In work with a former colleague, Mirka Snyder Caron, for the Office of the Privacy Commissioner of Canada, we combined legal and technical expertise to make recommendations that weren't limited in their applicability, precisely because we were cognizant of the shortcomings of a uni-dimensional approach.
Where do well-meaning technical implementations fail?
This is an interesting point of discussion because sometimes secure apps don't have the fantastic, slick user experience we have come to expect from apps made by companies that exploit user data. To a certain extent, if we can solve this design problem, we can reach a place where people choose these alternatives because they offer everything, leaving little to no reason to stick with less secure, non-private solutions.
What I liked about this study was that it really paid attention to the tension (inherent, at least as I believe) between technical practitioners and policymakers, and how consumers get stuck with subpar experiences, both in user interactions and in exercising their democratic rights, including the right to privacy.
The recommendations made here, and the approach taken, have applications beyond the field of privacy; this study can be a great instrument for anyone seeking to learn how to effectively communicate policy decisions and deliberations to their constituents.
What does this mean for Actionable AI Ethics?
- Concretely, from an AI ethics standpoint, pay attention to progressive disclosure of privacy notices and consent, and put in place simple-to-use controls that allow people to exercise their data rights.
- Stakeholder consultation, especially with those you identify through the premortem process for policy analysis, should become an integral part of the early stages of your AI lifecycle.
Questions that I am exploring
If you have answers to any of these questions, please tweet and let me know!
- As highlighted in the paper, I think dealing with the tension between the specificity of terms used in the policymaking process and the potential future evolution of those terms is critical. How do we do better when we, as technical practitioners, interact with policymakers?
- Are there any secure and privacy-respecting apps that have fantastic and slick user experience?
Potential further reading
A list of papers related to this one that I think might be interesting.
- PrivacyCheck: Automatic Summarization of Privacy Policies Using Data Mining
Please note that this is a wish list of sorts; I haven't read the papers listed here unless specified otherwise (if I have read one, its entry will link to my page on it).
I’ll write back here with interesting points that surface from the Twitter discussion.
If you have comments and would like to discuss more, please leave a tweet here.
Original paper by Anna Chung, Dennis Jen, Jasmine McNealy, Pardis Emami Naeni, Stephanie Nguyen: https://letstalkprivacy.media.mit.edu/ltp-full-report.pdf