🔬 Original article by Nils Aoun, Itai Epstein, Sara Parker, and Cella Wardrop from Encode Justice Canada
This is a part of our Recess series in which university students from across Canada briefly explain key concepts in AI that young people should know about: specifically, what AI does, how it works, and what it means for you. The writers are members of Encode Justice Canada, a student-led advocacy organization dedicated to including Canadian youth in the essential conversations about the future of AI.
Introduction
Sara Parker
Social media has become a major part of any young Canadian’s life. Platforms like Instagram, Facebook, and Snapchat allow users to connect with each other at any time from anywhere, while TikTok and YouTube provide endless amounts of content and entertainment. Consequently, users should know what happens “behind the scenes” of their most-used apps to understand the impact these apps may have on their lives. Furthermore, the dangers – like the benefits – of social media are many; we therefore explore regulations that can mitigate some of these harms.
How Social Media Algorithms Work
Itai Epstein
TikTok and YouTube are two of the most influential and most used social media platforms of the 21st century. The recommendations their algorithms serve are sometimes scarily accurate, which has left users wondering about the rules behind them and the data required to deliver such results. [1]
The TikTok Recommendation Algorithm
Most TikTok users view videos through the app’s ‘For You’ page, where the recommendation algorithm feeds them videos. The algorithm starts delivering accurate results after only 36 minutes of watch time (around 224 videos). [2] By this point, it has begun to understand an individual user’s likes and dislikes and recommends content they are most likely to engage with.
The inputs the algorithm uses range from the videos a user likes and shares and the creators they follow to information about the video itself, like hashtags and the audio used. While these are all valuable for categorizing videos, the most important metric the algorithm considers in recommending content is video engagement – essentially, how long a user watches the video. [3]
If a user watches a video to the end, they will see more like it, whereas if a video is swiped away or left unfinished, the algorithm learns to show that user similar content less often, or to stop showing it altogether. This creates what is called a “filter bubble,” in which a user is slotted into a specific niche or community of videos that mirrors their likes and dislikes. [4] In a sense, TikTok “feeds” users content: on the app, you can only see what TikTok thinks you would like.
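To make this concrete, here is a minimal, hypothetical sketch of how engagement-weighted recommendation can produce a filter bubble. It is not TikTok’s actual system, which is proprietary and far more complex; the topics, thresholds, and scoring below are invented purely for illustration.

```python
# A highly simplified sketch of engagement-weighted recommendation.
# It only illustrates how weighting watch time can narrow what a user is shown.
import random
from collections import defaultdict

def update_interest(interest, topic, watch_fraction):
    """Raise a topic's score when a video is watched fully; lower it when skipped."""
    if watch_fraction >= 0.9:      # watched (nearly) to the end
        interest[topic] += 1.0
    elif watch_fraction <= 0.2:    # swiped away quickly
        interest[topic] -= 0.5
    return interest

def recommend(videos, interest):
    """Pick the video whose topic currently has the highest interest score."""
    return max(videos, key=lambda v: interest[v["topic"]] + random.random() * 0.1)

interest = defaultdict(float)
feed = [{"id": 1, "topic": "cats"}, {"id": 2, "topic": "news"}, {"id": 3, "topic": "cats"}]

update_interest(interest, "cats", 1.0)   # the user watched a cat video to the end
update_interest(interest, "news", 0.1)   # the user skipped a news video

print(recommend(feed, interest)["topic"])  # almost certainly "cats": a filter bubble forming
```

Once a few topics dominate the interest scores, every new recommendation reinforces them, which is the narrowing effect described above.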
The YouTube Recommendation Algorithm
YouTube is a video giant that has been around for almost 20 years, and its influence is widely known. During this time, its recommendation algorithm has changed tremendously. Currently, YouTube’s recommendation algorithm facilitates over 700 million hours of content watched every day. The algorithm drives more than 70% of total watch time and is thus one of the biggest deciders of what people see. [5]
YouTube functions similarly to TikTok in that it heavily weighs how long a user watches a video to determine which other videos they are most likely to engage with and watch through entirely. Through this, a “filter bubble” is created. [6] These bubbles feed users more of the same, leading them down a rabbit hole of similar content or of videos that similar users have interacted with.
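The “videos that similar users have interacted with” part can also be sketched in a few lines. The example below is a toy, purely hypothetical illustration of “people like you also watched” logic; the usernames, video IDs, and the overlap measure are invented and do not describe YouTube’s real system.

```python
# Invented watch histories: each user maps to a set of video IDs they watched.
histories = {
    "you":    {"v1", "v2", "v3"},
    "user_a": {"v1", "v2", "v3", "v4"},  # overlaps heavily with "you"
    "user_b": {"v7", "v8"},              # shares nothing with "you"
}

def similarity(a, b):
    """Jaccard overlap between two watch histories (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend_for(user):
    """Suggest videos the most similar other user watched that this user has not."""
    others = [(name, similarity(histories[user], hist))
              for name, hist in histories.items() if name != user]
    most_similar, _ = max(others, key=lambda pair: pair[1])
    return histories[most_similar] - histories[user]

print(recommend_for("you"))  # {'v4'}: recommended because a similar user watched it
```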
Why should this matter to me?
Overall, the YouTube and TikTok recommendation algorithms function in much the same way: they determine which content a user interacts with the most, then recommend more content like it. Both companies also share the same goal – to increase the time spent on each of their respective platforms – which their recommendation algorithms help them achieve.
As a user, it is important to know the power that recommendation algorithms hold. They control what is seen on the web, and knowing how they work can bring clarity and insight to what is being consumed while also taking some influence back from the platforms being used. Entering filter bubbles or “echo chambers” may significantly affect how you see the world, and can make you feel distrustful of, or angry towards, other groups of people who do not see things exactly the way you do. These bubbles may also be sources of political polarization and disinformation, so it is important to know how to recognize when you have fallen into one.
Social Media Monitoring and Regulation
Nils Aoun
The Internet has enabled remarkable advancements with its never-before-seen ability to gather and connect people anywhere around the world. Social media has played a great role in allowing people to share content with other users, enabling them to tell stories, let people on the other side of the globe know what is going on in their country, and much more. Without taking away from the great benefits that social media has given us, it is important to consider the negative aspects of this world-changing technology. Among other things, social media has simplified the sharing of harmful content, like hate speech and terrorist propaganda. [7] As misinformation campaigns aimed at interfering with elections, notably the 2016 U.S. presidential election, have shown, a lack of moderation of online content can have numerous negative effects.
Moderating Hate Speech: Meta Case Study
The main controversy around regulating content is balancing moderation with fundamental rights like freedom of speech, with governments, academics, and others debating whether and how to hold tech companies accountable for the actions of users on their platforms. The following section goes over Meta (Facebook)’s moderation methods to give a sense of how social media companies currently regulate content.
Meta mostly relies on its AI models to detect and remove hate speech: according to the company, 97% of the hate speech it removes is detected by automated systems before being flagged by a user. [8] However, according to internal reports obtained by the Wall Street Journal and dubbed the “Facebook Files,” these systems remove only about 2% of the hate speech on Facebook. While this is a major discrepancy, one thing is certain: automated recognition of harmful content online is difficult.
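To see how both figures can be true at once, consider a purely hypothetical example: if 1,000 hate-speech posts exist on the platform and only 100 are removed, of which 97 were flagged by AI before any user reported them, the company can report a 97% proactive detection rate even though just 10% of all hate speech was removed. The two numbers measure different things, which is why they can diverge so sharply.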
Regulating Online Content
Although challenging, we believe that the regulation of social media platforms is important, much in the way that rules – written and unwritten – governing civil society are important to maintaining peace and safety. To address the concerns that come with this new technological environment, regulators should create incentives for companies to act responsibly, whether positive or negative: promote companies that do their part and/or punish those that do not. Regulations should also consider the global nature of the internet; international cooperation is needed since social media involves cross-border communications. Furthermore, while regulators should consider the impact of their decisions on freedom of expression, they should also look to deepen their understanding of what technology can and cannot do with regard to content moderation, allowing companies “the flexibility to innovate”. However, new rules are not necessary in every case, since rules that apply offline can often be applied to the online world: “regulators should take into account the severity and prevalence of the harmful content in question, its status in law, and the efforts already underway to address the content.” [9]
To conclude, there are many ways for social media companies to monitor and regulate the content on their platforms, and for governments to ensure that they do so. A well-regulated web benefits the internet’s success by “articulating clear ways for government, companies, and civil society to share responsibilities and work together.” [10] A poorly designed legal framework around these issues, on the other hand, can have negative consequences and lead to a less safe online environment. These issues are worth working on: a framework that encompasses most, if not all, aspects of social media’s potential harms would benefit everyone, with safe content, credible information, and the protection of users’ privacy.
Free Speech on Social Media Platforms: Canadian Government Regulation
Cella Wardrop
As a co-founder of the Media Freedom Coalition, Canada is a global leader in the promotion of freedom of expression. [11] As written in the Canadian constitution, Canada is committed to “freedom of thought, belief, opinion and expression.” [12] With the rise in online activity, Canada became a member of the Freedom Online Coalition, dedicating itself to the promotion of Internet freedom. [13] Yet Canada, like many other countries, has wrestled with how to regulate online activity without infringing on freedom of expression.
Bill C-10
The Canadian government has presented some notably controversial bills to address concerns around internet services. In an attempt to extend the influence of the Broadcasting Act into a more modern context, the Canadian House of Commons passed Bill C-10 in June 2021. [14] If it became law, Bill C-10 would require streaming services and social media platforms to promote content by Canadian creators. [15] However, the proposal received backlash from human rights and internet activists for fear it would give the CRTC (the Canadian Radio-television and Telecommunications Commission) too much power over digital platforms and infringe on freedom of expression. [16]
A technical paper
In July 2021, the Canadian Government released a technical paper exploring potential plans to address and monitor harmful social media content. [17] The proposal focuses on addressing the “five types of harmful content,” described as “child sexual exploitation,” “terrorist content,” “content that incites violence,” “hate speech,” and the “non-consensual sharing of intimate images.” [18] This proposal has received notable backlash from human rights and advocacy groups, who claim that, among other issues, these policies would threaten Canadian values of freedom of expression and liberal democracy. [19]
The technical paper aims to regulate Online Communication Services (OCSs), which are defined as online services whose “primary purpose” is “to enable users of the service to communicate with other users of the service, over the internet,” excluding services that only allow users to have “private communications.” [20] Critics say this definition is too vague and does not accurately convey the government’s intention of regulating social media networks like YouTube and Twitter while excluding internet services like TripAdvisor. [21] The Government’s proposal would require OCSs to remove “harmful content” on their platforms within twenty-four hours of identifying it as harmful. [22] This requirement echoes Germany’s controversial NetzDG law, on which many authoritarian regimes have modeled their online censorship laws. [23] Needless to say, the possible integration of authoritarian-style online policy into Canadian law concerns many Canadians. The Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic, based at the University of Ottawa, goes so far as to call these requirements “draconian,” arguing that they will enable “over-removal and censorship of legitimate expression.” [24] The Clinic also argues that, in order for the government to flag potentially harmful content, all content must be monitored, giving the government access to all user content. [25]
The government’s proposal also includes the reporting of user data, such as user content and activity, to the Royal Canadian Mounted Police and the Canadian Security Intelligence Service, a provision that concerns many human rights advocates. [26] Advocates argue that allowing the government and law enforcement access to users’ private information is inconsistent with Canadian values of freedom and democracy. [27]
Another point of contention is the government’s recommendation that OCSs use “automated systems” to identify “harmful content.” [28] Not only can computer algorithms be biased, but they also struggle to differentiate between illegal content and the same content used in a legal context, for example in education or news reporting. [29] Studies have also shown that content from marginalized communities is taken down by social media platforms at a disproportionate rate compared to more mainstream content, suggesting this regulation may further silence marginalized voices. [30] Since one of the stated primary aims of the Canadian Government’s online regulation is to address online hate, especially its disproportionate effects on marginalized groups like women and Indigenous Peoples, the proposed legislation may end up counteracting its own goal. [31]
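To illustrate why context is so difficult for automated systems, here is a deliberately naive, hypothetical keyword filter. No real platform works this simply, but the failure mode is the one critics describe: the filter cannot tell whether a flagged phrase is being promoted or merely reported on.

```python
# A deliberately naive keyword filter, for illustration only. Real moderation
# systems are far more sophisticated, but they share this core weakness:
# matching a phrase says nothing about the context or intent around it.

BLOCKED_PHRASES = ["violent threat"]  # hypothetical stand-in for real banned terms

def flag(post: str) -> bool:
    """Return True if the post contains any blocked phrase, regardless of context."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

promotion = "Join us and make a violent threat against them."
news_report = "The journalist reported that the group had made a violent threat."

print(flag(promotion))    # True  -- correctly flagged
print(flag(news_report))  # True  -- also flagged, even though it is news coverage
```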
Since the publication of the Government’s proposal to address the sharing of harmful content online, many human rights advocacy groups have published responses with recommendations, as described above, on how to better achieve the Government’s goal while preserving fundamental Canadian values of freedom and democracy. This public feedback reflects the broader struggle governments face in regulating social media content, but also the hope of a better-regulated internet in the future.
Conclusion
Sara Parker
As illustrated throughout this report, social media is designed to impact its users – and it is therefore essential to ensure that this impact is positive. Young Canadians, in particular, must understand how the apps on their phones may influence them and shape their worldview, while pressuring their government representatives to introduce and enforce regulation that protects the digital space. The online world is undoubtedly the real world now: it is time to act like it.
Notes
[1] Louise Matsakis, “How TikTok’s ‘for you’ algorithm actually works,” Wired, published June 18, 2020: https://www.wired.com/story/tiktok-finally-explains-for-you-algorithm-works.
[2] WSJDigitalNetwork, “How TikTok’s algorithm figures you out | WSJ,” YouTube, published July 21, 2021: https://www.youtube.com/watch?v=nfczi2cI6Cs&feature=embtitle.
[3] Matsakis, “How TikTok’s ‘for you’ algorithm actually works.”
[4] Ibid.
[5] Wired, “Is the YouTube algorithm controlling us?” YouTube, published November 19, 2020: https://www.youtube.com/watch?v=XuORTmLhIiU.
[6] Guillaume Chaslot, “The toxic potential of YouTube’s feedback loop,” Wired, published July 13, 2019: https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/.
[7] Monika Bickert, “Charting a Way Forward on Online Content Regulation,” Facebook, published February 17, 2020: https://bit.ly/3CIJb2T.
[8] Mike Schroepfer, “Update on Our Progress on AI and Hate Speech Detection,” Meta Newsroom, published February 11, 2021: https://about.fb.com/news/2021/02/update-on-our-progress-on-ai-and-hate-speech-detection/.
[9] Bickert, “Charting a Way Forward on Online Content Regulation.”
[10] Ibid.
[11] Global Affairs Canada, “Media Freedom Coalition ministerial communiqué,” Government of Canada, published November 2020: https://www.canada.ca/en/global-affairs/news/2020/11/media-freedom-coalition-ministerial-communique.html.
[12] Canadian Charter of Rights and Freedoms, s 2, Part 1 of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (UK), 1982, c 11.
[13] Freedom Online Coalition, “FREEDOM ONLINE COALITION: Factsheet,” published 2021: https://freedomonlinecoalition.com/wp-content/uploads/2021/05/FOC-Factsheet-2021.docx.pdf, and Government of Canada, “Human rights and inclusion in online and digital contexts,” published November 2020: https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/human_rights-droits_homme/internet_freedom-liberte_internet.aspx?lang=eng.
[14] Government of Canada, “Bill C-10: An Act to amend the Broadcasting Act and to make consequential amendments to other Acts,” published September 2021: https://www.justice.gc.ca/eng/csj-sjc/pl/charter-charte/c10.html.
[15] Kait Bolongaro, “Trudeau’s Party Passes Bill to Regulate Social Media, Streaming,” Bloomberg.com, published June 22, 2021: https://www.bloomberg.com/news/articles/2021-06-22/trudeau-s-party-passes-bill-to-regulate-social-media-streaming, and Government of Canada. “Bill C-10.”
[16] Bolongaro, “Trudeau’s Party.”
[17] Government of Canada, “Technical paper,” published July 2021: https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/technical-paper.html.
[18] Government of Canada, “Technical paper.”
[19] Yuan Stevens and Vivek Krishnamurthy, “Overhauling the Online Harms Proposal in Canada: A Human Rights Approach,” Canadian Internet Policy and Public Interest Clinic, 2021, and Cara Zwibel, “Submission in relation to the consultation on addressing harmful content online,” Canadian Civil Liberties Association, published September 25, 2021: https://ccla.org/wp-content/uploads/2021/09/CCLA-Submission-to-Heritage-Online-Harms.pdf.
[20] Government of Canada, “Technical paper.”
[21] Stevens and Krishnamurthy, “Overhauling the Online Harms Proposal in Canada.”
[22] Government of Canada, “Technical paper.”
[23] Stevens and Krishnamurthy, “Overhauling the Online Harms Proposal in Canada.”
[24] Ibid.
[25] Ibid.
[26] Government of Canada, “Technical paper.”
[27] Stevens and Krishnamurthy, “Overhauling the Online Harms Proposal in Canada,” and Michael Geist, “Picking Up Where Bill C-10 Left Off: the Canadian Government’s Non-Consultation on Online Harms Legislation,” michaelgeist.ca, published July 30, 2021: https://www.michaelgeist.ca/2021/07/onlineharmsnonconsult/.
[28] Government of Canada, “Technical paper.”
[29] Daphne Keller, “Five Big Problems with Canada’s Proposed Regulatory Framework for ‘Harmful Online Content’,” Tech Policy Press, published August 31, 2021: https://techpolicy.press/five-big-problems-with-canadas-proposed-regulatory-framework-for-harmful-online-content/.
[30] Ángel Díaz and Laura Hecht-Felella, “Double Standards in Social Media Content Moderation,” Brennan Center for Justice, published August 4, 2021: https://www.brennancenter.org/sites/default/files/2021-08/Double_Standards_Content_Moderation.pdf.
[31] Government of Canada, “Technical paper.”