
Report on the Santa Clara Principles for Content Moderation

July 3, 2020

Get the paper in PDF form

This work is licensed under a Creative Commons Attribution 4.0 International License.


Context: In April 2020, the Electronic Frontier Foundation (EFF) publicly called for comments on expanding and improving the Santa Clara Principles on Transparency and Accountability in Content Moderation (SCP), originally published in May 2018. The Montreal AI Ethics Institute (MAIEI) responded to this call by drafting a set of recommendations based on insights and analysis from MAIEI staff, supplemented by workshop contributions from the AI ethics community convened during two online public consultation meetups.

Overview of our recommendations

● There should be more diversity in the content moderation process. Potential biases and discriminatory decisions are a serious concern in content moderation, whether it is performed by humans or machines.

● There is a need for transparency in how platforms guide content-ranking, which has the potential to restrict freedom of expression and users’ autonomy, and to stifle social change.

● Anonymized data on the training and/or cultural background of the content moderators employed by a platform should be disclosed.

● There are no one-size-fits-all solutions: content moderation tools must be tailored to specific issues. For instance, misinformation may be best addressed through behavioral nudges, whereas hate speech may require more drastic measures. Guidelines are needed that address the full range of content moderation tools employed on online platforms.

● Specific guidelines are needed for messaging applications with regard to data protection in content moderation.

● Cultural differences relevant to what constitutes acceptable behavior online need to be taken into account in content moderation.

● When it comes to political advertising, platforms must be transparent. Integrity policies for political content should be the same as the policies adopted for other types of content.

● The flagging/reporting system provided to users by platforms would benefit from greater transparency, as it may be particularly problematic when used in contexts where the majority of users are prone to discriminate against minority groups.

● When user content is flagged or reported, it must be clear whether the flagging or reporting was automated.

● More data should be made available on the types of content removed from online platforms, to make the removal process more transparent (a hypothetical record structure illustrating this is sketched after this list).

● Platforms should provide clear guidelines on the appeal process, as well as data on prior appeals. The appeal process should also be intelligible to a layperson, so that users do not feel they must seek external legal counsel to navigate it.

● We believe the Principles should be periodically revisited — for instance, every two years, or within a timeframe that allows for any appropriate revisions. This would allow the Principles to reflect technological advancements, changes in law and policy, and evolving trends in platforms’ content moderation.
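
Several of the recommendations above, notably those on clarity around automated flagging and on disclosing removal data, presuppose that platforms keep structured, auditable records of individual moderation decisions. The following is a minimal sketch, in Python, of what such a per-decision record and an anonymized aggregate for a public transparency report could look like. Every name and field here is an illustrative assumption on our part, not a requirement of the Principles.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

# Hypothetical sketch only: one way a platform could structure a
# per-decision transparency record reflecting the recommendations above.
# All names and fields are illustrative assumptions.

class FlagSource(Enum):
    USER_REPORT = "user_report"  # flagged by another user
    AUTOMATED = "automated"      # flagged by an automated system
    MODERATOR = "moderator"      # surfaced during human review

class Action(Enum):
    REMOVED = "removed"
    DOWNRANKED = "downranked"    # a content-ranking intervention
    NUDGED = "nudged"            # e.g. a misinformation label
    NO_ACTION = "no_action"

@dataclass
class ModerationRecord:
    content_category: str             # e.g. "hate_speech", "misinformation"
    flag_source: FlagSource           # makes automated flagging explicit
    action: Action
    policy_rule: str                  # the published rule that was applied
    decided_on: date
    appeal_filed: bool = False
    appeal_outcome: Optional[str] = None  # e.g. "upheld", "reversed"

def transparency_counts(records: list[ModerationRecord]) -> dict[tuple[str, str], int]:
    """Aggregate anonymized counts by (category, action) for a public report."""
    counts: dict[tuple[str, str], int] = {}
    for r in records:
        key = (r.content_category, r.action.value)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Publishing only aggregates of this kind, never the raw records, would let outside researchers verify reported removal and appeal volumes without exposing individual users or moderators.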

