Characterizing, Detecting, and Predicting Online Ban Evasion

May 15, 2022

šŸ”¬ Research summary by Gaurav Verma, a Ph.D. student in Computer Science at Georgia Tech. His research focuses on investigating and improving the robustness of deep learning models.

[Original paper by Manoj Niverthi, Gaurav Verma, and Srijan Kumar]


Overview: Online ban evasion is the act of circumventing suspensions on online platforms like Wikipedia, Twitter, and Facebook. Focusing specifically on malicious ban evasion, this paper (to appear in ACM WebConf 2022) employs machine learning and data science to understand the behavior of online ban evaders and develops tools that can help moderators identify instances of ban evasion early and reliably.


Introduction

Ban evasion on online platforms is a widely noted and growing problem. It has been linked to a wide range of malicious behavior on the web, from harassment and propaganda to inciting real-world violence. We curate and analyze a large-scale dataset of ban evasion pairs from Wikipedia, in which both the parent account and the evasion account were detected engaging in malicious behavior. Our study shows that ban evaders exhibit characteristically different linguistic and behavioral properties from other malicious users who are not known to evade bans. Using this data-driven understanding of ban evasion, we develop prediction and detection tools that can help moderators of ever-growing online communities.

Behavioral and linguistic markers of ban evasion

Account-level meta-information (usernames, creation time, ban time, etc.) and the edits that ban evaders make on a platform provide insights into their behavior. In controlled comparisons with other malicious Wikipedia users who do not evade bans, we find that ban evaders display subdued malicious behavior: they have fewer inappropriate usernames, make edits with larger temporal gaps, use more objective language, and use fewer swear, informal, and affective words. Overall, these markers indicate that ban evaders are less explicit and better camouflaged than non-evading malicious users.
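
To make these markers concrete, here is a minimal sketch of how such account-level features might be computed. The tiny word lists and the edit-record format are illustrative assumptions, not the paper's actual lexica or feature pipeline.

```python
from statistics import mean

# Hypothetical lexica; stand-ins for the LIWC-style categories
# typically used in this kind of linguistic analysis.
SWEAR_WORDS = {"damn", "hell", "crap"}
AFFECT_WORDS = {"hate", "love", "angry", "terrible"}

def lexicon_rate(tokens, lexicon):
    """Fraction of tokens that fall in a given word list."""
    return sum(t.lower() in lexicon for t in tokens) / max(len(tokens), 1)

def edit_markers(edits):
    """Summarize one account's edits.

    `edits` is an assumed format: a non-empty list of
    (timestamp_seconds, text) pairs, sorted by time.
    """
    texts = [text.split() for _, text in edits]
    times = [ts for ts, _ in edits]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "swear_rate": mean(lexicon_rate(t, SWEAR_WORDS) for t in texts),
        "affect_rate": mean(lexicon_rate(t, AFFECT_WORDS) for t in texts),
        "mean_gap_s": mean(gaps) if gaps else 0.0,
    }

# Comparing these summaries between evader and non-evader cohorts
# surfaces the kind of differences reported above: lower swear and
# affect rates and larger temporal gaps for evaders.
```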

How do ban evaders change their behavior?

Even though ban evaders create a new account when a previous account is banned, their linguistic attributes follow certain statistical patterns. Most ban evaders choose a username similar to that of the old account and continue making edits on similar pages, albeit at an increased rate. There is also notable overlap between the vocabularies of the new and old accounts. Interestingly, a deeper analysis of ban evaders who successfully evade detection for longer shows that they decrease their use of swear words and try to appear more logical and objective in their edits. Our results indicate that some ban evaders indeed adapt their behavior to continue malicious operations on the platform.
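
As a rough illustration of these two signals, the sketch below measures username similarity via character-bigram overlap and vocabulary overlap via the Jaccard index. These are stand-in metrics chosen for clarity, not necessarily the exact measures used in the paper.

```python
def char_ngrams(s, n=2):
    """Set of character n-grams; a simple basis for username similarity."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard(a, b):
    """Jaccard overlap between two sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def username_similarity(old_name, new_name):
    return jaccard(char_ngrams(old_name), char_ngrams(new_name))

def vocabulary_overlap(old_edits, new_edits):
    """Jaccard overlap between the word vocabularies of two accounts,
    where each argument is a list of edit-text strings (an assumption)."""
    old_vocab = {w.lower() for text in old_edits for w in text.split()}
    new_vocab = {w.lower() for text in new_edits for w in text.split()}
    return jaccard(old_vocab, new_vocab)

# Example: a hypothetical near-identical username pair scores high.
print(username_similarity("WikiFan99", "WikiFan_100"))  # 0.5
```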

Prediction and detection tools

Having uncovered the behavioral patterns associated with ban evasion, we aim to develop tools that can help moderators with their investigations. Using features that cannot be easily manipulated (for instance, username similarity can easily be gamed by malicious actors, while linguistic properties are harder to fake), we develop machine learning models that reliably and accurately detect ban evasion with as few as three edits on the platform. Evaluation of our models also suggests that a potential evasion account can be matched with its previously banned counterpart with near-perfect accuracy. Such strong attribution capabilities will make moderator investigations more efficient, as they are currently carried out manually and are known to be resource-intensive.
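
A minimal sketch of the two moderator-facing tasks follows: a classifier that flags likely evasion accounts from early behavioral features, and a nearest-neighbor match that links a flagged account to a banned parent. The random feature vectors, the random-forest model, and cosine-similarity matching are illustrative assumptions; the paper's actual features and models differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Hypothetical per-account feature vectors (e.g., lexicon rates,
# edit-gap statistics); placeholders for a real feature pipeline.
X_train = rng.normal(size=(200, 8))
y_train = rng.integers(0, 2, size=200)  # 1 = known evasion account

# Task 1: predict whether a new account is a ban evader,
# using features extracted from its first few edits.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
new_account = rng.normal(size=(1, 8))
print("evasion probability:", clf.predict_proba(new_account)[0, 1])

# Task 2: attribute a suspected evasion account to a banned parent
# by nearest-neighbor search over behavioral feature vectors.
banned_accounts = rng.normal(size=(50, 8))
scores = cosine_similarity(new_account, banned_accounts)[0]
print("best-matching banned account:", int(scores.argmax()))
```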

Between the lines

Our work takes a step toward preserving the ethos of online platforms by studying malicious ban evasion. With online communities growing in size and influence, the need to operationalize such machine learning tools is more pertinent than ever. However, not all ban evasion is malicious, and these tools should therefore be deployed with caution, as a supplement to human moderators. Users can be banned for the wrong reasons and return to a platform with innocuous intentions. Overly stringent approaches and complete reliance on automated tools can raise barriers to entry for newcomers and propagate inequities on online platforms. We recommend that researchers and practitioners engage with stakeholders when deploying such tools.
