
Characterizing, Detecting, and Predicting Online Ban Evasion

May 15, 2022

🔬 Research summary by Gaurav Verma, a Ph.D. student in Computer Science at Georgia Tech. His research focuses on investigating and improving the robustness of deep learning models.

[Original paper by Manoj Niverthi, Gaurav Verma, and Srijan Kumar]


Overview: Online ban evasion is the act of circumventing suspensions on online platforms such as Wikipedia, Twitter, and Facebook. Focusing specifically on malicious ban evasion, this paper (to appear at ACM WebConf 2022) employs machine learning and data science to understand the behavior of online ban evaders and develops tools that can help moderators identify instances of ban evasion early and reliably.


Introduction

Ban evasion on online platforms is a widely noted issue, increasingly so in recent times. It has been linked to a wide range of malicious behavior on the web, from harassment and propaganda to the incitement of real-world violence. We curate and analyze a large-scale dataset of ban evasion pairs from Wikipedia, in which both the parent account and the evasion account were detected engaging in malicious behavior. Our study shows that ban evaders demonstrate characteristically different linguistic and behavioral properties from other malicious users who are not known to evade bans. Using this data-driven understanding of ban evasion, we develop prediction and detection tools that can help moderators of ever-growing online communities.

Behavioral and linguistic markers of ban evasion

Account-level meta-information (such as usernames, creation time, and ban time) and the edits that ban evaders make on platforms can provide insights into their behavior. In controlled comparisons with other malicious users on Wikipedia who do not evade bans, we find that ban evaders demonstrate subdued malicious behavior: they have fewer inappropriate usernames, make edits with larger temporal gaps, use more objective language, and use fewer swear, informal, and affective words. Overall, these markers indicate that ban evaders are less explicit and more camouflaged than non-evading malicious users.
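
To make this concrete, here is a minimal sketch (not the authors' released code) of such a controlled comparison: compute each user's rate of some lexical category, such as swear words, and test whether the evader and non-evader groups differ. The word list and input format are illustrative assumptions.

```python
# Minimal sketch of a controlled linguistic comparison between ban
# evaders and non-evading malicious users. The lexicon and the
# list-of-strings edit format are illustrative assumptions.
from scipy.stats import mannwhitneyu

SWEAR_WORDS = {"damn", "hell"}  # placeholder lexicon, not the paper's

def category_rate(edits, lexicon):
    """Fraction of a user's edit tokens that fall in `lexicon`."""
    tokens = [t.lower() for edit in edits for t in edit.split()]
    if not tokens:
        return 0.0
    return sum(t in lexicon for t in tokens) / len(tokens)

def compare_groups(evader_users, control_users, lexicon=SWEAR_WORDS):
    """Two-sided Mann-Whitney U test on per-user category rates.

    Each argument is a list of users; a user is a list of edit strings.
    """
    evader_rates = [category_rate(u, lexicon) for u in evader_users]
    control_rates = [category_rate(u, lexicon) for u in control_users]
    return mannwhitneyu(evader_rates, control_rates, alternative="two-sided")
```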

How do ban evaders change their behavior?

Even though ban evaders create a new account when their previous account is banned, their linguistic attributes follow certain statistical patterns. Most ban evaders choose a username similar to their old one and continue making edits on similar pages, albeit at an increased rate. There is also notable overlap between the vocabularies of the new account and the old account. Interestingly, a deeper analysis of ban evaders who successfully evade bans for longer durations shows that they decrease their usage of swear words and aim to appear more logical and objective in their edits. Our results indicate that some ban evaders indeed adapt their behavior to continue malicious operations on the platform.
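
As an illustration of the similarity signals discussed above, the sketch below computes a character-level username similarity and a vocabulary (Jaccard) overlap between a banned parent account and a candidate evasion account. These are common stand-in metrics, not necessarily the paper's exact measures.

```python
# Illustrative similarity signals between a parent account and a
# candidate evasion account; both metrics are common stand-ins.
from difflib import SequenceMatcher

def username_similarity(old_name: str, new_name: str) -> float:
    """Character-level similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, old_name.lower(), new_name.lower()).ratio()

def vocabulary_overlap(old_edits, new_edits) -> float:
    """Jaccard overlap between the two accounts' token vocabularies."""
    old_vocab = {t.lower() for edit in old_edits for t in edit.split()}
    new_vocab = {t.lower() for edit in new_edits for t in edit.split()}
    if not (old_vocab or new_vocab):
        return 0.0
    return len(old_vocab & new_vocab) / len(old_vocab | new_vocab)

# A renamed account that reuses vocabulary scores high on both signals.
print(username_similarity("WikiFan99", "WikiFan_9"))                   # ~0.89
print(vocabulary_overlap(["revert this edit"], ["revert that edit"]))  # 0.5
```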

Prediction and detection tools

Having uncovered the behavioral patterns associated with ban evasion, we aim to develop tools that can help moderators with their investigations. Using features that cannot be manipulated easily (username similarity, for instance, is easy for malicious actors to game, whereas linguistic properties are harder to fake), we develop machine learning models that reliably and accurately detect ban evasion with as few as three edits on the platform. Evaluation of our models also suggests that a potential evasion account can be matched with its previously banned counterpart with near-perfect accuracy. Such strong attribution capabilities will make moderator investigations more efficient; these investigations are currently carried out manually and are known to be resource-intensive.
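
A hedged sketch of this early-detection setup, assuming a simplified feature set and an off-the-shelf classifier rather than the paper's exact pipeline: featurize a user's first few edits and train a standard model on labeled accounts.

```python
# Sketch of early ban-evasion detection from a user's first k edits.
# The features and the model are simplified stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize_first_k_edits(edits, k=3):
    """Crude linguistic/behavioral features from the first k edits."""
    edits = edits[:k]
    tokens = [t for edit in edits for t in edit.split()]
    n_tokens = max(len(tokens), 1)
    return np.array([
        len(edits),                                    # activity volume
        len(tokens) / max(len(edits), 1),              # mean edit length
        sum(t.isupper() for t in tokens) / n_tokens,   # all-caps token rate
    ])

def train_detector(users, k=3):
    """`users` is a list of (edits, label) pairs; label 1 = evasion account."""
    X = np.stack([featurize_first_k_edits(edits, k) for edits, _ in users])
    y = np.array([label for _, label in users])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```

Restricting the features to a user's first few edits is what enables early intervention, at the cost of weaker signals than a full edit history would provide.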

Between the lines

Our work takes a step toward preserving the ethos of online platforms by studying malicious ban evasion. With online communities growing in sheer size and influence, the need to operationalize such machine learning tools is more pertinent than ever. However, it is worth noting that not all ban evasion is malicious, so these tools should be deployed with caution, as a supplement to human moderators rather than a replacement. Users can be banned for incorrect reasons and may return to platforms with innocuous intentions. Overly stringent approaches and complete reliance on automated tools can raise barriers to entry for newcomers and propagate inequities on online platforms. We recommend that researchers and practitioners engage with stakeholders when deploying such tools.
