🔬 Research summary by Gaurav Verma, a Ph.D. student in Computer Science at Georgia Tech. His research focuses on investigating and improving the robustness of deep learning models.
[Original paper by Manoj Niverthi, Gaurav Verma, and Srijan Kumar]
Overview: Online ban evasion is the act of circumventing suspensions on online platforms like Wikipedia, Twitter, and Facebook. Focusing specifically on malicious ban evasion, this paper (to appear in ACM WebConf 2022) employs machine learning and data science to understand the behavior of online ban evaders and develops tools that can help moderators identify instances of ban evasion early and reliably.
Introduction
Ban evasion on online platforms is a widely noted issue, increasingly so in recent times. It has been linked to exacerbating a wide range of malicious behavior on the web, from harassment and propaganda to inciting real-world acts of violence. We curate and analyze a large-scale dataset of ban evasion pairs from Wikipedia, where both the parent and the evasion account were detected engaging in malicious behavior. Our study shows that ban evaders exhibit characteristically different linguistic and behavioral properties from other malicious users who are not known to evade bans. Building on this data-driven understanding of ban evasion, we develop prediction and detection tools that can help moderators of ever-growing online communities.
Behavioral and linguistic markers of ban evasion
Account-level meta-information (such as usernames, creation time, and ban time) and the edits that ban evaders make can provide insights into their behavior. In controlled comparisons with other malicious Wikipedia users who do not evade bans, we find that ban evaders demonstrate subdued malicious behavior: they have fewer inappropriate usernames, make edits with larger temporal gaps, use more objective language, and use fewer swear, informal, and affective words. Overall, these markers indicate that ban evaders are less explicit and more camouflaged than non-evading malicious users.
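To make these markers concrete, below is a minimal Python sketch of how two such signals, the temporal gap between consecutive edits and the rate of swear-word usage, could be computed from an edit log. The table schema, column names, and tiny lexicon here are illustrative assumptions made for exposition, not the paper's actual feature pipeline.

```python
import pandas as pd

# Hypothetical edit log: one row per edit, with a user id, a timestamp,
# and the edit text. The schema and values are illustrative only.
edits = pd.DataFrame({
    "user": ["evader1", "evader1", "evader1", "other1", "other1"],
    "timestamp": pd.to_datetime([
        "2021-01-01 10:00", "2021-01-03 09:00", "2021-01-07 12:00",
        "2021-01-01 10:00", "2021-01-01 10:05",
    ]),
    "text": ["fixed citation", "updated infobox", "reverted vandalism",
             "you are an idiot", "total garbage page"],
})

# Tiny stand-in lexicon; a real analysis would use established word lists
# for swear, informal, and affective language.
SWEAR_WORDS = {"idiot", "garbage"}

def swear_rate(text: str) -> float:
    """Fraction of tokens that appear in the swear-word lexicon."""
    tokens = text.lower().split()
    return sum(t in SWEAR_WORDS for t in tokens) / max(len(tokens), 1)

# Per-user markers: mean hours between consecutive edits, and the
# mean swear-word rate across that user's edits.
per_user = (
    edits.sort_values("timestamp")
         .groupby("user")
         .agg(
             mean_gap_hours=("timestamp",
                             lambda ts: ts.diff().dt.total_seconds().mean() / 3600),
             mean_swear_rate=("text", lambda s: s.map(swear_rate).mean()),
         )
)
print(per_user)
```

Under the findings above, evading accounts would tend toward larger mean gaps and lower swear rates than non-evading malicious accounts.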
How do ban evaders change their behavior?
Even though ban evaders create a new account after their previous account is banned, their linguistic attributes follow certain statistical patterns. Most ban evaders choose a username similar to their old one and continue editing similar pages, albeit at an increased rate. There is also a notable overlap between the vocabularies of the new and old accounts. Interestingly, a deeper analysis of ban evaders who successfully evade detection for longer shows that they decrease their usage of swear words and aim to appear more logical and objective in their edits. Our results indicate that some ban evaders indeed adapt their behavior to continue malicious operations on the platform.
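The two signals above, username similarity and vocabulary overlap, can be illustrated with a short sketch. The string metric (Python's difflib ratio) and the Jaccard overlap used here are simple stand-ins chosen for clarity; the paper's exact measures may differ.

```python
from difflib import SequenceMatcher

def username_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1] via difflib's ratio,
    a simple stand-in for any string-similarity metric."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def vocabulary_overlap(old_edits: list[str], new_edits: list[str]) -> float:
    """Jaccard overlap between the token vocabularies of two accounts."""
    old_vocab = {t for text in old_edits for t in text.lower().split()}
    new_vocab = {t for text in new_edits for t in text.lower().split()}
    union = old_vocab | new_vocab
    return len(old_vocab & new_vocab) / len(union) if union else 0.0

# Hypothetical parent/evasion account pair.
print(username_similarity("WikiFan2020", "WikiFan2021"))   # high similarity
print(vocabulary_overlap(
    ["edited the history section", "added new sources"],
    ["reworked the history section", "cleaned up sources"],
))
```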
Prediction and detection tools
Having uncovered the behavioral patterns associated with ban evasion, we aim to develop tools that can help moderators with their investigations. Using features that cannot be easily manipulated (username similarity, for instance, can be gamed by malicious actors, whereas linguistic properties are harder to fake), we develop machine learning models that can reliably and accurately detect ban evasion with as few as three edits on the platform. Our evaluation also suggests that a potential evasion account can be matched with its previously banned counterpart with near-perfect accuracy. Such strong attribution capabilities will make moderator investigations more efficient, as these are currently carried out manually and are known to be resource-intensive.
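As a rough illustration of the detection setup, the sketch below trains an off-the-shelf classifier on per-account feature vectors built from each account's first few edits. The random placeholder data and the choice of a random forest are assumptions made for exposition; the paper's actual features and models are detailed in the original work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical setup: one feature vector per account, derived from that
# account's first k edits (e.g., k = 3), capturing hard-to-game signals
# such as linguistic rates and edit-timing statistics.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))       # placeholder features (random here)
y = rng.integers(0, 2, size=500)    # 1 = evasion account, 0 = other malicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# With random placeholder data this score sits near chance (~0.5);
# with real behavioral and linguistic features it would reflect true
# detection performance.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```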
Between the lines
Our work takes a step toward preserving the ethos of online platforms by studying malicious ban evasion. With online communities growing in sheer size and influence, the need to operationalize such machine learning tools is more pertinent than ever. However, it is worth noting that not all ban evasion is malicious, and hence these tools should be deployed with caution, supplementing rather than replacing human moderators. Users may be banned for incorrect reasons and return to platforms with innocuous intentions. Overly stringent approaches and complete reliance on automated tools can raise barriers to entry for newcomers and propagate inequities on online platforms. We recommend that researchers and practitioners engage with stakeholders when deploying such tools.