Social app Twitter can be a bit of a minefield when it comes to meaningful conversation, and the service is now looking to address one of the causes of that: destructive trolling. While troll behavior online can be a good laugh for all involved, it can also be hurtful, disruptive, or distorting to the conversation at hand and the people in it. It’s this type of troll behavior that Twitter is hoping to address, and it will do so by hiding that content when it’s found. There are some new signals at play in figuring out who the trolls are and who’s legitimately contributing, just in an arguably negative way. Trollish content that doesn’t quite violate Twitter’s community guidelines will still be viewable; it just won’t be visible by default. The move should streamline conversations and make it harder for trolls to knock them off track.
Among the new signals for figuring out trollish behavior, alongside the typical behaviors and content considered by the algorithm, are the creation of multiple accounts by a single person, accounts that haven’t verified their email addresses, and accounts that frequently mention accounts that don’t follow them, especially in a negative capacity. When a troll is found to be wreaking havoc on a conversation, their contributions will be hidden by default, but can be displayed by clicking a button that shows all the Tweets in a thread. Likewise, negative trollish Tweets will be hidden in searches by default, unless the searcher uses the “Show all” option, opting to take in their search results unfiltered.
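To make those mechanics concrete, here is a minimal sketch of how weak signals like these might be combined into a single score that triggers default hiding. Twitter has not published its actual model, so every signal name, weight, and threshold below is an illustrative assumption rather than the company’s real logic.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    linked_accounts: int            # other accounts created by the same person
    email_verified: bool            # has the account confirmed its email address?
    stranger_mentions: int          # recent mentions of accounts that don't follow back
    negative_mention_ratio: float   # fraction of those mentions judged negative (0.0-1.0)

def troll_score(s: AccountSignals) -> float:
    """Combine weak signals into one score; no single signal is decisive."""
    score = 0.0
    if s.linked_accounts > 0:
        score += 0.3
    if not s.email_verified:
        score += 0.2
    # Frequent, negative mentions of non-followers weigh the heaviest here.
    score += min(s.stranger_mentions / 50, 1.0) * 0.3
    score += s.negative_mention_ratio * 0.2
    return score

def hidden_by_default(s: AccountSignals, threshold: float = 0.6) -> bool:
    """Content stays viewable, but collapses behind a 'Show all' control."""
    return troll_score(s) >= threshold
```

The design point this sketch captures is that none of these signals is damning on its own: an unverified email address shouldn’t hide anyone’s Tweets, but several weak signals stacking up can tip the balance.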
Trollish behavior detracting from conversations is one of Twitter’s smaller concerns at the moment, but it is a worthy cause nonetheless, and something every online community eventually has to figure out how to handle. Twitter’s methods of finding and dealing with trolls are similar to, and possibly sprung from, how the service roots out abusive Tweets and other rule-breaking content. Finding and hiding or deleting Tweets that constitute fake news or misinformation runs along similar lines. In all of these scenarios, either an algorithm finds the behavior and takes action, such as sending it for human review or deleting it, or somebody in the conversation reports the bad content for review.
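As a rough illustration of that two-route pipeline, the sketch below models the flow just described: algorithmic detection acting directly or escalating, and user reports feeding human review. The action names and decision logic are hypothetical, not Twitter’s actual moderation code.

```python
from enum import Enum, auto

class Action(Enum):
    DELETE = auto()        # clear rule violation, removed outright
    HUMAN_REVIEW = auto()  # flagged or reported, sent to a person
    NO_ACTION = auto()

def route_content(algorithm_flagged: bool, user_reported: bool,
                  clear_violation: bool) -> Action:
    if algorithm_flagged:
        # The algorithm acts directly on clear violations and
        # defers everything else to human moderators.
        return Action.DELETE if clear_violation else Action.HUMAN_REVIEW
    if user_reported:
        # Reports from people in the conversation always get reviewed.
        return Action.HUMAN_REVIEW
    return Action.NO_ACTION
```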