Nobody likes trolls or people who abuse social media. Unfortunately, cyberbullying is a real thing, and sites like Twitter can breed cyberbullies who sit behind keyboards with inflated egos, tapping away aggressively as they troll others. Earlier this year, Twitter committed to taking action against trolls and abusers and has now followed through by implementing a new range of security and privacy features, including safer search results, the hiding of ‘lower quality’ tweets, temporary restrictions on offending users, and more user control over which tweets people see.
Individually, these measures seem relatively small. Cumulatively, however, they do make a difference, and you can see that Twitter is making an effort to crack down and improve the user experience. While being a troll is certainly on the list of things you don’t want to do on Twitter, a high-profile case of abuse can raise awareness across an entire industry. These new measures and features act as tools designed to make the platform more user-friendly.
Offensive content filtering
One of the new features completely hides some profiles behind a warning that the content tweeted may be offensive. The user can simply click ‘yes, view profile’, but at least the warning gives people a moment to consider before proceeding. This is especially handy if you’re browsing Twitter at work where others may see your screen! Apparently, users will be notified if their accounts get flagged as offensive but, as of now, there are no guidelines as to how an account gets flagged.
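To make the gating behaviour a bit more concrete, here’s a minimal sketch of the kind of check involved. Everything in it is assumed for illustration: the is_flagged_offensive field, the viewer preference and the interstitial payload are hypothetical, since Twitter hasn’t published how the feature actually works under the hood.

```python
# Illustrative sketch only; Twitter hasn't documented how profile gating works.
# Field names (is_flagged_offensive, show_sensitive_profiles) are hypothetical.

def render_profile(profile, viewer):
    """Show a profile directly, or gate it behind a 'may be offensive' warning."""
    if profile.get("is_flagged_offensive") and not viewer.get("show_sensitive_profiles"):
        # Return an interstitial instead of the profile content.
        return {
            "type": "warning",
            "message": "Caution: this profile may include potentially offensive content.",
            "confirm_action": "yes_view_profile",  # clicking reveals the profile anyway
        }
    return {"type": "profile", "content": profile}

print(render_profile({"is_flagged_offensive": True, "handle": "@example"}, {})["type"])  # warning
```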
Twitter says that the feature is still being tested and that the system for flagging an account works similarly to its existing guidelines and user reports, although nothing specific has been said about what exactly that means.
While still in the testing phase, the concern here is for brands who could end up with their accounts being flagged as offensive and hence gated. We at least hope that there’s some human intervention or an appeals process, and that it’s not simply an algorithm that could also penalise innocent users through a black-and-white vetting system. Brands obviously want to reach as many people as they can, and any gate on their account could hurt that reach.
Twitter will likely be called upon to provide more detailed information should the system be rolled out permanently but, for now, we can only hope that it’s designed to target the right people and that our brands don’t get caught in a bureaucratic vetting nightmare.
Stopping the creation of new abusive accounts
Another new initiative is cracking down on suspended users creating new accounts. For a while, users have been able to report other users for abuse and harassment, and Twitter then may or may not act to suspend that user’s account. This is all well and good until the same person simply opens another account and continues to harass you. Well, Twitter says that it’s now “taking steps to identify people who’ve been permanently suspended and stop them from creating new accounts”.
Like the offensive profiles feature, there’s no information available yet as to how Twitter will monitor and implement this. There are numerous ways to circumvent the system, so stopping banned users is no easy task. Possible approaches include a ban based on the user’s IP address, or an algorithm that detects when similar accounts are created shortly after a suspension. The fact that the system can be circumvented is probably the reason why Twitter isn’t sharing the details, but we can only hope that some intelligence has gone into this initiative and that it will, at the very least, make life more difficult for abusers.
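Just to illustrate the kind of matching such a system might do, here’s a speculative sketch. The signals it checks (IP address, device fingerprint, email hash), the 30-day look-back window and the ‘two matching signals’ threshold are all our own assumptions, not anything Twitter has confirmed.

```python
# Speculative illustration only: Twitter hasn't disclosed how it detects ban evasion.
from datetime import datetime, timedelta

SUSPENSION_WINDOW = timedelta(days=30)  # assumed look-back period

def looks_like_ban_evasion(new_account, suspended_accounts):
    """Flag a new sign-up that shares too many signals with a recent suspension."""
    for old in suspended_accounts:
        if new_account["created_at"] - old["suspended_at"] > SUSPENSION_WINDOW:
            continue  # suspension too old to be relevant
        shared = sum([
            new_account["ip_address"] == old["ip_address"],
            new_account["device_fingerprint"] == old["device_fingerprint"],
            new_account["email_hash"] == old["email_hash"],
        ])
        if shared >= 2:  # arbitrary threshold for this sketch
            return True
    return False

# Example: an account created half an hour after a suspension, from the same IP and device.
new = {"created_at": datetime(2017, 3, 1, 12, 30), "ip_address": "203.0.113.7",
       "device_fingerprint": "abc123", "email_hash": "ffee99"}
old = {"suspended_at": datetime(2017, 3, 1, 12, 0), "ip_address": "203.0.113.7",
       "device_fingerprint": "abc123", "email_hash": "001122"}
print(looks_like_ban_evasion(new, [old]))  # True
```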
Safer searching
Another feature is what Twitter is calling “safer search results”. This is simply a search option that removes results from any blocked or muted accounts. You would think that this would already have been the case, but apparently not. The new option filters results by default while still letting users see the full, unfiltered list if they choose.
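Conceptually, the filtering itself is straightforward; something along these lines would do it. The data shapes and the safe_search flag here are hypothetical, not Twitter’s actual API.

```python
# Minimal sketch of safe-search filtering: drop results authored by blocked or muted accounts.

def filter_search_results(results, blocked_ids, muted_ids, safe_search=True):
    """Return search results, optionally excluding blocked and muted authors."""
    if not safe_search:
        return results  # user opted to see the full, unfiltered list
    hidden = set(blocked_ids) | set(muted_ids)
    return [tweet for tweet in results if tweet["author_id"] not in hidden]

results = [
    {"author_id": 1, "text": "interesting take"},
    {"author_id": 99, "text": "abusive reply"},
]
print(filter_search_results(results, blocked_ids={99}, muted_ids=set()))
# [{'author_id': 1, 'text': 'interesting take'}]
```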
Collapsing potentially abusive or low-quality Tweets
The final new feature is an algorithm that collapses potentially abusive or low-quality tweets. Replies will still be displayed in order, but potentially abusive and low-quality ones will be hidden. For this, Twitter is using machine learning to spot low quality, looking at signals such as the date the account was created, its follower-to-following ratio and other spam-detection measures.
If you choose, you will still be able to see all tweets; the potentially low-quality or abusive replies are simply tucked behind a “show less relevant replies” banner. In essence, Twitter is not eliminating these tweets, just hiding them from view, and the filters will be applied to all accounts. The thinking is that hiding offensive or abusive comments gives trolls less exposure, which should (hopefully) demotivate them from posting such things in the first place.
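For a sense of how the signals Twitter mentions could feed such a filter, here’s a rough heuristic stand-in. Twitter says it uses machine learning rather than hard-coded rules, so the thresholds and field names below are invented purely for illustration; the one design point the sketch does mirror is that collapsed replies are kept, not deleted.

```python
# Heuristic stand-in for Twitter's (undisclosed) machine-learning model, using only
# the signals mentioned above: account age and follower-to-following ratio.
from datetime import datetime

def looks_low_quality(account, now=None):
    """Rough heuristic: very new account with a lopsided follower ratio."""
    now = now or datetime.utcnow()
    age_days = (now - account["created_at"]).days
    following = max(account["following_count"], 1)  # avoid division by zero
    ratio = account["followers_count"] / following
    return age_days < 7 and ratio < 0.1  # invented thresholds for illustration

def arrange_replies(replies):
    """Split replies into the visible list and the collapsed 'less relevant' group."""
    visible = [r for r in replies if not looks_low_quality(r["author"])]
    collapsed = [r for r in replies if looks_low_quality(r["author"])]
    return visible, collapsed  # collapsed replies sit behind the banner, not deleted
```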
All in all, it’s great to see Twitter making an effort to stamp out poor user behaviour, an area for which it has been heavily criticised in the past. A lack of action, as well as a lack of innovation on the platform, has opened the gates for trolls, abusers, and bots who manipulate metrics with fake retweets, mentions and more. Other platforms and apps seem to be ahead of Twitter in terms of privacy controls and abuse safeguards, and they have enjoyed an uptake in users thanks to an improved user experience. Some even say that ‘Twitter’s dead’, with studies suggesting users are moving away from the platform because they no longer feel they’re getting value from it.
Despite this, it’s good to see Twitter taking some action and we can only sit back and hope that these new safeguards will provide a better quality user experience free from trolls and bots (mostly)!