Twitter to Block Trolls via Safety Mode
Having an account on a social media platform comes with the risk of cyberbullying from people whose sole objective is trolling others. Trolls target other users based on their religion, race, physique, gender, sexual orientation, and political views.
It’s for this reason that Twitter has been looking for a way to suppress abusive accounts and trolls more effectively on its platform. Well, it has found a solution in the form of ‘Safety Mode’, which is being rolled out to a group of testers on its website and its Android and iOS apps.
Twitter’s ‘Safety Mode’ will flag accounts that use hateful remarks or bombard users with unsolicited comments, then block them for a period of seven days. Once rolled out to the masses, the feature will work automatically, taking the burden off users of combing through comments and blocking trolls one by one.
What Has Twitter Said So Far?
Via the TwitterSafety handle, Twitter states, “Introducing Safety Mode. A new way to limit unwelcome interactions on Twitter.”
In a blog post, Twitter’s Senior Product Designer, Jarrod Doherty, explains, “During the product development process, we conducted a couple of listening and feedback sessions for reliable partners with proficiency in online safety, human rights and mental health, including associates of our Trust and Safety Council.”
He goes on to say, “Their response influenced changes to simplify the use of ‘Safety Mode’ and enabled us to come up with ways to address the would-be manipulation of our platform. These trusted partners also played a crucial role in appointing Twitter users to join the response team, prioritizing female reporters and persons from marginalized groups.”
Like other social media sites, Twitter depends on a mix of human and automated moderation. While Twitter has never formally revealed how many human moderators it employs, a 2020 report by NYU’s business school estimated that Twitter had about 1,500 moderators to oversee its 199 million daily users globally.
A recent study on hate speech done by ‘Facts Against Hate’ for the Finnish government revealed that Twitter was the lowest-ranked of the tech giants when it came to tackling hate speech.
According to the study’s author, Dr. Mari-Sanna Paukkeri, the answer is to use artificial intelligence (AI) systems which humans have taught. She says, “there are a couple of different ways to say nasty things, and it is rocket science to build systems that can detect these.”
She continues to explain that simply flagging specific words or phrases, a tactic most social media platforms use, isn’t enough.
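To see why word-matching alone falls short, consider a minimal sketch of a naive keyword filter. Everything here is hypothetical for illustration; neither the word list nor the function reflects any platform’s actual moderation code:

```python
# Illustrative only: a naive keyword filter of the kind Dr. Paukkeri
# argues is insufficient. The blocklist and examples are hypothetical.
BLOCKLIST = {"idiot", "loser"}

def naive_filter(text: str) -> bool:
    """Flag a message only if it contains an exact blocklisted word."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(naive_filter("You are an idiot"))   # exact match: caught
print(naive_filter("You are an id1ot"))   # trivial misspelling: slips through
```

A single swapped character defeats the filter, which is why Dr. Paukkeri argues for AI systems trained on examples rather than static word lists.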
Alongside its efforts against hateful remarks, Twitter has also revealed that it’s on a mission to crack down on ‘fake news’. Last month, Twitter partnered with the Associated Press (AP) and Reuters to expose false information and stop its spread.
How Does ‘Safety Mode’ Benefit Twitter Users?
Switching this feature on allows an algorithm to detect accounts sending you hateful remarks. If a hateful comment is detected, ‘Safety Mode’ will automatically block those accounts from seeing your posts and interacting with you, so long as you don’t follow them or engage with them on the platform.
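Based on Twitter’s public description, the decision flow might look roughly like the sketch below. All names and logic here are assumptions for illustration; Twitter has not published its actual algorithm:

```python
# Hypothetical sketch of Safety Mode's auto-block rule, as described
# publicly: hateful repliers are blocked for seven days, but accounts
# you follow are exempt. Not Twitter's real implementation.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

BLOCK_DURATION = timedelta(days=7)  # blocks are temporary: seven days

@dataclass
class SafetyMode:
    following: set = field(default_factory=set)
    blocked_until: dict = field(default_factory=dict)

    def handle_reply(self, author: str, is_hateful: bool, now: datetime) -> str:
        # Accounts the user follows are never auto-blocked.
        if author in self.following:
            return "allowed"
        # An existing block stays in force until it expires.
        until = self.blocked_until.get(author)
        if until and now < until:
            return "blocked"
        # A newly detected hateful reply triggers a seven-day block.
        if is_hateful:
            self.blocked_until[author] = now + BLOCK_DURATION
            return "blocked"
        return "allowed"
```

For example, a hateful reply from a stranger triggers a block that persists for seven days and then lapses automatically, while the same reply from a followed account is left alone.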
If you’ve been selected to test ‘Safety Mode’, you can activate it by going to Settings > Privacy and Safety, where you will see the option to turn it on.
Although the feature is only available to a small group of testers at the moment, Twitter expects to roll it out to all users on its website and its Android and iOS apps.
More Measures Need to Be Implemented!
Allowing an algorithm to determine which accounts should and shouldn’t be blocked carries risks for users: it may miss genuine abuse, especially when a user is receiving hateful remarks from multiple accounts at once.
As of now, Twitter users can manually mute specific accounts and words to make their timelines more enjoyable. However, Twitter has been slow to address the hateful remarks many users have faced for a long time.
Twitter’s Executive on Public Policy in the UK, Katy Minshall, said: “While we have made progress in allowing users more control over their safety on Twitter, there is always more to be done.”
To sum it all up, it’s vital that as Twitter comes up with new ways to sustain itself, such as Twitter Spaces and Super Follows, it also ensures that people feel safe on its platform. Although it has taken longer than most users would have hoped, suppressing trolls via auto-blocking is a welcome first step to using social media without fear of abuse.