
Social media boycott is well-meaning but ineffective in fighting racist abuse

Thierry Henry quit social media in March and revealed his decision was down to the failure of Twitter and Instagram to stem racist and abusive messages from anonymous followers.

The 43-year-old, who recently stepped down as manager of Montreal Impact, vowed not to return until ‘the people in power regulate the platform’.

He felt social media had become toxic because it is too easy for ill-intentioned people to bully and harass.

His move has had a minor knock-on effect: Scottish Premiership side Rangers and Championship duo Swansea City and Birmingham City have announced similar boycotts of their social media accounts, and many Premier League managers have called for a more widespread boycott. But leaving the platform does nothing to stop racist abuse.

What is the problem?

Abuse on Facebook, Twitter and Instagram is out of control, but the specific issue here is that black and mixed-race footballers are being attacked on account of their race, during and after matches, by anonymous accounts.

Not only are players being targeted with racial slurs, they’re now on the receiving end of monkey emojis.

Denigrating black and brown people by comparing them to anything simian has been a dehumanising tactic for half a millennium, yet Twitter has said such emojis don’t specifically break its rules on racism.

The perpetrators are mostly anonymous accounts set up with fake email addresses, so they’re not always easy for police to track down without an IP address.

What’s really strange is that Twitter does spring into action when it comes to copyrighted footage of football matches.

Many have had their accounts suspended for sharing clips of a match, but the social media giant doesn’t seem to take racism as seriously.

What’s been the reaction?

There has been strong condemnation in the media and among pundits, but talk does very little on its own.

We know it’s bad that players are being abused and the problem isn’t being fixed by social media platforms, so it’s time for action.

Most players on the receiving end of abuse are still kneeling for Black Lives Matter before games, so they must be feeling very despondent.

Wilfried Zaha certainly is, as he feels the gesture has lost all meaning without implemented change, and Twitter’s slowness to take this seriously is frustrating.

Every gameweek we hear about players being racially abused and the perpetrators are rarely punished.

It seems strange that a platform can host an offensive tweet yet be unable to identify the account holder behind it, so many are understandably losing their patience.

What ideas have been floated so far?

Mandatory identification is one route some people believe would solve the problem, but that ignores the fact that many account holders need anonymity for their own safety and shouldn’t be forced to hand over sensitive details to use a free social media site.

https://twitter.com/dsharp4811/status/1376151132507140096?s=20

The idea makes some sense because a lot of racist abuse comes from anonymous accounts, so forcing people into revealing their true identity could correct their behaviour.

But this move would also punish anonymous accounts that aren’t breaching Twitter’s terms of service.

Boycotting is the other idea being suggested – a typical go-to when making a stand against something, but it’s only effective when large numbers join in.

Only a couple of football clubs in the country are going silent, so Twitter isn’t really going to feel any pressure to change from that.

And people leaving the platform doesn’t fix the problem; it only stops them being directly abused. It doesn’t stop somebody tweeting about them in an abusive way, so this isn’t an idea with legs.

What is the ultimate solution?

Cleansing the platform properly by updating the community guidelines. Twitter et al need to address the reaction from some accounts during matches and after they’ve finished, as that’s when a lot of abuse occurs.

Tweets could be deleted instantly if they contain certain words, and the account holder would have to contact the social media platform to appeal for the tweet to stay up.

There could also be a flagging system where offending tweets aren’t visible, or are masked behind a warning about offensive language.
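As a purely illustrative sketch of what that word-filter-and-flag approach could look like in principle (not a description of Twitter’s actual systems), the term lists, function name and decisions below are hypothetical placeholders:

```python
# Hypothetical sketch of the word-based moderation described above.
# The term lists and thresholds are placeholders, not Twitter's real rules.

BLOCKED_TERMS = {"example_slur"}            # hypothetical: instant deletion
FLAGGED_TERMS = {"example_insult", "🐒"}    # hypothetical: masked behind a warning


def moderate(tweet_text: str) -> str:
    """Classify a tweet as 'delete', 'mask' or 'allow'."""
    text = tweet_text.lower()
    if any(term in text for term in BLOCKED_TERMS):
        # Removed immediately; the author would have to appeal to keep it up.
        return "delete"
    if any(term in text for term in FLAGGED_TERMS):
        # Kept, but hidden behind an offensive-language warning.
        return "mask"
    return "allow"


print(moderate("What a great goal today"))  # -> allow
```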

Ensuring that the victims can use social media without needing to see toxic abuse in their replies and mentions is the goal, so making it more difficult to send such abuse and then cloaking messages that get through is a start.

And on the subject of identification, people do need to be held accountable for their actions, meaning Twitter needs a way of identifying an account holder who is anonymous and has fallen foul of its community guidelines.

This could mean people need more than just an email address to open an account, but those details wouldn’t be made visible to other users and would only be unearthed by the platform if a police investigation is launched.

A lot of people use Twitter for their jobs, to speak with friends, to keep up to date with the news and to share their opinions with the masses. They shouldn’t have to leave because the company is failing victims.
