DEALING WITH HATE SPEECH
In June 1993 a new radio station called Radio-Television Libre des Mille Collines (RTLMC) began broadcasting in Rwanda.
The station was rowdy and used street language – there were disc jockeys, pop music and phone-ins. Sometimes the announcers were drunk. It was designed to appeal to the unemployed, the delinquents and the gangs of thugs in the militia. “In a largely illiterate population, the radio station soon had a very large audience who found it immensely entertaining.” This entertainment helped fuel the slaughter of some 800,000 people, many by machete, in about 100 days.
Soon enough, this radio station was consistently repeating messages designed to deliberately “troll”: to incite, to awaken a nefarious will to destroy, and to provoke the other party to anger and retaliation. In fact, the word troll in fishing terminology means to trail a baited line through the water in order to catch fish.
This has become the order of the day in the world of social media and the internet. Anyone who cares to follow is being led along such baited lines to elicit their reactions, in a bid to denigrate and slander them. This has also snowballed into hate speech. In fact, in the potpourri of trolling, hate speech is a major component, and it comes with a great deal of incitement, instigation and arrogant blackmail.
Hate and beef take over. A recent statistical analysis showed that women are the group most affected by trolling; the UK’s Guardian says over 40 per cent of comments on articles written by women are abusive. The stats, though not empirical, are even more dire in Nigeria, where regulators interviewed say they have recorded over 40 million hate speech pronouncements online in the last 10 years. The recent political imbroglio in Nigeria has further heightened trolling among various groups, and hateful speech has become the order of the day. In fact, the leader of the free world is not left out, with his recent support of white supremacists and his slander of liberal news outlets as fake news; it is said that over 40% of his tweets have been slanderous or hateful. The shocking part of this debate is that there are no local laws that adequately define what constitutes hate speech or trolling, or how those affected could seek redress. The UK recently sent a stern warning, but this might not be adequate while the definition of what makes up hate speech remains unsettled.
However, in order to clamp down on hate speech, the internet giants have to be able to define it effectively. Let us see how the giants define it.
Facebook defines “hate speech” as “direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease”.
Twitter does not provide its own definition, but simply forbids users to “publish or post direct, specific threats of violence against others.”
YouTube’s website clearly says it does not permit hate speech, which it defines as “speech which attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status and sexual orientation/gender identity.”
Google makes a special mention of hate speech in its User Content and Conduct Policy: “Do not distribute content that promotes hatred or violence towards groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.” Those, hitherto, were the definitions offered by the internet giants, but I was excited when, in May 2016, Facebook, Google and Twitter signed a code of conduct announcing a set of standards for dealing with hate speech, including:
• a promise to review the majority of reports of illegal hate speech and remove the offending content within 24 hours;
• making users aware of what is banned by each company;
• training staff to better spot and respond to online hate speech.
Furthermore, in March 2017, Germany’s justice minister Heiko Maas proposed fining social media companies up to €50m for not responding quickly enough to reports of illegal content or hate speech.
• The proposed law would require social media platforms to come up with ways to make it easy for users to report hateful content. Companies would have 24 hours to respond to “obviously criminal content” or a week for more ambiguous cases.
These measures are a start, but more should be done, because the proliferation of hate speech and trolling could damage the moral fiber of the world.
Rufai Oseni, rufaioseni@gmail.com