Facebook faces ban in Kenya for not stopping hate speech

Kenya’s ethnic cohesion watchdog, the National Cohesion and Integration Commission (NCIC), has ordered Facebook to stop the spread of hate speech on its platform within seven days or face suspension in the East African country.

The watchdog was reacting to a report by the advocacy group Global Witness and the legal nonprofit Foxglove, which detailed Facebook’s failure to detect hateful ads ahead of the country’s general election.

The Global Witness report corroborated the NCIC’s own findings that Meta, Facebook’s parent company, was slow to remove and prevent hateful content, stoking an already volatile political environment. The NCIC has now called on Meta to increase moderation before, during and after the election, giving it a week to comply or face a ban in the country.

“Facebook is breaking the laws of our country. They have allowed themselves to be a vehicle for hate speech and incitement, misinformation and disinformation,” said NCIC Commissioner Danvas Makori.

Global Witness and Foxglove also called on Meta to pause political ads and to deploy “break glass” measures – the stricter emergency moderation methods it used to stem misinformation and civil unrest during the 2020 U.S. elections.

In Kenya, Facebook has 82% penetration, making it the second most used social network after WhatsApp.

Facebook’s AI models fail to detect calls for violence

To test Facebook’s claim that its AI models can detect hate speech, Global Witness submitted 20 ads calling for violence and beheadings, in English and Swahili; all but one were approved. The rights group says it used ads because, unlike posts, they go through a stricter review and moderation process, and because it could pull the ads before they went live.

“All of the ads we submitted violate Facebook’s Community Standards, qualifying as hate speech and calls for ethnic violence. Much of the speech was dehumanizing, comparing specific tribal groups to animals and calling for rape, slaughter and beheading,” Global Witness said in a statement.

Following the findings, Ava Lee, head of Global Witness’ Digital Threats to Democracy campaign, said: “Facebook has the power to make or break democracies, and yet time and time again we have seen the company prioritize profits over people.”

“We were appalled to discover that even after claiming to improve its systems and increase its resources ahead of the Kenyan election, it was still approving overt calls for ethnic violence. This is not an isolated case. We have seen the same failures in Myanmar and Ethiopia in recent months. The potential consequences of Facebook’s inaction around the election in Kenya, and other upcoming elections around the world, from Brazil to the US midterms, are terrifying.”

Among other measures, Global Witness is asking Facebook to double down on content moderation.

In response, the social media giant says it is investing in people and technology to stop misinformation and harmful content.

It said it had “hired more content reviewers to review content across our apps in over 70 languages, including Swahili.” In the six months to April 30, the company reported removing more than 37,000 pieces of content for violating its hate speech policies, and another 42,000 for promoting violence and incitement, on Facebook and Instagram.

Meta told TechCrunch that it also works closely with civic actors such as election commissions and civil society organizations on “how Facebook and Instagram can be positive tools for civic engagement and the actions they can take to stay safe while using our platforms.”

Other social networks, such as Twitter and more recently TikTok, have also come under scrutiny for not taking a more proactive role in moderating content and stemming the spread of hate speech, which is seen as fueling political tensions in the country.

Last month, a study by the Mozilla Foundation found that TikTok was fueling misinformation in Kenya. Mozilla reached that conclusion after reviewing 130 highly watched videos containing hate speech, incitement and political misinformation – in violation of TikTok’s policies against hate speech and the sharing of discriminatory, inciting and synthetic content.

In TikTok’s case, Mozilla concluded that content moderators’ unfamiliarity with the country’s political context was a key reason some inflammatory posts were not removed, allowing misinformation to spread on the social app.

Calls for social media platforms to adopt tougher measures come as heated political discussion, divisive opinions and outright hate speech from politicians and citizens alike increase ahead of the general election on August 9.