Social media companies accelerate removals of online hate speech

Social media companies Facebook, Twitter and Google’s YouTube have accelerated removals of online hate speech in the face of a potential European Union crackdown.

The EU has gone as far as to threaten social media companies with new legislation unless they increase efforts to fight the proliferation of extremist content and hate speech on their platforms.

Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016 to review most complaints within a 24-hour timeframe. Instagram and Google+ will also sign up to the code, the European Commission said.

The companies reviewed complaints within a day in 81 per cent of cases during a six-week monitoring period towards the end of last year, EU figures released on Friday show, compared with 51 per cent in May 2017, when the Commission last examined compliance with the code of conduct.

On average, the companies removed 70 per cent of the content flagged to them, up from 59.2 per cent in May last year.

EU Justice Commissioner Vera Jourova has said that she does not want to see a 100 per cent removal rate because that could impinge on free speech.

She has also said she is not in favour of legislating as Germany has done. A law providing for fines of up to €50 million for social media companies that do not remove hate speech quickly enough went into force in Germany this year.

Jourova said the results unveiled on Friday made it less likely that she would push for legislation on the removal of illegal hate speech.


‘NO FREE PASS’

“The fact that our collaborative approach on illegal hate speech brings good results does not mean I want to give a free pass to the tech giants,” she told a news conference.

Facebook reviewed complaints in less than 24 hours in 89.3 per cent of cases, YouTube in 62.7 per cent of cases and Twitter in 80.2 per cent of cases.

“These latest results and the success of the code of conduct are further evidence that the Commission’s current self-regulatory approach is effective and the correct path forward,” said Stephen Turner, Twitter’s head of public policy.

Of the hate speech flagged to the companies, almost half was found on Facebook, the figures show, while 24 per cent was on YouTube and 26 per cent on Twitter.

The most common ground for hatred identified by the Commission was ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.

Pressure from several European governments has prompted social media companies to step up efforts to tackle extremist online content, including through the use of artificial intelligence.

YouTube said it was training machine learning models to flag hateful content at scale.

“Over the last two years we’ve consistently improved our review and action times for this type of content on YouTube, showing that our policies and processes are effective, and getting better over time,” said Nicklas Lundblad, Google’s vice president of public policy in EMEA.

“We’ve learned valuable lessons from the process, but there is still more we can do.”

The Commission is likely to issue a recommendation at the end of February on how companies should take down extremist content related to militant groups, an EU official said.
