A new algorithm can identify Twitter accounts carrying out bullying and troll-like behaviour with 90% accuracy, researchers have said.
The machine learning software uses natural language processing and sentiment analysis on tweets to classify them as cyberbullying or cyberaggression.
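The study's code is not reproduced in the paper coverage, but the general idea of scoring a tweet's sentiment as one input to a classifier can be sketched roughly as follows. The library (NLTK's VADER analyser), the example tweet and the feature layout are illustrative assumptions, not the researchers' implementation.

# Illustrative sketch only: turn a tweet into simple sentiment features
# that a downstream classifier could use. Requires the VADER lexicon,
# e.g. nltk.download("vader_lexicon").
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_features(tweet_text):
    # VADER returns negative/neutral/positive proportions plus a
    # compound score in [-1, 1]; strongly negative text scores low.
    scores = analyzer.polarity_scores(tweet_text)
    return [scores["neg"], scores["neu"], scores["pos"], scores["compound"]]

print(sentiment_features("Nobody likes you, just leave"))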
The algorithm has been developed by researchers at Binghamton University in the United States.
The research comes as a number of high-profile personalities in the UK, including Gary Lineker and Rachel Riley, backed a new campaign to ignore and report abusive messages to help cut the spread of hate on social media.
The researchers said their software had successfully identified bullying and aggressive accounts on Twitter with 90% accuracy.
Jeremy Blackburn, a computer scientist on the research team, said the new algorithm used information from Twitter profiles as well as looking for connections between accounts.
"We built crawlers - programs that collect data from Twitter via a variety of mechanisms," he said.
"We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them."
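The team's crawler code is not public; a rough sketch of turning the kinds of data Blackburn describes - tweets, profile fields and follower/following lists - into per-user features might look like the following. The field names and the specific features are assumptions for illustration, not the structure the Binghamton team used.

# Illustrative only: combine text, profile and network data already
# fetched by a crawler into a single feature dictionary per user.
def user_features(profile, tweets, followers, following):
    texts = [t["text"] for t in tweets]
    return {
        "account_age_days": profile["account_age_days"],
        "tweet_count": len(tweets),
        "avg_tweet_length": sum(len(t) for t in texts) / max(len(texts), 1),
        "follower_count": len(followers),
        "following_count": len(following),
        # Ratio of followers to accounts followed is one simple network signal.
        "follow_ratio": len(followers) / max(len(following), 1),
        # Overlap between who the user follows and who follows back can hint
        # at reciprocal (typical) versus one-sided (aggressive) contact.
        "reciprocal_links": len(set(followers) & set(following)),
    }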
He said that looking for links between users could help differentiate between aggressive behaviour and regular interactions.
"In a nutshell, the algorithms 'learn' how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples."
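That weighing of features over many labelled examples is standard supervised learning. A minimal version using scikit-learn is shown below as a stand-in; the model choice, feature vectors and labels are placeholders, since the paper's actual model and data are not reproduced here.

# Minimal supervised-learning sketch: a model learns feature weights from
# labelled examples (1 = bullying/aggressive account, 0 = typical account).
from sklearn.linear_model import LogisticRegression

# Placeholder vectors, e.g. [neg_sentiment, pos_sentiment, tweets_per_day, follow_ratio]
X_train = [
    [0.9, 0.1, 40, 2.5],
    [0.1, 0.8, 5, 0.9],
    [0.8, 0.2, 60, 3.0],
    [0.2, 0.7, 8, 1.1],
]
y_train = [1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[0.85, 0.1, 55, 2.8]]))  # an unseen account, likely flagged as 1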
While the computer scientist hoped the tool could be used to react to cyberbullying, he admitted more needed to be done to proactively cut abuse on social media platforms.
"One of the biggest issues with cyber safety is that the damage is done to humans, and is very difficult to 'undo'," he said.
"For example, our research indicates that machine learning can be used to automatically detect users that are cyberbullies, and thus could help Twitter and other social media platforms remove problematic users.
"However, such a system is ultimately reactive: it does not inherently prevent bullying actions, it just identifies them taking place at scale.
"And the unfortunate truth is that even if bullying accounts are deleted, even if all their previous attacks are deleted, the victims still saw and were potentially affected by them."
Social media platforms have come under increased pressure to do more to protect their users from hateful and harmful content, after concerns were raised about the impact of such sites on mental health and wellbeing, particularly among young people.
A government white paper published earlier this year proposed the introduction of a statutory duty of care, which would compel social networks to protect their users or face large fines.
Press Association