
Politicized trolling is worse than fake news

By Leonid Bershidsky, Bloomberg View

Published July 31, 2018

Online disinformation and the spread of deceptive political messages are pernicious, but they aren't necessarily the worst abuse of social networks by governments and political actors.


Rational people are resistant to propaganda, and irrational ones consume only messages that stroke their confirmation biases. No one, however, is impervious to personal attacks on a mass scale.


A report by the human rights lawyer Carly Nyst and Oxford University researcher Nick Monaco is an early attempt to study the phenomenon of state-sponsored trolling, or the digital harassment of critics. The case studies come from a diverse set of countries: Azerbaijan, Bahrain, Ecuador, the Philippines, Turkey, the U.S. and Venezuela. They complement what is already known about the practice in Russia, whose achievements in the field of digital abuse have generated the most interest to date.


The stories in the report, commissioned by the Palo Alto, California-based Institute for the Future, are all similar in some respects.


Thousands of social network accounts, operated both by humans and by bots that amplify the attack, gang up on a person who dares to criticize a regime or a political figure.


Invariably, the person is accused of being a foreign agent and a traitor. Memes and cartoons are used to insult the target.


The language of the comments, posts and tweets is often abusive; female targets, such as the Turkish journalist Ceyda Karan and her Filipina colleague Maria Ressa, are routinely threatened with rape.


The general idea behind the campaigns is not only to give the target the impression of swelling public indignation at his or her work and views, but also to drown out the target's voice with the howling of thousands of digital voices.



In more authoritarian countries, the campaigns are often conducted by pro-government organizations. That was the case in Russia in the early years of this decade.


According to the Institute for the Future report, it's the case in Azerbaijan today, where a group called Ireli ("Forward") openly hunts the regime's opponents on the web. The tendency, though, is toward the professionalization of trolling.


Russia's Internet Research Agency, featured in an indictment by Special Counsel Robert Mueller, is just one example of how trolling operations can be run by a corporation-like entity.


In Ecuador, a firm called Ribeney Sociedad Anonima won a government contract for trolling services. The Bahraini government has hired Western "black PR" firms to attack critics.


In the less authoritarian states, where voting is still meaningful, trolling operations often grow out of election campaigns.


In Ecuador, Rafael Correa created a troll army for the 2012 election and kept using it after he won.


In the Philippines, Rodrigo Duterte hired trolls to work for his 2016 presidential campaign and has since put some of the most prominent ones in government jobs.


In India, Prime Minister Narendra Modi's Bharatiya Janata Party maintains an "information technology cell," with thousands of members who receive daily instructions on what topics to promote and whom to gang up on.


The insults and threats can be unsettling on their own, and they can make it hard for the targeted person to get a coherent message to followers. And sometimes attacks have real-world consequences, as when trolls get hold of the target's personal information. That is what happened to the Finnish journalist Jessikka Aro, who tried to investigate Russian troll factories and was subjected to online and then offline abuse.


It's difficult to understand why social media platforms do little, if anything, to stop the trolling campaigns. Twitter and Facebook will remove posts and comments containing death and rape threats, but not insults, treason accusations or suggestions that a journalist is on a hostile spy agency's payroll. They also don't make it easy to complain about entire trolling campaigns rather than individual comments and messages, which are difficult for a trolling target to flag one by one: Ressa, the Filipina journalist, received up to 90 hate messages an hour at the height of the campaign against her.

The Institute for the Future makes some suggestions on how social networks can help, but they aren't particularly useful. For example, it says a network could ask users who create bot accounts to identify them as such, which troll farms would be understandably reluctant to do. It also suggests that the social media companies should somehow detect and identify state-linked accounts, a game of whack-a-mole that is as hard to play as it is pointless.


The easier and more useful thing would be to empower the targets of abuse campaigns. For example, flagging a dozen similar abusive comments should result in special attention from the network. Users should also be able to turn off comments on specific posts and temporarily disable tagging; otherwise it's too easy for trolls to take over a feed. And if bots are to be marked, it should be up to the networks to detect them: The technology is there; it's just not being applied consistently enough.


The best answer would be for the networks to talk to the trolls' targets and find out what tools they would need to fight back. The Institute for the Future's report would be a good starting point: The authors have interviewed some of the targeted journalists and activists. Together, these people and the social networks could figure out ways to curb politicized online harassment without curbing freedom of speech.

Leonid Bershidsky is a Bloomberg View columnist. He was the founding editor of the Russian business daily Vedomosti and founded the opinion website Slon.ru.
