Tech & Politics

Social media giants face big test in midterm elections

By James Hohmann, The Washington Post

Published Oct. 10, 2018

With less than a month before the midterm elections, technology companies are fighting to prove they can adequately shore up their platforms and products against foreign influence. Their success may mean the difference between getting to police their own house and having lawmakers do it for them.

Election Day could be a tipping point for Silicon Valley titans, who are increasingly in Washington's harsh glare following revelations that disinformation campaigns linked to Russia were widely disseminated on their platforms ahead of the 2016 elections. Tech moguls like Facebook's Mark Zuckerberg and Twitter's Jack Dorsey were dragged to Capitol Hill to give mea culpas for their past practices and publicly pledge to do better next time.

The companies contend they have learned from their missteps during the 2016 election and are improving their election-integrity efforts as other elections have taken place around the world. They've promised to do more to identify and stamp out fake accounts, and they have increased transparency around political ads. Facebook opened a 20-person war room on its Menlo Park campus aimed at quashing disinformation and deleting fake accounts.

At the same time, tech moguls acknowledge they're in an arms race against bad actors who are continuing to misuse their platforms ahead of the 2018 midterms that will determine which party controls Congress next year. Major failures will undoubtedly lead to more calls from Washington for tighter control of the industry. California Rep. Ro Khanna, a Democrat who represents parts of Silicon Valley, is already calling for an "Internet Bill of Rights" addressing data breaches and the privacy of consumer information if Democrats retake the majority next month.

"The jury is still out," said Sen. Mark Warner, D-Va., ranking minority-party member on the Senate Intelligence Committee that investigated Russian interference in 2016, in an interview. "The companies are moving - whether it's with enough focus is still an open question."

Election integrity has become a companywide priority at Facebook, said Katie Harbath, the head of global politics and government outreach. She said the company is working round-the-clock to prevent bad actors from exploiting its platform in elections around the world.

"We know how important elections are and the role that Facebook plays in them," she said. "We can never be perfect, but we're continuing to improve every day."

Facebook believes artificial intelligence can be used to support its efforts to identify bad actors, Harbath said. The company has said its technology can block millions of accounts a day as they are being created, before they spread fake news or inauthentic ads.

The urgency of Silicon Valley's war on disinformation was underscored this summer when the typically secretive technology players announced that they had detected and removed fake accounts tied to Iran - a sign that other adversaries are learning from Russia's playbook. In August, Facebook, Twitter and Google removed 994 accounts. Facebook also said it deleted an unspecified number of accounts with ties to Russia.

The announcements were also a reminder of the challenges the companies face as they engage in a never-ending game of whack-a-mole with bad actors. Facebook has repeatedly announced removals of accounts aimed at sowing political discord that display activity similar to that conducted by Russia's Internet Research Agency in the 2016 election. In late July, Zuckerberg said on his personal Facebook page that the company removed 32 accounts and pages engaged in a "coordinated inauthentic campaign," which was organizing events such as a protest against the "Unite the Right" rally.

Facebook now includes statistics on fake account removals in its transparency reports, where it also lays out the number of data requests it received from law enforcement. The company disabled 1.27 billion fake accounts between October 2017 and March 2018. Twitter has also updated its policies to better reflect how it identifies fake accounts.

The companies can't just look for behavior from bad actors that they've seen before.

"We're having to look for a whole host of other potential behaviors and manipulation attempts," said Del Harvey, Twitter vice president of trust and safety. She said the company is being more proactive and trying to monitor unusual behavior that could signal bad actors are trying to work their way into communities and sow division.

Yet it's not even clear if Twitter has been able to crack down on accounts already known to be sowing fake news for more than two years. Most Twitter accounts linked to disinformation in the 2016 election are still active, according to a report from the Knight Foundation, in partnership with researchers at George Washington University and social media research firm Graphika.

Twitter pushed back on the report. Harvey said in an emailed statement that, because of the technical method the researchers used to gather the data, the report doesn't account for the actions the company takes to keep automated or spam accounts from being seen by people on Twitter.

Google has been less transparent with lawmakers about how it's tackling its own disinformation problems. The company told Congress last year that it found more than 1,000 videos that appeared to be posted by accounts associated with Russian bad actors. Links to those videos were frequently shared on other social media sites, the company said. Google said at the time that activity on its platform appeared to be more limited than on other platforms because it does not offer the "kind of targeting or viral dissemination tools" that these actors prefer.

Google has tried to steer clear of the spotlight as lawmakers raise the issue of election integrity, which has stoked fears that the company is not taking its role in the situation seriously enough. When Twitter Chief Executive Dorsey and Facebook Chief Operating Officer Sheryl Sandberg testified on Capitol Hill about their companies' efforts to combat election interference, Google refused to send an executive whom senators deemed senior enough.

Google Chief Executive Sundar Pichai did visit Washington in late September to defuse tension, meeting with lawmakers, members of the Trump administration and Pentagon officials. During the trip, Pichai spoke with lawmakers about alleged bias against conservatives at Google, and he spoke with Pentagon officials about a recently severed defense contract.

But Warner criticized the company for not being more proactive on election security - and suggested there could be consequences.

"Google is just AWOL," Warner said. "I think they are making a huge mistake in judgment and a huge mistake in policy for not treating these issues seriously."

Google did not make an executive available for an interview or respond to a list of questions from The Washington Post about its efforts aimed at safeguarding the election cycle. A spokesperson said the company has protected elections around the world from cyberattacks for a decade.

"As we approach the midterms, we remain committed to providing people with accurate, up-to-date information about their elections," the spokesperson said.

Warner warned that if the midterm elections expose uncontrolled activity from bad actors, the era of self-regulation could be at its end.

"I hope this is the last cycle we have without guardrails," Warner said.
