Special counsel Robert Mueller has indicted the Russian operatives who created fake identities and ran targeted advertising on Facebook. The ads themselves - supporting extreme anti-immigration groups and the phony "Army of Jesus" on the one hand, and fake "black lives matter" slogans on the other - have been made public.
Reams of words have been written; studies have been conducted.
We know how social media increases polarization, how fact-checking reaches only a narrow audience, how the lack of regulation enables false and opaque political advertisements, how algorithms favor angry and extreme views. Congress, Britain's Parliament and the European Union have all held hearings to discuss the problem. Facebook and Twitter have taken down some Russian-origin accounts.
We have learned a lot - and yet we have learned nothing.
What's worse, the propagandists' messages are getting louder. After analyzing 2.5 million tweets and 6,986 Facebook pages, the Oxford Internet Institute has just found that the amount of biased, hyperbolic and conspiratorial "junk news" in circulation is actually greater than it was in 2016.
More importantly, the messages are no longer seen just by a small fringe but are much more likely to be consumed by mainstream users of social media. At the same time, only a tiny percentage of political information available on social media actually comes from political candidates. People are now more likely to see a targeted ad from an unidentified political group with an opaque agenda, in other words, than something written by the people actually vying for their vote.
Those who follow the news online are also very likely to see information not created by humans at all. A new tool created by a start-up called Robhat Labs found that as of late last week, about 60 percent of the conversation on Twitter was still driven by accounts that are probably bots (bits of code that can be programmed to mimic humans). Another survey, conducted by the Anti-Defamation League, found that nearly a third of the anti-Semitic propaganda pumped out online also comes from bots, and there seems to be no way to tell who is behind it.
Even after being told many times about the problem, YouTube - which is owned by Google - still allows its algorithms to be manipulated by Russia Today, the Russian state broadcasting company. The network's ongoing smear campaign against the White Helmets, a Syrian humanitarian group, still ranks high in search results. Meanwhile, in Brazil, junk news was spread during the last election campaign not only on Facebook but also on WhatsApp, where it cannot be corrected, let alone traced.
We have learned nothing and we are doing nothing. The stopgap measures taken, voluntarily, by the social media companies are like Band-Aids on a gaping wound. Facebook and Twitter have both hired people to monitor their sites for "hate speech" - a term with an extremely wide range of definitions - to dubious effect.
But other, more obvious steps have not been taken. Social-media bots could be banned altogether. More rigorous procedures could prevent the creation of anonymous accounts. YouTube, and others, could change their algorithms so that known sources of disinformation don't keep floating to the top. Lawmakers could force online political advertising to meet higher standards of transparency.
But I am repeating myself and, more to the point, I am repeating many others. Calls for regulation without censorship have been made by many people and many groups - there is simply no political will to make real change. Heavily televised hearings with a CEO celebrity such as Mark Zuckerberg are not a solution - they're a stunt.
After the midterm elections are over, we need an informed national debate, a congressional investigation that examines all of the possible options, and a commitment by political leaders to take control of the information anarchy that will eventually consume them all.