Election Interference: The Dawn of the Era of Doublespeak
As we come out of the 2019 European Union parliamentary elections and look ahead to the 2020 U.S. presidential election, disinformation and election interference continue to plague the democratic process. The feeling among the cybersecurity and election security communities is: we know it happened, we know it’s coming, and we must figure out how to defend against it.
Unfortunately, what we’re finding is that despite our awareness of disinformation (the intentional dissemination of false or misleading information), identifying it, let alone countering it, isn’t something anyone in the West has done well, even when governments know it’s there. The addition of domestic and local actors to the disinformation game is muddying the waters and adding a layer of legal complexity that a U.S.-versus-Russia spy game never had.
Prior to the May 2019 EU elections, multiple outlets and commentators speculated on the possibility of Russian interference. The European Parliament released a statement condemning third-party interference in elections and specifically called out Russia, among others, for efforts to “undermine the foundations and principles of European democracies.”
Except no clear disinformation campaign organized by foreign actors was identified prior to the election. The EU’s East StratCom Task Force noted that any interference appeared to be “less sensational” than historical efforts, and although The New York Times identified cases of Russian-linked disinformation, it also noted the presence of domestic “copycats” that made it difficult to distinguish between disinformation and legitimate public debate.
This presents a problem for governments. If they cannot identify disinformation before election interference occurs, how can they take countermeasures? Additionally, every time a government raises the specter of Russian meddling without hard evidence, it plays into the narrative that political factions are merely invoking a boogeyman to explain their own failings. In a world where news is disseminated in 280 characters, nuance is often lost.
So, did the EU cry wolf? Did Russia attempt to meddle in the EU elections? In mid-June, the European Commission published a report that appeared to point the finger at Russia, while still not identifying a clear foreign-directed election interference campaign:
At this point in time, available evidence has not allowed to identify a distinct cross-border disinformation campaign from external sources specifically targeting the European elections. However, the evidence collected revealed a continued and sustained disinformation activity by Russian sources aiming to suppress turnout and influence voter preferences.
Despite numerous news reports referring to these “Russian sources” as social media, the term appears to refer largely to Russian state media such as RT and Sputnik News, based on what the EU’s East StratCom Task Force publicly reports as disinformation cases. Nowhere does the European Commission report make clear that social media accounts spreading disinformation were positively linked to Russia. ZeroFox analysis similarly found that Russian efforts focused primarily on opinion pieces playing into divisive themes that already existed in major EU countries such as the United Kingdom, France, and Germany. The apparent goal of these pieces was to reinforce echo chambers that had already formed as a result of local disillusionment with aspects of the system.
Given this, is the publication of disinformation by sources such as RT and Sputnik surprising? Publishing misleading information or propaganda through state media isn’t exactly inconsistent with information operations (IO) doctrine. There’s a reason Foreign Policy called Sputnik “the Buzzfeed of Propaganda” in 2014, and why it used to be illegal for Voice of America to broadcast inside the United States. Although voters may not be familiar with these sources, RT and Sputnik are openly funded by the Russian state. There’s a much better chance voters will recognize disinformation from these sources than from a “Susan Smith” who grew up in Indiana and yet has an oddly poor command of the English language.
Why Disinformation Will Be Harder to Identify in 2020
The experience with the European elections points to a key takeaway: disinformation might be there, but it might not be clear who’s behind it or exactly how they’re doing it, and it likely won’t follow the same patterns identified in 2016. As we look forward to 2020, we’re highlighting a few things: the involvement of more domestic parties, other foreign actors capitalizing on the opportunity for interference, and the challenges governments face in countering these efforts.
We expect it will continue to get harder to pick out disinformation originating with a foreign actor. One of the challenges with identifying foreign interference is that it’s not always easy to distinguish inauthentic content from legitimate, if polarized, public debate on social media and elsewhere online. In the EU elections, organizations such as BBC Monitoring noted the presence of polarized messages that mirrored pro-Russian disinformation but originated with European groups and organizations.
In addition to local and domestic actors muddying the waters, other foreign actors may get involved using different tactics and techniques. In May, the social networks also removed Iranian-origin accounts spreading disinformation, some of it related to the 2018 U.S. congressional elections. Iran, and potentially other actors, will likely look to the Russian example to see how they might sway outcomes in a direction that benefits them.
Governments’ ability to get involved can also be restricted. The EU’s task force, for example, was criticized in 2018 for including articles published by Dutch media as examples in its database of disinformation, with critics accusing it of stifling media freedom. Despite naming a new official to oversee election interference and other threats, the U.S. Intelligence Community has limited domestic authority, even though many of the webs of disinformation are likely to include U.S. persons. And for any government, countering messages spread by domestic actors risks chilling political discourse and further alienating already disenfranchised groups, making any state response to local disinformation potentially hazardous.
In a post-truth era, disinformation is losing the prefix. Even if governments, private companies, research organizations, and the social networks work together to identify serious instances of disinformation, the reactionary response so far has focused on stifling one end of the political speech spectrum.
This is why organizations making accusations of disinformation need to be clear about their evidence. It doesn’t help to identify disinformation if others can’t understand the particular characteristics and patterns of the activity in question: who is involved, if known; what sources or platforms; when the activity occurred; and so on. Otherwise, as we’ve seen, readers will assume you mean Russian bots on Twitter publishing blatantly false information rather than local organizations sharing misleading RT stories in Facebook groups. Calling Russia the sole instigator of disinformation every time, without public evidence to back it up, can undermine future efforts against disinformation. Moreover, Russia knows the disinformation landscape is shifting and can use unfounded accusations to bolster its “cry wolf” narrative against the West.