Defense Against the Dark Arts – Countering Disinformation in the European Union

By Akos Szegofi

Disrupting lies: where to start?

Whether or not to deploy countermeasures against disinformation – much like regulations on tobacco, drugs, or firearms – is under the jurisdiction of individual EU member states. This made sense if we take into account that, a few years ago, the Gerasimov doctrine defined intentional disinformation as a weapon of hybrid warfare. On the other hand, the Russian state has used disinformation to disrupt European cooperation, making it a transnational issue. In light of Russia’s recent election meddling in several countries, including the Brexit referendum, supranational EU organizations found themselves in dire need of addressing the challenge that Russian disinformation campaigns pose to EU democracy.

Parallel to this “awakening”, member states began launching their own anti-disinformation campaigns in rapid fashion to protect domestic elections. Disinformation can be conceptualized as a communicative process, and countries gave creative and quite different answers to the question of how to disrupt that process.

Successfully spreading bogus information requires that citizens have a desire to consume it – this is the demand side of disinformation, and it is perhaps the hardest to address. On the other hand, disinformation is a business: message amplifiers have an economic interest in circulating pro-Kremlin (or EU-sceptic) messages. New regulations can diminish these interests. Communication channels such as Facebook are more than capable of manipulating their users by writing algorithms that favor the creation of echo-chamber environments, and of serving amplifiers by providing them with micro-targeting and other socio-econometric tools. Yet the platforms can be forced to regulate their activities.

Countries like Belgium and the Czech Republic positioned themselves for the long run and opted to educate their citizens (the demand side). The ability to read (literacy) and the ability to surf the internet (digital literacy) do not necessarily mean that the person in question can critically evaluate the information received. Media literacy programs were launched in high schools, teaching children how to identify disinformation.

In the meantime, groups of social media pioneers in the Baltic countries established “troll-hunter” networks. Volunteers track down and identify accounts and people deliberately spreading pro-Kremlin propaganda. “Blacklists” were compiled so that social media users could check whether a news source advertising content on their news feed had any ties to the Russian government. Following these blacklists, countries began lobbying companies not to advertise on the sites listed – a heavy blow to anyone who wanted to make a fortune out of spreading EU-sceptic bogus messages.

Fact-checkers – journalists who assess the veracity of public utterances – suddenly found themselves drowning in work. At first, the situation didn’t seem very dangerous; most of the messages were written in poor English or German – often with the help of the infamously bad Google Translate – and the pictures used to legitimize the claims of the lousy articles were easily traceable through reverse image search. In short, they were not very good lies. Soon enough, the situation turned worse. Despite the poor quality, large numbers of EU citizens began lending credibility to platforms posing as reliable news sites – credibility that may even have swayed voting behavior. Fact-checkers looked as if they were armed with a single flyswatter, desperately trying to fight off a horde of locusts. What Russian disinformation could not achieve with quality, it substituted with quantity.

The EU had to act, and among all the potential entry points into the disinformation sequence, it decided to disrupt the process at the level of individual messages. In March 2015, the East StratCom Task Force was established to combat Russian disinformation and to communicate EU policies towards Eastern partners. The Taskforce ran notoriously low on staff and money (employing perhaps a dozen Russian- and English-speaking journalists) until it was granted €1.1 million in 2018, and then another €3 million from January 2019. In addition, it launched a new website and a Facebook account under the name EU vs Disinfo. The website hosts a comprehensive archive of debunked messages that appeared in EU countries with the intention of stirring discontent in domestic politics, while the Facebook account boasts infographics, short videos, investigative articles, and weekly lists of the most outrageous pieces of popular disinformation that surface.

This article aims to assess the social media activity of the EU vs Disinfo Facebook page from the second budget increase (January 2019) through the months leading up to the European Parliamentary elections (26 May). Critical recommendations about the activities of the EU vs Disinfo initiative are included.

Humor, inoculation and repetition – the strategies utilized

One conspicuous feature of the EU vs Disinfo account is its colorfulness. The journalist staff uses custom-made illustrations, quality photos, and occasionally cartoons to stand out amidst the turmoil of the news feed. In this sense, the posts run directly counter to the type of disinformation they aim to debunk; the Taskforce wants the reader to observe the difference between quality journalism and cheap lies.

In this study, the Facebook posts were sorted into three categories.

  1. EU vs Disinfo regularly publishes so-called disinfo-of-the-week or #TOPFAKES lists, in which the fact-checker team shows off five or six headlines from pro-Kremlin media about EU member state policies or history. These were put on the list either because of their popularity or because of their ludicrousness. In five months, the page saw 28 of these posts. Most of the time, these posts are not hyperlinks but illustrations made for Facebook. Although the #TOPFAKES lists are entertaining, echoing disinformation on official platforms can also be counterproductive.
  2. The second category includes individual case studies. When a particularly nefarious piece of information becomes virulently popular, the Taskforce might devote an entire article to debunking its falsehoods. Articles on the Skripal poisoning, for example, counted as individual case studies, since the EU vs Disinfo team had to deal with mutations of the same disinformation. Often enough, certain pieces of disinformation return repeatedly as they become of central importance to Russian foreign policy. Alternative reality production happens along the lines of these central narratives. All 52 categorized articles were written exclusively by Taskforce staff.
  3. Last but not least, longer investigative articles were distinguished that are not about disinformation per se, but about the bigger picture: the infrastructure behind the disinformation campaigns and the general methods Russian propagandists use to persuade readers. Some of the articles in this category are referenced works of other news outlets, and on two occasions, scientific articles. 53 posts belong to this category.

What can be said about the posts that EU vs Disinfo uses to present its findings and messages? First of all, there are no huge differences in the number of likes the posts receive, nor in the frequency with which they are shared. Generally speaking, the reach is quite low, averaging 127 likes for #TOPFAKES lists, 119 for case studies, and (interestingly) 136 for investigative pieces. Shares almost always lag behind likes, and the difference between categories is even smaller, at around 48 shares on average for each. It is safe to say that the social media penetration of EU vs Disinfo is very far behind that of the platforms (like Russia Today or Sputnik) it tries to combat. The most interesting observations about the posts and methodology the Taskforce uses are below:

  • Not so surprisingly, videos always perform better than articles, in terms of both shares and likes. The few observable spikes in the dataset are almost exclusively due to the 8 videos published in the given period.
  • Investigative articles concerning the methods of Russian propaganda channels such as Russia Today are among the most popular reading materials. They provide insight into the inner workings of the propaganda machine, which seemingly intrigues the readership. Not only are these posts popular, but they are also quite useful, since they have the potential to demolish the credibility of these news sources. Stories about Freudian slips and journalistic malpractice are handled with humor by Taskforce journalists, creating a slightly sarcastic atmosphere.
  • Compared with these insightful pieces about the inner workings of the propaganda machine, articles offering general methodological information about how propagandistic narratives work and other abstract know-how lag behind in popularity (except in one case, when the post was sponsored). The human mind understands problems best through concrete (social) settings and examples, and this certainly shows. Below is an example of a well-written but poorly received (below average in likes and shares) post.

  • Anything with Putin’s face on it sends the number of likes and shares through the roof. There were 4 articles or pictures displaying the Russian president, and all of them stood out in terms of reach.
  • It is methodologically important that the Taskforce experimented with gamification. Links to games such as “pro-Kremlin bingo” and “Crimean bingo” are not extraordinarily popular, but they are quite useful. An interactive game can work as psychological inoculation – a method in which test subjects are presented with weak arguments and strong counterarguments in order to vaccinate them against future manipulation. Gamification is considered a cornerstone of media literacy projects as well.
  • Taskforce journalists make extensive use of humor. EU vs Disinfo articles do not operate in the language of official reports. A usual problem with fact-checking is that it is hard to make myth-busting as entertaining as the myth. As the tone of Russian state propaganda turns increasingly nationalistic, heroic, and aggressive – specifically in cases like the annexation of Crimea – it simultaneously becomes easier for journalists to ridicule. But it is a slippery slope: too much humor shades into sarcasm, which ultimately detracts from the communicative impact. The message behind the humor reads: “Don’t be dumb enough to fall for this. Have a laugh instead.” In the right amount, humor is perfect for combating the fear-mongering that disinformation utilizes.

Figure 1. As shown, the number of likes tops the number of shares. The most popular post of this kind (in likes) was a selection of 1001 pro-Kremlin fake news items from the previous year. The most popular in terms of shares was a post about Kremlin disinformation in connection with the Notre-Dame disaster.

Figure 2. Interestingly, a short video on the Kremlin's attempt to relativize Stalin's crimes came out on top with 730 shares and 497 likes (in this particular case, the number of shares exceeded the number of likes).

Figure 3. The most popular post of this kind was a sponsored infographic about the general strategies of disinformers, titled "Warning Signs".

EU Elections and fact-checking

This study examined the nearly five-month period directly preceding the European Parliamentary elections. Interestingly, the first fact-checks and warnings about Russian intervention did not appear until 7 April (the post itself, an article by Visegrad Insight, was among the most poorly received posts in the sample). In the months before this particular post, the Ukrainian elections and the one-year anniversary of the Skripal poisoning were at the forefront of interest.

It is worth noticing the lack of change in the data – the distribution of likes and shares is steady throughout the months, as if it were “business as usual.” Posts relating to the EP elections appeared sporadically after that first April post, but no big campaign was launched, and there was no sudden upsurge in popularity indicating sponsored content – or any intention to fight off a theoretical Russian disinformation campaign with a counter-campaign.

What can we learn from EU vs Disinfo?

Based on its activities, the Taskforce is more than a mere fact-checking team and something less than a media literacy program – one could say it is perhaps an early experiment by the EU in the field of disinformation. Fact-checking might be a useful tool for news outlets, but organizations such as the EU can do much more: they can push through legislative changes, they can (and did) pressure platforms like Facebook into monitoring their content, and they could in theory launch actual media literacy programs. The biggest problem with EU vs Disinfo, however, is its perceived political partisanship. It is hard to see why people who are most susceptible to Russian disinformation would believe anything coming from an official EU agency. EU vs Disinfo might not be in a position to differentiate truth from lies about the EU precisely because it is an EU agency.