Knowns and unknowns
The European Parliament elections and the Counter Disinformation Network
Saman Nazari
Lead Researcher at Alliance4Europe, Saman Nazari is an open-source intelligence researcher who focuses on Chinese and Russian influence operations. He helps coordinate the Counter Disinformation Network and plays a key role in developing the DISARM Framework.

To state the obvious, information manipulation is a problem – which is why Alliance4Europe and other organizations across Europe came together to monitor the threat and safeguard the integrity of the European Parliament (EP) elections in June. At the same time, we wanted to learn as much as we could about the scope and scale of the problem, identify potential solutions, and support initiatives such as the EU’s Code of Practice on Disinformation and FIMI ISAC (Foreign Information Manipulation and Interference - Information Sharing and Analysis Centre).
Ahead of the EP elections, the Counter Disinformation Network (CDN) brought together 140 practitioners from 43 organizations, including Open Source Intelligence researchers, fact-checkers, journalists, and academics.
Working towards a shared objective – to enable effective enforcement of legislation such as the Digital Services Act (DSA) and streamline government policies against foreign interference – the network coordinates activities and helps develop a common approach (for example, via the DISARM Framework of disinformation tactics & techniques). We also developed a standardized mechanism to alert policy-makers, regulators, government officials, journalists, and advocacy groups to cases of information manipulation.
The CDN has published 30 alerts since the end of May – and developed into an effective platform for collaboration, allowing quick and easy partnerships to form, and helping members publicize their research via the alert mechanism.
What were the biggest threats to the European Parliament elections? Where did these threats originate and what weaknesses did they seek to exploit?
The dispersed nature of the field makes it difficult to answer these questions comprehensively, but it is possible to draw general conclusions based on the evidence we and other researchers collected.
The main threat to the European Parliament elections came from authoritarian forces (domestic and foreign) trying to manipulate the information space – forces which focused their messaging predominantly on migration, climate, financial issues, undermining support for Ukraine, and undermining trust in democratic institutions.
According to Meta, most of the information manipulation on its platforms was domestic in origin, although the reliability of this claim is difficult to judge independently.
Domestic lowlights
Reports produced by members of the CDN and other researchers show how far-right parties across Europe (and anonymous accounts supporting them) employed a variety of narratives designed to polarize. These parties, which typically have social media pages with wide followings, systematically employed disinformation to spread anti-immigrant, anti-Green Deal, and anti-EU narratives. Their tactics and techniques should be explored further.
Four reports by Democracy Reporting International identified multiple TikTok accounts of unclear affiliation systematically promoting far-right European political parties and their candidates. It is impossible to say whether these accounts were controlled by the parties themselves or by third parties.
The Digital Forensic Research Lab, AI Forensics, and Alliance4Europe found that far-right parties in France and Italy used manipulative, unlabeled AI-generated images in their campaigns, breaching the voluntary commitment of their EP group, Identity and Democracy, not to use AI images – a commitment the group itself also broke.
Investigative journalists also revealed how the far-right Sweden Democrats ran a network of anonymous accounts attacking their political opponents and the media.
Foreign lowlights
Members of the CDN and other researchers uncovered evidence of Russian government-affiliated activity, focused on undermining support for Ukraine and driving polarization.
Reset.Tech, Check First, and Viginum identified a new tactic employed by the Kremlin during the election: coordinated campaigns to overwhelm fact-checkers, researchers, and newsrooms with floods of emails asking them to verify fabricated cases.
The most common techniques of manipulation we detected include:
Hiding the affiliation of accounts
Many operations used either proxies or accounts with pseudonyms to hide their affiliation with the Russian state.
Using fabricated news websites or doppelganger websites
Websites that present themselves as independent news outlets while being run by an influence operation, or that impersonate existing news or government websites.
A case in point is the Voice of Europe website, which European authorities say received funding from Russia to run an influence operation and co-opt far-right MEP candidates.
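To make the doppelganger pattern concrete: such sites typically reuse the name of a legitimate outlet under a different top-level domain, or with a single character swapped. Purely as an illustration – not a description of any CDN member's tooling, and using hypothetical domain names and thresholds – a minimal Python sketch of this kind of string-level check might look like this:

```python
# Illustrative sketch only: flag domains that closely resemble known outlets.
# The outlet list, candidate domains, and threshold are placeholders, not CDN data.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def flag_lookalikes(candidates, known_outlets, max_distance=1):
    """Return (candidate, impersonated outlet) pairs.

    Flags a candidate when its second-level label (the part before the first
    dot) is identical or nearly identical to a known outlet's label while the
    full domain differs - the typical doppelganger pattern.
    """
    hits = []
    for domain in candidates:
        label = domain.split(".")[0]
        for outlet in known_outlets:
            if domain == outlet:
                continue  # the real outlet itself, nothing to flag
            if edit_distance(label, outlet.split(".")[0]) <= max_distance:
                hits.append((domain, outlet))
    return hits

if __name__ == "__main__":
    known = ["spiegel.de", "lemonde.fr", "ansa.it"]            # placeholder outlets
    candidates = ["spiegei.de", "lemonde.ltd", "example.org"]  # hypothetical domains
    print(flag_lookalikes(candidates, known))
    # [('spiegei.de', 'spiegel.de'), ('lemonde.ltd', 'lemonde.fr')]
```

In practice researchers combine string-level checks like this with registration data, hosting infrastructure, and content analysis; the point of the sketch is simply that once a seed list of impersonated outlets exists, candidate doppelganger domains are straightforward to enumerate.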
Coordinated inauthentic behavior (CIB) networks
Russian-affiliated actors extensively used coordinated inauthentic behavior networks (for example, networks of accounts mass-posting articles from fabricated websites).
One example is the Russian Doppelganger influence operation, which the CDN (with CeMAS in the lead) found was still largely unmoderated during the election period (posting 1366 tweets in June alone) despite being well-known and easy to track.
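The claim that such networks are easy to track rests on simple coordination signals: many accounts pushing the same link within minutes of one another. As a minimal sketch only – the field names and thresholds below are assumptions, not a real platform schema or the method used by CeMAS and the CDN – coordinated link-sharing can be flagged roughly like this:

```python
# Illustrative sketch only: a basic coordinated link-sharing check.
# Field names (account, url, posted_at) and thresholds are assumptions,
# not a real platform schema or the methodology used by CDN members.
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_urls(posts, window_minutes=10, min_accounts=5):
    """Return URLs pushed by at least `min_accounts` distinct accounts
    within any rolling window of `window_minutes`."""
    by_url = defaultdict(list)
    for post in posts:
        by_url[post["url"]].append((post["posted_at"], post["account"]))

    flagged = {}
    window = timedelta(minutes=window_minutes)
    for url, events in by_url.items():
        events.sort()                      # order by timestamp
        start = 0
        for end in range(len(events)):
            while events[end][0] - events[start][0] > window:
                start += 1                 # shrink the window from the left
            accounts = {acc for _, acc in events[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged[url] = sorted(accounts)
                break
    return flagged

if __name__ == "__main__":
    base = datetime(2024, 6, 1, 12, 0)
    posts = [  # eight hypothetical accounts pushing one link within four minutes
        {"account": f"user_{i}", "url": "https://fake-outlet.example/article",
         "posted_at": base + timedelta(seconds=30 * i)}
        for i in range(8)
    ]
    print(coordinated_urls(posts))
```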
Posting unlabeled political ads
Influence operations used unlabeled political ads and obscured the nature of their content to circumvent Meta’s ad moderation systems (an issue currently being investigated by the European Commission as a potential violation of the DSA).
Almost all social media platforms failed to take action on Kremlin-backed influence operations which had already been sanctioned by the EU. Five quick investigations led by Science Feedback showed that around 250 sanctioned pages were still accessible during the elections.
Conclusions
While the exact impact of these influence operations is difficult to measure, one thing is absolutely clear: there were numerous, deliberate attempts to influence the elections which sought to exploit technical vulnerabilities on social media platforms and the web, including:
- the ability to create anonymous accounts at scale (CIB networks)
- weak ad and content moderation
- the ability to create lookalike domains
- limited AI image detection on social media platforms
Some of these vulnerabilities will be closed in the long term by the DSA, the legal instrument designed to address systemic issues on social media platforms – one likely to have a lasting impact on the behavior of some of the world's largest corporations.
This will take time, however, which means that we – as a community of practitioners – need to work with other instruments to combat the immediate threats. Many of these threats – including increased polarization and social division, falling trust in democratic institutions, and widespread weaknesses in media literacy – require far more than technical fixes.
Influence operations don’t stop between elections. To monitor and counter disinformation effectively and systematically, stakeholders aiming to protect European democracy must work together more closely. We need to argue that countering disinformation is an existential issue for civil society – one which requires sustainable funding, resources, and access to data. A possible starting point might be a concerted effort to demonstrate why it’s vital that the European Union puts a clear-cut regulatory enforcement system in place to price the economic cost of disinformation into the business model of social media platforms.