Introduction

Artificial Intelligence (AI) cyber attacks present a significant threat to the integrity of the upcoming German election scheduled for 23rd February 2025 [5]. These attacks, often involving disinformation and manipulation, have the potential to undermine democratic processes and influence public opinion.

Description

AI-driven cyber attacks threaten the German election scheduled for 23rd February 2025 [5]: sensitive data obtained through these attacks can be used in coordinated hack-and-leak operations to undermine the credibility of political candidates or parties [5]. Germany is enhancing its defenses against cyber attacks and disinformation campaigns [5], particularly from Russia [5], by establishing a special task force within its domestic intelligence service, the Federal Office for the Protection of the Constitution (BfV) [5]. The task force aims to counter potential cyber threats [5], espionage [5], and disinformation [1] [2] [3] [4] [5], focusing on early detection of malicious activity to prevent foreign influence on the elections [5].

The BfV has noted a rise in hybrid threats [5], including disinformation [2] [5], and warns that such tactics are expected to continue [5]. Disinformation is often spread through false information on websites and fake social media accounts [5], which can circulate rapidly and persist online even after the original sources are deleted [5]. A sophisticated network of more than 1,000 fake social media accounts has been identified as part of a campaign to influence the election by discrediting political opponents and inflating support for the far-right Alternative for Germany (AfD) [3] [5]. Monitoring of social media discourse shows that 47% of these fake profiles have been active for over a year [4], indicating a long-term influence operation aimed at manipulating public perception in Germany [1]. The profiles not only spread misleading posts but also attack political opponents and amplify pro-AfD narratives, particularly targeting key figures: fake profiles generated 33% of the engagement with posts by Alice Weidel, co-chairwoman of the AfD [4], and made up 22% of the profiles interacting with content from Chancellor Olaf Scholz of the Social Democratic Party (SPD) [4], while the Green Party faced negative narratives from 15% of these fake accounts.
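
To make figures such as "33% fake engagement" concrete, the short Python sketch below shows one way a fake-engagement share per politician could be computed from interaction records that have already been labeled as fake or authentic. The records, field names, and labels are hypothetical illustrations, not the data or methodology behind the cited reports.

    # Minimal sketch: share of interactions per politician that come from
    # accounts flagged as fake. All records and field layouts are hypothetical;
    # this is not the methodology used in the cited analyses.
    from collections import defaultdict

    # Each interaction: (target politician, interacting account, account flagged as fake?)
    interactions = [
        ("Politician A", "acct_001", True),
        ("Politician A", "acct_002", False),
        ("Politician A", "acct_003", True),
        ("Politician B", "acct_004", False),
        ("Politician B", "acct_001", True),
        ("Politician B", "acct_005", False),
    ]

    def fake_engagement_share(records):
        """Return {politician: fraction of interactions from flagged-fake accounts}."""
        totals = defaultdict(int)
        fakes = defaultdict(int)
        for target, _account, is_fake in records:
            totals[target] += 1
            if is_fake:
                fakes[target] += 1
        return {target: fakes[target] / totals[target] for target in totals}

    if __name__ == "__main__":
        for politician, share in fake_engagement_share(interactions).items():
            print(f"{politician}: {share:.0%} of interactions from flagged accounts")

In practice, the hard part is the labeling step itself (deciding which accounts are fake); the arithmetic above only aggregates labels that such an analysis has already produced.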

Voters are encountering numerous far-right narratives online [2], fueled by AI-generated content and Russian disinformation campaigns [2]. Russian groups [2], including “Doppelganger” and “Storm-1516,” are reportedly involved in these efforts [2], which aim to manipulate public opinion ahead of the election [2]. Methods employed by these campaigns include fabricated news stories and deepfake videos that falsely portray politicians [2]. For instance [2], a video was released claiming that a pro-Ukraine member of parliament was a Russian spy [2], using AI to fabricate the narrative [2]. The BfV believes that Germany’s reliance on in-person and postal voting with official ballot papers makes it difficult for cyber attacks to influence election results directly [5], unlike in countries that use voting machines or online voting [5].

Despite these measures [5], the Christian Democratic Union (CDU) experienced a large-scale cyber attack in June 2024 [5], and the German government attributed a previous attack on the Social Democrats (SPD) to the Russian hacking group APT28 [5]. Pro-Russian disinformation campaigns are expected to target the political landscape in Germany [5], particularly with the rise of the AfD [5], which opposes continued defense aid to Ukraine [5]. The far-right’s use of generative AI is extensive [2], with reports indicating that the AfD has published a substantial number of AI-generated posts [2], more than any other party [2].

A survey by the Bertelsmann Foundation revealed that a large majority of Germans view disinformation as a significant societal issue [2], with many believing it is used to sway political opinions [2]. The outcome of the election could have significant implications for NATO and European unity [5], highlighting the need for collaboration among Western intelligence agencies to combat disinformation that threatens democratic processes [5]. Cyabra’s AI technology addresses risks related to brand reputation [1], disinformation [1] [2] [3] [4] [5], and election threats by identifying fake profiles [1], harmful narratives [1], and generative AI content [1], underscoring the need for continued vigilance against these evolving threats.

Conclusion

The potential impact of AI-driven cyber attacks on the German election underscores the need for robust defenses and international cooperation. Germany’s proactive measures, including the establishment of a special task force, are crucial in mitigating these threats. The election’s outcome could have far-reaching implications for European unity and NATO, emphasizing the importance of continued vigilance and collaboration among Western intelligence agencies to safeguard democratic processes against disinformation and cyber threats.

References

[1] https://cyabra.com/our-reports/german-election-interference-fake-profiles-promoting-afd/
[2] https://news.sky.com/story/german-election-from-ai-influencers-to-russian-disinformation-the-far-right-is-getting-a-leg-up-online-13313167
[3] https://www.devdiscourse.com/article/law-order/3272337-fake-bot-network-targets-germanys-election-front-runner-with-ai-narratives
[4] https://cyabra.com/blog/1000-fake-accounts-disrupting-german-elections/
[5] https://www.cybersecurityintelligence.com/blog/ai—generated-deepfakes-threaten-the-german-election-8265.html