Governments should recognize electoral processes as critical infrastructure and enact laws to regulate the use of generative artificial intelligence.
Elections are scheduled in several countries worldwide in 2024, with potential geopolitical implications. Key events include the European Parliament elections in June and the U.S. presidential elections in November, alongside national and regional votes across Europe. The outcomes of these elections can shape Europe’s political strategy and its relations with China and Russia, making them attractive targets for vote manipulation.
According to the latest threat landscape report from the European Union Agency for Cybersecurity (ENISA), there has been an increase over the last 12 months in the use of AI-based chatbots, deepfakes, and similar technologies for fraudulent activities. The 2024 European Union elections face threats from content generated through these platforms. “Trust in the European Union’s electoral process will crucially depend on our ability to rely on cybersecurity infrastructure and the integrity and availability of information,” said the agency’s executive director, Juhan Lepassaar.
Broadening the analysis, elections are also scheduled in China, Japan, Russia, Brazil, and the United Kingdom in the coming months. The 2024 election cycle will therefore be a crucial moment for world politics and, for this reason, will not be immune to operations aimed at influencing outcomes. The 2016 U.S. presidential election and the Brexit referendum demonstrated how the manipulation of information can shape the perception of entire populations on issues of public interest. Various state actors will likely attempt to interfere with voting operations by supporting candidates whose policies align with the interests of their governments.
Artificial intelligence is undoubtedly a potent weapon in the hands of malicious actors, who could exploit it to manipulate the outcome of elections. Generative artificial intelligence enables disinformation campaigns on a vast scale and with unprecedented efficacy. An attacker could use it to forge realistic content, including images, videos, and audio, capable of swaying public opinion in myriad ways. Fabricated audio and video produced quickly and disseminated just before a vote could jeopardize the reputation of a political party or candidate. An AI-driven system can generate this content in real time, keyed to actual events and to the level of citizen engagement in public discussions on major social media platforms. Networks of chatbots could engage citizens in dialogues centered on political issues, fostering discontent and distrust towards targeted governments.
Artificial intelligence can be used not only to create content but also to select targets and define diffusion strategies that evade the controls implemented by the companies managing social media and instant messaging platforms. Candidates and malicious state actors alike may employ AI to analyze voting-pattern data, craft messages targeted at specific groups of voters, and analyze their social media habits.
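To make the defensive side of this cat-and-mouse game concrete, here is a minimal sketch, in Python, of the kind of coordinated-behavior heuristic platforms deploy: flagging accounts that post near-duplicate messages within a short time window. The posts, thresholds, and account names below are invented for illustration and do not represent any platform's actual pipeline.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy posts: (account, unix_timestamp, text). Real pipelines ingest
# millions of posts; this is only a minimal illustration.
posts = [
    ("acct_01", 1700000000, "Candidate X betrayed the voters, share this!"),
    ("acct_02", 1700000042, "Candidate X betrayed the voters, share this now!"),
    ("acct_03", 1700000090, "Candidate X betrayed voters - share this!"),
    ("acct_04", 1700500000, "Lovely weather at the rally today."),
]

TIME_WINDOW = 300    # seconds: near-simultaneous posting is suspicious
SIM_THRESHOLD = 0.8  # near-duplicate text similarity

def similar(a: str, b: str) -> float:
    """Crude text similarity in [0, 1] based on matching subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of accounts posting near-identical text within the window.
suspicious_pairs = [
    (p1[0], p2[0])
    for p1, p2 in combinations(posts, 2)
    if abs(p1[1] - p2[1]) <= TIME_WINDOW and similar(p1[2], p2[2]) >= SIM_THRESHOLD
]

print("Accounts posting near-duplicate content in lockstep:")
for a, b in suspicious_pairs:
    print(f"  {a} <-> {b}")
```

Note that generative AI undermines precisely this kind of naive duplicate matching: a model can paraphrase every message so that no two posts look alike, which is why the diffusion strategies described above are so hard to police.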
Some experts also hypothesize that artificial intelligence could be used to attack political representatives and entire parties. Generative artificial intelligence can be trained to create spear-phishing campaigns that are difficult to detect, targeting candidates and leaking the contents of their inboxes with the intent of manipulating public opinion about their conduct.
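Spear-phishing defenses illustrate the same asymmetry. A common first-line heuristic is to flag sender domains that closely resemble, but do not match, known legitimate ones. The sketch below, with invented domain names and an arbitrary threshold, shows the idea; AI-written message bodies will pass keyword filters untouched, so lookalike-domain checks remain one of the more robust signals.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of legitimate campaign domains (illustrative only).
KNOWN_DOMAINS = ["campaign-hq.org", "party-press.eu"]

def lookalike_score(sender_domain: str) -> float:
    """Highest similarity to any known domain (1.0 = identical)."""
    return max(
        SequenceMatcher(None, sender_domain, known).ratio()
        for known in KNOWN_DOMAINS
    )

def flag(sender_domain: str) -> str:
    if sender_domain in KNOWN_DOMAINS:
        return "ok: known domain"
    score = lookalike_score(sender_domain)
    if score >= 0.85:  # close but not identical: likely typosquat
        return f"suspicious: lookalike of a known domain (score={score:.2f})"
    return "unknown: apply normal scrutiny"

print(flag("campaign-hq.org"))    # ok: known domain
print(flag("campa1gn-hq.org"))    # suspicious: lookalike
print(flag("newsletter.example")) # unknown: apply normal scrutiny
```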
The AI-based techniques for manipulating public opinion are diverse, and we are likely still unprepared to counter them. In many cases the fight is entrusted to the companies that manage social platforms, whose strategies are primarily driven by profit. This is far from negligible: combating the abuse of AI-based systems requires significant investment by these companies and data management practices compliant with the differing regulatory frameworks of the countries in which they operate.
The need to police online content comes at a time when many of the largest technology companies have reduced staff dedicated to fighting misinformation. “The 2024 election will be a disaster because social media does not protect us from AI used to generate false content,” Eric Schmidt, former Google CEO and co-founder of Schmidt Futures, recently told CNBC.
Having outlined the threats, it is legitimate to ask where attacks may originate, and to do so it is necessary to understand which states invest the most in artificial intelligence research. According to a report published by McKinsey & Company, the countries most advanced in artificial intelligence investment are the United States, China, the United Kingdom, Germany, and Japan. These countries lead innovation in artificial intelligence in research, development, and investment. The United States is recognized as a world leader, with a research and development ecosystem that few can match. China is investing heavily, with the ambitious goal of becoming the world leader by 2030. The United Kingdom and Germany are recognized as driving forces in the field. Artificial intelligence is already having a significant impact on industries such as health, finance, and manufacturing. The countries that invest the most will inevitably be the most advanced and, in the medium to long term, are unlikely to accept international regulations that would limit their development and hard-won technological hegemony.
Artificial intelligence will transform the world we live in, and governing its evolution means acquiring a lead that will be difficult for others to close in the coming years. Some of these countries will also be able to use artificial intelligence to influence geopolitics to their advantage, circumventing the timid regulatory efforts we are beginning to see. The threats AI poses to elections are real and growing, and it is vital that governments and citizens be made aware of the risks and take steps to protect themselves.
Citizens must be shown how to identify these campaigns, encouraged to be critical of the content they consume through the media, and invited to report suspicious activity to platform operators. Governments, however, are entrusted with the most difficult task: implementing technological measures capable of identifying the use of artificial intelligence for the purposes described. Control rooms are needed that operate in an international context and that involve and support the companies whose services can be used as attack vectors. Finally, governments should unanimously treat electoral processes as critical infrastructure, to be protected with ad hoc technological solutions and with laws regulating the use of generative artificial intelligence in elections.
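On the detection side, one family of measures being explored is statistical analysis of suspected machine-generated text. The sketch below scores a passage's perplexity under a small public language model (GPT-2, via the Hugging Face transformers library); unusually low perplexity can hint at machine generation. The threshold is invented, and such detectors are known to be unreliable in isolation, which is why they would complement, not replace, provenance standards and regulation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public model used only for illustration; production detectors
# combine many signals (provenance metadata, watermarks, account behavior).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower can suggest machine-like text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

sample = "The candidate's record on infrastructure speaks for itself."
score = perplexity(sample)
# Threshold is purely illustrative; real systems calibrate per domain.
print(f"perplexity={score:.1f}",
      "-> flag for review" if score < 20 else "-> no signal")
```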