In brief
Leading technology companies have agreed to help prevent deceptive AI content from influencing the many elections taking place worldwide in 2024. The commitment was announced at the Munich Security Conference (MSC).
Signatories to the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections" include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X. With their pledge, these tech leaders acknowledge the potential harm AI-generated content can cause to democratic elections. In a statement, they announced that they would "work collaboratively on tools to detect and address online distribution of [deceptive] AI content, drive educational campaigns, and provide transparency, among other concrete steps".
This commitment comes at a crucial time, with over four billion people across more than 40 countries set to vote in elections this year. Within the broader debate on AI and ethics, the growing use of AI in political discourse has raised particular concerns about its potential impact on geopolitical developments.
AI and Geopolitics
AI has become an increasingly common tool for influencing political discourse. Ahead of elections, AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates and other stakeholders have been circulated with the goal of deceiving voters. In some cases, these hoaxes have spread faster than news from reputable media sources because they are sophisticated enough to be difficult to expose as fakes. False information about when, where, and how the public can vote hinders people's access to democratic elections and undermines their very essence.
Combatting the spread of such misinformation presents challenges for national authorities, as the rapid distribution of deceptive content online currently outpaces regulatory efforts to address it.
The Tech Accord
In response to these challenges, tech companies have initiated collaborative efforts to combat deceptive AI content. These include developing tools to detect and address the online distribution of such content, driving educational campaigns, and enhancing transparency. Beyond specific projects, the signatories have agreed on the following eight commitments:
- Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate
- Assessing models to understand the risks they may pose regarding Deceptive AI Election Content
- Seeking to detect the distribution of this content on their platforms
- Seeking to appropriately address this content detected on their platforms
- Fostering cross-industry resilience to Deceptive AI Election Content
- Providing transparency to the public regarding their approach
- Engaging with a diverse set of global civil society organizations and academics
- Supporting efforts to foster public awareness, media literacy, and all-of-society resilience
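By way of technical illustration only (this sketch is ours, not part of the accord's text), one family of transparency and detection tools the first commitment could cover is content provenance metadata, such as the C2PA "Content Credentials" standard that several signatories already support. A minimal Python sketch of a rough presence check for such metadata, under the assumption that a file may carry an embedded C2PA manifest, might look like this:

```python
# Illustrative sketch only (our assumption, not the accord's method):
# a rough heuristic check for an embedded C2PA/Content Credentials
# manifest. C2PA manifests are carried in JUMBF boxes (ISO/IEC 19566-5)
# labeled "c2pa", so both byte signatures appearing together suggests,
# but does not prove, that provenance metadata is present.

def appears_to_carry_c2pa_manifest(path: str) -> bool:
    """Return True if the file contains the JUMBF/C2PA byte signatures."""
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    import sys
    for media_file in sys.argv[1:]:
        status = ("possible C2PA manifest"
                  if appears_to_carry_c2pa_manifest(media_file)
                  else "none found")
        print(f"{media_file}: {status}")
```

Note that the presence or absence of these markers proves nothing on its own; actual verification of who created or edited a piece of content requires full cryptographic validation of the manifest with a complete C2PA implementation.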
Your dedicated team at Baker McKenzie is here to help you navigate the ever-evolving landscape of AI regulation.