Global: Tech Accord to Combat Deceptive Use of AI in 2024 Elections

In brief

Leading technology companies have agreed to help prevent deceptive AI content from influencing the many elections worldwide in 2024. This commitment was announced at the Munich Security Conference (MSC).

Signatories to the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections" include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X. With their pledge, these tech leaders acknowledge the potential harm AI-generated content can cause to democratic elections. In a statement, they announced that they would "work collaboratively on tools to detect and address online distribution of [deceptive] AI content, drive educational campaigns, and provide transparency, among other concrete steps".

This commitment comes at a crucial time, with over four billion people across more than 40 countries set to vote in elections this year. Amid the broader debate about AI and ethics, the increasing use of AI in political discourse has raised concerns about its potential impact on geopolitical developments.

AI and Geopolitics

AI has become an increasingly common tool for influencing political discourse. Ahead of elections, AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates and other stakeholders have been spread with the goal of deceiving voters. In some cases, these hoaxes have spread faster than news from reputable media sources, as they are now sophisticated enough to be difficult to unmask as fakes. False information about when, where, and how the public can vote hinders people's access to democratic elections, robbing those elections of their essence.

Combating the spread of such misinformation presents challenges for national authorities, as the rapid distribution of deceptive content online currently outpaces regulatory efforts to address it.

The Tech Accord

In response to these challenges, tech companies have initiated collaborative efforts to combat deceptive AI content. This includes developing tools to detect and address online distribution of such content, driving educational campaigns, and enhancing transparency. Besides specific projects, the signatories have agreed on the following eight commitments:

  1. Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate
  2. Assessing models to understand the risks they may pose regarding Deceptive AI Election Content
  3. Seeking to detect the distribution of this content on their platforms
  4. Seeking to appropriately address this content detected on their platforms
  5. Fostering cross-industry resilience to Deceptive AI Election Content
  6. Providing transparency to the public regarding their approach
  7. Engaging with a diverse set of global civil society organizations and academics
  8. Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

Your dedicated team at Baker McKenzie is here to help you navigate the ever-evolving landscape of AI regulation.

Contact Information

Copyright © 2024 Baker & McKenzie. All rights reserved.