SAN FRANCISCO — In a decisive move against the manipulation of election-related content, California Gov. Gavin Newsom on Tuesday signed a trio of bills banning the creation and distribution of deceptive AI-generated images and videos in political ads. The legislation targets the evolving threat of “deepfakes,” which have become a major concern for electoral integrity.
The new regulations, effective immediately, prohibit the use of deepfakes from 120 days before an election through 60 days after, a window intended to shield voters from misleading digital content during and just after elections. The laws also allow courts to intervene swiftly to halt the distribution of fraudulent materials and to impose financial penalties on violators.
Gov. Newsom emphasized the importance of these laws, stating, “Safeguarding the integrity of our elections is fundamental to our democracy. It is imperative that we prevent artificial intelligence from being used as a tool to foster misinformation and erode public trust in such a politically charged environment.”
Social media platforms, including Elon Musk’s X, Meta’s Facebook and Instagram, and ByteDance’s TikTok, are required to remove misleading material flagged under the new legislation. Political campaigns must also disclose when their advertisements use material that has been altered by artificial intelligence.
These measures were signed into law at an event featuring a discussion between Newsom and Salesforce CEO Marc Benioff, held during the major software company’s annual conference in San Francisco.
The state initiative aligns with action at the federal level: on the same day, members of Congress introduced a similar bill aimed at curbing AI-driven election misinformation. The federal proposal would give the Federal Election Commission authority to regulate the use of AI in electoral contexts.
AI-generated deepfakes have been identified as a significant and growing danger, outpacing even AI-assisted cyber threats. Research by groups such as Google’s DeepMind has documented the rising prevalence of politically themed deepfakes featuring politicians and celebrities, underscoring the urgency of the new protections.
Recent incidents ahead of the 2024 elections involving AI image tools from companies such as OpenAI, the maker of ChatGPT, and Microsoft have fueled electoral misinformation concerns. High-profile deepfakes targeting public figures including Taylor Swift and President Joe Biden earlier this year prompted the White House to raise alarms over the misuse of AI technology.
Similar concerns have been raised in the United Kingdom, where authorities have been warned about potential AI-driven misinformation targeting the country’s 2024 elections.
This decisive action by California and simultaneous federal initiatives underscore the growing recognition of AI’s potential to disrupt democratic processes and the proactive steps lawmakers are taking to mitigate this threat. As technology evolves, so too does the legislative response, aiming to maintain a fair and trustworthy electoral environment.