Minnesota Implements Groundbreaking AI Regulation to Combat Election Misinformation Ahead of Presidential Vote

St. Paul, Minn. — Minnesota has taken a pioneering step by enacting a law aimed at curbing the misuse of artificial intelligence (AI) in the political sphere, one of the first such measures as the nation gears up for a presidential election. The statute, enacted in 2023, targets the dissemination of AI-generated content that falsely depicts a person without their consent, when shared with the intent to damage a candidate's reputation or influence an election's outcome.

The regulation reflects growing concern that AI could amplify election misinformation, a longstanding problem according to Minnesota Secretary of State Steve Simon. "AI offers a modern twist to the age-old problem of electoral deceit," Simon noted, emphasizing the state's vigilance against such tactics as the election season intensifies.

This legislation places Minnesota among more than a dozen states that have moved to legislate against deceptive AI content in elections, reflecting broader concern about the authenticity of information voters encounter during critical voting periods.

The specter of AI-related fraud in politics is not confined to state boundaries. Democratic Senator Amy Klobuchar of Minnesota is championing federal measures to ensure a nationally coordinated response as Election Day approaches. Klobuchar pointed to several incidents involving deceptive robocalls and fabricated digital endorsements that illustrate the sophistication of AI scams, which can make it difficult for voters to distinguish genuine endorsements from fabricated ones.

For instance, a robocall that mimicked President Joe Biden’s voice was investigated earlier this year for dissuading voter participation ahead of the New Hampshire primary. More alarmingly, a high-profile incident involving pop star Taylor Swift revealed AI-generated content falsely portraying her endorsement of a presidential candidate, underscoring the urgent need for comprehensive regulation.

In response to these threats, Klobuchar has proposed three bipartisan bills aimed at reinforcing the integrity of federal campaigns. One would outlaw the use of AI to create misleading images or ads about candidates in order to influence voter behavior. Another would mandate clear labeling for AI-altered political ads, except when the modifications are insignificant. A third would direct the Election Assistance Commission to develop guidelines for shielding elections from AI disruptions.

The challenge of securing bipartisan support for these measures remains, with Klobuchar actively seeking a consensus that could lead to robust federal safeguards.

Meanwhile, Simon and election officials from other states have been proactive about mitigating AI misinformation on social platforms. A recent initiative prompted a major social media platform to revise its AI chatbot, which had been disseminating inaccurate election information. Election-related questions posed to the AI are now redirected to CanIVote.org, a reliable, nonpartisan resource.

Underscoring the gravity of the situation, Simon urges voters to consult credible sources for election-related information. Both local vigilance and federal legislative efforts will be crucial as the U.S. confronts the challenges AI poses to elections, so that technological advances do not undermine the foundational principles of democracy.