

No, we don’t need a new federal law to address AI and elections.

By Megan Bailey and Andrew Korst

Americans may not know it, but more than three quarters of us use AI to interact with the world around us every single day. Whether through photo-editing apps that blur a blemish or predictive text in our emails, this technology is entrenched in our everyday lives.

If you think we rely on AI now, just wait. Most experts believe AI use and adoption will become significantly more widespread in the coming years. Unfortunately, Congress is poised to act on legislation that would not only stunt AI's development, but also infringe on our First Amendment rights by expanding federal overreach in ways that could harm the very fabric of our democratic processes.

AI legislation being considered by Congress tramples our free speech rights.

The measures proposed under S. 2770, the Protect Elections from Deceptive AI Act, to combat "materially deceptive" AI-generated content risk imposing overly broad restrictions that encompass routine political messaging tools. Imagine a scenario where standard image editing, like adding an American flag to a campaign photo or adjusting lighting in a video, could suddenly be deemed illegal.

This isn't just an inconvenience; it's a direct threat to foundational principles of free speech. The bill's vague standards, like the "reasonable person" test (would two reasonable people see or hear something and arrive at the same conclusion about it?), are impractical and have historically been rejected by courts for their broadness and ambiguity. For example, imagine a dog standing up. Is the dog standing on two legs or on all four? Reasonable people will arrive at both interpretations. Add partisan politics into that mix and you have a recipe for disaster.

Why more disclosure laws are unreasonable and unnecessary.

S. 3875, the AI Transparency in Elections Act, exacerbates the issue further by mandating burdensome disclaimers for AI-generated voices in ads, effectively replacing valuable communication time with bureaucratic red tape. This isn't just about protecting consumers; it's about whether such a requirement disproportionately stifles speech and deters effective communication by candidates and political groups. Whether candidates know it or not, the tools their campaign teams use to create nearly every political ad are covered by this bill.

The proposed legislation is federal overreach at its worst.

Both bills represent a significant federal overreach, encroaching on areas traditionally managed by states. They would invite misuse of the legal system by allowing politically motivated legal actions. This approach is not only unnecessary, given the existing state laws against fraud, but it could also lead to an inundation of federal courts with technology-focused disputes they are ill-prepared to handle.

How businesses are stepping up to put solutions in place.

The private sector is already addressing the challenges posed by AI in elections. Companies like Meta and Google have proactively updated their policies to manage AI-generated content. Moreover, incidents of misuse, such as the deceptive use of AI involving President Biden, have been quickly resolved without the need for heavy-handed government intervention. States are also adapting rapidly, suggesting that a flexible, decentralized approach to regulation is more effective and less disruptive. Given the rapid development of AI and the slow pace of Washington – in many instances, a feature, not a bug – businesses and more nimble state governments are better suited to address AI.

Businesses and local governments are best equipped to address concerns over AI and elections.

Instead of smothering AI through a national regulatory approach, we should empower consumers with more knowledge of what AI is and does. Rather than take AI technologies off the table, let's help create a more informed society that understands how to use and interact with AI. We can also enhance the capabilities of existing tools by creating economic conditions that encourage businesses to further invest in technologies that spot deepfakes and tackle other deceptive practices. This would ensure that regulations keep pace with technological advancements without compromising fundamental rights or the autonomy of states and private entities.

In our rapidly evolving technological landscape, it is critical that legislative measures are crafted with precise scope and foresight into future use cases. As we stand at the crossroads of technology and governance, we need policies that protect freedoms and foster innovation without imposing unnecessary constraints.

In its current form, the legislation addressed above threatens to hinder political expression and the dynamic capabilities of our electoral processes. Congress should reconsider these measures, keeping in mind the principles of federalism, free speech, and practical governance that are essential to our nation’s success.