The Honorable Amy Klobuchar
Chair, Senate Rules Committee
United States Senate
305 Russell Senate Office Building
Washington, D.C. 20510

The Honorable Deb Fischer
Ranking Member, Senate Rules Committee
United States Senate
305 Russell Senate Office Building
Washington, D.C. 20510
Re: AI and Elections Legislation (S. 2770, S. 3875, S. 3897)
Dear Chair Klobuchar, Ranking Member Fischer, and members of the Committee:
On behalf of Americans for Prosperity (AFP) and the millions of American individuals and families it represents across the country, we write to express our deep concerns with the various pieces of legislation concerning Artificial Intelligence (AI) and elections being considered today. AFP strongly supports safe and secure elections; however, it is impossible to ignore the problems within these bills, which not only undermine the promise and potential of this technology but also threaten the speech rights of countless Americans, organizations, and potential candidates seeking elected office. These proposals represent a classic case of a solution in search of a problem, and we urge you not to move them out of committee for further consideration by the upper chamber.
The Protect Elections from Deceptive AI Act ("PEDA Act," S. 2770) would amend the Federal Election Campaign Act of 1971 ("FECA"), as amended, to prohibit the distribution of unpaid "materially deceptive" AI-generated audio or visual media concerning federal candidates. This approach is counterproductive. The PEDA Act pairs an overly vague definition of "deceptive AI-generated audio or visual media" with other terminology that is highly subjective and similarly undefined. As currently crafted, the PEDA Act would capture much of the existing and commonly used technology that candidates and other speakers already employ to create campaign and election-related ads with no aim of misleading anyone.
Depending on the whim of the federal bureaucracy, any number of commonplace editing and production techniques could be deemed "materially deceptive."
Additionally, the PEDA Act relies on a "reasonable person" standard to determine whether AI-generated content is "materially deceptive." This is legally problematic and chilling. Under the best of circumstances, different reasonable people have different standards for what is and is not deceptive. But in the world of partisan politics, where everyone's views are colored by their individual political valence, consistent application becomes impossible, and the standard can only lead to weaponization of the law among rivals. For this reason, laws prohibiting or regulating speech under a "reasonable person" standard have historically been struck down as unconstitutional.1 This provision will face scrutiny in the courts, which would likely find the standard overbroad, especially considering that AI-powered software tools have been used in political ads for years without incident.
The PEDA Act delegates authority to the office of the general counsel within the Federal Election Commission (FEC) to determine what is and is not "materially deceptive" AI-generated content. The FEC's expertise is election law, not technology, and under current law the agency may only regulate the paid political speech of independent actors. Giving the FEC the responsibility and power to determine the "truthfulness" of a graphic converts the agency into a ministry of political truth. As when the Department of Homeland Security sought to establish its "Disinformation Governance Board," this is another example in a long series of attempts to turn government regulators into political truth enforcers.
Most alarmingly, this legislation creates a new federal cause of action, empowering federal candidates depicted in any AI-generated content to seek injunctive relief from the courts to remove the content, along with damages and attorneys' fees. Candidates would be able to tie up their rivals' campaigns in court under the pretense of allegedly deceptive advertising, effectively silencing their opposition and chilling the speech of any person or organization who depicts them unfavorably. More dangerously, this bill regulates unpaid communications and reaches far beyond large-scale television, digital, or mass-mailing campaigns. Across various social media platforms, politically diverse content creators use AI tools to create content and share it on their accounts. The PEDA Act would empower federal candidates to bully content creators, journalists, and average Americans active on social media by exposing them to legal action for simply exercising their constitutionally protected rights in ways the candidates do not like.
The PEDA Act includes special carveouts for institutional media but not for independent journalists and bloggers. Media and news are constantly evolving, and this bill takes a significant step back from that reality; it is also inconsistent with how the FEC has interpreted and applied the media exemption to all manner of new forms of press. The FEC has long treated bloggers and other digital platforms and communicators as media, and this bill amounts to picking winners and losers, where the losers are subjected to FEC complaints and potential action by federal candidates and others. The bill ignores technological progress and the evolution of a free and fair press, which the FEC has successfully navigated with clarity and respect for the First Amendment.
Moreover, many nonprofits, advocacy organizations, and independent voices on the internet are increasingly looking for ways to leverage technology to improve their capabilities. As written, the PEDA Act would allow frivolous FEC complaints to be filed against nonprofits that use AI-generated content in their various product streams, such as blog posts, op-eds, social media posts, and other unpaid political speech products, and it would capture non-political issue speech as well. The disproportionate impact this would have on smaller organizations cannot be overstated and would be a major blow to the marketplace of ideas and policy discourse in our country.
Conservatives have been wary of the weaponization of the government to target them and their speech in recent years, understandably so, and this legislation would turbocharge such outcomes. It invites political actors to use the legal process as punishment against their rivals. Expanding the scope of regulated speech with vague terms and creating a new cause of action for injunctive relief would almost assuredly result in a race to the bottom, with even more complaints lodged against Republican and conservative campaigns, nonprofit and organizational speakers, and average persons without the resources or knowledge to navigate laws like these proposed bills.
S. 3875, the AI Transparency in Elections Act ("ATEA"), would amend FECA to require a disclaimer on all political ads that contain content "substantially generated by" AI. This would come on top of existing disclaimer requirements, such as the "paid for by" disclaimer and the "stand by your ad" disclaimer for candidate-authorized ads.
In the current political advertising climate, short content is king, with video and audio spots sometimes lasting as little as 10 seconds. The disclaimers this legislation requires would force certain ads to give as much time to the various disclaimers as to the message itself. This increases the cost of advertising, hinders the ability of would-be challengers to manage their limited resources effectively, and significantly reduces the impact of the message.
This is not the first time Congress has considered legislation concerning online political advertising. Notably, the For the People Act of 2021 (S. 1), the DISCLOSE Act of 2023 (S. 512), and the Honest Ads Act (S. 543) all attempted to regulate speech in political advertisements in different contexts. When it comes to mandatory disclaimers, the government must show both a compelling state interest and a narrowly tailored approach to furthering that interest. To date, the Supreme Court has recognized only two compelling interests: preventing political corruption or the appearance of corruption, and the informational interest voters have in knowing who is contributing to and influencing their lawmakers. A disclaimer triggered by such a broad array of standard political ad tools would likely have to be applied to nearly all current political ads, and there is little evidence to support a compelling need for it.
Similar to S. 2770, S. 3875 is expansive in nature, covering not only express advocacy but also issue ads that may merely mention or depict a candidate.
The FEC has the important and sensitive responsibility of regulating political speech in a very limited way. The vaguely defined terms and broad scope of covered communications, both expressly political and issue-related, put the burden of deeply wrestling with technology on the FEC, which is neither its expertise nor its mission. The ATEA also makes several changes to the FEC's enforcement processes that will increase Commission backlog and litigation. The legislation would reduce the waiting period for a complainant to sue the FEC for failing to act from 120 days to 45 days. Presently, the FEC is involved in eight lawsuits alleging administrative delay by the agency. Not only would a shorter time period draw more litigation, it would also force the FEC to use its limited resources to prioritize AI-related complaints over its more significant statutory duties. The FEC would be forced to confront bogus AI-related complaints from activists looking to suppress their opposition instead of genuine enforcement matters involving actual campaign finance issues.
Further, the legislation contains a provision under which a target of a complaint who fails to respond promptly to an FEC notification of an AI disclaimer violation is deemed to have automatically admitted wrongdoing, in direct contravention of basic constitutional principles of due process. This, coupled with the aggressive timeline imposed on the FEC for handling AI-related matters, is a recipe for disaster.
S. 3897 charges the Election Assistance Commission (EAC) with working with the National Institute of Standards and Technology (NIST) to produce a report containing voluntary guidelines for election offices that address the use and risks of AI in the administration of elections. While well intentioned, this assignment amounts to a mismanagement of taxpayer dollars through duplicated work. The EAC, to its credit, has already been producing content aimed at addressing some of the interests underpinning this legislation.
For example, in August 2023, the EAC produced an AI toolkit that sought to explain what the technology is, the tradeoffs associated with it, and some best practices. Additionally, just last month, the EAC published a guide on AI and cybersecurity.
We have already seen the Biden administration prove more than willing to leverage voluntary guidelines to hammer companies into preferred courses of action, and the risk of this guidance becoming informally required is high. On October 30, 2023, President Biden signed his AI Executive Order, abusing emergency powers afforded by the Defense Production Act to wrap AI companies in a swath of red tape. Prior to the Order's issuance, a key part of the administration's strategy was securing voluntary commitments from leading AI companies. The day after the EO was signed, Secretary of Commerce Gina Raimondo stated in a CNBC interview that the Department of Commerce intended to leverage those voluntary commitments and the authority of the Executive Order to hold those same companies accountable, thus turning voluntary action into compelled compliance.
Senator Ted Cruz and former Senator Phil Gramm highlighted some of these same concerns with the Biden administration’s approach to AI in a piece in the Wall Street Journal, likening the abuse to a mafia shakedown.
Americans for Prosperity is keenly engaged in the discussions surrounding emerging technologies like AI and their impact on political speech and other issues. We would welcome the opportunity to discuss this matter further with the Committee and its members.
Private actors, companies, and existing law are already working together to protect both the public from being misled and the rights of speakers. When President Biden announced his reelection effort, the Republican National Committee (RNC) released an ad painting a picture of what, in its view, a second Biden term might look like. The ad was built entirely with AI imagery, which the RNC proudly disclosed in the ad itself. That was a high-end production; on the flip side, there are countless examples of less polished uses of AI video. These contrasting examples serve as a humble reminder that most current uses of AI are unlikely to be so convincing that an individual cannot tell the content is not genuine. The mere use of AI to generate content for an ad is not the primary driver of the concerns individuals and members of Congress may have with the underlying content of political ads.
And when AI is used for ill, there are signs that the system is already working to protect citizens. On January 21, 2024, numerous New Hampshire residents received robocalls featuring an AI-generated voice of President Biden encouraging them not to vote in the state's primary later that week. The calls gathered plenty of public attention, and by the next day they had been identified as likely AI-generated. Within three weeks, the source of the robocall was determined, and the FCC issued a declaratory ruling clarifying that AI-generated voices in robocalls are "artificial" under the existing definition in the Telephone Consumer Protection Act. Existing institutions have the power and ability to handle these situations more directly and with greater dispatch than Congress.
Additionally, the private sector is already responding by issuing user policies governing AI content and use on its platforms and services. For example, in September 2023, Google announced that it would require disclosure of AI use in political ads. Shortly thereafter, in November 2023, Meta and Microsoft announced new requirements for AI-generated content appearing in political ads. These companies are also investing significant resources in technology to detect AI-generated imagery and streamline responses to inauthentic content.
The landscape surrounding AI is constantly evolving, with new breakthroughs and challenges emerging all the time. Congress moves at a much slower pace than technology-driven innovation, and that is a good thing. Top-down mandates do not allow for flexible solutions and foreclose emerging use cases for the technology as more companies and bright entrepreneurs experiment with it. In the process, such mandates will significantly chill core political speech and will likely stifle new advances in AI and other innovative technologies.
We cannot and should not allow fear of novel technologies to drive legislative proposals. These bills will ultimately do little to further secure our elections but much to threaten the promise and potential of AI and trample on the free speech rights of countless Americans looking to engage in democracy.
At Americans for Prosperity, we believe in people. That fundamental lens on society informs the way we think about the policy matters under consideration, whether at state houses around the country or in our nation's capital. While well-intentioned, these proposals resemble a solution in search of a problem, with a prescription worse than the disease itself. They represent an intrusion by the federal government into an area of law largely handled by the states, risking a regime that will undermine trust in the very institutions responsible for safeguarding our elections.
It is for these reasons that we urge you not to advance S. 2770, S. 3875, and S. 3897 out of Committee, and instead to work to address the glaring issues in these proposals. We stand ready to work with you and the members of the Committee to strike the right balance that ensures safe and secure elections, robust public debate and discussion, and technological advancements available to all.
Sincerely,
James Czerniawski
Senior Policy Analyst, Technology and Innovation
Americans for Prosperity
Scott Blackburn
Legal Portfolio Manager
Americans for Prosperity