While there is limited empirical research on GAI in political ads, our reading of the literature on online misinformation, political ads, and bias in AI models offers five important insights into the potential harms of GAI in political ads:
- First, research suggests that the persuasive power of both political ads and online misinformation is often overstated. Political ads likely have more of an effect on behavior, such as voter turnout and fundraising, than on persuasion.
- Second, political ads likely have the greatest impact in smaller, down-ballot races where there is less advertising, oversight, or familiarity with candidates.
- Third, GAI content has the potential to replicate bias, including racial, gender, and national biases.
- Fourth, research on political disclaimers suggests that watermarks and disclaimers are unlikely to significantly curb risks.
- Fifth, significant holes in the research remain.
These insights from the literature help to formulate recommendations for policymakers that can mitigate the potential harm of GAI without unduly constraining its potential benefits. Research suggests that policy should focus more on preventing abuse in smaller, down-ballot races and on mitigating bias than on banning deceptive GAI content or requiring disclaimers or watermarks. Although the research points in this direction, holes in the literature remain. We should therefore approach these insights from a position of curiosity rather than certainty, and conduct additional research into the impact of GAI on the electoral process.

Building on our assessment of the academic literature, we offer ten recommendations for policymakers seeking to limit the potential risks of GAI in political ads. These recommendations fall into two categories: First, public policy should target electoral harms rather than technologies. Second, public policy should promote learning about GAI so that we can govern it more effectively over time.