Is AI-generated disinformation a threat to democracy?
An essay on the future of generative AI on social media
We just published an essay titled How to Prepare for the Deluge of Generative AI on Social Media on the Knight First Amendment Institute website. We offer a grounded analysis of what we can do to reduce the harms of generative AI while retaining many of the benefits. This post is a brief summary of the essay. Read the full essay here.
Most conversations about the impact of generative AI on social media focus on disinformation.
Disinformation is a serious problem. But we don’t think generative AI has made it qualitatively different. Our main observation is that when it comes to disinformation, generative AI merely gives bad actors a cost reduction, not new capabilities. The bottleneck has always been distributing disinformation, not generating it, and AI hasn’t changed that.
So, AI-specific solutions to disinformation, such as watermarking, provenance, and detection of AI-generated content, are barking up the wrong tree. They won’t work, and they solve a non-problem. Instead, we should bolster existing defenses such as fact-checking.
Furthermore, the outsized focus on disinformation means that many urgent problems aren’t getting enough attention, such as nonconsensual deepfake pornography. We offer a four-factor test to help guide the attention of civil society and policy makers to prioritize among various malicious uses.
Beyond malicious uses, many other applications of generative AI are benign, or even useful. While it is important to counteract malicious uses and hold AI companies accountable, an excessive focus on malicious uses can leave researchers and public-interest technologists playing Whac-a-mole when new applications are released, instead of proactively steering the uses of generative AI in socially beneficial directions.
With this in mind, we analyze many types of non-malicious synthetic media, describe the idea of pro-social chatbots, and discuss how platforms might incorporate generative AI into their recommendation engines. Note that non-malicious doesn’t mean harmless. A good example is filters on apps like Instagram and TikTok, which have contributed to unrealistic beauty standards and body-image issues in teenagers.
Our essay is the result of many discussions with researchers, platform companies, public-interest organizations, and policy makers. It is aimed at all these groups. We hope to spur research into understudied applications, as well as the development of guardrails for non-malicious yet potentially harmful applications of generative AI.