The first chapter of the AI snake oil book is now available online. It is 30 pages long and summarizes the book’s main arguments. If you haven't ordered it yet, we hope that reading the introductory chapter will convince you to get yourself a copy.
We were fortunate to receive positive early reviews from The New Yorker, Publishers Weekly (which featured the book in its top 10 science books for Fall 2024), and many other outlets. We're hosting virtual book events (City Lights, Princeton Public Library, Princeton alumni events) and have appeared on many podcasts to talk about the book (including Machine Learning Street Talk, 20VC, and Scaling Theory).
The single most confusing thing about AI
Our book is about demystifying AI, so right out of the gate we address what we think is the single most confusing thing about it: "AI" is an umbrella term for a set of loosely related technologies.
Because AI is an umbrella term, we treat each type of AI differently. We have chapters on predictive AI, generative AI, and AI used for content moderation on social media. We also have a chapter on whether AI is an existential risk. We conclude with a discussion of why AI snake oil persists and what the future might hold. By AI snake oil, we mean AI applications that do not (and perhaps cannot) work. Our book is a guide to identifying AI snake oil and AI hype. We also look at AI that is harmful even if it works well, such as face recognition used for mass surveillance.
While the book is meant for a broad audience, it does not simply rehash the arguments we have made in our papers or in this newsletter. It makes scholarly contributions of its own, and we wrote it to be suitable for adoption in courses. We will soon release exercises and class discussion questions to accompany the book.
What's in the book
Chapter 1: Introduction. We begin with a summary of the book's main arguments. We discuss the definition of AI (and, more importantly, why it is hard to come up with one), how AI is an umbrella term, what we mean by AI snake oil, and who the book is for.
Generative AI has made huge strides in the last decade. Predictive AI, on the other hand, is used to predict outcomes in order to make consequential decisions in hiring, banking, insurance, education, and more. While predictive AI can find broad statistical patterns in data, it is marketed as far more than that, leading to major real-world misfires. Finally, we discuss the benefits and limitations of AI for content moderation on social media.
We also tell the story of what led the two of us to write the book. The entire first chapter is now available online.
Chapter 2: How predictive AI goes wrong. Predictive AI is used to make predictions about people: Will a defendant fail to show up for trial? Is a patient at high risk of negative health outcomes? Will a student drop out of college? These predictions are then used to make consequential decisions. Developers claim predictive AI is groundbreaking, but in reality it suffers from a number of shortcomings that are hard to fix.
We have discussed the failures of predictive AI in this newsletter. But in the book, we go much deeper, using case studies to show how predictive AI fails to live up to the promises made by its developers.
Chapter 3: Can AI predict the future? Are the shortcomings of predictive AI inherent, or can they be resolved? In this chapter, we look at why predicting the future is hard, with or without AI. While we have made consistent progress in some domains, such as weather prediction, we argue that this progress cannot translate to other settings, such as individuals' life outcomes, the success of cultural products like books and movies, or pandemics.
Since much of our newsletter focuses on topics of current interest, we have never written about this here. Yet it is foundational knowledge that can help you build intuition about when to expect predictions to be accurate.
Chapter 4: The long road to generative AI. Recent advances in generative AI can seem sudden, but they build on a series of improvements over seven decades. In this chapter, we retrace the history of computing advances that led to generative AI. While we have written a lot about current trends in generative AI, in the book we look at its past, which is crucial for understanding what to expect in the future.
Chapter 5: Is advanced AI an existential threat? Claims about AI wiping out humanity are common. Here, we critically evaluate claims about AI's existential risk and find several shortcomings and fallacies in popular discussion of x-risk. We discuss approaches to defending against AI risks that improve societal resilience regardless of the threat of advanced AI.
Chapter 6: Why can't AI fix social media? One area where AI is heavily used is content moderation on social media platforms. We discuss the current state of AI use on social media, and highlight seven reasons why improvements in AI alone are unlikely to solve platforms' content moderation woes. We haven't written about content moderation in this newsletter.
Chapter 7: Why do myths about AI persist? Companies, researchers, and journalists all contribute to AI hype. We discuss how myths about AI are created and how they persist. In the process, we hope to give you the tools to read AI news with the appropriate skepticism and identify attempts to sell you snake oil.
Chapter 8: Where do we go from here? While the previous chapter focuses on the supply of snake oil, in the last chapter, we look at where the demand for AI snake oil comes from. We also look at the impact of AI on the future of work, the role and limitations of regulation, and conclude with vignettes of the many possible futures ahead of us. We have the agency to determine which path we end up on, and each of us can play a role.
We hope you will find the book useful and look forward to hearing what you think.
Early reviews
The New Yorker: "In AI Snake Oil, Arvind Narayanan and Sayash Kapoor urge skepticism and argue that the blanket term AI can serve as a smokescreen for underperforming technologies."
Kirkus: "Highly useful advice for those who work with or are affected by AI—i.e., nearly everyone."
Publishers Weekly: Featured in its Fall 2024 list of top science books.
Jean Gazis: "The authors admirably differentiate fact from opinion, draw from personal experience, give sensible reasons for their views (including copious references), and don’t hesitate to call for action. . . . If you’re curious about AI or deciding how to implement it, AI Snake Oil offers clear writing and level-headed thinking."
Elizabeth Quill: "A worthwhile read whether you make policy decisions, use AI in the workplace or just spend time searching online. It’s a powerful reminder of how AI has already infiltrated our lives — and a convincing plea to take care in how we interact with it."
Book launch events
September 24: City Lights (virtual, free)
September 30: Princeton alumni events (virtual, free)
October 24: Princeton Public Library (Princeton, free)
Podcasts and interviews
Beyond the podcasts mentioned above, we've appeared on many others that will air around the time of the book's release, and we will keep this list updated.
Preorder links
US: Amazon, Bookshop, Barnes and Noble, Princeton University Press. Audiobook, Kindle editions.
UK: Blackwell’s, Waterstones.
Canada: Indigo.
Germany: Amazon, Kulturkaufhaus.
India: Amazon.
The book is available to preorder internationally on Amazon.