

Something weird happened on November 19, 2019.
When a professor shares scholarly slides online, they are usually intended for a niche group of peers. You’d be lucky if 20 people looked at them. But that day, the slides Arvind released went viral. They were downloaded tens of thousands of times and his tweets about them were viewed 2 million times.


Once the shock wore off, it was clear why the topic had touched a nerve. Most of us suspect that a lot of AI around us is fake, but don’t have the vocabulary or the authority to question it. After all, it’s being peddled by supposed geniuses and trillion-dollar companies. But a computer science professor calling it out gave legitimacy to those doubts. That turned out to be the impetus that people needed to share their own skepticism.
Within two days, Arvind’s inbox had 40-50 invitations to turn the talk into an article or even a book. But he didn’t think he understood the topic well enough to write a book. He didn’t want to do it unless he had a book’s worth of things to say, and he didn’t want to simply trade on the popularity of the talk.
That’s where Sayash comes in. The two of us have been working together for the last two years to understand the topic better.
In the fall of 2020, Sayash took a course on Limits to Prediction, offered by Arvind and sociology professor Matt Salganik. It took a critical look at the question: given enough data, is everything predictable?
It was a course at the cutting edge of research, and the instructors learned together with the students. Through our readings and critical analysis, we confirmed our hunch that in virtually every attempt to predict some aspect of the future of the social world, forecasters have run into strong limits. That’s just as true for predicting a child’s outcomes in school as it is for predicting massive geopolitical events. It didn’t matter much which methods forecasters used. What’s more, the same few limitations kept recurring. That’s very strong evidence that the limits are inherent.
The only exception to this pattern came from political scientists who were using AI to predict civil wars. According to a series of recent papers, AI far outperformed older statistical methods at this task.
We were curious, and decided to find out why. What we found instead was that each paper claiming superior performance for AI methods suffered from errors. When the errors were corrected, the AI methods performed no better than statistical methods that were two decades old. This shocked us: peer-reviewed studies that had been cited hundreds of times had built consensus around an invalid claim.
These findings confirmed Sayash’s previous experience at Facebook, where he saw how easy it was to make errors when building predictive models and to be over-optimistic about their efficacy. Errors could arise for many subtle reasons and often weren’t caught until the model was deployed.
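To make one such subtle error concrete, here is a minimal sketch of a pitfall known as data leakage, in which information from the evaluation data quietly influences the training process. The example is ours, written in Python with scikit-learn purely for illustration; it is not drawn from the civil war papers or from any system Sayash worked on. The features and labels below are pure random noise, so an honest evaluation should score near chance.

```python
# A made-up, self-contained illustration of data leakage (not code from any
# of the papers we studied). The data is pure noise, so an honest evaluation
# should hover around chance (AUC ~ 0.5).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # 100 samples, 5000 random features
y = rng.integers(0, 2, size=100)   # random binary labels: nothing to predict

# Leaky pipeline: the "best" features are picked using the full dataset,
# including the rows that cross-validation will later treat as unseen.
X_selected = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky_auc = cross_val_score(LogisticRegression(), X_selected, y,
                            cv=5, scoring="roc_auc").mean()

# Honest pipeline: feature selection is refit inside each training fold,
# so the held-out rows stay truly unseen.
honest = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
honest_auc = cross_val_score(honest, X, y, cv=5, scoring="roc_auc").mean()

print(f"leaky AUC:  {leaky_auc:.2f}")   # inflated, despite the data being noise
print(f"honest AUC: {honest_auc:.2f}")  # close to 0.5, as it should be
```

In the leaky version, the seemingly informative features are chosen while peeking at the very rows later used for testing, so the reported score looks impressive even though there is nothing to learn; keeping the feature selection inside cross-validation removes the illusion. Mistakes of this general flavor are easy to make and easy to miss.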
After three years of research, separately and together, we’re ready to share what we’ve learned. Hence this book. But the book isn’t just about sharing knowledge. AI is being used to make impactful decisions about us every day, so broken AI can and does wreck lives and careers. Of course, not all AI is snake oil — far from it — so the ability to distinguish genuine progress from hype is critical for all of us. Perhaps our book can help.
We hear the clock ticking as we write this. The dubious uses of AI that we were concerned about, such as in the areas of criminal risk prediction and hiring, have massively expanded in the last few years. New ones are introduced every week. The list of harms keeps multiplying. Mark Zuckerberg promised Congress that AI would solve the problem of keeping harmful content off the platform. The role of social media in the January 6 attack reminds us how poorly those efforts are currently working.
Meanwhile, the public discourse about AI has gone beyond parody, getting distracted by debates such as whether AI is sentient. Every day, developers of AI tools make grand claims about their efficacy without any public (let alone peer-reviewed) evidence. And, as our research has shown, we need to be skeptical of even the peer-reviewed evidence in this area.
Fortunately, there’s a big community pushing back against AI’s harms. In the last five years, the idea that AI is often biased and discriminatory has gone from a fringe notion to one that is widely understood. It has gripped advocates, policy makers, and (reluctantly) companies. But addressing bias isn’t nearly enough. It shouldn’t distract us from the more fundamental question of whether and when AI works at all. We hope to elevate that question in the public consciousness.
This isn’t a regular book. We have already written a lot about this topic and plan to share our ideas with you every step of the way. We’re excited that Princeton University Press has agreed to publish our book. We were taken aback when our editor said this topic deserves a trade book with wide circulation. We only have experience with academic publishing, and have never done something like this before. But we’re excited to be doing it and we hope you follow along.
If you’re concerned about AI snake oil, there are many things you can do today. Take a look at the overview of our book. Educate your friends and colleagues about the issue. Read the AI news skeptically — in fact, we’ll soon be sharing our analysis of how pervasive AI hype is in the media.
When you’re making purchase decisions about AI-based products and services, be sure to ask critical questions, and don’t be suckered into buying snake oil. If the vendor can’t give you a coherent explanation you can understand, the problem is not you; it’s them. It might be because the tech doesn’t work.
As a citizen, exercise your right to protest when you’re subject to algorithmic decisions without transparency. Engage with the democratic process to resist surveillance AI.
And finally, a plea to our fellow techies and engineers: refuse to build AI snake oil.
This is such a great initiative. I feel the same way at work: things are expected to happen magically using "AI". Now I have something to share when I hear this next time.
Your work is going to train a lot of smart minds and give them a whole new perspective or even new careers. Love it and looking forward to more of it!! Thanks again for doing this.
To me, it looks like you started from the wrong end with that 'limits of prediction' course.
Trust in predictive models of human behavior is important to the current state of many academic fields. Yet there is a soft 'proof' that these models are all reduced-information models. We also know that we are working off of past information, and that the people being modeled may have information that is similar to, or different from, what the modeler has. From those elements there are some inferences that can be made, and they suggest several hard limits to forecasting human behavior.
The first question of the soft proof is 'in terms of information and complexity, can a human mind contain a full fidelity model of a human mind?' Specific answers do not really matter, so long as they are not close to 'one human mind can hold infinite human minds at perfect fidelity'. The finding necessary to the later analysis is that there is some number of minds that a human can only understand with reduced fidelity, not full fidelity.
One result is that History is hurt and helped by the fact that the information we have about previous generations is very lossy compared to what that previous generation had, or believed that it had, about itself.
Reduced fidelity modeling is a significant thing to be certain of, because it can undermine our confidence that we can measure enough to statistically estimate that one sample and another sample are the same, or that our statistical inferences are really valid.
That we form theories from past behavior is important, because there is a critical difference between human behavior and, to take one example, the elasticity of copper at small deformations. We sometimes talk about matter having 'memory', retaining information about this or that bit of its history. But for a large enough group of humans, at least some individuals in that group are forming their own models, and changing their behavior based on those models.
For a large enough group, for just about all of history and prehistory, there has been a complicated arms race between manipulators guessing at models of most individuals in the group, and the manipulated, who, when they feel the manipulation is malicious, are motivated to change their behavior or their mental models. This may average out 'most' of the time.
An academic modeler has at least three potential problems.
1. People in the modeled group with the same information, building a similar model of behavior, and then using it to identify a choice of behavior that would break the model. Carrying out this behavior should be rare if the group does not believe the academic modeler to be a malicious manipulator.
2. People in the modeled group with different information, who change their behavior based on factors the modeler cannot predict.
3. Information transfer from the modeler, to the modeled group. The general public definitely has at least some information about claims within academia.
For specific domains of behavior forecasting, narrower issues of bad models leading to bad forecasts can be found. One of the frequent ones is identifying N-1 stages in some specialized history, then using that pattern to forecast and push an Nth stage. Industry 4.0 is an example about which I had, and still have, profound reservations.
Where an academic field has been very careless about those three potential problems and other constraints, the academic field can be profoundly overconfident in behavior forecasts that are not correct, or can be accurately predicted to be wrong. For a chaotic system, it may be impossible to predict what actually happens, but it may be easy to verify that a specific model is probably wrong.
I think that some misuses of AI are clearly cases where an academic field's behavior forecasting confidence is at least partly wrong, and automating that forecasting does not make it better.
There is clearly also an issue of domains that do not involve human behavior, and are additionally 'well understood', where automating the human process has challenges. For a mental process, some steps may not be noticed. If you do not have a record that a mental step exists, and you can automate and verify the other steps, the missing step may have an important impact on the quality of results. For a physical process, anyone deep into manufacturing can learn that there are approaches to tasks that are still very hard to automate well, and also tasks that can be very effectively automated.
There is definitely skill in figuring out which things can be productively automated using what methods. I'm personally not interested in neural nets, I find other automation schemes less confusing. I don't have much understanding of many domains, and prefer to be confident I understand what I am doing before I automate.