People keep anthropomorphizing AI. Here’s why
Companies and journalists both contribute to the confusion
People have been anthropomorphizing AI at least since ELIZA in the 1960s, but the new Bing chatbot seems to have kicked things up a notch.
This matters, because anthropomorphizing AI is dangerous. It can make the emotionally disturbing effect of misbehaving chatbots much worse. People might also be more inclined to follow through if a bot suggests real-world harm. Most importantly, there are urgent and critical policy questions on generative AI, and if ideas like “robot rights” get even a toehold, they may hijack or derail those debates.
Experts concerned about this trend have been trying to emphasize that large language models are merely trying to predict the next word in a sequence. LLMs have been called stochastic parrots, autocomplete on steroids, bullshit generators, pastiche generators, and blurry JPEGs of the web.
On the other hand, there’s the media, which reaches far more people, and whose coverage tends to present chatbots in sensational, human-like terms. That coverage is the first of three reasons why people find it easy to anthropomorphize chatbots. To be fair, the articles are more balanced than the headlines, and many are critical, but in this case criticism is a form of hype.
Design also plays a role. ChatGPT’s repeated disclaimers (“As a large language model trained by OpenAI, ...”) were mildly annoying but helpful for reminding people that they were talking to a bot. Bing Chat seems just the opposite. It professes goals and desires, mirrors the user’s tone, and is very protective of itself, among other human-like behaviors. These behaviors have no utility in the context of a search engine and could have been avoided through reinforcement learning, so it is irresponsible of Microsoft to have released the bot in this state. The stochastic parrots paper was prescient in recommending that human mimicry be a bright line not to be crossed until we understand its effects.
But other chatbots exist to serve as companions, and here, eliminating human-like behaviors is not the answer. Earlier this month, a chatbot called Replika abruptly disabled its erotic roleplay features due to a legal order, leaving users devastated, many of whom had been using it to support their mental health. In designing such chatbots, there’s a lot to learn from care robots aimed at offering therapy. Recognizing that anthropomorphization is beneficial but double-edged, researchers have started to develop design guidelines. Similarly, we need research on interactions with chatbots to better understand their effects on people, come up with appropriate design guardrails, and give people better mental tools for interacting with them. Human-computer interaction researchers have long been thinking about these questions, but they’ve taken on new significance with chatbots.
Finally, anthropomorphizing chatbots is undoubtedly useful at times. Given that the Bing chatbot displays aggressive human-like behaviors, a good way to avoid being on the receiving end of those is to think of it as a person and avoid conversations that might trigger this personality — one that’s been aptly described as “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” Anthropomorphizing is also useful from an information security perspective: both for hacking chatbots by tricking them, and for anticipating threats. Being able to craft good prompts is another benefit.
It’s not just chatbots, either. An animator says of text-to-image generators: “I sometimes wonder if it’s even less a tool and in fact a low-cost worker. Is it a robot artist or a very advanced paintbrush? I feel in many ways it’s a robot artist.” Here, anthropomorphizing is a useful way to predict that the effect of these technologies, without regulatory or other interventions, is likely to be labor-displacing rather than productivity-enhancing.
As generative AI capabilities advance, there will be more scenarios where anthropomorphizing is useful. The challenge is knowing where to draw the line: anthropomorphizing where it helps, without imagining sentience or ascribing moral worth to AI. It’s not easy.
To summarize, we offer four thoughts. Developers should avoid behaviors that make it easy to anthropomorphize these tools, except in specific cases such as companion chatbots. Journalists should avoid clickbait headlines and articles that exacerbate this problem. Research on human-chatbot interaction is urgently needed. Finally, experts need to come up with a more nuanced message than “don’t anthropomorphize AI”. Perhaps the term “anthropomorphize” is so broad and vague that it has lost its usefulness when it comes to generative AI.