Discussion about this post

Burton Rothberg

I just finished reading your book. Loved it. Here’s another example of AI predictive modeling snake oil:

You probably know about Zillow, the real estate platform. They started as an alternative to the Realtor MLS service, which was only open to agents. It worked, and Zillow (and a couple of others) really opened up the market. It was a good business.

Sometime in the 2010s, Zillow started adding estimated house prices to their site. They do this for all homes, not just the ones on the market. The pricing algorithm is proprietary, but it probably takes nearby sales into account, as well as the size and physical characteristics of the home (number of bedrooms, etc.). My neighbor is in real estate, and she says the Zillow estimate is a fairly good first-order approximation. I know some people who check their home's Zillow estimate regularly.
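For a rough sense of how a comps-based estimate like this could work, here is a minimal sketch. This is not Zillow's proprietary algorithm; the distance metric, the number of comparables, and every name and weight here are illustrative assumptions.

```python
# Minimal sketch of a comps-based home price estimate.
# NOT Zillow's actual model: the k-nearest-comps approach,
# the Euclidean distance on coordinates, and k=3 are all
# illustrative assumptions.
from dataclasses import dataclass
from math import sqrt

@dataclass
class Sale:
    lat: float
    lon: float
    sqft: int
    price: float

def estimate_price(lat: float, lon: float, sqft: int,
                   recent_sales: list[Sale], k: int = 3) -> float:
    """Average the price per square foot of the k geographically
    nearest recent sales, then scale by the subject home's size."""
    nearest = sorted(
        recent_sales,
        key=lambda s: sqrt((s.lat - lat) ** 2 + (s.lon - lon) ** 2),
    )[:k]
    avg_ppsf = sum(s.price / s.sqft for s in nearest) / len(nearest)
    return avg_ppsf * sqft
```

The point of the sketch is also the point of the comment: a model like this only sees the variables you feed it (location, size), and misses the idiosyncratic ones (condition, layout, the neighbor's yard) that a site visit would catch.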

In 2018 they decided to eat their own cooking: they started a business flipping houses. If a home was for sale for less than what their model predicted, they would buy it. I believe they made a site visit as well as using their models. Then they would try to sell it.

The flipping business was a total flop. Basically, they overpaid for a lot of houses. Even though the overall housing market was strong during Covid, they still managed to lose money. I believe the problem was that a lot of idiosyncratic variables go into an individual house's value, and many of them were not captured by their model. Anyway, they started exiting the flipping business in 2021.

Because Zillow's basic business is sound, the company was able to ride it out. However, the stockholders took a substantial bath. The company lost over a billion dollars during this period. The stock fell from $137 to $30, and it still has not recovered that loss.

Claude Coulombe

Emergent capabilities based solely on scaling are largely a myth. While scaling does extend capability, such as with "one-shot" or "few-shot" learning, this is primarily due to a larger pattern-matching base, inductive mechanisms (not deductive reasoning), and combinatorial effects. Currently, LLM builders are attempting to compensate for significant shortcomings with human intervention (hordes of poorly paid science students and some PhDs) to create the illusion of progress (patching AI). This is not a viable path toward Artificial General Intelligence (AGI).
