Discussion about this post

Amit Amin:

"Historically, standing on each step of the ladder, the AI research community has been terrible at predicting how much farther you can go with the current paradigm, what the next step will be, when it will arrive, what new applications it will enable, and what the implications for safety are. That is a trend we think will continue."

Your best zinger this year :)

Scott Lewis:

One thing that bothers me is that many in the LLM promotion business are now also conflating the scaling of LLMs with the *way that humans learn*, e.g. this from Dario Amodei recently: https://youtu.be/xm6jNMSFT7g?t=750

It's as if they not only believe that scaling leads to AGI (not true, as per your article), but are now *selling* it to the public as the way that people learn, only better/faster/more expensive.

