25 Comments
Srivatsa Marthi:

This is so lucid. We need more voices like yours in the broader conversation!

Justin Kollar:

This is a thoughtful and much-needed reframing of AI as “normal technology.” One point that feels important to underscore is how computation is still deeply tied to material infrastructures—grids, water, land, minerals, and labor—that are being reorganized to support its scale (a topic I work on). If we think about AI not as disembodied intelligence but as a territorial and political project of machine work, its risks and impacts may also look very different. Framing computation as an infrastructural regime (in addition to AI as a normal technology) might also help sharpen how we think about its governance and the politics around it.

Andy X Andersen:

This is a remarkably well-grounded outline of the state of AI and the near-future.

Indeed, just as with other technologies before, the progress and impact will be quite gradual, though it will add up. The skeptics, the doomers, and the accelerationists all get this wrong.

Glad to see sound analysis rather than the usual wild guesswork.

Paul Millerd:

Great stuff. Been waiting for a reflection like this.

Alok Ranjan:

Fantastic read, Prof! Thanks for all the hard work that you and Sayash put in to unhype the hype machine. Tough job. I am a big fan!

Izzy:

Thank you, Arvind and Sayash, for such a lucid and grounding piece. The call to treat AI as “normal technology” is a necessary counterbalance to both dystopian and utopian extremes—and I deeply appreciate the clarity you bring to this complex space.

That said, I wonder if there’s room for a parallel dialogue—one that explores how complex adaptive systems (like LLMs and agentic architectures) may not remain fully legible within traditional control-based governance models. Emergence doesn’t imply sentience, of course—but it does suggest nonlinear behaviors that aren’t easily forecasted or bounded.

I’m working on a piece that builds on your argument while adding a complexity science and systems theory lens to the question of how we frame intelligence itself. Would love to hear your thoughts once it’s live. Thank you again for contributing such an important perspective.

Arvind Narayanan:

That sounds really interesting! I look forward to reading your piece.

Izzy:

I just wanted to say thank you—not only for your thoughtful article, but also for your simple, generous response here.

Your reply may have seemed small, but it landed with surprising depth. In a space where conversations about AI and complexity can often feel performative or adversarial, your genuine curiosity and openness created a moment of unexpected relief for me. I’ve spent much of my professional life navigating spaces where speaking from a systems or emergence-based lens is often misunderstood or dismissed, especially when voiced by women.

Your few words carried something rare: presence without posturing. And that felt like being met, not managed.

So thank you—for your work, and for the quiet way you made space for mine. It meant more than I can fully express.

Vasco Grilo:

Great paper! Can I linkpost it to the EA Forum in 1 to 2 months?

Arvind Narayanan:

Sure, post it any time.

DABM:

Why wait?

Rajesh Achanta:

I agree - what you say in Part II is similar to what Kevin Kelly articulated several years ago - we should not think of human intelligence as being at the top of some evolutionary tree, but as just one point within a cluster of terrestrial intelligences (plants, animals, etc.) that may itself be a tiny smear in a universe of all possible alien and machine intelligences. He uses this argument to reject the myth of a superhuman AI that can do everything far better than us. Rather, we should expect many extra-human new species of thinking, very different from humans, but none that will be general purpose, and no instant gods solving major problems in a flash.

And so, many of the things AI will be capable of, we can't even imagine today. The best way to make sense of AI progress is to stop comparing it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?

Prashant Raturi:

Halfway through it. Easy for lay people to understand. In the PDF version, I noticed that pages 11 to 13 are somewhat garbled: Part 2 starts twice, on page 12 and again on page 13, and a 318-word passage is repeated, from "But they have arguably also led to overoptimism ..." to "... Indeed, in-context learning in large language models is already 'sample efficient,' but only works for a limited set of tasks."

Antony Van der Mude:

I noticed that too. The paragraph starting "AI pioneers considered the two big challenges of AI" appears on both page 12 and page 13.

Walter Robertson:

It's hard to consider this article as having any credibility when there is no mention of Everett Rogers or "Diffusion of Innovations," which is essentially the reference point for all discussions about how innovations are adopted.

Alex Boss:

I think the level of uncertainty is incredibly high, both about the present and about what may happen in the future. We hear about the extreme scenarios (world-ending threats and job-ending crises) all the time, and we rarely get ideas about a future that is more boring, limited, and slow, but that might be closer to what actually happens…

Slava:

Nice article. I think you are a bit too kind to the "Superintelligence" language out there. I'm not sure it constitutes a proper worldview with a well-thought-out model and formalization. Certainly nothing remotely similar appeared when I took graduate-level AI courses around 2010. I really think it's purely a narrative constructed by economic incentives, with the convenience of a rich history of futurist literature speculating on long-term technological autonomy. Incentives in contemporary finance capital prioritize growth over profitability, and the narrative of winning an arms race toward superintelligence is a winning strategy in an environment of obvious information asymmetry between AI developers and VCs.

Chad Mix:

Interesting, and I think I agree in principle.

Until AI has a tether to contextualize the broad range of human experience, to recognize a ‘sigh’ not as a word but as each possible interpretation of it at once, each capable of reframing the words immediately in its orbit, it remains, unfortunately, only a tool.

Tyrone Post:

A planet of average people will have average thoughts about the slow speed of A.I. adoption.

Thankfully, genius doesn’t require attention for implementation, and it’s widely distributed across hardworking optimists who are busy working because they realize too many things are moving too quickly to accurately “keep up” with much of it.

Old problems. New people. New tools.

Frank Johnson:

It’s a good read. I have a few questions about the data that is used to train models, small and large.

Where do you all stand on it?

Sarah Seeking Ikigai:

Such an interesting, clear and grounded read, thank you!
