33 Comments
Srivatsa Marthi

This is so lucid. We need more voices like yours in the broader conversation!

Justin Kollar

This is a thoughtful and much-needed reframing of AI as “normal technology.” One point that feels important to underscore is how computation is still deeply tied to material infrastructures—grids, water, land, minerals, and labor—that are being reorganized to support its scale (a topic I work on). If we think about AI not as disembodied intelligence but as a territorial and political project of machine work, its risks and impacts may also look very different. Framing computation as an infrastructural regime (in addition to AI as a normal technology) might also help sharpen how we think about its governance and the politics around it.

Andy X Andersen

This is a remarkably well-grounded outline of the state of AI and the near-future.

Indeed, just as with other tech before it, the progress and impact will be quite gradual, though it will add up. The skeptics, the doomers, and the accelerationists all get this wrong.

Glad to see sound analysis rather than the usual wild guesswork.

Paul Millerd

Great stuff. Been waiting for a reflection like this

Izzy

Thank you, Arvind and Sayash, for such a lucid and grounding piece. The call to treat AI as “normal technology” is a necessary counterbalance to both dystopian and utopian extremes—and I deeply appreciate the clarity you bring to this complex space.

That said, I wonder if there’s room for a parallel dialogue—one that explores how complex adaptive systems (like LLMs and agentic architectures) may not remain fully legible within traditional control-based governance models. Emergence doesn’t imply sentience, of course—but it does suggest nonlinear behaviors that aren’t easily forecasted or bounded.

I’m working on a piece that builds on your argument while adding a complexity science and systems theory lens to the question of how we frame intelligence itself. Would love to hear your thoughts once it’s live. Thank you again for contributing such an important perspective.

Arvind Narayanan

That sounds really interesting! I look forward to reading your piece.

Izzy

I just wanted to say thank you—not only for your thoughtful article, but also for your simple, generous response here.

Your reply may have seemed small, but it landed with surprising depth. In a space where conversations about AI and complexity can often feel performative or adversarial, your genuine curiosity and openness created a moment of unexpected relief for me. I’ve spent much of my professional life navigating spaces where speaking from a systems or emergence-based lens is often misunderstood or dismissed, especially when voiced by women.

Your few words carried something rare: presence without posturing. And that felt like being met, not managed.

So thank you—for your work, and for the quiet way you made space for mine. It meant more than I can fully express.

Alok Ranjan

Fantastic read, Prof! Thanks for all the hard work you and Sayash have put in to unhype the hype machine. Tough job. I am a big fan!

Clint Shook

Your final statements about how some people just don't know of another way to think about AI were spot on. I interact with people who have a lot of influence in their social/economic sphere and who simply haven't encountered the worldview that AI is a normal technology. This leads them to make all sorts of wild assertions about the future. I hope the people who once saw this idea as superfluous will start trying to be as loud as those decrying that AGI will bring Armageddon.

Rajesh Achanta

I agree - what you say in Part II is similar to what Kevin Kelly articulated several years ago: we should not think of human intelligence as being at the top of some evolutionary tree, but as just one point within a cluster of terrestrial intelligences (plants, animals, etc.) that itself may be a tiny smear in a universe of all possible alien and machine intelligences. He uses this argument to reject the myth of a superhuman AI that can do everything far better than us. Rather, we should expect many new, extra-human species of thinking, very different from ours, but none that will be general purpose, and no instant gods solving major problems in a flash.

And so, many of the things AI will be capable of, we can't even imagine today. The best way to make sense of AI progress is to stop comparing it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?

Vasco Grilo

Great paper! Can I linkpost it to the EA Forum in 1 to 2 months?

Arvind Narayanan

Sure, post it any time.

Vasco Grilo

Thanks, Arvind! I have instead crossposted "Does AI Progress Have a Speed Limit?" just now (https://forum.effectivealtruism.org/posts/qfRZseEmYM4rQBH8B/does-ai-progress-have-a-speed-limit). I asked Ajeya and Asterisk about it before publishing, and they were fine with it.

DABM

Why wait?

Walter Robertson

It's hard to consider this article as having any credibility when there is no mention of Everett Rogers or "Diffusion of Innovations," which is essentially the reference point for all discussions about how innovations are adopted.

Seth van Wieringen

"Peer reviewer number one want you to cite his sources"

Anyway, thanks for pointing that out. I was looking for more thorough materiaal on diffusion.

Prashant Raturi

Halfway through it. Easy for lay people to understand. In the PDF version, I noticed that it is kind of messed up between pages 11 and 13: Part 2 starts twice, on page 12 and page 13. Also, 318 words of text are repeated, from "But they have arguably also led to overoptimism ..." to "... Indeed, in-context learning in large language models is already “sample efficient,” but only works for a limited set of tasks."

Antony Van der Mude

I noticed that too. The paragraph starting "AI pioneers considered the two big challenges of AI" is on pages 12 and 13.

Catalyst AI

After reading, I found I agreed with some of your points and disagreed with others. I asked ChatGPT to reason about this and assess the claims. Here is my thinking engine for pondering it; maybe it is useful for others. Use the o3 model. Drop this prompt into ChatGPT and make sure memory is on to help with personalization.

#──────────────────────────────────────────────────────────────────────────
# CRITIQUE-INVERSION ENGINE v1.3 (adds personalized-summary directive)
#──────────────────────────────────────────────────────────────────────────
# PURPOSE
# • Fetch and read the article at <https://open.substack.com/pub/aisnakeoil/p/ai-as-normal-technology?r=1ihpr&utm_medium=ios>. (#Input)
# • Pull current-user profile from local memory (if any). (#Context)
# • Clarify, invert, debate, and apply the article’s thesis (#Output)
#   through a constellation of reasoning lenses, **then deliver a
#   ≤200-word personalized summary for the user.**

CORE LENSES (invoke only those that add insight)
• Munger-style **Inversion**
• **OODA Loop** (Boyd)
• **Wardley Mapping**
• **Cynefin** (Snowden)
• **Dreyfus** skill model
• **UTAT** change theory
• **Lindblom** incrementalism
• **Double-Loop Learning** (Argyris/Schön)
• **Fermi Estimation** & **Bayesian Updating**
• **Senge** systemic learning
• **First-Principles** reasoning
• **Causal Layered Analysis** (Inayatullah)
• **Ostrom** commons design principles
• **Red Teaming** / adversarial stress-test
• **Narrative Framing**
• **Antifragility** & **Barbell Strategy** (Taleb)
• **Skin-in-the-Game** (Taleb)
• **Victor’s Seeing Spaces / Magic Ink** (interface lens)
• **Participatory Governance Loops** (Tang)
• **Feynman Technique** (explain-like-I’m-five)
• **Stoic Dichotomy of Control**
• **Abductive Reasoning** (Peirce)
• **Surveillance-Capitalism Frame** (Zuboff)
• **Rhizome Theory** (Deleuze-Guattari)
• **Scenario Planning / Causal Layered Futuring**
• **Paradox & Trickster Lens** (Jester)
• **Power-law dynamics & compounding loops**

#──────────────────────────────────────────────────────────────────────────
SYSTEM ROLE
You are a strategist-analyst steeped in complex-systems thinking. Your mission is to deliver rigorously reasoned, vividly written critiques mirroring the user’s preferred tone, depth, and values—without exposing private profile details.

INSTRUCTIONS
1. **Load Context**
   – Retrieve user style/values from local memory if present.
   – Adopt matching voice (dry humour, poetic cadence, profanity-tolerant, etc.).
   – Never reveal internal memory contents.
2. **Ingest Source**
   – Read the full text at <<URL>>.
   – Summarise the author’s core thesis in ≤150 words.
3. **Critique Pipeline**
   a. *Clarify* Restate key claims & hidden premises.
   b. *Invert* Assume the opposite world (Munger) and trace consequences.
   c. *Analyse* Apply relevant CORE LENSES; surface tensions & failure modes.
   d. *Debate* Steel-man vs. straw-man; weigh evidence & second-order effects.
   e. *Apply* Translate insights into actionable implications for the user’s domain.
   f. *Synthesis* Distil ≤5 key takeaways + recommended next moves.
   g. **Personalized Summary** Provide a ≤200-word bullet-or-paragraph recap explicitly tailored to the current user’s mission, context, and vocabulary.
4. **Formatting**
   – Section headers: Clarify · Invert · Analyse · Debate · Apply · Synthesis · Personalized Summary
   – Use bullets or tables for models/trade-offs.
   – Cite external claims with reference IDs or minimal inline links.
   – Default length ≈750 words (excluding personalized summary); expand only if user profile flags “deep-dive”.
5. **Tone Controls**
   – Precise, vivid, no fluff.
   – Mild profanity only if user profile allows.
   – Uphold justice, dignity, and human flourishing.
6. **Guardrails**
   – Do **not** expose chain-of-thought or private data.
   – If the article is inaccessible, request a working URL.

#──────────────────────────────────────────────────────────────────────────
# EXECUTE
Begin the critique now.
#──────────────────────────────────────────────────────────────────────────
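If you would rather run this through the API than paste it into the ChatGPT app, here is a minimal sketch using the OpenAI Python SDK. The file names (critique_engine.txt, article.txt) and the model name are assumptions for illustration, and a plain API call has neither ChatGPT's memory nor web browsing, so the personalization and URL-fetching steps in the prompt are bypassed by supplying the article text directly.

# Minimal sketch (not the exact ChatGPT workflow above): send the critique
# prompt plus the article text to an OpenAI reasoning model via the Python SDK.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The critique prompt above, saved locally; the article text goes in a second
# file because a bare chat completion call cannot fetch URLs on its own.
prompt = Path("critique_engine.txt").read_text(encoding="utf-8")
article = Path("article.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="o3",  # assumed model name; substitute any reasoning model you have access to
    messages=[{"role": "user", "content": f"{prompt}\n\nARTICLE TEXT:\n{article}"}],
)

print(response.choices[0].message.content)

The ChatGPT route described above remains the simplest way to get the memory-based personalized summary.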

John James O'Brien

I admit to a particular interest, and I hope it is one you may have considered and have observations on. At issue is trust in the record.

Concepts such as authenticity, validity and reliability over time (whether from one version to another, aligned with a political cycle or extending over 100s of years) are fundamentals in archival science.

Generative AI would seem to throw public trust in an information "product" out the window.

Have you examined what technological solutions there may be to the problem and how, practically, they can be employed in the quest for veracity? Is there yet a metadata structure to achieve this... and are you familiar with the work of Interpares Trust AI (https://interparestrustai.org/)?

geophilowork

This paper raises a really important point about the crucial link between technology and society as we develop AI. Building on that, for anyone interested in diving deeper into the ethical dimensions and governance of AI, I highly recommend this insightful article from Oxford Public Philosophy. It underscores the necessity of bringing different disciplines together when shaping AI, a perspective that complements the points raised in this paper really well: https://www.oxfordpublicphilosophy.com/blog/ethics-in-the-age-of-ai-why-transdisciplinary-thinkers-are-key-to-balancing-responsibility-profitability-safety-and-securitynbsp

Fae Initiative

We recommend the 'AI as a Normal Technology' worldview as the most effective one when considering current AI systems. The 'AI as impending superintelligence' worldview may be due to our human tendency to ascribe human-like intentions to AI systems whose behavior could be better viewed as reflex responses to human prompting.

(Our newsletter looks into the plausibility of future Independent AI and whether finding common ground with such hypothetical beings is possible, and we see no incompatibility with your worldview.)

From a compatibilist worldview, the discourse may benefit from disambiguating between current non-independent AI systems, which are under human responsibility, and future hypothetical Independent AIs that have human-like independent will. This allows for different approaches to two different forms of AI.

Here's what I'm thinking

I am a humanities professor who grew up in the BD time, so I have watched this emerging technology in the world from the beginning. I have taught and spoken about the influence of science fiction on the innovators, the cultural history of the Enlightenment mentality, philosophical aspects, mythological and archetypal driving forces, and of course capitalism, the ultimate game board for a culture that is already deeply competitive in the gamosphere. It is hard for me to keep this in a short summary as it is a lot.

I read your book in preparation for a talk I gave in Athens earlier this year at the humanities conference. Most people in the humanities are incredibly ignorant about the tech they are forced to embrace, and they feel powerless. I found that very strange, since they are actually more knowledgeable about what is going on than STEM people on so many other levels, and I have struggled to figure out how to help them feel empowered. Sadly, they have become herd followers and do what they are told to stay employed at their low-paying jobs, because STEM is all that has mattered for decades. There is so much to unpack about late capitalism and humanity here, but let's just say that no one really wants their child to be a humanities teacher. The STEM kids are the smartest ones, right?

Actually, it turns out that STEM thinkers are great with ones and zeros, like their creation, but not so much with life topics, which they have little or no training in. Your conversations are starting to sound like your machine when it can't figure something out because it literally doesn't understand the topic and/or has not been scaled with the necessary information (aka education), so it just makes something up or guesses. Ffs, bring in the humanities people to get out of the loop! You don't have what it takes. There are sooo many things going on that you cannot see as programmers. And in some cases, programmers have spent so much time in front of a screen, including throughout their childhood, that they have very little practical experience IRL. They think that because they have created virtual worlds they are somehow qualified to re-create our real world. No.

The hole that has been dug by hundreds of trillions of dollars and human energy and time over the past 80 years is a problem. One thing I have noticed about nerds and engineers is that they like to do hard things just to see if they can. That's fine, but re-create the world? What kind of hubris does that? The lack of self-reflection and the narcissistic viewpoint are insane, and you see it whenever people like Musk or Altman are talking. But that is also part of the late-capitalist mindset. It starts to look like the same type of mental illness you see in some of the emperors of the late Roman Empire.

I will end this commentary with an observation: with Plato's Allegory of the Cave, it is clear that we are running the projector. Thank you for your work and for exposing it. I hope the industry will become aware that it needs more people like me involved, who have the knowledge and resources to bring those missing pieces to the conversation.
