Discussion about this post

Leon Galindo Stenutz

Excellent, informative, serious -- and a (temporary?) relief from the onslaught of information on the dangers of runaway AI.

Two core questions:

1. Is AI truly a core species-level anthropogenic existential risk?

2. How much of what you have observed is merely the result of bad journalism and poorly managed public relations -- versus a partial or outright attempt to hype up AI Risk as a form of fake news, or even intentional PsyOps aimed at destabilizing certain audiences and societies?

It would also be interesting to better understand how your insights relate to the alarm about AI as an existential risk sounded by the likes of Stephen Hawking, Elon Musk, Eliezer Yudkowsky, Bill Gates, and Stuart Russell.

Also, to understand how your research and vision of the near- to mid-term impact of AI align with, or differ from, the work of Nick Bostrom, Jaan Tallinn, Max Tegmark, Yann LeCun, Brian Christian, or Melanie Mitchell, among others.

Thanks, appreciate your article deeply.

Abigail Olvera

Excellent and useful piece. The very last link, for the "agenda-setting quality" of media, is broken. Were there other citations to replace it?

I can see the "black box" nature of AI being a cop-out for applications like those in EdTech, but would "black box" discussion be useful in some limited circumstances, like general-purpose LLMs?
