Discussion about this post

Raj Iyer:

Thank you, this was a great read.

"The assumption that AI safety is a property of AI models is pervasive in the AI community."

Isn't the reason for this that a lot of AI companies claim they're on the pathway to "AGI", and AGI is understood to be human-level intelligence (at least), which translates, in the minds of most people, into human-level understanding of context, and thus a human level of responsibility? It's hard to say a model is as smart as a human, but not so smart that it cannot discern the purpose another human is using it for.

Put another way, many (though not all) AI companies want you to see their models as being as capable as humans, able to do the tasks humans can at human or near-human levels, only automated, at broader scale, and without the pesky demands human employees make.

Acknowledging that AI models cannot be held responsible for their risky uses puts them in their appropriate place: a new form of computing with great promise and interesting use cases, but nowhere close to replacing humans, or to equaling the role humans play in thinking through and mitigating risk when performing the kinds of tasks AI may be used for.

But that runs contrary to the AGI-in-a-few-years/singularity narrative, so safety is de facto expected to be an aspect of the intelligence of Artificial Intelligence models. The snake oil salesmen are being asked to drink their own concoctions. Hopefully, at some point, they'll be forced to acknowledge reality.

Steve Newman:

Great post! There are a lot of important ideas here that I hadn't seen clearly expressed before.

You note that in many cases, an important aspect of risk reduction may lie in adapting to risks, rather than attempting to develop riskless models. I have recently been thinking along somewhat similar lines. It seems to me that for many risks, there are adaptations available that are worthwhile on their own merits, even without taking AI risk into account. For instance, public health measures such as improved building ventilation and pandemic preparedness seem justifiable solely on the basis of existing viruses, but would also reduce the danger from AIs capable of assisting with bioengineering. Not all risks can feasibly be eliminated in this fashion, but it seems to me that many can be substantially reduced.

