31 Comments
Mar 30, 2023Liked by Arvind Narayanan

Today we had the FLI Open letter, AI snakeoil post, and a Yudkowsky article, all offering quite different views. What. A. Ride...

Expand full comment

I'm one of those millions of mechanical turks getting pennies for hours of work in ''correcting the ai'', and after spending the last two weeks on those jobs I am now constantly rolling my eyes at the lies and gaslighting that the companies are spreading to sell their product. Its not AI, its barely an autocorrect.

Expand full comment

Great framing.

Re: Speculative/real long-term risk.

AI algorithms have already significantly altered our civilization. Recommendation engines (though considered old AI) are designed to show you the content that is very similar to what you previously consumed. Some analyses show that such algorithms lead to extremist viewpoints (https://policyreview.info/articles/analysis/recommender-systems-and-amplification-extremist-content, https://www.extremetech.com/internet/327855-whistleblower-facebook-is-designed-to-make-you-angry). These algorithms have incentivized creators to generate highly divisive, incendiary content by rewarding them with revenue.

Expand full comment

If you're going to make this argument, you need to actually argue it. It might be true that giving time to concerns about AGI is bad because AGI risk is not something to worry about. But to argue this you need to actually show that AGI risk is not something to worry about rather than handwaving it away as sci-fi: plainly, if catastrophic or existential risk from AI is real, then it is worth considering, to do otherwise would be stupid. To not offer any explanation for why you dismiss it is poor argumentation and is not worthy of the authors. I also note that none of the linked further reading argues this point either...

Additionally, you do not offer any arguments as to why GPT-4, which we can all interact with and see its capabilities to easily complete many of the menial tasks we do every day, is not in danger of automating away a large % of jobs. The linked further reading also does not convincingly argue against this. You simply point out the (true) mismatch between benchmarks and real world performance, but this is not a true counterargument. The letter also targets GPT-5 on this point in any case, and you do not offer any argument as to why it may not be in danger of automating away more jobs either. This undermines your entire point.

Expand full comment

I strongly agree that there is a disproportionate amount of concern for long term risk we can not yet evaluate, while lacking concern for how AI is currently being used.

I think the security issues will be a nightmare. Consider the situation is essentially a system that exhibits unsuspected emergent behaviors, the internal are black box and not understood, and the input is everything that can be described by human language. That is a huge potential attack surface to try to contain with a lot of unknowns.

Finally, I would add there are also substantial societal issues that will begin to emerge that we don't know how they will yet play out. This probably follows immediately after the security issues, but before long term risk scenarios.

I've also put together a rather extensive set of thought explorations that goes deeper societal aspects of both short term and longer term. Would be interested in your thoughts as well.

https://dakara.substack.com/p/ai-and-the-end-to-all-things

Expand full comment

I agree and disagree at the same time. I think Jeffrey Lee Funk and Gary N. Smith nailed the situation well:

https://www.salon.com/2023/04/22/our-misplaced-faith-in-ai-is-turning-the-internet-into-a-cesspool-of-misinformation-and-spam/

According to them, the real issue is AI generated garbage. It may or may not be a risk as such, but it will pollute the whole open Internet. There are already estimates that as much as 90-95% of all content in the open Internet could be generated by AI in the next few years. There are already hundreds of tools popping out here and there that allow creating a website with few clicks, polluting it with AI garbage, and then feeding the garbage to social media, trying to fool SEO guards, or whatever. In other words, much of the garbage will be the usual stuff; commercial promotion material and advertising generated by machines used by advertisers, companies, advertising agencies, and related parties. It does not stop here: all product descriptions will be filled with AI garbage, their reviews will be about AI garbage generated by the click farms and other manipulators throughout the world, and so forth.

As the companies openly admit that their tools output garbage, also the inaccuracies and misinformation are very much real even when no nefarious purposes are involved. These will be delivered both to users of the chatbots directly and spammed to the Internet.

Expect also a lot of AI garbage in science: predatory publishers and paper mills will most certainly utilize LLMs. My advice for any serious academic would be to steer away from any AI tools as these mostly output garbage. Academia does not need any more "productivity" as there are already plenty of nonsense. If you need help at "writing", "rewording", or "restructing" by machine-generation tools you are already working in a wrong place.

Furthermore, any safeguards put place by big companies are useless in practice because custom LLMs in particular (unlikely image generators perhaps) are fairly easy to build. Thus, I wouldn't count out the issues with explicit disinformation, phishing, social engineering, spam, malware, etc. because these things are being developed everywhere in the world, including by nation states. EUROPOL's recent report hinted that also crooks in the dark web are building their own LLMs. If you look at what is happening at, say, Telegram, LLMs are presumably already used to push heavy-handed disinformation by bots.

Few scenarios:

(a) Once enough AI garbage is out there, it will likely become impossible to train better LLMs in the future because learning from one's own garbage is hardly a good idea (some say that synthetic data may help, but I doubt it).

(b) It will become hard for any volunteer human-based knowledge-creation endeavor to compete with AI garbage. Even things like Wikipedia may eventually be a victim. Places like StackOverflow will die. Volunteer communities have little reason to contribute serious content because it is impossible and pointless to compete with AI garbage.

(c) On the positive side, it may be that science, traditional media, and reputable publishers will gain in this garbage scenario because they will likely be able to maintain some quality controls that will be absent in the AI cesspit that the Internet will become.

(d) Again on the positive side, people may learn to enjoy AI garbage. Although I despise LLMs, I have to admit that the AI art is amazing. I can't wait to see what the video game industry will be able to do with this stuff. Therein this AI stuff is a perfect fit.

(e) The jury is still out on what will happen to social media, but I expect also large transformations there due to the AI garbage and the rise of AI bots. Closed walled-gardens may emerge. Even real-identity verification may be a thing.

(f) Crawling the web may become more difficult as some companies and communities may put blockers to spiders because copyrights are not honored by AI garbage harvesters.

(g) The open source community may start to migrate away from GitHub due to license violations by CoPilot and others.

So, all in all, it will be the era of garbage.

Expand full comment

Are you aware 'I read a scifi book once' is a lazy straw-man argument?

Expand full comment

I disagree with your characterization that AI taking people's jobs is a speculative risk, and its formed in its most extreme version further distorting the argument.

You don't need AI to take everyone's job for the worst case scenario. All you need is for them to replace and be human competitive in the low to mid salary range jobs which from a distribution perspective amounts to most jobs.

The ideal corporation is 1 leader at the top, its shareholders, and no workers just robot slaves producing. The cost of labor is the majority of the cost for anything. Our society requires a fine balance between factor markets and product markets. Money made in factor markets is used to buy goods in product markets.

If the balance is upset, wealth concentrates, business sectors concentrate and consolidate, this eventually leads to macro-economic issues, and the division of labor eventually breaks down when government prints money.

Unrest historically is what happens when you have breakdowns in food security, environmental security, and/or are deprived of the means to correct the situation. Current versions of AI chatbots can eliminate most low to mid-paying jobs with a little prompt engineering. Ther's a lot of money getting behind replacing human labor and production with robots.

What then happens to the people who can't meet basic necessities. By the time this type of problem can be characterized, societal mechanics would make the outcome inevitable because all of the important indicators lag, and you run into forms of the economic calculation problem.

Expand full comment

Signed by mRNA vessel, AI driving cars’ salesman, brain-linking, Elon Musk 😄

“governments should step in and institute a moratorium”

https://futureoflife.org/open-letter/pause-giant-ai-experiments/?fbclid=IwAR1lB7vyQxxMYEewzAx3JkkfW-tuRPgO2GYDUq3RSaJKe7Mbg1PUU1howR4&mibextid=Zxz2cZ

Chatty will always be biased for a reason! The big guys will never ever allow everyone to utilise AI’s full potential. Maybe you have already noticed that our monetary system, which runs the world, is tweaked in a way that ensures the power is kept in the same hands.

https://open.substack.com/pub/manuherold/p/proof-microsofts-chatty-chatgpt-has?r=eymvs&utm_medium=ios&utm_campaign=post

Expand full comment

> Consider the viral Twitter thread about the dog who was saved because ChatGPT gave the correct medical diagnosis. In this case, ChatGPT was helpful. But we won't hear of the myriad of other examples where ChatGPT hurt someone due to an incorrect diagnosis.

Ofcourse its the other way around. The dog story was a nice fluff thingy that happened, and *maybe* we'll see a few others reported, but we'll surely hear about every instance a chatbot even remotly contributes to any more extreme harm done to humans. --> https://www.dailymail.co.uk/news/article-11920801/Married-father-kills-talking-AI-chatbot-six-weeks-climate-change-fears.html

Expand full comment

?"One way to do right by artists would be to tax AI companies and use it to increase funding for the arts."

Kind of like taxing the auto industry last century to fund horse saddle making.

Expand full comment

Barn door, horse, bolted! Enough said!

Expand full comment

When I hear comments like that, to halt the development of AI, I always think about people that are deadly ill, dying of cancer for example. I can imagine AI will do wonders in drug design & personal drug doze optimization. And what - you will ask these people to wait because you're scared for the future (or maybe want some additional time to get in the game) No way. Genie is out of the bottle already. There's only path forward.

Expand full comment

What I really can't stand in these discussions, is any claim like "AI will kill us", which is just embarrassing, because AI **itself** is the most FRAGILE thing on Earth (if anybody cares, I've argued about this in my substack)

Expand full comment

I absolutely don't understand why the (for lack of a better phrase) AI safety establishment has decided that civilizational-scale tail risks are inimical somehow to consideration of other risks and also so unworthy of consideration that they can be dismissed like this, by handwaving and name-calling alone.

Like imagine a world where the people who work on safeguarding nuclear material from thieves or terrorists in practice seem to spend half their time writing screeds about how the World War Three Bros are distracting us from the real issues with their speculative, futuristic apocalypse visions. And their arguments for why not to worry about World War Three are all statements like "In reality, no-one has ever detonated a hydrogen bomb over a city." Very weird stuff.

Expand full comment

Very interesting article, thanks. I tackle the same topic coming from a different angle, that of a humanitarian aid worker who specialises in protection of civilians. You can read it on The Machine Race, my blog series looking at AI, human rights and society. The new article on this is called, 'Fighting over AI: Lessons from Ukraine'. Would love to hear your thoughts, Sayash and Arvind, and those of others.

https://medium.com/@themachinerace/fighting-over-ai-lessons-from-ukraine-191b59e86f6b

Expand full comment