The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements.
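A toy illustration of that training objective (a sketch of the standard decoding step, not OpenAI's actual code, and the candidate tokens and logits below are made up): the model scores candidate next tokens and samples one, and nothing in the procedure checks truth, only plausibility.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logits for the next token after "The capital of Australia is".
# "Sydney" is the single most probable completion here even though
# "Canberra" is the true answer.
candidates = ["Sydney", "Canberra", "Melbourne", "Paris"]
logits = np.array([2.1, 1.9, 1.0, -3.0])

def sample_next_token(logits, temperature=1.0):
    """Softmax over logits, then sample -- the standard decoding step."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

print(candidates[sample_next_token(logits)])
# The objective rewards fluency given the preceding text; a false but
# plausible-sounding completion can easily outrank the true one.
```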
Terrific piece, though I'd argue that truth is actually not completely irrelevant to fiction and other art. And you might have nailed the reason why those ChatGPT Nick Cave lyrics aren't going to haunt anyone's dreams: they're just not coming from a place of truth.
Without that magic ingredient of working from the truth, stories or songs don't truly connect with life. That's probably why, if you get ChatGPT to generate a horror story about a Panera Bread visit gone wrong, the result is wacky and impressive, but at the same time it's sort of meaningless, forgettable BS'ing that won't stick with you the way something that was trying to dig into the truth of being a human being would.
A typical English-speaking user of ChatGPT will ask a question about a subject he knows well from American or British culture, find some mistaken detail in ChatGPT’s response, and say: “There, I am better than you!” OK, so ChatGPT would get a C for that question, while this English user would get an A or B. What if you now ask ChatGPT a similar question, but related to a foreign culture, say Indian or Indonesian? ChatGPT would likely still get a C, while this English user would likely get an E or F. What if you now ask a similar question written in a foreign language, say Vietnamese, which has to be answered in French? ChatGPT would still get a C, while this English user won’t understand a single word of the Vietnamese question, let alone answer it in French. The point is: users are focusing on the 0.3% of situations where they might get a higher grade, and being completely obtuse about the 99.7% where ChatGPT is irreversibly outclassing and outperforming them.
It's a good point you make. The problem is that a human with an AI on their smartphone would outclass the AI. Getting outclassed at tasks isn't new for people... excavators dig more than we can, hydraulics lift more than the strongest man, digital calculators converge on the square root of pi faster than the most prodigious genius. The difference is, when people do these things... it's a palimpsest for the true general power their minds possess. When AIs do these tasks... it's because they have mountains of data and many hours of supervised input.
This goes to the heart of the problem... we don't have the first clue about what creates consciousness.
We know that certain regions of the brain are recruited in performing certain tasks... but consider a radio: electromagnetic waves are guided by antennae and frequency-selective circuits to produce sound.
Anyone assuming that the sound is "in" the electromagnetic waves in any physical sense would be greatly mistaken.
The notion that intelligence is "in" the brain might be equally specious.
The brain could just as easily be a conduit for thought, feeling, and experience. We simply do not know either way.
I enjoy your post(s) and agree with most of it. However, I do wonder if your perspective on learning is focused on high-level education and/or students who are ambitious and genuinely want to learn. I am an associate professor at a business school, and I have tested my last 3-4 years of exam questions on the bot. It generally performs quite poorly and fails the clear majority, yet it would pass some of the questions. It might even get a C on one of the questions (out of 25 or so). So it is clearly not a worry in the sense that it can lead students to high performance or genuine learning. I'd also argue it shouldn't be able to fully answer questions at the master's level. On the other hand, at the bachelor level of an average university, this tool can (and soon will) do just well enough to pass. This can be partially solved by making exams on-site and disallowing use of the internet. Yet not everything happens on-site. The positive angle is that the bot helps assess whether an exam question should be used or not - but a substantial number of students, who just want a diploma, will exchange tedious learning activities for letting the bot answer for them. And it will work, at times.
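For what it's worth, here is a rough sketch of how one might automate that kind of exam-question screening with the OpenAI Python library. The file name, one-question-per-line format, and model choice are all assumptions, not the commenter's actual workflow; the saved answers would still be graded by hand.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed input format: one exam question per line.
with open("exam_questions.txt") as f:
    questions = [line.strip() for line in f if line.strip()]

for i, question in enumerate(questions, start=1):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    # Print question and answer together for manual grading afterwards.
    print(f"--- Question {i} ---\n{question}\n{answer}\n")
```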
Your comment is on the level of "you won't always have a calculator in your pocket". How about, instead of banning this tool, you teach your students how to use it effectively in their classroom activities? It's time to get the tedium out of studying and unleash the creativity.
Not sure why you think these considerations are mutually exclusive? Obviously the bot provides exceptional teaching opportunities in the classroom - I am very excited about my future teaching.
I guess the bigger risk with these large LLMs will likely be realized in the next decade or so. By then, there might be more and more texts flooding the Internet that were actually "written" by these LLMs, drowning out the original texts and true information sources, and these models won't be able to discern true content from their own garbage.
The subsequent LLMs/AI models trained on these data will simply degenerate further in accuracy and creative writing, and generate even more bullshit content. And so on, ad infinitum.
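A toy simulation of that feedback loop (a deliberately crude stand-in for LLM training, with made-up vocabulary and sample sizes): each "generation" re-estimates a word distribution from samples of the previous generation's output. A word that happens to go unsampled gets zero probability and can never reappear, so diversity can only shrink - the worry above in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size = 1000
# The original "human-written" distribution: uniform over the vocabulary.
probs = np.full(vocab_size, 1.0 / vocab_size)

for generation in range(10):
    # The previous model floods the Internet with its own output...
    sample = rng.choice(vocab_size, size=2000, p=probs)
    # ...and the next model "trains" on that output alone.
    counts = np.bincount(sample, minlength=vocab_size)
    probs = counts / counts.sum()
    surviving = int((probs > 0).sum())
    print(f"generation {generation}: {surviving} of {vocab_size} words survive")
```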
I think I would quibble with the statement "where truth is irrelevant, like writing fiction." Fiction without underlying truth is, indeed, irrelevant - which is why AI has been largely unhelpful for producing truly quality creative output.
Translation and interpretation are two different things. Writing is a reductive code for language, which is multi-modal. Knowledge is socially constructed and individual epistemologies are rooted in our ontologies. It is not clear how a subset of code for performed language is considered "truth", nor how the authors have validated their claims around language, translation, and interpretation. Oversimplifying these things is a habit that gets regurgitated in scholarly writing, and then is fed into the limited dataset of AI, so that it is codified in the ether. These mistakes are then passed on in classrooms. See also: Sign language gloves.
Thanks to Memex 1.1 for recommending this great read. Some really interesting links here, too. Thanks Arvind and Sayash.
Doesn't that definition fit propaganda as well?
"OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever."
So then... What's its verbal IQ? XD
BTW, were you the one who wrote about chatbots in Substack comments? Because I wish I had bookmarked that guy.