Russ Roberts: So, this piece that you wrote was provoked by some of the responses that people are getting from ChatGPT [Generative Pre-trained Transformer] and how human they seem. And, it’s engendered a lot of anxiety about whether humans are going to be replaced even more thoroughly by computers, artificial intelligence, and so on.
But, you had a very different take. And, your take, I think, is the most interesting one I’ve seen, which is that the real question isn’t whether the machines are going to imitate us, but how we are already imitating machines. And, how would you introduce that idea?
Ian Leslie: Yeah, so I was struck and amazed, as so many people were, by just how good ChatGPT’s imitations of being human were. It was doing all these wonderful things, and I was seeing them being passed around on Twitter; and of course, we’re only seeing the really good ones. We don’t see the rubbish ones. But even so, the really good ones are really good. They’re really impressive.
There was one which was–the silliest ones were the ones that appealed to me naturally. So, somebody said, ‘How do you get a peanut butter sandwich out of a VCR [videocassette recorder], but I want it in the style of the King James Bible?’ And, he got this beautiful Biblical-sounding passage explaining how to do that.
Another person asked it to explain Thomas Schelling’s theory of nuclear deterrence, but in a sonnet, if you please. And, ChatGPT delivered this wonderfully formed sonnet, a Shakespearean sonnet, giving a pretty good explanation of Schelling’s deterrence theory.
So, many others like this, including some student essays. I saw a few academics posting these and saying, ‘Wow. I gave it this prompt and the response I got was as good as some of my students.’
Now, this is where I started to really think about this question, because if you followed some of those academics and looked further down the thread, they would start to say things like, ‘But, you know what: I mean, I wouldn’t necessarily give it a good mark, because it sounds a little bit like the responses my students give when they don’t really know anything about the topic and they’re just sort of winging it.’
And, then there were some interesting discussions around that, including one from a guy who teaches writing. And, he said, ‘Look, this type of thing is very familiar. The kinds of pieces that GPT writes are the kinds of pieces that students often write and get decent grades for.’ And, that’s not a coincidence, because essentially we’ve taught them–we’ve taught many of them–that good writing means following a series of rules and that an essay should have a five-part structure. So, instead of helping them to understand the importance of structure and the many ways you can approach structure and the subtleties of that question, we tend to say, ‘Five points: that’s what you want to make in an essay.’ The student goes, ‘Okay, I can follow that rule.’ Instead of helping them to understand what it means to really nail a piece of writing, or at least give it depth and originality and interest, we say, ‘Here are the five principles you need to follow. Here’s how long a paragraph should be. Here’s how your sentences should be. Here’s where the prepositions go or don’t go.’
And, we’re basically programming them. We’re giving them very simple programs, simple algorithms to follow.
And, the result is we often get very bland, quite shallow responses back. So, it isn’t actually any wonder that ChatGPT can then produce these essays, because it’s basically following a similar process. It’s just that ChatGPT has a huge amount of training data to go on, so it does it much more quickly.
And so, we should be alarmed by it: not because it’s on the verge of being a kind of super-intelligent consciousness, but because of the way that we’ve trained ourselves to write algorithmic essays.
And so, I thought that was a really interesting discussion. And, then the more I thought about it, the more I thought, ‘Well, actually the same principle applies to different industries.’ I listen to music a lot. I’m sure that a lot of your listeners do. And, you see this debate playing out in music because the streaming services–where a lot of us listen to most of our music–are very algorithmically driven, and they tend to incentivize musicians to create songs and tracks that fit a certain formula because they know that formula works. Right? So, there’s effectively a set of rules imposed and you either meet that standard or you don’t.
And, I’m simplifying hugely, of course. But, that tends to mean that musicians then write to that algorithm, because they know that if they don’t, they pay a penalty for it. Because complexity and surprise and originality are not necessarily what the algorithm is going to recognize and put to the top of the queue or into a playlist.
And so, you get what some people call the robot aesthetic: everyone writing to a formula, and whatever the trend is now, it gets absolutely amplified and you all go in one direction–a kind of herd.