Intro. [Recording date: May 31, 2023.]
Russ Roberts: Today is May 31st, 2023, and my guest is philosopher Jacob Howland. He is Provost and Director of the Intellectual Foundations Program at UATX [University of Austin, Texas], commonly known as the University of Austin. His latest book is Glaucon’s Fate: History, Myth, and Character in Plato’s Republic. Jacob, welcome to EconTalk.
Jacob Howland: Thank you, Russ. It’s great to be on your show.
Russ Roberts: Our topic for today is the impact of artificial intelligence, AI, on our humanity–on the human experience–based on an essay you wrote on the website Unherd. We’ve done a number of episodes recently on whether AI is going to destroy life on earth. An important question. For the record, I am concerned but not panicked. I’m not sure that’s the right position. I reserve the right to become panicked in the future.
But, today we’re going to talk about a different aspect of AI. We’re going to assume it doesn’t kill us off in the extinction sense, but we’re going to look at the question of whether it’s good for us or not. So, let’s start with what you’re worried about. What’s wrong with AI, and with having humans use it extensively? It seems like a great thing.
Jacob Howland: Well, AI certainly has its uses, and I mean, I know many people who consult ChatGPT [Chat Generative Pre-trained Transformer] if they want, for example, to generate a syllabus quickly on, let’s say, depletion of nutrients from the soil, environmental impacts of certain human practices–you know, things like this. It will gather information and put it together in a tidy, neat way.
Of course, there is the case now of lawyers sort of cheating on their preparation for cases and asking ChatGPT to produce legal briefs. And of course, one of the problems with ChatGPT is that it fictionalizes–it makes things up.
But, my concerns are really quite broad. Let me start with this social concern. I recently have been studying Henry Adams’ book, The Education of Henry Adams, and Adams, in the last brilliant chapters of this book, lays out what he calls a dynamic theory of history, in which he explains that human beings–who are a kind of force of nature: we have certain capacities and powers–are shaped by and shape the forces with which they interact.
And, Adams, during his lifetime–he was born in 1838–noticed a very sort of disturbing acceleration of social change. I mean, between 1838 and 1900, right? You had the introduction of railways, telegraphs, telephones, airplanes, for goodness’ sake, ultimately: all kinds of technological inventions and so forth.
And he began to reflect on this and he set forth a hypothesis, which is that: The amount of power or force at the disposal of human beings doubles or has doubled every decade since around 1800. And, already by 1900, he felt that if you sort of think of that–the rate of acceleration is the same, but the curve goes up–he began to be concerned about the effects on society, on sort of the destruction of organic communities and the dislocation of human beings and so forth.
So, if we think about artificial intelligence, the rate of acceleration seems to be even greater in terms of the forces at our disposal than Adams understood it to be.
And, one of my concerns is the way that AI is going to put loads of people out of work. Right? There are all sorts of jobs–computer programming, for example; lawyers, perhaps, as I mentioned earlier. Education is going to be transformed radically. That’s something we can talk about a bit, because students are using things like ChatGPT to write their papers, and no doubt professors are using them to write their lectures and so forth.
And that’s going to present a huge problem. It’s going to present the problem of enforced leisure, if you like. Our lives are structured around meaningful activities, and if you sort of think of it from Aristotle’s perspective, happiness is an activity of the soul, as he says, right? You’re engaged in some kind of work, some kind of activities that have significance to you. If we take away employment from a large number of individuals, they’ve lost one of the great sources of meaning in their lives. So, that’s just one thing: What are people going to do when they have this free time?
Now, you recall discussions from the last few years, when we were all shut down by COVID. I remember reading some articles saying, ‘Well, this is great,’ right? ‘Because people now have time to do oil paintings and to listen to music,’ and so forth. But, that raises another problem, and that is that, as John Maynard Keynes points out in a famous essay called “Economic Possibilities for our Grandchildren,” we haven’t been trained for leisure.
In fact, that’s a very old problem. Aristotle points it out. But it even goes back before Aristotle: Adam and Eve couldn’t handle light gardening. Right? Aristotle–and he has a critique of the Spartans, but it extends to the Athenians as well–says in the Politics, ‘War is for the sake of peace, and business which you conduct in peace is for the sake of leisure.’ But, the Spartans don’t know how to be at leisure–and not only the Spartans, but the other Greeks as well. Aristotle deplores the fact that when they have leisure time, they sit around and drink lots of wine and tell myths.
Russ Roberts: Well, let’s start with that. I mean, I know you have other things to say about AI and other tools; but this is a very old worry–the worry that technology will eliminate jobs. It hasn’t. If anything, our jobs have gotten more pleasant, say, over the last hundred years. A hundred years ago, the dangers of a lot of the workplace were quite high. There was farming, which was very dangerous, and manufacturing, which was very dangerous. Those jobs have been reduced greatly in the West as a source of income or a source of meaning, and in theory, they’ve been replaced by more meaningful jobs; in theory, jobs like you and I have–jobs that use a different set of skills than our manual labor or physical strength. In theory, jobs that enhance what is human about us and mean that we’re not so much beasts of burden in the workplace the way we were in the past.
I don’t know whether that’s been good for humanity or not. I would argue there’s a lot more leisure in our life in all kinds of ways. Certainly outside of the workplace there’s a lot: there’s leisure, by definition, to the extent we’re not physically at work. But, even at work, when we are on the job in the office, we often are free to do things that would normally be called leisure–surf the Internet, things like that.
And I’m agnostic about this issue of whether leisure is good or bad for human beings. I agree with you that work is an important source of meaning for many people–not all. Again, I think I’m very lucky and you are as well. But I assume that leisure is good. Now, I concede that not all of us, including myself, are good at using it, but do you want to argue that we should stop technologies that make it easier to take leisure?
Jacob Howland: Well, I actually have a lot to say about leisure in connection with idolatry. I think we’ll come to that later, but let me just make a couple of observations here. Yes, work is safer by all kinds of measures. I was just looking the other day at, oh, deaths per hundred thousand among teenagers or children in the 1970s, and the rate is much, much lower now than it was then–because we used to ride around without bike helmets and things like that.
But, I would point to a couple of things here. Farming is dangerous work, but it’s very interesting work, and it’s work that engages a human being across sort of a whole spectrum of capacities. So, if you’re going to be a farmer, you have to understand how to help a cow give birth. You have to be able to build fences. You have to understand planting fields and different kinds of grains and when to harvest, and you’re very much in touch with nature.
It turns out–and I remember reading this book by Harry Braverman, the title of which I can’t remember now. But he was a Marxist economist who started out as a kind of metal cutter, and he made the argument that what’s happening with the advance of technology is a kind of–to use Marx’s terms–alienation of the worker from his product. Right? And, he talks about the kind of managerial regime that began in sort of the late 19th century. So, it used to be that you’d have craftsmen–right?–who would craft an object carefully and put themselves into it. Right? Under a sort of managerial regime, you’re following cut-and-paste orders that are sort of given to you by these managers. I remember, and I believe he cites in his book an American government organization that was listing skilled and unskilled jobs, and they listed farming as unskilled, which he thought was outrageous because you require lots of skills to farm. Whereas he points out, or one could point out, that flipping burgers at a fast food joint is semi-skilled labor because in some instances you push a button and the machine times it or flips it over or something like that.
I would also point to the fact that there’s not a whole lot of job satisfaction, Russ. Now, I haven’t read the statistics recently, but my recollection is that surveys suggest that most people aren’t really happy with their jobs. It’s not as if people go to work and come back and say, ‘I have a vocation, and it’s very exciting,’ and so forth. Now, you and I–I think you’re right: we’re lucky, because we’re academics and we get to read and write and think and so forth.
Another thing I would just point out here is this: With regard to leisure–and let me just be absolutely clear, I think leisure is absolutely essential. That’s separate from the claim that we don’t know how to use our leisure. So, I’m going to come back to the essential character of leisure in a bit.
Russ Roberts: Okay. But, let me ask you about this issue of satisfaction. I don’t trust most of those studies: I think they’re often done with an axe to grind–that they come with an agenda.
But, I think the more basic idea would be: I don’t want to work on a farm, and most farmers don’t want to sit in an office all day and read. We’re all different. We choose the things that make our hearts sing, that put food on the table for our families. And, definitely there are often trade-offs between those two things. I worry about the nature of the workplace, but I’m not sure it’s plausible to argue that the alienation of people from their work product is the source of the spiritual or personal malaises that afflict us in the West.
I will tell listeners: I have an upcoming episode with a sheep farmer, so we’ll get to hear his perspective. He has chosen–he’s Oxford-educated but has chosen to stay on the farm for many of the reasons I think you would applaud.
But, for most people, it’s not appealing. It’s not what they want to do; it doesn’t speak to them; and they’re happy to lose some of the meaningfulness of work in order to have less of it. The fact that the modern work week is creeping downward by certain measures–not all–or that lifetime hours are creeping downward, as they have for a century or so, suggests that most people think that’s a good deal. Now, whether they can use that time well is a separate question, and that’s where I think we should move next, unless you want to say more about this issue of meaningfulness on the job.
Jacob Howland: Well, let me just say this. Our conversation has made me realize that I have a somewhat complicated thesis, and it’s this: Work is not particularly meaningful for a lot of people, but it’s essential for their lives. And, I don’t just mean in terms of putting food on the table, but psychologically. Take the case of the lottery winner. Right? In fact, my son had an eighth grade teacher who was making a film about people who have won the lottery. What happens when you win the lottery? Okay, let’s say you are a custodian in a building, doing janitorial work. You win the lottery. What do you do? First thing, quit your job, move somewhere else, right? Buy a new house.
And, all of a sudden, the structure of your life is gone–like, the day-to-day structure. Now you have to regenerate or reproduce that; and the point of the film was that lottery winners are often not happy, because they sort of veer off. Right?
So, that’s one concern. But, what I really want to get to is kind of the fundamental importance of leisure and the way in which AI very curiously cuts off the opportunities for leisure in a kind of foundational way, while at the same time throwing people into a condition where they’ve got to fill their time.
Russ Roberts: Yeah, let’s talk about that. But I just want to add–and we talked about it in a recent episode with Tyler Cowen–I’m not convinced that ChatGPT is going to eliminate jobs. The driverless car was all the rage eight or so years ago, and it was going to change the workplace–which it would have if it were viable. It would’ve put millions of taxi cab drivers and truck drivers out of work overnight if it had fulfilled its promise and been a viable technology. It may still be–I remain skeptical–but if it did come, it would have a dramatic effect on the lives of millions of people, and that transition might be very unpleasant. I don’t know whether we would want public policy to reflect that unpleasantness and try to slow it down, but I just want to say it’s not clear to me that AI per se will reduce the number of jobs. It’s just kind of interesting.
There have been a lot of trends–social trends–that have scared people about whether jobs were going to disappear. Outsourcing was the most dramatic one before: ‘This outsourcing, the sending of manufacturing abroad, is going to destroy X million jobs in America.’ And that didn’t happen. So, I think one of the lessons, possibly–it may be different this time–but one of the lessons is that new activities come along, because these technologies make things less expensive, conserve resources, and so on.
So, let’s put that to the side. I think there’s a general question about the use of leisure, which I can see because of this device that I hold in my hand–my smartphone. I see what it’s done to my attention span and my ability to be a focused friend or family member at times. And, I am concerned about it. But I also recognize that it’s a new technology: norms may yet emerge that help us deal with it and maintain our humanity. So, do you want to say anything about that on leisure, or anything else? You can go to something else if you want.
Jacob Howland: Yeah. Sure. So, I mean, as I said at the outset, I think that there are a whole range of problems that are raised by the incredibly rapid development of AI. And, let me just say for the record, I would put human extinction–like, physical extinction of human beings–sort of lower on the list, probably, than many. But, one thing, and you’ve already pointed to it is: human capacities tend to atrophy in disuse. So, we all use GPS [global positioning system]. I’m quite convinced that back in the day when we had to actually figure out where we were going and maybe read a map and so forth, we had better navigational skills.
A lot of creative activity is going to be, and already is being, handed over to AI. I was speaking with someone the other day whose mother, I think, works in fashion design; and he said she’s going to be put out of business. Because you don’t actually need to anymore–you can generate images; you can take models, or maybe even construct them, because now AI can do that, and put them anywhere in the world, against any backdrop, under any lighting, and so forth.
So, let’s just take writing and reading. I was speaking to an academic director of a consortium of high schools recently, and it was kind of an unsettling conversation because he said–I said, ‘What do you do about ChatGPT?’ He said, ‘Well, we told the kids in the schools they can’t use ChatGPT. Then it turned out they were using ChatGPT. So, now we have assignments; we say: Go ahead and use ChatGPT, but our writing assignments are edits, right? Like: Edit what’s coming up on ChatGPT.’
And, then he said to me–and the guy is maybe 40 years old–‘Look, I use ChatGPT all the time. I run my articles through it; it gives me suggestions. I take maybe half of them.’ And I said, ‘But here’s the thing. You learned how to write before ChatGPT. If you reduce writing classes for kids who are in eighth grade or something or 10th grade to looking at generated content and then reflecting on it and trying to figure out how to make it better, they’re not actually going to learn how to write.’
So, ceding these intellectual capabilities and creative capabilities to AI, it seems to me, is a very bad idea. And, in my article, I suggest that we might even cede moral capabilities. Like, AI can make judgments for us: not just where to drive, but what to do.
Russ Roberts: Now, I think that atrophy thing is a very deep question. Let’s talk about that for a bit. One argument would be: Who cares? Right? For most of human history, being able to write was irrelevant. We entered an era, I don’t know, around 1800–I don’t know when it would’ve started–a very short era, perhaps, when being able to communicate in writing was very useful. And, that era–well, it will still exist. It’ll just be ChatGPT doing the writing and communicating for me in a digital form, which is really no different. True, I can’t do my own anymore, but why should I care? I mean, I don’t really believe that, Jacob. It alarms me greatly. But I wonder if I’m right. Tell me, why should I care?
Jacob Howland: It’s not just writing, but it’s the whole question of logos–of the word–the spoken word as well as the written word. So, we begin the Hebrew scriptures with God creating the universe by speech. God said, ‘Let there be light,’ and so forth. And then, one of the first things, if not the first thing, that we see a human being doing–the first human being–is naming the animals: the animals are brought before Adam, the first human being, and he names them. Or, we can also go to the Gospel of John: ‘In the beginning was the word,’ the logos.
There’s something both human and divine about the power of speech or logos. And, again, I’m using the Greek word because it can mean thought, reason, reflection, speech, etc.
And, education, it seems to me–let’s break it down into a twofold process. One part is opening the soul to what is and allowing it to be receptive–receptive, perhaps uniquely among species, although I don’t know, to the whole, right?–and taking those experiences and impressions in.
And then the other thing is communicating; and that means putting into words or maybe paintings or music and sculpture and so forth–all of which, by the way, are augmented by words because you say, what is this painting about? What does this sculpture depict? And, sharing your individual perceptions with others, I think that’s very fundamental to humanity. What’s going to happen if we rely on ChatGPT–or not ChatGPT, let’s say advanced AI, because it’s going to keep going–to do our talking, our writing? That ultimately means to do our thinking for us.
And, it seems to me that from the point of view of an educator, education is about taking young men and women as they are with the peculiar capacities and abilities that they bring–which they acquire through nature and circumstance–and developing them. And, it’s focused on the individual, the individual human being who, the Bible tells us, has a kind of divine spark. Is that divine spark going to reside only in the ether in sort of the digital world?
And, one other thing I’d say, Russ, is that if you ask ChatGPT to do your writing for you, what does ChatGPT do? It goes to the information encoded on the Internet, which is not necessarily high quality–some of it is–and kind of scoops it up, regurgitates it, hands it back. Is that going to be a source of new and fresh ideas of the sort that human beings value?
Russ Roberts: I don’t know. It does some interesting art. It does some interesting music in its very primitive form.
Now, I want to come back to the atrophy question, though, because I think that is the deep one. I’ve noticed that it’s harder for me to express myself in English because I’m working on my Hebrew. And, if I did that more intensely–the Hebrew part–well, certainly I can’t be who I am in Hebrew. Right? That’s not atrophy: it’s just that I’ve never developed it sufficiently. But as I try to develop it, I pull back some of my ability to think, quote, “in English,” which is an essential part of who I am, of how I express myself, either in speech or writing. And so, one way to put what I hear you saying is that if we cede, C-E-D-E, if we cede our capacity to communicate to technology, we lose the ability to express ourselves.
Jacob Howland: So, let’s just talk about Twitter for a second, because here we have a little exemplary case, let’s say, of what artificial intelligence in a very broad sense might do–or, let’s say, the development of digital devices, etc. It conditions people to generate texts. Now, when you say the word ‘text,’ or I say the word ‘text,’ we might be thinking of the Gilgamesh epic or something; but now we’re talking about a little short thing, right? Now, the texts come along, and this is a binary technology. Right? We respond in binary ways–thumbs up, thumbs down. Right? So, we’re already being conditioned to sort of behave like machines. Right?
And, if you kind of expand that out–again, if you’re not thinking and you’re not writing, and you’re not developing your skills and language and so forth; and then you’re ceding that, right?, to these machines–are you going to lose the capacity to judge what is put before you? Will your skills of judgment kind of erode?
And, not just with regard to judgment, like, ‘Wow! this is really insightful,’ or ‘This is a good book,’ or something like that. But, judgment with regard to questions like, ‘Is this true? How should I understand these things?’ Right?
Now, that’s a whole ‘nother thing about AI. One of the things I’m very concerned with is the potential for artificial intelligence to not only surveil us and gather all kinds of information about us and so forth, but to manipulate us–very fundamentally. You know Plato’s cave image, right? You got people sitting on the bottom of the cave and they’re looking at shadows on the wall cast by puppeteers behind them.
Well, what’s already happened with digital technology is: We live in a bunch of caves, sometimes caves tailored to us individually. I mean, we’ve all had the experience of searching for something on the Internet or purchasing something, and then thereafter, up comes that product, right? Or different versions of it. We already know that information is gathered about individuals who are listening to certain things or reading certain articles, and the algorithm then generates more of the same. Right? Which kind of cuts us off and puts each of us in our own cave, as I’m saying. Right?
Now, what if–oh, and we also have a problem, as you know, telling deepfake videos and photographs and so forth from the real thing. We also have a problem, coincidentally, on the ideological side, of media basically propagandizing, right? So, making certain kinds of judgments and emphasizing certain facts while neglecting to mention others, and so forth.
So, in our culture, there’s an issue that’s very, very serious, which is: It’s hard to know what the truth is–even the truth about facts. I’m sure you’ve been in conversations where people will simply deny that something is a fact that you are quite convinced is a fact.
And then, you know how the conversation goes. So, they’ll say, ‘Well, where did you learn that fact?’ And, one side might say, ‘I learned it in the New York Times,’ and the other side might say, ‘That’s untrustworthy. Where did you learn your fact?’ ‘In Fox News.’ Right? ‘Well, I’m not going to listen to that.’
Now, once you get ChatGPT–which has already shown a tendency, by the way, to fictionalize–I mean, people are suing for libel because it just makes stuff up, makes up legal cases that don’t exist and so forth. It does that unintentionally, of course–it’s a product of its algorithms.
But, what if you have intentional feeding, which is designed based on your psychological profile for the sake of, let’s say, manipulating you to vote for a certain candidate or to take a certain action? Where will the truth lie? How will we know? What if somebody says, ‘Here’s a video, here’s Vladimir Putin conceding,’ or something like that?
Russ Roberts: Yeah, well, I’m worried about all that. I think we’ve already got that problem without ChatGPT, and ChatGPT, I think just accelerates it.
And that has deeply disturbing implications for democracy–an institution that is not very healthy right now anyway, in my view.
I want to come back to the educational point you made. So, I’m going to reframe your argument and see if you agree with this reframing. I know you’re a reader of Homer, and I forget what episode it came up in, but we were talking about, I think, the Odyssey on the program at some point. And, a listener wrote me and said, ‘Well, I don’t need to read it because I’ve read the comic book and I know what happens.’ And, I think it was a serious comment. I’m not 100% sure. But, we could–at some level, I would call it bad or poor education–we could test students on whether they read Homer by asking them, ‘What’s the name of the one-eyed monster in the cave that Odysseus and his men encounter? a) Cyclops; b) Shrek; c) King Kong; d) whatever.’ Answer: Cyclops. So, one level of reading a great work would be: Did you do it? And, in doing it, did you understand it at the most cursory, narrative level?
So, that’s not education. I could tell you what’s in a comic book; I could tell you the plot of the Odyssey. That is not the value of reading. You don’t read the Odyssey to find out what happened. You might be pulled along, but that’s not why we assign it here at Shalem College, and it’s not why, I’m sure, students at UATX will read it. You read it to learn something about the human experience and yourself. And, that learning takes place through the arduous task of wrestling with the text.
ChatGPT–you can feed Homer into it and it’ll summarize it beautifully, by the way, do a really good job. It’s really good at that.
And, I think my worry would be that if education stays on its current course–which is largely about spitting back and parroting–then ChatGPT will be a very powerful way to look smart, and the skills of reading, which are quite challenging, will not be acquired.
That’s the atrophy–a different version of the atrophy argument.
And, we will lose the ability to read–to read thoughtfully, to read carefully, to read skeptically.
In theory, that should change how we teach, and that could be good. We should change how we teach both high school and college, in my view.
So, are there any grounds for optimism here–along the lines of a recent episode we did with Ian Leslie–that this will force us to change? It’s true that ChatGPT is pretty good at entertaining humans, but that’s because we have become somewhat machine-like. Once we are forced to deal with this, maybe we’ll become more human. [More to come, 33:11]