You’ve heard a lot about Artificial Intelligence (AI) on EconTalk of late: the good, the bad, and the frightening. In this episode, host Russ Roberts welcomes one of AI’s most ardent acolytes, venture capitalist and entrepreneur Marc Andreessen, to talk about his vision of how AI will save the world. Andreessen maintains that AI will make everything better, if we let it. “Fluid intelligence,” which he describes as the ability to think and reason, has long been the exclusive domain of humanity. Andreessen sees AI as an augmentation of human intelligence, not a replacement for it.
Roberts is skeptical. Sure, he says, AI can respond to commands, but can it learn what I love?
Well, you know what all of us here at EconTalk HQ love, right? We love to hear from you. So take a moment and share your thoughts in response to any of the prompts herein. Let’s keep the conversation going.
1- How do Andreessen and Roberts each define the nature of intelligence? What’s the difference between fluid and general intelligence, according to Andreessen, and how does this apply to what AI can do for humans? To what extent do you think AI can/will develop the potential to “think like humans?”
2- What are you using AI tools like ChatGPT for, and why? How do Roberts and Andreessen see AI becoming practically useful, and what might you add? How likely are you to allow AI to become your “ultimate thought-partner,” as Andreessen describes it?
3- Roberts asks Andreessen why he believes AI is not going to run amok, despite the problems of anthropomorphizing and millenarianism. How does Andreessen answer? Why does he characterize the extreme AI skeptics as a kind of apocalyptic cult, and to what extent do you think this characterization is fair? What is the real danger of this “cult,” according to Andreessen? Again, is he fair?
4- Roberts pointedly asks Andreessen whether AI is good for us, at least in the short run. Specifically, he asks, “Do you believe that any technology that is not explicitly destructive–and by that I mean, say, a nuclear bomb or a virus–that any toy of which our lives are full of now as 2023 residents, that they’re all good?”
How does Andreessen answer this question with respect to AI, and to what extent does he convince you?
Equally interesting, Andreessen argues that nuclear power and nuclear weapons have both been net positives. How does he explain this, and again, to what extent do you agree?
To what extent should the precautionary principle govern how we confront new technology?
5- What do you think the biggest policy issues with respect to AI will be? Andreessen rightly insists that the race is on, and not every country will agree that AI poses an existential risk. How will the approaches of different countries (the two discuss Israel and China, for example) differ? How worried should we be that a “new Cold War dynamic” may emerge with respect to China?
BONUS QUESTION: Roberts says, “…someone, I hope, will put all of the EconTalk transcripts into ChatGPT and let me interview Adam Smith.” The challenge is open; a reward is available. (Yep, I used OpenAI to generate that image.)