In his bestselling book, Superintelligence, Oxford University philosopher Nick Bostrom wrote about a community of beleaguered sparrows. The birds find it difficult to survive in a hostile environment, and so they come up with the idea of adopting an owl. This owl would help them build their homes, scare away predators and help them find food. Thrilled by this wonderful solution, they decide to search for an owl egg or chick, which they could adopt and rear. Only one sparrow, the grouchy, one-eyed Scronfinkle, disagrees: He does not think having a predatory owl in their midst is a great idea, and says they should first think about how to domesticate and control it. However, the other sparrows wave his objections away, claiming that getting an owl would be a tough proposition and they could figure out the taming bit once they have one.

It took a few thousand Scronfinkles (19,000 at last count) to sound the alarm about the most powerful technology in recent times: GPT-4 and generative AI. Tech titans and luminaries — including Elon Musk, Steve Wozniak, Stuart Russell, and many others — recently wrote an open letter under the aegis of the Future of Life Institute. “Powerful AI systems,” they wrote, “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” They also quoted a recent statement by OpenAI, the company in the eye of the generative AI storm: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
That point, they said, is now. Therefore, they called on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
There is a difference between Bostrom’s guileless sparrows and this moment, writes Ross Douthat in the New York Times. The sparrows, he says, at least knew what an owl looked like and what it did, and could potentially prepare for any harm it might do. In the case of generative AI and GPT, we know neither how it does what it does nor what shape it will take in the future. Similarly, when the iPhone, Facebook or TikTok were introduced, we did not realise they could have a dark side. Even OpenAI researchers admit that they are yet to decipher exactly how GPT-4 achieves its near-miraculous results, with CEO Sam Altman disclosing, “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.” To take the bird analogy further, this is not just another canary in the coal mine: this particular canary dug the mine!
The Future of Life letter writers do have a point. If we look back at recent scientific history, we have hit the pause button on a few fundamental, potentially threatening technologies. Human cloning was one, and so was eugenics. Our fractious planet even rallied together to regulate other planet-destroying technologies — chemical and biological weapons, and nuclear energy. The latter presents an interesting analogy. Splitting the atom, much like powerful AI, has two sides: its beneficial side can bestow limitless clean energy on humankind, but it has a horrific dark side in the form of nuclear weapons. It took the Hiroshima bombing for humans to realise the magnitude of the horror, and it drove countries to come together and regulate the technology globally through the International Atomic Energy Agency and the Non-Proliferation Treaty. So far, we have prevented subsequent Hiroshimas, but there is a flip side: perhaps we were too conservative in harnessing nuclear energy, fearing its destructiveness, and thus substituted clean nuclear energy with dirty fossil fuels, hastening the climate crisis. Maybe we traded one quick, horrific ending for a long-drawn-out, equally horrific one. Detractors therefore ask what the six-month pause will achieve, even if it is enforceable. Would China, for instance, follow it? And would it delay the advent of, say, GPT-7 or 8, which could help us achieve nuclear fusion or solve global warming — big problems which humans, unaided by superintelligence, have been unable to solve?
This debate could go on interminably, but I believe that we need to start talking about the ethical minefields around generative AI with the same speed and urgency with which we are releasing newer and more powerful versions of it. In the normal course, the world’s regulators will lag far behind the developers of this fundamental and powerful new technology, so governments have to step in. Perhaps a six-month pause is not the best solution. But Musk and others are urgently warning us to pause and think about how to tame the generative AI owl before we adopt it, and the sparrows of the world would be well served to take note.
Jaspreet Bindra is the author of The Tech Whisperer, and is completing his Master’s in AI and Ethics at Cambridge University
The views expressed are personal