# Disruption, Democracy & the Global Order – Yuval Noah Harari at the University of Cambridge

## Transcribed with noScribe v. 0.4.4

S07: Good evening, everyone. It's my privilege to welcome you all and to welcome Professor Yuval Noah Harari to speak to us this evening. This large audience shows that his fame has preceded him. Yuval's a globally respected public intellectual. He's acclaimed for his clarity and balance in addressing key themes of our time. This visit is hosted by the University's Centre for the Study of Existential Risk, CSER, which was founded a decade ago by some of us concerned about global threats, and co-hosted by King's College. Yuval was born in 1976 in Israel. He got his PhD from Oxford in 2002, and he subsequently joined the history faculty of the Hebrew University of Jerusalem. His book Sapiens: A Brief History of Humankind came out in English in 2014. It sold 25 million copies. It was followed by another book, Homo Deus: A Brief History of Tomorrow, and these and later books, along with many articles, lectures, and videos, have established Yuval as a real global guru, as it were. He offers balanced and insightful perspectives into the turbulence and pressures confronting our world. A world that's home to 8 billion people, increasingly demanding of energy resources and threatening dangerous climate change and mass extinctions. And a world where powerful technologies, nuclear, bio, cyber, and now AI, have created an interconnected society where disasters can amplify and spread globally. Yuval's lecture today is entitled Disruption, Democracy, and the Global Order. He'll speak for just about 20 minutes. He'll then be joined for 20 minutes' discussion by the two other people now on the stage: Professor Matthew Connelly, Director of the Centre for the Study of Existential Risk, and Dr. Gillian Tett, celebrated as an FT journalist, but now also the Provost of King's College. They'll then be joined for further discussion on the other chairs by some students from Cambridge's Existential Risks Initiative. And after these panel discussions, there should be some time for questions from the audience until we close at seven o'clock. But for the 99% of people here who won't get a chance to ask a question, there's some good news. And the good news is that today won't be your last chance. And that's because I can announce that Yuval is becoming a Distinguished Research Fellow at the Centre for the Study of Existential Risk. And so this is going to be one of a number of visits. And he is the first holder of a new category of eminent visitor fellowships at the University's new Institute for Technology and Humanity. So let me now hand over to Yuval to give his lecture. Thank you.

S01: So it's a pleasure and an honor to be here, to join CSER, and to talk about existential risks with you. Humanity, of course, is facing a lot of problems these days. But really, there are three existential threats that put the very survival of the human species at risk. And these are ecological collapse, technological disruption by technologies like AI, and global war. We know that two of these existential threats are no longer just future scenarios. They are already a present reality unfolding around us. Our ecological system is already collapsing, with thousands of species going extinct every year. And we might be just a few years away from crossing critical ecological thresholds that might put human civilization, too, at risk of extinction.
As for the threat of AI, whereas 10 years ago it was still a science-fiction scenario that interested only a very small community of experts, it is now already upending our economy, our culture, and our politics. Within a few more years, AI could escape our control and either enslave or annihilate us. And one of the most important things to realize about AI is the rapid pace of its development. The AI that we are familiar with today, in February 2024, is still at a very, very early stage of its evolution. Organic life took billions of years to get from amoebas to dinosaurs. AI is at present at its amoeba stage. But AI isn't an organic entity, and it doesn't evolve through the slow process of organic evolution. Digital evolution is millions of times faster than organic evolution. So the AI amoebas of today may take just a couple of decades to get to T-Rex stage. If ChatGPT is an amoeba, what do you think the AI T-Rex will look like?

But in this brief talk, I want to focus on the third existential threat that we are facing, global war, because in many ways it is the key to dealing with the other two. If humanity unites, we definitely have the resources and the wisdom to deal both with the ecological crisis and with the AI revolution. It is within our power. But if humanity is torn apart by war, that would probably doom us. Given the weapons that we now possess, a third world war could directly destroy human civilization, of course. But even if we avoid blowing ourselves to pieces, a third world war would destroy us indirectly, because it would focus our attention on fighting each other and would prevent us from dealing effectively with the ecological crisis and with the AI revolution. And the bad news is that, like the ecological crisis and like the AI crisis, World War III might also have already started, and we just haven't realized it yet. Perhaps in 40 or 50 years, if any humans are still around, everybody will know that World War III started on the 24th of February 2022, the day that Russia invaded Ukraine, just as today everybody knows that World War II started on the 1st of September 1939, the day Germany invaded Poland. The thing about history is that the meaning of historical events is often revealed only in hindsight. In September 1939, or even as late as May 1941, people in New York, in Stalingrad, in Hiroshima, and in numerous other cities across the world did not know that they were already living in the midst of the Second World War. Of course, they knew that there was a war in Europe, and they knew that there were other conflicts in East Asia and elsewhere, but it wasn't obvious that all these regional wars were actually parts of a single world war. Maybe we are already right now in an analogous situation. I arrived just a few days ago from Israel, where we are in the midst of a brutal and bitter war with Hamas, which might escalate at any moment into a much, much bigger regional conflict. And yet even most Israelis and Palestinians don't necessarily make the connection between the war that we are involved in and the war in Ukraine, or the rising tensions in East Asia, in South America, and elsewhere. Perhaps years in the future, it will be obvious to everyone that events in Gaza, in Yemen, in Ukraine, in Guyana, in Taiwan, and elsewhere were closely linked. Here at CSER, scholars focus on the study of existential risk. And there are two existential questions that need to be asked in the context of World War III.
First, if this war has indeed already erupted, is there still a chance of saving humankind? Is it possible to prevent ecological collapse or an AI catastrophe even in the midst of global conflict? And at least to my mind as a historian, the answer is very obvious: no, absolutely not. If we are in the midst of a third world war, it means we simply cannot invest the necessary resources or secure the necessary global cooperation to prevent ecological collapse or an AI apocalypse. The second question is: if World War III has already begun, or is about to begin, can we still stop it before it is too late? Now, focusing on one specific war, the war in Ukraine, the answer again seems obvious: it is possible. As long as Putin thinks that he can win the war in Ukraine militarily, the war will continue and expand. The only way to really secure peace there is if Europe and the United States make such a strong commitment to Ukraine that Russia despairs of military victory. Only then can serious negotiations begin about a peace deal, about a compromise that leads to peace. Now, this is certainly something that Europe and the USA can achieve. Russia's GDP is smaller than that of Italy and about the same as that of the Netherlands plus Belgium. The combined GDP of Europe and the USA is more than 20 times bigger than that of Russia. So they definitely have the resources to provide Ukraine with enough support. And really, Europe and the USA don't even need to use their own money. They can take the 300 billion US dollars in frozen Russian assets and give them to Ukraine. If they want to, they have ample resources to make sure that Ukraine can defend itself and that Russia cannot win this war. But when we broaden our horizons from a specific conflict to look at the world as a whole, things are much, much more complicated. The big question is whether Putin's decision to invade Ukraine is an exceptional aberration that can be contained by firm action, or simply a universal human norm, so that even if in this specific case the outbreak can be contained, other such outbreaks are bound to happen and multiply. And scholars, of course, have been arguing about this for generations. So-called realist thinkers argue that the only reality is power and that an all-out competition for power is the inescapable condition of the international system. The world is a jungle where the strong prey upon the weak, and those who refuse to acknowledge the law of the jungle will soon fall prey to some ruthless predator. So, according to this logic, even if Putin is stopped, World War III is only a question of time. There are reasons to think, however, that realists have a selective view of reality and of jungles. Real jungles, unlike the ones in our imagination, are actually full of cooperation, symbiosis, and altruism displayed by countless species of animals, plants, fungi, and even bacteria. If organisms in the rainforests abandoned all cooperation in favor of an all-out competition for hegemony, the rainforests and all their inhabitants would quickly die. And that's the real law of the jungle. And when we observe human history, what we see is that the record of war is variable, not constant. Some periods were exceptionally violent, but others were relatively peaceful. The clearest pattern that we observe in the long-term history of humanity is not the constancy of conflict, but rather the increasing scale of cooperation.
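[Editor's note: a rough sanity check of the GDP comparison above. The figures below are approximate 2023 nominal GDPs in trillions of US dollars, supplied editorially for illustration; they are not from the talk, and exact values vary by source and year.]

```python
# Approximate 2023 nominal GDP in trillions of US dollars
# (editorial estimates, not figures from the talk).
gdp = {
    "Russia": 2.0,
    "Italy": 2.2,
    "Netherlands": 1.1,
    "Belgium": 0.6,
    "EU": 18.0,
    "USA": 27.0,
}

# Russia's GDP is smaller than Italy's...
assert gdp["Russia"] < gdp["Italy"]

# ...and roughly the Netherlands plus Belgium combined...
print(gdp["Netherlands"] + gdp["Belgium"])  # ~1.7, close to Russia's ~2.0

# ...while Europe and the USA together are roughly 20x Russia.
print((gdp["EU"] + gdp["USA"]) / gdp["Russia"])  # ~22.5
```

On these rough numbers the claimed ratio holds, with the "more than 20 times" figure depending on whether "Europe" is read as the EU alone or more broadly.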
I don't have much time, so let me mention just one piece of evidence of particular importance, regarding state budgets. For most of recorded history, the military was the number one item on the budget of every empire and kingdom and republic. From the Roman Empire to the British Empire, military expenditures consumed more than 50% of the state budget. During World War I, for instance, military expenditures in the UK averaged around 50% of the budget, and during World War II they reached about 70% of the budget. In contrast, in the early 21st century, the worldwide average of government expenditure on the military has been only around 7% of the budget, while the average expenditure on healthcare has been 10%. For many people around the world today, the fact that the healthcare budget is bigger than the military budget is unremarkable, but it was the result of a major change in human behavior, one which seemed impossible to most previous generations. The decline of war in the early 21st century didn't result from a divine miracle or from some change in the laws of nature. It resulted from humans changing our own laws and beliefs and institutions and making better choices. Unfortunately, the fact that this change stemmed from human choice also means that it is reversible almost at any moment. Different human decisions, like Putin's decision to invade Ukraine, could result in a new era of war worse than anything we have seen before. In Russia, military expenditure is now again about 30% of the state budget, and if Putin isn't stopped, this might become the case in more and more countries in Europe and elsewhere around the world. Now, the decisions that leaders make are in turn shaped by their understanding of history. National interest is never the result of purely rational calculations. It is always the moral of historical and mythological narratives that we tell ourselves. Which means that just as overly optimistic views of history can be dangerous illusions, overly pessimistic views of history can become destructive, self-fulfilling prophecies. If people believe that humanity is trapped in an unforgiving dog-eat-dog world, that no profound change is possible in this state of affairs, and that the relative peace of previous decades was simply an illusion, then the only choice remaining is whether to play the part of predator or prey. And given such a choice, most people would prefer to be predators. Unfortunately, we should remind ourselves that in the era of AI, the alpha predator in a dog-eat-dog world is most likely to be AI. Not any human, and not any human country. Now, I cannot predict what decisions people will actually make in the coming years, but as a historian, I don't believe in historical determinism. I don't think that either war or peace is inevitable. At least for a few more years, before AI potentially takes over, war and peace are still a human choice. And we don't have to choose war, because among humans, wars are almost never fought over objective needs like food or territory. They are almost always fought over historical and mythological narratives that we invent and believe. To come back to the war that is currently devastating my region of the world: Israelis and Palestinians don't really fight over food or territory. There is enough food between the Mediterranean and the Jordan River to feed everyone, and there is enough land to build houses and schools and hospitals for everyone. People fight over the stories in their imagination.
For example, both Jews and Muslims believe that one particular rock in Jerusalem, the Holy Rock under the Dome of the Rock, is among the most sacred objects in the world. And each nation believes that God gave it the right to own this Holy Rock. So let me end by quoting the Palestinian philosopher Sari Nusseibeh, who wrote a few years ago that Jews and Muslims, acting on religious beliefs and backed up by nuclear capabilities, are poised to engage in history's worst-ever massacre of human beings over a rock. Thank you.

S05: Thank you very much indeed for those comments, which were a very challenging call to arms and not exactly cheering. Joining me in this debate is Professor Matthew Connelly, who is the Director of the Centre for the Study of Existential Risk, which Martin co-founded and which Yuval is now joining. Matthew has been looking at these issues for many, many years. I should point out his recent book, which is terrific: The Declassification Engine: What History Reveals About America's Top Secrets, which, along with Yuval's sweeping analyses of history, provides a very bracing view of where we are. I'd like to start by asking you, Yuval: as a journalist who has spent much of my career dealing with stories and narratives, I'm fascinated by the emphasis you put on the narratives in our heads, and the fact that at the end of the day, what we're fighting about today is so often about the narratives, not about, or not just about, actual tangible human needs. Do you see anything on the world stage today, or in the Middle East, from a narrative perspective that could cheer you up?

S01: That's a good question. I have to think about it.

S05: I mean, it's only Tuesday. We don't want to get too depressed on a Tuesday. What do you think about it, Matthew? Do you see anything on the world stage that is cheering?

S04: Well, let me first say I didn't think I was going to debate Yuval. If I were to, I think I would just surrender now. But I do have a few thoughts, one in particular about how we could unpack the idea, which I think is a very powerful idea, that for all the concern we have about climate change and about AI, those problems get even harder if we imagine ourselves trying to grapple with them in the context of a global conflict. But if I were to push back a little: let's say this war expanded, and the US and China were drawn into conflict. War is terrible for the environment; you can imagine all kinds of ways in which military conflict would create tremendous environmental problems, and there are many examples through history to illustrate that possibility. But on the other hand, it would probably have, at least initially, a devastating effect on the world economy. I was just hearing today an estimate that even a blockade of Taiwan could lead to the loss of some 10 percent of GDP worldwide. That would be a lot worse than COVID. But the last time we saw real progress on climate change was during COVID, right? Because, at least initially, it had such an effect on the world economy. So we could start to unpack these things and see ways in which, yes, there are cascading effects, and that's the best way to understand the whole constellation of risks that we face. But sometimes these effects don't all run in the same direction.
On the other hand, I do think that if we're to see global conflict, it will probably accelerate the development of AI for military purposes. I think we're already seeing that. So I wonder if it's possible we could start to unpack these things, and maybe we're going to find they won't all flow in the same direction. Is that possible?

S01: Yeah. I think one of the reasons, the good reasons, to have a centre like CSER is because these existential threats are not in isolated silos of their own. They constantly interact, and either amplify or perhaps sometimes run counter to each other. Coming back to your question, then: yes, I think that we have, of course, a battle of narratives in the world, and we have had it for a long time. In the 20th century, there were three main narratives about the history of the world that shaped human thinking and that still shape our thinking to a large extent. Two of them see conflict as inevitable, just the engine of history. But the third doesn't. I mean, the three big stories that were told in the 20th century are fascism, communism, and liberalism. Now, fascism and communism, or Marxism more generally: what's common to them is that they think about history in terms of conflict. Fascism argues that history is a conflict between nations, or between races, and that this is the engine of history. It is inevitable; there is no way to stop it. And it will only stop, eventually, if one nation conquers the whole world. And Marxism has a very similar way of thinking: that history is an inevitable clash, a conflict, only not between nations; that is just a smokescreen. The real conflict is between classes, between oppressors and oppressed. The whole of history is just oppressors and oppressed. But the conflict is inevitable, and again, peace can be achieved only at some end-of-time moment when there is just one class remaining. But the third way of thinking, the liberal story about the world, says that history is not essentially a conflict. The story can be about cooperation: humans are, of course, divided into nations and classes, but they also have common experiences. People of all nations and people of all classes have certain common experiences, because of which they have certain common interests and values, or at least they could have. If they opened their eyes, they would realize that they have some common interests and values. And this could be a basis for a history of cooperation and not a history of conflict. I think we are still, even though it has changed a lot since the previous incarnations of these stories in the 20th century, we are still to a large extent in this debate over whether history is inevitably about conflict, or whether it is possible to build a global order which is based on shared experiences and values and cooperation.

S05: Well, being cynical, the easiest way to build a shared order is to have something that everyone agrees to hate and unify against. So short of an invasion of Martians, whom the entire world could basically band together against, do you see anything that is actually going to shift this debate towards more focus on cooperation?

S01: The existential threats. This is the good side of having an existential threat: if people realize the danger we are facing, then these are the Martians. So it's clearest in the case of AI, which really is an alien invasion.
I mean, for me, the letters AI don't stand for artificial intelligence. They stand for alien intelligence. Calling it artificial misses something: "artificial" still gives us the impression that it's under our control in some way, because it's an artifact that we created. And it's true that we created it, but the big danger is that it is escaping our control. Calling it alien intelligence is much more accurate, because it really thinks, processes information, and makes decisions in a radically alien way.

S05: Right. Well, calling it alien intelligence would certainly make a better Hollywood movie.

S01: Yes. And also a better kind of target to unite against. Again, not to unite in order to kill all the AIs or ban AI; this is unrealistic and also undesirable. I mean, there is huge positive potential in the technology as well. But realizing that if we go down that path, then not the Russians, not the Americans, not the Chinese, nobody will win. The AI will win. And a similar story could be told about climate change. Looking back, this was the story told successfully about nuclear war. I personally think that without nuclear weapons, a third world war in the 1950s or 60s would almost surely have erupted. It was only the realization of mutually assured destruction that prevented a third world war. We might still pay in the future for having created nuclear weapons. But as of 2024, actually, it's been one of the best inventions of humankind.

S05: Well, that's the irony, isn't it? As someone who grew up reading things like When the Wind Blows and, you know, learning about people crouching under their desks to survive a nuclear explosion, which certainly wouldn't have helped at all. In some ways, I think the problem today is that an entire generation grew up feeling scared and then thought that danger had evaporated. And there's a kind of assumption that the current dangers will somehow evaporate again in the future as well. But, Matt, do you think that either calling it the alien invasion, or trying to find ways to get people to coalesce around climate change, is realistic?

S04: I love the alien invasion. Alien intelligence.

S05: Sorry, alien intelligence. Though AI could be an alien invasion as well.

S04: But there's one problem. It reminds me, by the way, of a moment in the late history of the Cold War, when Ronald Reagan asked Mikhail Gorbachev: if there were an alien invasion, surely we would band together to fight them off? And Gorbachev had to agree, the way Reagan told the story anyway. That was the beginning of the end of the Cold War. In this case, though, there are some humans who seem to be welcoming the invaders. In some cases, anyway, they want to bring it on. And to me, and I wonder for you too as a historian, does it remind you of the accelerationists? In Homo Deus you call this dataism, which is another way of thinking about it, a kind of religion. And to me, it does remind me of some millenarian movements of the past. And what do they have in common? Well, it's this idea that the next world will be better than this one, and there will be an unveiling, an apocalypse. And in some cases, anyway, the feeling is that we shouldn't be speciesists, that actually we should welcome this evolution. So I'm curious: does this remind you in any way of the kind of millenarian movements we've seen before?

S01: Yeah, absolutely.
I mean, you see a lot of religious influence on the thinking about AI. And it's also, you know, part of this revulsion at the imperfect world that we are living in, and the hope for the creation of a perfect and better world. And this is probably the most dangerous fantasy that humans ever came up with: the idea of the ability to create a perfect world. Not only because it causes you to feel very dissatisfied with the actual world around us, but because it's a blank check to do the most horrendous things in the world. Because if you then look at the bottom line: yes, we did all these horrible things, but then we created the perfect world, so in the end it was worth it. And we've learned this lesson again and again throughout history. I hope we don't have to learn it again now. Fantasies of a perfect world are the most dangerous idea ever.

S05: Absolutely. Well, I think that's very true. Hollywood, take note again. By the way, I see we've got lots of questions coming in on Slido. Do go on and vote as well. Someone called Richard Moulin is currently in the lead with his question. So anyone who wants to get another question up there, go on Slido, put your questions in, and vote on the question you would most like to have pitched. My question, though, that I'd really like to ask you is: why now? Why have we gone from the so-called end of history, to quote Francis Fukuyama, when things seemed to be getting better and better, when it seemed quite rational to try to price a hundred-year bond on the assumption that life was going to be sufficiently stable that you could actually price it that far in advance? Why now?

S01: The short answer is that I don't know. And again, I belong to that school of historians who don't think that you can explain historical events in a kind of causal way.

S05: Yeah, that you can find the causes of everything. That's going to be a great disappointment to everyone who's studying history and has an exam to sit in a few weeks' time.

S01: If you ask me why fascism rose in the 1930s, the short answer is that we still don't know. I mean, we can explain the specific causes of what drove voters in Germany in the early 1930s to vote for Hitler and so forth. But if you look at this wave that was much broader than Germany: why then? Why in the 30s? And looking back, it was obviously a mistake; it didn't bring about the hoped-for results. I think we can describe it. I don't think that we have an answer to why it happened. And this is largely because so much of history, and this goes back a little to what I talked about, so much of history is caused not by objective material conditions. It is caused by stories in our imagination. And why one story defeats another is very often down to chance. Again, the stories themselves, I think, are a powerful engine of history. And they are extremely unpredictable and sometimes irrational.

S05: But just to push back on that for a second, and I'd like to ask Matthew the question as well; in a second we're going to bring on the students. Don't you think one tangible, concrete explanation is that from the 1950s until 1989 we had this fairly finely balanced power dynamic, the Cold War between the US, or the West, and the USSR? And there was, as you say, this fear of nuclear annihilation that kept things more or less in balance for a very long time.
And what's happened since then is a breakdown of the Cold War order, the rise of many centers, what someone like Ian Bremmer calls not so much the G7 or the G2, but the G0, because no one's in charge and everyone's fighting to create some kind of new power balance and dynamic. If you're being optimistic, this is just an adjustment phase: we've gone from the Cold War era into something else, but we're basically in limbo land at the moment, and that's why it's so bad. Could that be an explanation?

S01: Yeah, absolutely. But the thing is, if you have a certain order and then you dismantle it and you don't create an alternative order, then you get chaos. And that's not a good thing. Now, you can say, OK, in the long term we'll build a better order. But the question is, do we have a long term? Previously in history, you could always say, OK, there'll be a few wars, there'll be some catastrophes, but humanity will muddle through. This time, we are not so sure. If we make the wrong choices, we will not muddle through. And, you know, lots of people try to compare the AI revolution, let's say, to the Industrial Revolution, and say: in the end, we learned how to build industrial societies which function relatively well. But this took something like 150 years and involved terrible experiments in how not to build industrial societies. I mean, Nazism was actually an experiment in how to build an industrial society. Communism, Soviet communism, was the same thing: a huge, terrible experiment in how to build an industrial society. European imperialism was an experiment in building industrial societies. Imperialists often had this idea that the only viable industrial power is an empire, because an industrial power needs control of raw materials and markets; so unless you build an empire, you will not have a sustainable industrial society. Looking back, we now know this was not just nonsense, this was terrible nonsense. But people didn't know that in the 19th century. Now, in the 21st century, as we learn how to use the power of AI and so forth, if we have to go through another cycle of such failed experiments, I'm not sure we'll be able to muddle through.

S04: You know, Gillian, to answer your question, there are a lot of historians who would push back at the idea that this Cold War period was a time of relative stability and peace. And they would point out that the United States is full of refugees from the many wars that the United States waged or fueled in that period. That said, if you take a step back and you look just statistically, it's true that the numbers of people who've died, especially if you look at it in terms of their percentage share of the world's population, have been going down. But it reminds me a little bit of political scientists who will tell you: well, I can explain 97% of the variance, looking at all these cases of hundreds of wars over the centuries and so on. And then I say: well, OK, but what if you can explain all the wars except for World War I, World War II, or World War III? That's where the historians come in, right?
So if there's time, and I think maybe we are going to move to the students, it would be good to also talk a little bit about whether the dynamics you're seeing now in terms of great-power rivalries, and especially the concern about the rise of this power or another, could help explain what we may come to see in the years ahead.

S05: Well, we've actually got questions coming in. In fact, Richard's question has now been supplanted by Sam's question, wherever Sam is, which is actually on this very issue. Keep voting, keep asking your questions. So we'll come back to that in a moment when we take the audience questions. Let's have the students who are linked to the Cambridge Existential Risks Initiative, CERI I think it is, come up on stage. We have three of them. We've got Olivia Benoit, who has an MPhil in Economic and Social History from Cambridge. She's Director of CERI, or CHERI, however you pronounce it. Shoshana Dadee, who is an undergraduate scholar at Emma, studying History and Politics. And Giovanni Massini, who's a PhD student working on the Cambrian Explosion, the rapid appearance of animal-dominated ecosystems around half a billion years ago. And they're coming up here because one of the things that Yuval is doing these days, together with his husband, Itzik Yahav, is to launch something called Sapienship Lab, which is a media hub where people can debate these issues and share ideas about how not to destroy ourselves, which, of course, is a good thing right now. I feel like we need that, given the tenor so far. But let me start with you, Shoshana. I think you're going to be asking the first question to Yuval, about what you make of what he said and what you think are the key issues we should be thinking about now.

S02: It wasn't directly related to what you said.

S05: No, no, that's absolutely fine.

S02: I only got your speech after I made my question. But generally, it's all about the rise of corporate power and corporate personhood. As multinational firms gain or garner legal protections and have automated systems, what will become of individual agency within democracy, and of democratic consent worldwide, over the coming decade?

S01: That's, I think, a very, very important question that we don't pay enough attention to. At present, at least in some countries like the US, corporations can become legal persons. Now, there is a lot of criticism of that, but we still comfort ourselves with the thought that even if corporations are legal persons, even if Microsoft or McDonald's is a legal person, they actually need humans to run them. So it's just make-believe. But this may not be the case for long. With AI, we can imagine a non-human legal person which is run by the decisions of an AI, not a human being. And if you can legally incorporate AIs as legal persons, then we might soon be outnumbered by vast numbers of non-human persons. I mean, technically, it's much easier to multiply AIs than humans. So we could reach a point when you have millions, even billions, of these new legal persons. And if they have all kinds of rights, and they can donate money to politicians, and they can use this leverage to gain more rights, maybe even the right to vote eventually; well, we were talking earlier about all kinds of low-probability scenarios of an apocalypse, and this is another one: just through this almost technical loophole, there is now a legal path for AIs to take over. They don't need to create a new framework for that.
S02: And so would you describe that as a rising power? Is that the way in which a new power could rise? I mean, people have said that MNCs, multinational corporations, are a rising power.

S05: A way of saying: are we going to have Google and Microsoft run us all? Are they the new country?

S01: So corporations, I mean, we have seen before, as in the 18th century with the British East India Company and the Dutch East India Company, how corporations can become major players on the international stage, even more powerful than they are today. So this in itself is certainly a possibility, but it's not a new possibility in history. What is, I think, a new possibility is the combination of the legal personhood of corporations with AI, creating something that we have never seen before.

S05: Thank you. That's fascinating, particularly having spent my career writing for the Financial Times; it's all about companies. So there you go. Giovanni, you've got a question too.

S00: Thank you very much. My question dovetails a bit with what you've been asked about narratives, but refers specifically to ecological collapse. So not just climate change, but also the biodiversity crisis. And it is: do we need a new narrative to tackle these issues? Is there some kind of epic just around the corner that we have missed? Or do we need to tackle this in a more piecemeal fashion when it comes to the narratives we deploy to tackle the ecological crisis?

S01: We definitely need better narratives, because humans are not really programmed to think about this particular type of danger. I mean, evolutionarily, going back to the Stone Age, we had to worry a lot more about the tribe next door than about our impact on the environment. No Stone Age tribe faced an apocalypse because the campfires changed the climate or whatever. So this is why even today, when we read the news, we are far more attracted to a story about tribal conflict, or the threat from a neighboring tribe, than we are to another scientific study about the potential impact of climate change. So I think scientists need a lot of help from artists in this realm. There are definitely not enough TV series, not enough blockbuster movies, about the ecological crisis. We need a lot more of that. In terms of what kind of story to tell, ultimately I think it's about getting down from our minds to our bodies. Because in our minds, cultures create very different stories for each human group, which set us apart. But our bodies are still basically the same everywhere. So if our identity were less connected to the inventions of our minds and more embedded in our bodies, I think it would be an important step, not just for tackling the climate crisis, but for tackling all the major problems that we face.

S00: And do you think extending that sense of embodiment to other organisms as well, stretching the boundaries beyond, let's say, just our species, and thinking about how we share this planet and a long evolutionary history with other beings, could help foster some kind of empathy or compassion? Or do you think that's a dead end?

S01: No, absolutely. I think one of the advantages of embodiment is that you realize that basically we are animals, and we depend on the same ecological conditions as the other animals. And we have so much in common, certainly with other apes and other mammals, but also with other organisms.
Whereas when we are stuck just in the stories of the mind, it's very easy to believe that we are completely different, not just from the people on the other side of the hill, but actually from all these animals. You know, to take religious mythology, for example: in many religions, not all of them, but certainly in the monotheistic religions, as far as I know, only humans have souls. Other animals don't have them. If the main story you tell yourself about your life is about your soul, about what will happen to my soul after I die and so forth, then this creates a complete disconnect with the chimpanzees and the cows and the pigs and everybody else, because they don't have a soul. So they are not really part of the same story as me. They may be part of the decoration, but they are definitely not the main actors.

S05: Well, thank you. We've also got a question from Nandini Chiralka, whom I didn't introduce earlier. She is the Executive Director of the Existential Risk Alliance, which looks at how to provide early-stage researchers and entrepreneurs with essential skills, and she's doing a Master's in Engineering at Cambridge. Nandini, give us your question.

S06: Yes. So, to actually link my question to what we have been speaking about so far: what do you think is the most powerful narrative to take us from the February 2024 AI that you mentioned in your opening speech to an AI system that no longer poses existential threats to humanity? What is the most powerful narrative from today to that world?

S01: I think that to tackle the AI crisis, or the AI revolution, aside from what we've all talked about already, we should move the discussion from fixed regulations to the creation of living institutions. A lot of people have the idea that, okay, AI is coming, there are all these dangers; we need to map the different dangers, and then we need to pass regulations that will limit the danger and protect us. And of course it's very difficult, because how do you pass the same regulations through all the parliaments of the world, and so forth. But I think the main problem is that we cannot anticipate the type of threats and dangers that AI will create. As I said, the AI that we know right now, we think it's something huge and big, but it's actually a tiny amoeba; this is the AI baby. We haven't seen anything yet. Anything that we can imagine right now in 2024, on the basis of our acquaintance over just the last few years with the first AIs, is not going to protect us from the AIs of 10 or 20 or 30 years in the future. And the pace of change is just accelerating. So we can't regulate in advance. What we need is to build regulatory institutions that are living institutions, that can understand and react to things as they develop. And of course, this demands a lot of human capital. Such an institution would need the best minds. We will not be able to deal with the crisis if the best human minds are in the private companies and the regulatory institutions don't have anything comparable. Similarly, we will need the best technology there as well. And we will need public trust and public support. I mean, and we know this now in a painful way: if you build the best institution in the world, but you don't know how to communicate with the public and you lose the public trust, it's not helpful. So we also need to put an emphasis on keeping the public on board.
And so I think these institutions will also need artists and people who are experts in communicating with people.

S05: Even journalists.

S01: Even journalists, yeah. Exactly.

S05: Well, one of the reasons why we've had this debate and brought the students on stage is because you do have this Sapienship platform, which you're putting out on Instagram and social media to try to get more debate among young people, and not-so-young people, around the world, to actually engage in this. And we've got a huge number of questions pouring in from the audience right now. We've also got democracy in action, in the sense of a really big battle going on over which is the most popular question. So I'm going to jump in straight away. How many bots are voting in this? I don't know whether the people voting are genuine or not. Some actually have names; some are simply anonymous. The one that is most popular at the moment is from an anonymous person, who may be a bot, but who knows. I'm going to read it out and ask you what you think. "I feel like a lot of your work is concerned with futurology, which seems to be purely speculative. What is the utility of this discipline when it is so difficult to discern the future?" That's a polite way of saying: what is the point of being so depressed?

S01: Well, first of all, I define myself as a historian, not as a futurologist. Certainly all my formal training is as a historian, a medievalist actually. But I understand history not as the study of the past; rather, history is the study of change, of how things change. And this is what it has to offer humanity. I mean, who cares about things that happened a thousand years ago? All the people who lived back then are dead. They don't care what we say about them. The point is what it can teach us about our lives today. I don't think that history can predict the future. As I said, I can't even explain the past. If you ask me why fascism rose in the 1930s, I don't know. I can describe how it happened; I don't know why. So certainly I cannot predict, as a kind of cause and effect, that from the moment we are at right now, it will inevitably go there, and then there. It's impossible. What I try to do as a historian is use my knowledge of the past to draw a map of different scenarios for the future, especially the scenarios which are less intuitive, especially with something like AI. So not only the scenarios that people in Silicon Valley think about, but, given what I know about the Middle Ages, when you take AI and the Middle Ages together, what do you get? And I draw some of these scenarios as well. And the point, again, is not to predict the future. It's to prevent the worst-case scenarios. If this were prophecy, then I would say it's pointless: why forecast a terrible future that you cannot prevent? It really just makes people depressed. So the idea is, yes, I tend to focus on the negative scenarios, simply as a way to balance the often very optimistic scenarios that come out of places like Silicon Valley. But my intention is not to say this is inevitable, we're all going to die, there is nothing we can do about it. It's: OK, let's focus on that and see how we can prevent it from becoming a reality.

S05: Right. I'll bet on that. Absolutely. Please do. And by the way, it's fascinating watching this battle of questions. So if you want to ask a question, don't give up; there's still time, because, as I say, the order and rankings keep changing.
If you want to vote on the question you want asked, do go on to Slido. And this is, as I say, democracy in action.

S04: So I was fortunate to read a lot of Yuval's early work. He actually wrote several academic monographs before Sapiens. He's written on Renaissance military memoirs, and The Ultimate Experience, which is about how people came to think that war was unique in human experience and gave them a unique understanding they wouldn't have otherwise. He also wrote a book about special military operations. So he's actually written a lot of history, even before the most recent books. And in the most recent books, I mean, you can look through them, as I have, because I actually wanted to know: what is Yuval's worst-case scenario? What would an AI insurrection really be like? You're not going to find it, actually. Much of what he's writing is a history of the present, trying to understand how we got to where we are today. One last thing to say: I'm blanking on his name, but a great historian once said that a historian has to have the future in their bones. You have to have some understanding of the arc of history to have any understanding of what even matters, right? And I think one thing that historians try to do is just to explain: how did we get here? And if you want to have any sense about where we're going, you have to at least know how we got to where we are now.

S05: Absolutely. Well, listen, another question, literally from Timothy, has just jumped to the top-ranked question and beaten all the others. It's been shooting up the last few minutes. Where is he? Where's Timothy? We can embarrass Timothy. And Timothy's question is: do you agree that the current, quote, "laws of the jungle" were mostly written by Western countries? How could we rewrite these laws without wars or conflicts?

S01: If you mean the laws of the international system, which are very different from the laws of the jungle; again, what very often happens in history is that people create laws and then project them onto the jungle, or onto nature, and claim that these laws were always there. Like in the economic system: capitalism is just natural, we didn't invent these laws, they were always there. And it's the same with the international system. So certainly many of the laws of the international system as we know it today have a Western origin and were imposed on much of the world during the era of imperialism and during the Cold War. The challenge now is, if these laws are not optimal, and they are not, how do you improve them midway? How do you fix a ship while it's out on the ocean, not docked in a safe port? I mean, you can't just ditch all the laws. One of the best laws, I think, that was inherited from previous generations is the sanctity of borders. And we know that many of the borders in the world, certainly in the Middle East, certainly in Africa, and in many other places, were drawn by a few British and French and Belgian and German imperialists with very little knowledge of, and without any care for, the people who actually lived there. But these are the borders that we have inherited. Now, if we say, OK, we don't care about all these borders anymore, let's start afresh, you will have a huge wave of wars all over the world.
So one of the foundational agreements during the era of decolonization was that no matter how unjust a border is, we are going to accept the sanctity of the colonial borders, because we understand that if we renounce them, the result will be a terrible wave of violence in much of the world. Now, it doesn't mean that we need to freeze the world in this state. But if we want to do better, of course, going back in history and starting again is impossible. We are already mid-journey. So whatever improvement is necessary has to be done while still keeping the peace of the world as much as possible.

S05: Right. I'm going to ask a question, actually, going down the list, from Richard Moulin, who gets a special jump of the queue because he gave his surname as well, so we definitely know he's not a bot. I don't know whether Richard Moulin is in the hall. If so, you might want to wave your hand. Ah, yes, he actually exists. He is not alien intelligence. So Richard's question is: given that many of these risks, particularly great-power war, are often directed and influenced more by governments than by private individuals, what can people in the audience here, the students, researchers, members of the public, people sitting on the stage, actually do to help mitigate them? Are we completely powerless?

S01: So I would like also to hear your views on that. But personally, I think the best thing that most people can do is join an organization or an institution. The superpower of Homo sapiens is the ability to cooperate in large numbers. If you want to make a change in the world, doing so as an isolated activist is almost impossible. But 50 people cooperating in an organization or institution can make a much, much bigger change than 500 isolated individuals or activists. So that would be one piece of advice. And the other would be: choose one thing. Don't try to take the whole world on your shoulders; that's impossible. Choose whatever is closest to your heart, or where you have special expertise; there you can make the biggest change. And trust other people to do their share. And if they don't, then what can we do? But that's...

S05: Would any of you like to comment on what you think you can do? Do you think you can do anything to stop apocalypse?

S06: I think I have a model that if you care about something deeply enough, you can make other people care. And not enough people have that: I care about this deeply enough to change it, to begin with. And a mentality of something like, if not me, then who? It's not as widespread as I always expected it to be. But that caring, that I just care about this, that I will go to lengths to solve this issue, solves many problems in the world. Yeah.

S00: I think I share some of your appreciation, at least, for the importance of ideas. There are some valuable ideas waiting to be found and discussed, sitting just below the surface. And sometimes it's the case that a single individual can bring them up and then share them with others. And then I also think there are some ideas and assumptions which are bad ideas and bad assumptions, and which can be exposed as empty more easily than people realize. Because sometimes, again, the truth is just below the surface, and you just need to question some basic assumptions.
And I'm thinking of one, for example: the idea that the current model of AI development is necessarily going to bring about immense economic and societal benefits, as put forward by some. Well, let's discuss. And perhaps we can put on the brakes and consider that perhaps the risks outweigh the possible benefits at the moment, just to throw an idea around. But more generally, yes, let's recruit the power of ideas.

S03: I guess I'll just say, as a person who has also studied history, I think looking at this from a historical lens is really interesting. If I think: had I lived 700 years ago, the year is 1300, and I'm where my ancestors are from, so let's say Northern Italy, would I have the power to change anything as an individual? Well, I'm 24; I probably would have a bunch of kids. The long and short of it is no, I really wouldn't have any way to make a difference. And I think that there is something unique about the time that we live in now. Even though we do frequently feel this sense of helplessness and powerlessness, in some ways we have more agency than humans have ever had. And I think that that's really special and deserves serious attention and consideration.

S05: I just thought about how Greta Thunberg's voice has resonated on social media and gone far. Never before in history have we had the ability to go viral, for good and bad, in that way.

S01: I think you put your finger on something very, very important and disturbing: that yes, we as individuals also have more power than ever before. Yet one of the most common narratives that people gravitate towards today in the world is the narrative of victimhood. And you see even some of the most powerful groups in the world, and some of the most powerful nations in the world, tell themselves their story as a story of eternal victimhood. And the danger of that is that when you feel like a victim, you don't take responsibility for anything. And acknowledging that you have power can be scary, because then it makes you responsible. If the world is going towards apocalypse and I don't have any power, that's not my fault. But if I have power and things go down, then it is my fault. So I think it's very important to do what you say and to realise how much power even individuals have today. We should clap that.

S05: Reasons to be cheerful. If I may, we have another great question that ties into this, which is from anonymous. So I hope this is not a bot; if you recognise this question, you may want to wave your hand and identify yourself. "What films or books are this generation's version of When the Wind Blows? Can they help us?" And for those of you who are under the age of 25: When the Wind Blows was something that anyone over the age of 50, like me, grew up with. It's a story about nuclear holocaust, and it was very, very powerful in capturing the imagination of people back in the 70s, 80s, and 90s. So what is the current version of When the Wind Blows for today's generation? Is it things like Contagion, the movie about pandemics that came out just before COVID? Are there other examples?

S01: We have a lot of movies about pandemics, but if I think more about the AI danger, then that movie still needs to be made. So there is a huge opportunity out there to create that movie for AI. There are, of course, a lot of AI apocalypse movies, but I don't think that any of them have succeeded in doing what these Cold War nuclear-war movies did.
I mean, if you want to make a change in the world, making the AI version of that would be very, very important.

S04: I asked him a similar question earlier, and he said that actually we are living in the golden age of television, so it may not be a movie. And in fact, there's the episode from Black Mirror, right? Nosedive, I think that's what it was called, right? Yeah, Nosedive. And it's like the most chilling story, a parable really, about the world of social media. And unfortunately, it just feels not like science fiction at all. It feels all too real.

S01: I think it was aired in 2016, but it was just so spot on. I mean, looking back, it's amazing that it was so accurate.

S05: Yes, no, absolutely. We've got just time for a few more questions. A question from Sam. I don't know if Sam's in the audience, but if you are, thank you for your question. And he says: welcome to Cambridge. World War II is taught in schools around the world. What might be taught in schools in 50 years' time about the various conflicts today? Maybe you can start with the Middle East conflict. I have no idea. Tell us where that's going to end.

S01: I don't know if there will be schools in 50 years. Not just because there might not be humans, but even if there are humans, you know, schools were not always there. The schools that we know are a creation of the Industrial Revolution. They follow the prototype of the factory, with knowledge as a mass-produced commodity and the kids passing through a kind of factory line in batches. And even the architectural structure of the school, of having these cubicles with 30 kids in each one, and every hour a teacher comes in, one teaches history, then he goes out, and then somebody comes in and teaches mathematics; all this setup really comes out of the Industrial Revolution. I don't think there is any reason to assume that it will necessarily still be there in 2050, or 50 years from now. If there are humans around, they will still be learning history. But what will they tell about the present moment, about the present Middle Eastern conflict or any other conflict? We don't know. It depends on the decisions we now make; what kind of world is created will also define what kind of stories people will tell in hindsight.

S05: Do you think we'll still have universities in 50 years' time?

S01: Some version of them, yes. But, again, what kind exactly? You know, the university as we know it is a product of the Middle Ages. And it has survived much longer, and a lot more upheavals and crises, than the modern school. But still, AI especially could pose a huge challenge to the university, because it has really never had to deal with a technology for creating ideas. I mean, even if you look just at the history of information technologies, leaving aside all other technologies, bombs and whatever: the university successfully weathered the invention of the printing press and the telegraph and the radio and television. But AI is different, because the printing press, the telegraph, the radio, they could not create new ideas. They could only copy and broadcast the ideas created in the human mind. AI is the first technology that can create completely new ideas, including very alien ideas. It can do scientific research in ways which the human brain can't. It can deal with vast amounts of data in a way that we can't.
So this is bound to make a huge impact, not only on how we teach kids, but also on how we do research. And where it will be in 50 years is really anybody's guess.

S05: Matt, are you worried about being put out of a job by a bot?

S04: Not yet. But I do want to make a prediction, because when I think about how we might look back 50 years from now, I think about how all of us now look back at historical figures of the 19th century or the mid-20th century. So one prediction I want to make: I think it's very unlikely that, looking back, our children are going to say, wow, they had it right, they were completely correct in all their moral and ethical assumptions. Just think of it: Abraham Lincoln as a young man was a terrible racist, certainly by today's standards. Mahatma Gandhi was a terrible misogynist; he had completely revolting views on women. So do we think that we're better than Lincoln and Gandhi? I think we have to have some humility when we disagree with each other, and we have to be humble about our own time. And universities are a great place to do that. So yeah, I think they'll be around. I hope so.

S05: Well, I think we all hope so. And they've been around for 600 years. So who knows? But I think this will have to be the last question. And it's a really interesting question. It's got a lot of votes, sadly from Anonymous, but they may want to wave their hand in a moment. Most of the questions, I'd say, do come from Anonymous. So either the bots have already taken over, or a lot of you are feeling shy. The question is this. AI safety and ethics are focused on aligning AI systems with human interests, while the concerns of non-human animals are rarely considered. What can we do to increase moral consideration for all sentient life in studies of existential risk? How do we worry about the earthworms as well?

S01: So I think we need to place at the centre of the discussion the issue of consciousness. There is a huge, huge confusion about terminology, also in the world of AI, between consciousness and intelligence. But these are two different things. Intelligence is the ability to solve problems and reach goals. Consciousness is the ability to feel things, like pain or pleasure or love or hate. Now, we tend to confuse the two because in humans, and also in other mammals and birds and other animals, consciousness and intelligence go together. We solve problems through our feelings. Computers, at least until now, have been very different. In some areas they are already more intelligent than us, but they have zero consciousness. As far as we know, they cannot feel pain or pleasure or love or hate at all. Now, nobody knows what will happen in the future. There are models and theories which argue that eventually AIs will also become conscious or sentient, that they will start feeling things, that they will be able to feel pain or love or hate. There are other models and theories that say no: they could become far more intelligent than us and still have zero consciousness. It may be that in the evolution of life, if you think about the long trajectory of billions of years of evolution, the evolution of intelligence in animals like us, like chimpanzees, dogs, pigs and so forth, passed through the development of consciousness. But this is not a universal law of the evolution of intelligence.
Maybe there are alternative roads leading to superintelligence that don't pass through consciousness at all, that go another way, an alien way. And we could reach a point where they are superintelligent and still have zero consciousness. And the danger then is that they could destroy not just human civilization but the very light of consciousness itself, simply reconfiguring the entire ecological system to their needs, needs for which they require no consciousness. They function in a completely different way. They could even spread from Earth to other planets, to other galaxies. They would fill the universe with intelligence, but it would be a completely dark universe with zero consciousness. Again, high intelligence capable of building spaceships that travel at the speed of light and so on, but with zero feeling. No entity feels any pain or pleasure, any love or hate. Now, for me this is a terrible scenario, the worst-case scenario of all: a universe full of intelligence and completely devoid of consciousness. Because I think that of the two, consciousness is far more important, far more valuable. And it's a good reminder that, at least as of today, we share consciousness with many other animals, but not with the computers, not with the AIs. So we are still on the same team as the worms and the apes, not on the same team as the computers.

S05: Right. Well, that's a very thought-provoking point to end with. Having heard you and having heard Matthew, I take away three key points myself. One is that we are living in an era of profound existential risk, not just because we have conflicts, not just because we have climate change, but because all of those dangers are leveraged in an extraordinary way. It's an era of virality in every possible sense, to the point where these dangers could engulf us all and, as you say, wipe out consciousness. That's point one. Point two, though, is that you're not ready to throw in the towel. You believe we can still fight back by collaborating, and by each of us taking up one cause we truly care about and going forth and fighting for it to the best of our abilities. We've heard about a number of causes we can all get involved with, and I think that's a very rousing message for everybody: we can actually do something, get involved and get engaged. And the third thing I take away, which I find really interesting, is the power of narrative, the power of storytelling. We've heard a lot today about your stories, your narratives, in your books. Universities are all about narratives, and another way we can all get engaged is by engaging in that narrative and trying to create a richer and perhaps more hopeful narrative, one that talks about what we can actually try to do to fight these risks. On the theme of narratives, there are going to be a lot more debates in the coming weeks. The next event from CSER is going to be at the Cambridge Festival on the 19th of March, where Nandini and others will be talking about their work, their narratives around existential risk. And you've also got the conference happening in September with CSER, which will be picking up many of these themes. At King's College, we've got a number of talks coming up in the coming weeks. We've got Kristalina Georgieva, the managing director of the IMF, who's going to be speaking on March the 14th.
She's going to be talking about the last 100 years, very much bouncing off the work John Maynard Keynes did after World War I lamenting the death of globalization, and she'll be talking about the next 100 years of globalization and Bretton Woods. We've got Al Gore, who'll be coming in May to give a sermon in the chapel, King's College Chapel, to talk about climate change and what we can be doing to fight it. And then later in May we also have a discussion, which we're just formulating right now, with the head of cyber policy at the White House National Security Council and the head of GCHQ, talking about how we can use quantum and other types of technology to fight wicked problems. So, lots more chances to get involved in debate and narrative around the risks we all face. But perhaps we can end on a hopeful note by saying thank you to the panel, thank you to Yuval, thank you to Matthew for helping to organize all this, and let's all go forth and fight to survive. Thank you.