Transcribed with noScribe vers. 0.4.4
S07: Good evening, everyone. It's my privilege to welcome you all and to welcome Professor Yuval Noah Harari to speak to us this evening. This large audience shows that his fame has preceded him. Yuval's a globally respected public intellectual. He's acclaimed for his clarity and balance in addressing key themes of our time. This visit is hosted by the University's Centre for the Study of Existential Risk, CSER, which was founded a decade ago by some of us concerned about global threats, and co-hosted by King's College. Yuval was born in 1976 in Israel. He got his PhD from Oxford in 2002 and subsequently joined the history faculty of the Hebrew University of Jerusalem. His book, Sapiens: A Brief History of Humankind, came out in English in 2014. It sold 25 million copies. It was followed by another book, Homo Deus: A Brief History of Tomorrow, and these and later books, along with many articles, lectures, and videos, have established Yuval as a real global guru, as it were. He offers balanced and insightful perspectives into the turbulence and pressures confronting our world. A world that's home to 8 billion people, increasingly demanding of energy resources and threatening dangerous climate change and mass extinctions. And a world where powerful technologies, nuclear, bio, cyber, and now AI, have created an interconnected society where disasters can amplify and spread globally. Yuval's lecture today is entitled Disruption, Democracy, and the Global Order. He'll speak for just about 20 minutes. He'll then be joined for 20 minutes' discussion by the two other people now on the stage, Professor Matthew Connelly, Director of the Centre for the Study of Existential Risk, and by Dr. Gillian Tett, celebrated as an FT journalist, but now also the Provost of King's College. They'll then be joined for further discussion on the other chairs by some students from Cambridge's Existential Risks Initiative. 
And after these panel discussions, there should be some time for questions from the audience until we close at seven o'clock. But for the 99% of people here who won't get a chance to ask a question, there's some good news. And the good news is that today won't be your last chance. And that's because I can announce that Yuval is becoming a distinguished research fellow at the Centre for the Study of Existential Risk. And so this is going to be one of a number of visits. And he is the first holder of a new category of eminent visitor fellowships at the university's new Institute for Technology and Humanity. So let me now hand over to Yuval to give his lecture. Thank you.
S01: So it's a pleasure and honor to be here and to join CSER and to talk about existential risks with you. And humanity, of course, is facing a lot of problems these days. But really, there are three existential threats that put the very survival of the human species at risk. And these are ecological collapse, technological disruption by technologies like AI, and global war. We know that two of these existential threats are no longer just future scenarios. They are already a present reality unfolding around us. Our ecological system is already collapsing, with thousands of species going extinct every year. And we might be just a few years away from crossing critical ecological thresholds that might put human civilization too at risk of extinction. As for the threat of AI, whereas 10 years ago it was still a science fiction scenario that interested only a very small community of experts, it is now already upending our economy, our culture, and our politics. Within a few more years, AI could escape our control and either enslave or annihilate us. And one of the most important things to realize about AI is the rapid pace of its development. The AI that we are familiar with today, in February 2024, is still at a very, very early stage of its evolution. Organic life took billions of years to get from amoebas to dinosaurs. AI is at present at its amoeba stage. But AI isn't an organic entity, and it doesn't evolve through the slow process of organic evolution. Digital evolution is millions of times faster than organic evolution. So the AI amoebas of today may take just a couple of decades to get to T-Rex stage. If ChatGPT is an amoeba, what do you think the AI T-Rex will look like? But in this brief talk, I want to focus on the third existential threat that we are facing, global war, because in many ways, it is the key to dealing with the other two. 
If humanity unites, we definitely have the resources and the wisdom to deal both with the ecological crisis and with the AI revolution. It is within our power. But if humanity is torn apart by war, that would probably doom us. Given the weapons that we now possess, a third world war could directly destroy human civilization, of course. But even if we avoid blowing ourselves to pieces, a third world war would destroy us indirectly, because it would focus our attention on fighting each other and would prevent us from dealing effectively with the ecological crisis and with the AI revolution. And the bad news is that like the ecological crisis and like the AI crisis, World War III might also have already started and we just haven't realized it yet. Perhaps in 40 or 50 years, if any humans are still around, everybody will know that World War III started on the 24th of February, 2022, the day that Russia invaded Ukraine, just as today, everybody knows that World War II started on the 1st of September 1939, the day Germany invaded Poland. The thing about history is that the meaning of historical events is often revealed only in hindsight. In September 1939, or even as late as May 1941, people in New York, in Stalingrad, in Hiroshima, and in numerous other cities across the world, were not sure, they did not know that they were living already in the midst of the Second World War. Of course, they knew that there was a war in Europe and they knew that there were other conflicts in East Asia and elsewhere, but it wasn't obvious that all these regional wars were actually parts of a single world war. Maybe we are already right now in an analogous situation. I've just arrived a few days ago from Israel, where we are in the midst of a brutal and bitter war with Hamas, which might escalate at any moment to a much, much bigger regional conflict. 
And yet, even most Israelis and Palestinians don't necessarily make the connection between the war that we are involved in and the war in Ukraine, or the rising tensions in East Asia, in South America, and elsewhere. Perhaps years in the future, it will be obvious to everyone that events in Gaza, in Yemen, in Ukraine, in Guyana, in Taiwan, and elsewhere were closely linked. Here at CSER, scholars focus on the study of existential risk. And there are two existential questions that need to be asked in the context of World War III. First, if this war has indeed already erupted, is there still a chance of saving humankind? Is it possible to prevent ecological collapse or an AI catastrophe, even in the midst of global conflict? And at least to my mind as a historian, the answer is very obvious. No, absolutely not. If we are in the midst of a third world war, it means we simply cannot invest the necessary resources or secure the necessary global cooperation to prevent ecological collapse or an AI apocalypse. The second question is, if World War III has already begun or is about to begin, can we still stop it before it becomes too late? Now, focusing on one specific war, the war in Ukraine, the answer again seems obvious: yes, it is possible. For instance, as long as Putin thinks that he can win the war in Ukraine militarily, the war will continue and expand. The only way to really secure peace there is if Europe and the United States make such a strong commitment to Ukraine that Russia despairs of military victory. Only then can serious negotiations about a peace deal, about a compromise that leads to peace, begin. Now, this is certainly something that Europe and the USA can achieve. Russia's GDP is smaller than that of Italy and is about the same as the Netherlands plus Belgium. The combined GDP of Europe and the USA is more than 20 times bigger than that of Russia. 
So they definitely have the resources to provide Ukraine with enough support. And really, Europe and the USA don't even need to use their own money. They can take the 300 billion US dollars in frozen Russian assets and give it to Ukraine. If they want to, they have ample resources to make sure that Ukraine can defend itself and that Russia cannot win this war. But when we broaden our horizons from a specific conflict to look at the world as a whole, things are much, much more complicated. The big question is whether Putin's decision to invade Ukraine is an exceptional aberration that can be contained by firm action, or is it simply a universal human norm, so even if in this specific case the outbreak can be contained, other such outbreaks are bound to happen and multiply. And scholars, of course, have been arguing about this for generations. So-called realist thinkers argue that the only reality is power and that an all-out competition for power is the inescapable condition of the international system. The world is a jungle where the strong prey upon the weak, and those who refuse to acknowledge the law of the jungle will soon fall prey to some ruthless predator. So, according to this logic, even if Putin is stopped, World War III is only a question of time. There are reasons to think, however, that realists have a selective view of reality and of jungles. Real jungles, unlike the ones in our imagination, are actually full of cooperation, symbiosis, and altruism displayed by countless species of animals, plants, fungi, and even bacteria. If organisms in the rainforests abandoned all cooperation in favor of an all-out competition for hegemony, the rainforests and all their inhabitants would quickly die. And that's the real law of the jungle. And when we observe human history, what we see is that the record of war is variable and not constant. Some periods were exceptionally violent, but others were relatively peaceful. 
The clearest pattern that we observe in the long-term history of humanity is not the constancy of conflict, but rather the increasing scale of cooperation. I don't have much time, so let me mention just one piece of evidence of particular importance regarding state budgets. For most of recorded history, the military was the number one item on the budget of every empire and kingdom and republic. From the Roman Empire to the British Empire, military expenditures consumed more than 50% of the state budget. During World War I, for instance, military expenditures in the UK averaged around 50% of the budget, and during World War II, it reached about 70% of the budget. In contrast, in the early 21st century, the worldwide average of government expenditure on the military has been only around 7% of the budget, while the average expenditure on healthcare has been 10%. For many people today around the world, the fact that the healthcare budget is bigger than the military budget is unremarkable, but it was the result of a major change in human behavior, one which seemed impossible to most previous generations. The decline of war in the early 21st century didn't result from a divine miracle or from some change in the laws of nature. It resulted from humans changing our own laws and beliefs and institutions and making better choices. Unfortunately, the fact that this change stemmed from human choice also means that it is reversible almost at any moment. Different human decisions, like Putin's decision to invade Ukraine, could result in a new era of war worse than anything we have seen before. In Russia, military expenditure is now again about 30% of the state budget, and if Putin isn't stopped, this might be the case in more and more countries in Europe and elsewhere around the world. Now, the decisions that leaders make are in turn shaped by their understanding of history. National interest is never the result of purely rational calculations. 
It is always the moral of historical and mythological narratives that we tell ourselves. Which means that just as overly optimistic views of history could be dangerous illusions, overly pessimistic views of history could become destructive, self-fulfilling prophecies. If people believe that humanity is trapped in an unforgiving dog-eat-dog world, that no profound change is possible in this state of affairs, and that the relative peace of previous decades was simply an illusion, then the only choice remaining is whether to play the part of predator or prey. And given such a choice, most people would prefer to be predators. Unfortunately, we should remind ourselves that in the era of AI, the alpha predator in a dog-eat-dog world is most likely to be AI, not any human, and not any human country. Now, I cannot predict what decisions people will actually make in the coming years, but as a historian, I don't believe in historical determinism. I don't think that either war or peace is inevitable. At least for a few more years before AI potentially takes over, war and peace are still a human choice. And we don't have to choose war, because among humans, wars are almost never fought over objective needs like food or territory. They are almost always fought about historical and mythological narratives that we invent and believe. To come back to the war that is currently devastating my region of the world, Israelis and Palestinians don't really fight over food or territory. There is enough food between the Mediterranean and the Jordan River to feed everyone, and there is enough land to build houses and schools and hospitals for everyone. People fight over the stories in their imagination. For example, both Jews and Muslims believe that one particular rock in Jerusalem, the Holy Rock, under the Dome of the Rock, is among the most sacred objects in the world. And each nation believes that God gave us the right to own this Holy Rock. 
So let me end by quoting the Palestinian philosopher Sari Nusseibeh, who wrote a few years ago that Jews and Muslims, acting on religious beliefs and backed up by nuclear capabilities, are poised to engage in history's worst-ever massacre of human beings over a rock. Thank you.
S05: Thank you very much indeed for those comments, which were a very challenging call to arms and not exactly cheering. Joining me in this debate is Professor Matthew Connelly, who is the director of the Centre for the Study of Existential Risk, which Martin is a co-founder of and which Yuval is now joining. Matthew has been looking at these issues for many, many years. I should point out his recent book, which is terrific, The Declassification Engine: What History Reveals About America's Top Secrets, which, along with Yuval's sweeping analyses of history, provides a very bracing view of where we are. I'd like to start by asking you, Yuval: as a journalist who has spent much of my career dealing with stories and narratives, I'm fascinated by the emphasis you put on the narratives in our heads and the fact that, at the end of the day, what we're fighting about today is so often about the narratives, not about, or not just about, actual tangible human needs. Do you see anything on the world stage today or in the Middle East from a narrative perspective that could cheer you up?
S01: That's a good question. I have to think about it.
S05: I mean, it's only Tuesday. We don't want to get too depressed on a Tuesday. What do you think about it, Matthew? Do you see anything on the world stage that is cheering you up?
S04: Well, let me just first say I didn't think I was going to debate Yuval. If I were, I think I would just surrender now. But I do have a few thoughts, one in particular about how we could unpack the idea, which I think is a very powerful idea: that for all the concern we have about climate change and about AI, those problems get even harder if we imagine trying to grapple with them in the context of a global conflict. But if I were to push back a little, let's say this war expanded and the U.S. and China were drawn into conflict. War is terrible for the environment. You can imagine all kinds of ways in which military conflict would create tremendous environmental problems, and there are many examples through history to illustrate that possibility. But on the other hand, it would probably have, at least initially, a devastating effect on the world economy. I was just hearing today an estimate that even a blockade of Taiwan could lead to the loss of some 10 percent of GDP worldwide. That would be a lot worse than COVID. But the last time we saw real progress on climate change was during COVID, because at least initially it had such an effect on the world economy. So we could start to unpack these things and see ways in which, yes, there are cascading effects, and that's the best way to understand the whole constellation of risks that we face. But sometimes these effects don't all run in the same direction. On the other hand, I do think that if we're to see global conflict, it will probably accelerate the development of AI for military purposes. I think we're already seeing that. Absolutely. Yeah. So I wonder if it's possible we could start to unpack these things, and maybe we're going to find they won't all flow in the same direction.
S01: Is that possible? Yeah, I think one of the good reasons to have a centre like CSER is because these existential threats are not in isolated silos of their own. They constantly interact and either amplify or perhaps sometimes run counter to each other. Coming back to your question, then, yes, I think that we have, of course, a battle of narratives in the world, and we have had it for a long time. In the 20th century, there were three main narratives about the history of the world that shaped human thinking and that still shape our thinking to a large extent. Two of them see conflict as inevitable, as just the engine of history. But the third doesn't. I mean, the three big stories that were told in the 20th century are fascism, communism, and liberalism. Now, what's common to fascism and communism, or Marxism more generally, is that they think about history in terms of conflict. Fascism argues that history is a conflict between nations, or between races. This is the engine of history, and it is inevitable. There is no way to stop it, and it will only stop eventually if one nation conquers the whole world. And Marxism has a very similar way of thinking: that history is an inevitable conflict, only not between nations. That is just a smokescreen. The real conflict is between classes, between oppressors and oppressed. The whole of history is just oppressors and oppressed. But the conflict is inevitable, and again, peace can be achieved only at some end-of-time moment when there is just one class remaining. But the third way of thinking, the liberal story about the world, says the world is not a conflict. 
Essentially, the story can be about cooperation. Humans are, of course, divided into nations and classes, but people of all nations and all classes also have certain common experiences, because of which they have certain common interests and values, or at least they could have. If they opened their eyes, they would realize that they have some common interests and values, and this could be a basis for a history of cooperation and not a history of conflict. I think, even though these stories have changed a lot since their previous incarnations in the 20th century, we are still to a large extent in this debate over whether history is inevitably about conflict, or whether it is possible to build a global order which is based on shared experiences and values and cooperation.
S05: Well, being cynical, the easiest way to build a shared order is to have something that everyone agrees to hate on and unify against. So short of an invasion of Martians, who the entire world could basically band together against, do you see anything that is actually going to change this debate towards more focus on cooperation? The existential threats. //S01: The existential threats.//
S01: This is kind of the good side of having an existential threat: if people realize the danger we are facing, then these are the Martians. It's clearest in the case of AI, which really is an alien invasion. I mean, for me, the letters AI don't stand for artificial intelligence. They stand for alien intelligence. Calling it artificial misses something, because artificial still gives us the impression that it's under our control in some way, because it's an artifact that we created. And it's true that we created it, but the big danger is that it is escaping our control. Calling it alien intelligence is much more accurate, because it really thinks, processes information, and makes decisions in a radically alien way.