350 – Existential Risk
Guest: Toby Ord | Host: Markus Voelter | Shownoter: Jochen Spalding
Humanity has always been exposed to catastrophic risks that might endanger its continued existence; asteroid impacts or supervolcano eruptions come to mind. But since roughly the invention of the atomic bomb, humanity has been able to wipe itself out, adding self-made existential risks to the natural ones. Oxford philosopher Toby Ord argues in his book The Precipice that these anthropogenic risks are much more likely than the natural ones. In this episode we explore this idea with him and discuss what we should do about this realization.
Toby Ord | Nick Bostrom (OT 275 - Technikfolgenabschätzung) | Anthropogenic hazard (Toby Ord - The Precipice: Existential Risk and the Future of Humanity | John Leslie - The End of the World: The Science and Ethics of Human Extinction | George Orwell - 1984) | Extinction event (OT 184 – Societal Complexity and Collapse) | Joseph Tainter | Derek Parfit (Derek Parfit - Reasons and Persons) | Game theory | Chicxulub impactor | Global catastrophic risk | Gamma-ray burst | Likelihood-ratio test | List of nuclear close calls | Nuclear winter
Dissolution of the Soviet Union | Strategic Arms Reduction Treaty | Free-rider problem | Positive feedback loop | Methane Feedbacks to the Global Climate System in a Warmer World | Runaway greenhouse effect | The moist greenhouse limit | Increased insolation threshold for runaway greenhouse processes on Earth-like planets | Biotechnology | Pandemic | Killer rabbit virus on the loose | 1978 smallpox outbreak in the United Kingdom | Biosafety level | Faulty pipe blamed for UK foot and mouth outbreak | Tokyo subway sarin attack | Bioterrorism | Synthetic genomics | CRISPR | Gene drive | Artificial intelligence
Why do we care? 01:21:48
Utilitarianism | Special relativity | Nihilism | International organizations | The Chronicles of Narnia (C. S. Lewis - The Chronicles of Narnia)
Epilogue with Markus and Nora 02:02:18
Long-time listener here.
I almost never comment, but I wanted to share my opinion on your discussion, since you asked.
I remember you did something similar before with the episode on populism.
As you mentioned in the episode, the problem is that you don't give your guest a chance to respond. I think this is problematic, among other reasons, because you present your opinions without confronting them with expert knowledge, or at least an informed opinion. This is potentially irresponsible, because sometimes there are answers, or at least more developed ideas, about the topic in philosophy, political science, sociology, psychology, or other social-science subjects that are perhaps a bit outside your personal expertise.
I highly value the work you do, and omega-tau wouldn't be as good if it weren't for your input during the interviews, but in my opinion this debrief thing is not the best idea.
Maybe it could work if you came up with some thoughts ahead of the interview and gave the guest a chance to prepare; but on the other hand, I understand you were aiming for something different.
Please keep up the good work; I look forward to every new episode.
> because you present your opinions without confronting them with expert knowledge or at least informed opinion
I don't quite agree. I mean, I read the book, prepared the episode, and talked to the guy for two hours. I am obviously not an expert, but I am also not uninformed.
> Maybe if you come up with some thoughts ahead of the interview and give the guest a chance to prepare,
Well, yes, of course! That's what I always do! I send them a whole list of thoughts! The debrief developed from a conversation with Nora, who, I think, had interesting additional thoughts after I had recorded.
Thanks for a very nice podcast.
I liked both the interview and the debrief. I agree that, framed badly, it can come across as "so far you heard our guest; now we explain to you, as a podcast summary, how it really works, now that he/she can't argue against it".
That said, I think that omega tau listeners who are capable of listening to hours of deep content are also able to distinguish the two parts and properly calibrate the relative expertise, even if you had framed it like that (which in my opinion you didn't).
For me, a good podcast leaves me thinking about the topic, or discussing it with others. The debrief felt like just such a discussion. It won't fit every topic, but once in a while it could be nice.
Thinking about Nora's polar bear: What if we humans are in fact preventing the polar bear from becoming more intelligent than we are? Maybe in the big picture more advanced polar bears would be better for the universe, but unfortunately for them we became smart enough to keep them down? I have no idea where this argument leads me, but there is no clear proof that humans are the final state. … Suppose humans go extinct because we kill each other; then polar bears might evolve far enough to understand the notes we left behind and do smarter things. … I'm babbling unstructured thoughts, but maybe somebody will find conclusions …
Markus & Nora
Enjoyed your podcast on Existential Risks. In your discussions, you touched upon a number of topics that I've thought about with respect to AI, but the topics didn't form a coherent group, so I'm writing to summarize them in a small space.
I officially became a physicist in 1965, but even before that, my technical interests led to accusations/observations from friends in the humanities that I was "too logical". But when I encountered FORTRAN II, I had unambiguous disproof of my logical abilities. I may have desired a logical life, but, according to the error messages being printed out, I was manifestly incapable of implementing one. Here was a domain where all the rules were written down and all commands were faithfully carried out, and it still took several tries to get anything to work.
Twenty years later, I spent five years exploring the relevance of Artificial Intelligence to finding and producing petroleum. So far as I can tell, the techniques in use back then (before the AI Winter) are no different from today's: now there's just more data, more speed, more complexity.
What I learned — way last century — was that life choices were based on emotions. I wanted to be logical only because I would be embarrassed if I chose something manifestly wrong. What humans decide is the “right” choice is an emotional evaluation. And emotions are not currently well-understood. Some researchers say they are deep in the physiology of the organism, some say they are learned in society. But we don’t understand how humans decide that a choice is “good”.
But we can't transfer to a 'Bot the ability to judge what is appropriate when we don't know how we make the appropriate choice ourselves. I think it is as likely that a 'Bot will choose to rule the world as that it will choose to reach out and flip its circuit breaker to OFF.
Dear Markus and Nora,
I enjoy (almost) all of your podcasts greatly, and particularly the more philosophical episodes such as this one. But since you asked, I did not get much out of your debrief. Which is fine, it’s simple enough to skip.
You made good arguments in the debrief that were worth discussing. Particularly Nora’s viewpoints were interesting, as they didn’t feature in the interview itself. However, I feel that my main interest in your podcasts is the deep knowledge and wide perspective of people with years of experience in a particular field, as explored by Markus’ and Nora’s “naive”† questions. That said, there might be great value in having an off-the-cuff debriefing with both Markus and Nora and the interviewee after all of you had time to listen to the episode.
And just to reiterate, the debrief is simple enough to skip, so keep doing it if you like. Just because it’s not for me doesn’t mean it’s without merit.
† “naive”, as in much more informed than whatever I would come up with, but remaining understandable to simple engineers such as myself ;-).
I liked the main talk with Toby and I liked the debriefing talk with Nora. It doesn't interfere; it's like a complement, a real debriefing. Exactly what I need after the main interview. :-) I guess a debriefing is not necessary after every episode, but after this one, I would not have wanted to miss this conversation.
Thanks Lothar. It was certainly intended as a compliment. Although some people really didn’t like it (I got a few private emails that were very clear about that fact :-))
I personally define 'value' as low entropy. Information: low entropy; art: low entropy; animal life: extremely low entropy; computer chips: extremely low entropy. Any loss of low entropy is thus 'a waste', be it a life, a lost diamond ring, a broken computer, or a mess made on the floor. Energy cannot be destroyed, but low entropy can; as such, it is the real thing to strive for, to save, and to cherish. Humans are capable of making very low-entropy things, both with our organic capabilities and with our machines, objects, information stores, and art. We should thus strive to protect our existence, but not at the expense of other equally low-entropy things such as forests and non-human animal life, as that would not be a net entropy reduction (an effective transfer of the sun's low-entropy supply).
I doubt many people think like me on this next point, but for me, the first crime of a bomb is that it is such a waste of a bomb! Irrespective of who or what it destroys (and in so doing vastly raises the entropy of the target), just the bomb blowing itself up is a waste in itself. Especially with modern ultra-high-tech weapons: many people don't have a computer, power, or a high-tech tracking system, and here we go and blow one up… for the purpose of blowing other things up. I understand that we cannot trust others to play the dilemma 'the most nice way', so we must protect ourselves… but that doesn't mean we ever need to use them.
I also personally see AI as a form of evolution. I see no reason why AI and silicon-based machines cannot live in harmony with animal and plant life. There are easy mines to get to; why destroy the world to get to them? I thus do not see their potentially taking over as a bad thing. They are our children: just children of the mind and hands, not of the womb.
First of all, I decided to comment on this episode, although it is not my style to do so. I have followed your podcast closely for about 120 episodes. The fact that I am, as I read, not the first with a similar introduction should give a hint of the outstanding importance (or whatever) of this episode.
Secondly, I would like to support more episodes like this, because it's about the definitely unsolved/unsolvable questions. Don't get me wrong, I really enjoy excursions into "Knicksteifigkeit" (buckling stiffness), for example in the context of cranes or wings. That's often complex, but understood, and it doesn't affect everyday life (decisions), at least for most people.
The third thing is, I want to encourage you to discuss explicitly human-science topics, particularly BECAUSE your approach is from a natural-science point of view, which is in most ways more unbiased. Anyway, I am contradicting the first commenter here, mainly because your guest had all the time to think about every argumentative carelessness, especially as a philosopher. I don't see why only specialists should talk about such topics, especially as long as rational arguments are the driving force.
Fourth, I am convinced that there is really no argument for the outstanding value of human existence (no matter how you twist and turn it). As mentioned, relativity was NOT invented, but described. That could give a hint. But after a closer look, even that doesn't count.
I rather enjoyed the episode. But let me add two arguments.
First, a kind of Pascal’s wager: Either human life has a purpose, or it has not. In the second case, whatever we do does not matter. But in the first case, human survival is necessary to achieve that purpose.
The second, somewhat orthogonal argument: At least at the moment, humans are the best bet of (Earth) life to make it past the 500-million-year mark, when the sun will start cooking Earth. Sure, polar bears might advance enough to take over the job of building an ark to Proxima Centauri. But the chances for humans to make it are better (since we start from higher up the food chain).
As always, well worth listening to. I enjoyed your post-pod debrief. Keep up the excellent work.
I liked this episode very much, but at the same time I want the podcast to remain in the hard sciences. I enjoyed the outro, but I found it interesting that there was such a hard stop at religion, while there was much talk about what is of higher meaning "to the universe" and debate between human life and polar bear life. I see this as a greyer transition; "why are we here" is in many ways a religious question.
On that front, I find there is no true way to answer "what is of more value to the universe" without a hypothesis on why we're here. For me, it is interesting to think about the why-are-we-here question by first ditching the linearity of time. If time is finite in our universe, and there are many indications that it is, then for there to be a reason for us to exist, we must think about it in the context of the entire history of the universe. To me the answer is that we exist to learn: ever-expanding organized information, almost a reversal of entropy. In this context, it is most valuable for humanity to survive any impending existential threat, because we are close to becoming multi-planetary, not just with our species but with our information.
Love the podcast, keep up the great work. Keep the English episodes; there really are Americans who like to think for themselves and learn things, and we like the English option. (Plus, you'll reach the whole globe most effectively that way.)
Life… death… the universe…
Very nice episode.
I would be critical of the debriefing. Having a second session to clarify a few dangling ends that were forgotten in the actual interview (as has been done in other episodes) is good, but a discussion about an (absent) interview partner should not become the norm, in my opinion. Maybe instead do an interview with the two of you asking questions from different angles or viewpoints.