I admit I'm bored by the Mayan calendar talk. I've never been a Christian, so I never took all the Left Behind books seriously. I spent maybe five minutes with one from the series in my hands, leafing through it, gawking at the prose like a rubbernecker at the site of a particularly gruesome highway accident. (Supposedly the series has sold 35 million copies.) The economic disaster stuff is, to quote Wordsworth slightly out of context, too much with me. I lack the ironic distance. I have plenty of ironic distance when I read/listen to/watch Glenn Beck, Rush Limbaugh, Rick Santorum, Michele Bachmann, etc...but they're ultimately beastly boring to me. I check 'em out to see how bad "conservative" discourse can get. (O! True conservatives where art thou?)
Ya wanna know what really gets me going when I want that adrenal buzz of worry, fear, or paranoia? The idea that we'll get Artificial Intelligence going to super-human levels and it'll really do us some harm. I don't know where my Ironic Distance is - or if I have one at all - when I contemplate this kind of stuff, and I think that's why it "works" for me.
Bostrom, in the long article I linked to above, talks about "anthropogenic" Existential Risks. It turns out Bostrom is one of the more interesting thinkers on Existential Risk out there. Later in the 21st century, it's possible we could be wiped out by malignantly intentional attacks or "simple" human error involving hair-raisingly advanced technologies: molecular nanotechnology, synthetic biology, or nuclear weapons. (How dull global warming, ocean acidification, and the collapse of ecosystems seem now in the face of such sexy existential megadeath killers!)
We could reach a stasis in which there's a permanent upper class that keeps everyone under control using surveillance and psychologically manipulative pharmaceuticals. A "global totalitarian dystopia," a "permanently stable tyranny." Designer pathogens are rapidly becoming a very real possibility. You can find the details of the 1918 flu virus online now; with sequencing and lab techniques rapidly becoming cheaper and easier to use...I can feel my heart rate speed up already.
And oh yes: non-anthropogenic risks are out there, too: supervolcanoes, asteroid impacts, and something I'd never heard of until Bostrom put me straight: "vacuum decay in space."
I'm reminded of Sir Martin Rees's book from 2003, Our Final Hour. Sir Martin estimates a 50/50 chance humanity makes it to 2100. Here's Rees talking for 6 minutes on this delightful theme, from last November:
In the Bostrom article, most of the Experts assessing existential risk seem slightly more optimistic than Sir Martin: they put the odds we'll not make it to 2100 somewhere around 10%-20%. Here's a Silicon Valley rich guy - Rick Schwall - who's worried about existential risk, just to add more people to our party...
Anyway, I thought it slightly ironic that a Transhumanist is arguing that we should make existential risk a priority over present human suffering. But Bostrom has very rational reasons: if we care about people in space - in other words, on the other side of the globe - simply because they're humans like us, then we ought to consider humans in time as well as space. They're still human, even if they haven't been born yet.
Bostrom argues that at least one of the following three propositions is very likely true:
1.) Almost all, or all, civilizations like ours go extinct before reaching technological maturity. Technological maturity is defined as something like Ray Kurzweil's or Hans Moravec's wettest dreams: Artificial Intelligence carried to a profound degree, the death problem solved, the end of economic scarcity, etc. This proposition has been written alternately thus: no civilization will reach a level of technological maturity at which it can simulate a reality so detailed that the simulation could be mistaken for reality.
2.) Almost all technologically mature civilizations (on any possible planet) lose interest in creating ancestor simulations: computer simulations so dizzyingly complex and nuanced that the simulated minds would be conscious, or believe they're conscious. Beings so profoundly adept at technological manipulation simply don't bother simulating realities for their ancestors. And if they DO run such simulations, they don't run many, for varying reasons: wanting to use the computational power for other things, ethical objections to keeping simulated beings captive, etc.
3.) We're almost certainly living in a simulation. Now. You and me and everyone we know, our entire history and world, possibly.
One of these three is almost certainly true, and Bostrom has a preponderance of math (that I can't follow) to argue that Number 3 is most likely: we're living in a simulated reality. Does this allay your anxieties about the future? Recently we read that Ten Billion Earth-Like Planets May Exist in Our Galaxy. That's just our crummy little galaxy. There are billions of other galaxies. And then there's the multiverse: an infinite number of universes.
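For what it's worth, the core of that math is a single counting formula. If I'm remembering Bostrom's 2003 paper right (check the original before quoting me), it looks something like this:

```latex
% Bostrom's fraction of simulated observers (reproduced from memory):
%   f_p     = fraction of human-level civilizations that reach a posthuman stage
%   \bar{N} = average number of ancestor-simulations run by such a civilization
%   \bar{H} = average number of individuals who lived before that stage
f_{\mathrm{sim}} \;=\; \frac{f_p \,\bar{N}\, \bar{H}}{f_p \,\bar{N}\, \bar{H} + \bar{H}}
                 \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

If the product of f_p and N-bar is large - that is, if mature civilizations exist and run lots of simulations - the fraction goes to 1, which is the whole trick.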
But I'm getting ahead of myself. We already have The Sims and many other technologies that suggest we ourselves are moving (at exponentially accelerating speed, due to Moore's Law and other factors) into a world in which we simulate other realities and beings. Can we make them take their world for granted and assume that "Of course we're conscious entities!"?
Now: these advanced beings who may be simulating Us could be "here," because we don't know we're simulated. Or they could be Elsewhere. Does it matter at this point? And what's that goo on your computer screen? Did I just blow your mind?
Bostrom says it's possible that what you're in now is the "basement level of physical reality." But if any technologically mature civilization that hasn't succumbed to Existential Risk (I should've been capitalizing that term from the get-go: much more dramatic and befitting its own idea) DOES do what we're already doing now in this reality, then it will probably run millions of simulations, because it can. The sheer number of simulated worlds would then vastly outnumber the non-simulated ones, so it's probable that we're living in a simulation. Here's a funny popular take on Bostrom's idea, from the NYT.
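If the counting move still feels slippery, here's a toy sketch of it in Python. Every number in it is invented purely for illustration; the argument only needs "simulated minds vastly outnumber non-simulated ones."

```python
# A toy back-of-the-envelope version of the counting argument.
# All parameters are made-up illustrations, not Bostrom's figures.

def simulated_fraction(base_civilizations, sims_per_civilization,
                       minds_per_world):
    """Fraction of all minds that live inside a simulation, assuming
    every world (basement-level or simulated) hosts the same number
    of minds."""
    real_minds = base_civilizations * minds_per_world
    simulated_minds = (base_civilizations * sims_per_civilization
                       * minds_per_world)
    return simulated_minds / (real_minds + simulated_minds)

# Even ONE mature civilization running a million ancestor-simulations
# makes a randomly chosen mind overwhelmingly likely to be simulated:
print(simulated_fraction(1, 1_000_000, 10**10))  # ~0.999999
```

Crank the number of simulations per civilization up or down and watch the fraction chase 1 or 0; that's the whole argument in fourteen lines.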
Okay, okay: I've seen some good guerrilla ontology in my day, but this one's way up there. If you've heard the Bostrom argument and either say maybe, yes we're living in a simulated reality and what of it?, or I see his points but refute him thus, or whatever, then you're seeing the Matrix for what it really is. Errr...right? Anyway, I guess if it's most likely (aside from certain named biases Bostrom is quite frank about) that we're a simulation, why worry about anything? Oh yea: that whole discomfort and death thing. No matter how unreal we and our world "is," or "are," it still seems too real to wish away. "Reality is that which, when you stop believing in it, doesn't go away," to paraphrase Philip K. Dick, who knew a thing or three about simulations and irreality. (See below)
Still: we must admit that even if we're a very detailed computer simulation, it makes for wonderful novels and films that constitute a simulation inside a simulation...ummm...eh?
Idea: try spending a week constantly reminding yourself that your world and everything in it is being played out in some unimaginably complex hypermetasupercomputer program. Note if and how your perception of "reality" changes after seven days, and report your findings in the comments section. (I've done this exercise: it tended to sharpen my sense of irony, and threw things into bold relief whenever I noted myself or someone else taking a relatively trivial thing a tad too seriously, but your results may differ wildly.)
Oh: another reason to worry about Existential Risk: we might not survive long enough to reach technological maturation as a species and run simulations of other beings...even though we might be a simulation ourselves. Uhh...I think? (Wha?)
I mentioned and linked to the idea that this is a very old notion, even older than Plato's Cave parable. It's like Chuang-Tzu saying he woke up remembering his dream that he was a butterfly, then questioning whether he was really a butterfly dreaming he was a man. Or the counterculture intellectual Alan Watts, who, when asked, "What is life like after death?", quickly responded, "How do you know you're not dead already?"
Of course, this notion of fine-grained simulated realities that humans take "for real" is a favorite among science fiction writers. Philip K. Dick is the foremost example, using this idea as far back as the mid-late 1950s. See this list of books that use simulated realities and note how often PKD shows up.
[For readers of Wilson and Shea's 805-page Illuminatus! Trilogy: consider this theme of simulation in light of the Writer of that book.]
al-Ghazali, the sufi intellectual and mystic, argued against Aristotle, who held that the world was eternal. Ghazali thought time was bounded, and he developed an argument for many possible worlds - but this one is the best one, because Allah is so great. I'm simplifying here, but variations of these ideas appear not only in sufi but in Hindu and Buddhist cosmology. Other sufis were on board with many worlds, too...
The Many-Worlds (Everett-Wheeler-Graham) interpretation of quantum mechanics appeared in Unistat in the 1950s.
I can go on and on with this stuff, because it's difficult to find good LSD these days, and I've found I can simulate a trip by reading wiggy academic books on logic, sufi theology, quantum mechanics, and philosophy like Nick Bostrom's. I don't trust that dude selling magic mushrooms in the park; give me my dog-eared copy of Berger and Luckmann's The Social Construction of Reality or Nick Herbert's Quantum Reality instead. Just as good, and if things get too weirded-out, I can go for a walk.
I guess what I really wanted to do was to attempt to reassure you: no matter how Bad Things Get, you can always tell yourself, "It's not a big deal. I'm just playing out in some simulation run by some Being from a civilization that evaded its moment of Existential Risk." If it works for you, you can thank me later, no matter how fake I am.
Hey, that's what the OG is here for!
I watched about 12 "We're living in a simulation" dealios on YouTube. Some of them are pretty good, but are marred by stentorian voice-over, too-intrusive Carmina Burana-like music, or other little annoyances. I have chosen two videos in case anyone...well, in case.
Two good-looking philosophy students rap about Bostrom's idea. I liked the down-to-earthiness of them.
Morgan Freeman narrates a Science Channel episode that uses "God" as the Simulator. The Caltech scientist never mentions Bostrom; I don't know to what extent he's influenced by him, if at all. I liked this because of its illustration of our own ability to simulate virtual experiences, which eventually blur into "reality," or seem to: