The Black Hole Objection to Longtermism and Consequentialism
According to consequentialism, we should act to maximize good consequences. According to longtermism, we should act to benefit the long-term future. If either is correct, then it would be morally good to destroy Earth to seed a new, life-supporting universe.
Hypothetically, it might someday become possible to generate whole new universes. Some cosmological theories, for example, hypothesize that black holes seed new universes -- universes causally disconnected from our own, each with its own era of Big-Bang-like inflation, resulting in vastly many new galaxies. Maybe our own universe is itself the product of a black hole in a prior universe. If we artificially generate a black hole of the right sort, we might create a whole new universe.
Now let's further suppose that generating such black holes is catastrophically expensive or dangerous: The only way to generate a new black-hole-seeded universe is to sacrifice Earth. Maybe to do it, we need to crash Earth into something else, or maybe the black hole needs to be large enough that it swallows us up rather than evaporating harmlessly.
So there you are, facing a choice: Flip a switch and you create a black hole that destroys Earth and births a whole new universe, or don't flip the switch and let things continue as they are.
Let's make it more concrete: You are one of the world's leading high-energy physicists. You are in charge of a very expensive project that will be shut down tomorrow and likely never repeated. You know that if tonight you launch a certain process, it will irreversibly create a universe-generating black hole that will quickly destroy Earth. The new universe will be at least the size of our own universe, with at least as many galaxies abiding by the same general laws of physics. If you don't launch the process tonight, it's likely that no one in the future ever will. A project with this potential may never be approved again before the extinction of humanity, or if it is, it will likely have safety protocols that prevent black holes.
[Image: Midjourney rendition of a new cosmos exploding out of the back of a black hole]
If you flip the switch, you kill yourself and everyone you know. You break every promise you ever made. You destroy not only all of humanity but every plant and animal on Earth, as well as the planet itself. You destroy the potential of any future biological species or AI that might replace or improve upon us. You become by far the worst mass murderer and genocidaire that history has ever known. But... a whole universe worth of intelligent life will exist that will not exist if you don't flip the switch.
Do you flip the switch?
From a simple consequentialist or longtermist perspective, the answer seems obvious. Flip the switch! Suppose you estimate that the future value of all life on, or deriving from, Earth is X. Under even conservative projections about the prevalence of intelligent life in a universe with as many galaxies and the same general laws as our own, the value of a new universe should be at least a billion times X. If we're thinking truly long term, launching the new universe seems to be by far the best choice.
Arguably, even if you think there's only a one in a million chance that a new universe will form, you ought to flip that switch. After all, here's the expected value calculation:
Flip switch: 0 + 0.000001 * 1,000,000,000X = 1,000X.
Don't flip switch: X + 0 = X.
(In each equation, the first term reflects the expected value of Earth's future given the decision and the second term reflects the expected value generated or not generated in the seeded universe.)
Almost certainly, you would simply destroy the whole planet, with no compensating good consequences. But if there's a one in a million chance that by doing so you'd create a whole new universe of massive value, the thinking goes, it's worth it!
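Here's a minimal sketch of that expected-value calculation, using the same placeholder figures as above (Earth's future worth X = 1 unit, the seeded universe worth a billion X, a one-in-a-million chance of success); the numbers are purely illustrative assumptions, not estimates.

```python
# A sketch of the expected-value comparison, with illustrative placeholder numbers.
X = 1.0                          # value of Earth's future (arbitrary unit)
V_UNIVERSE = 1_000_000_000 * X   # assumed value of the seeded universe
P_SUCCESS = 0.000001             # assumed chance the switch actually creates a universe

# Flip: Earth is destroyed (first term 0); a universe is created with probability P_SUCCESS.
ev_flip = 0 + P_SUCCESS * V_UNIVERSE

# Don't flip: Earth's future is preserved; no universe is seeded.
ev_dont_flip = X + 0

print(f"Flip switch:       {ev_flip:,.0f} X-units")      # 1,000
print(f"Don't flip switch: {ev_dont_flip:,.0f} X-units")  # 1
```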
Now I'm inclined to think that it wouldn't be morally good to completely destroy Earth to launch a new universe, and I'm even more strongly inclined to think it wouldn't be morally good to completely destroy Earth for a mere one in a million chance of launching a new universe. I suspect many (not all) of you will share these inclinations.
If so, then either the consequentialist and longtermist thinking displayed here must be mistaken, or the consequentialist or longtermist has some means of wiggling out of the black hole conclusion. Call this the Black Hole Objection.
Could the consequentialist or longtermist wiggle out by appealing to some sort of discounting of future or spatiotemporally disconnected people? Maybe. But there would have to be a lot of discounting to shift the balance of considerations, and it's in the spirit of standard consequentialism and longtermism that we shouldn't discount distant people and the future too much. Still, a non-longtermist, highly discounting consequentialist might legitimately go this route.
Could the consequentialist or longtermist wiggle out by appealing to deontological norms -- that is, ethical rules that would be violated by flipping the switch? For example, maybe you promised not to flip the switch. Also, murder is morally forbidden -- especially mass murder, genocide, and the literal destruction of the entire planet. But the core idea of consequentialism is that what justifies such norms is only their consequences. Lying and murder are generally bad because they lead to bad consequences, and when the overall consequences tilt in the other direction, one should lie (e.g., to save a friend's life) or murder (e.g., to stop Hitler). So it doesn't seem like the consequentialist can wiggle out in this way. A longtermist needn't be a consequentialist, but almost everyone agrees that consequences matter substantially. If the longtermist is committed to weighting long-term and short-term goods equally, this seems to be a case where the long-term goods would massively outweigh the short-term goods.
Could the consequentialist or longtermist wiggle out by appealing to the principle that we owe more to existing people than to future people? As Jan Narveson puts it, "We are in favor of making people happy, but neutral about making happy people" (1973, p. 80). Again, any strong application of this principle seems contrary to the general spirit of consequentialism and longtermism. The longtermist, especially, cares very much about ensuring that the future is full of happy people.
Could they wiggle out by suggesting that intelligent entities, on average, have zero or negative value, so that creating more of them is neutral or even bad? For example, maybe the normal state of things is that negative experiences outweigh positive ones, and most creatures have miserable lives not worth living. This is either a dark view on which we would be better off never having been born, or a view on which humanity somehow luckily has positive value despite the miserable condition of space aliens. The first option seems too dark (though check out Schopenhauer) and the second unjustified.
Could they wiggle out by appealing to infinite expectations? Maybe our actions now have infinite long-term expected value, through their unending echoes through the future universe, so that adding a new positive source of value is as pointless as trying to sum two infinitudes into a larger infinitude. (Infinitudes come in different cardinalities, but one generally doesn't get larger infinitudes by summing two of them.) As I've argued in an earlier post, this is more of a problem for longtermism and consequentialism than a promising solution.
Could they wiggle out by appealing to risk aversion -- that is, a preference for more certain outcomes over gambles of equal expected value? Maybe, but the principle is contentious and difficult to apply. Too strict an application of it is probably inconsistent with longtermist thinking. The long-term future is highly uncertain, and thus risk aversion seemingly justifies sacrificing it for more certain short-term goods. (As with discounting, this escape might be more available to a consequentialist than a longtermist.)
Could they wiggle out by assuming a great future for humanity? Maybe humanity will populate the universe far beyond Earth, which would substantially increase the value of X. Let's generously assume that if we do populate the universe far beyond Earth, the value of our descendants' lives equals the value of the whole universe you could create tonight by generating a black hole. Even so, given the substantial uncertainty about whether humanity will have so great a future, you should still flip the switch. Suppose you think there's a 10% chance. The expectations then become .1*X (don't flip the switch) vs. X (flip the switch), where X now stands for the value of that great future. Only if you think it more likely that humanity achieves that great future than that the black hole generates some other species or set of species of comparable value would it make sense to refrain from flipping the switch.
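A minimal sketch of that comparison, again with purely illustrative numbers (the great future worth X = 1 unit, a 10% credence that humanity achieves it, and the seeded universe treated as a sure thing):

```python
# A sketch of the "great future for humanity" comparison, with illustrative numbers.
X = 1.0                 # value of humanity's great future, assumed equal to the seeded universe's value
p_great_future = 0.10   # your credence that humanity actually achieves that future
p_seeded_life = 1.0     # assumed chance the seeded universe produces life of comparable value

ev_dont_flip = p_great_future * X   # 0.1 X
ev_flip = p_seeded_life * X         # 1.0 X

# Refraining makes sense only if the great human future is the more likely outcome.
print("don't flip" if ev_dont_flip > ev_flip else "flip")   # -> "flip"
```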
If we add the thought that our descendants might generate black holes, which generate new universes which generate new black holes, which generate new universes which generate new black holes, and so on, then we're back into the infinite expectations problem.
Philosophers are creative! I'm sure there are other ways the consequentialist or longtermist could try to wiggle out of the Black Hole Objection. But my inclination is to think the most natural move is for them simply to "bite the bullet": Admit that it would be morally good to destroy Earth to seed a new cosmic inflation, then tolerate the skeptical looks from those of us who would prefer you not to be so ready (hypothetically!) to kill us in an attempt to create something better.