Suppose we take the "simulation hypothesis" seriously: We might be living not in the "base level" of reality but instead inside of a computer simulation.
I've argued that if we are living in a computer simulation, it might easily be only city-sized or have a short past of a few minutes, days, or years. The world might then be much smaller than we ordinarily think it is.
David Chalmers argues otherwise in a response published on Monday. Today I'll summarize his argument and present my first thoughts toward a rebuttal.
The Seeding Challenge: Can a Simulation Contain Coherent, Detailed Memories and Records but Only a Short Past?
Suppose an Earth-sized simulation was launched last night at midnight Pacific Standard Time. The world was created new, exactly then, with an apparent long past -- fake memories already in place, fake history books, fake fossil records, and all the rest. I wake up and seem to recall a promise I made to my wife yesterday. I greet her, and she seems to recall the same promise. We read the newspaper, full of fake news about the unreal events of yesterday -- and everyone else on the planet reads their own news of the same events, and related events, all tied together in an apparently coherent web.
Chalmers suggests that the obvious way to make this work would be to run a detailed simulation of the past, including a simulation of my conversation with my wife yesterday, and our previous past interactions, and other people's past conversations and actions, and all the newsworthy world events, and so on. The simulators create today's coherent web of detailed memories and records by running a simulated past leading up to the "start time" of midnight. But if that's the simulators' approach, the simulation didn't start at midnight after all. It started earlier! So it's not the short simulation hypothesized.
This reasoning iterates back in time. If we wanted a simulation that started on Jan 1, 2024, we'd need a detailed web of memories, records, news, and artifacts recently built or in various stages of completion, all coherently linked so that no one detects any inconsistencies. The obvious way to generate a detailed, coherent web of memories and records would be to run a realistic simulation of earlier times, creating those memories and records. Therefore, Chalmers argues, no simulation containing detailed memories and records can have only a short past. Whatever start date in the recent past you choose, in order for the memories and records to be coherent, a simulation would already need to be running before that date.
Now, as I think Chalmers would acknowledge, although generating a simulated past might be the most obvious way to create a coherent web of memories and records, it's not the only way. The simulators could instead attempt to directly seed a plausible network of memories and records. The challenge would lie in seeding them coherently. If the simulators just create a random set of humanlike memories and newspaper stories, there will be immediately noticeable conflicts. My wife and I won't remember the same promise from yesterday. The news article dated November 1 will contradict the article dated October 31.
Call this the Seeding Challenge. If the Seeding Challenge can be addressed, the simulators can generate a coherent set of memories and records without running a full simulation of the past.
To start, consider geological seeding. Computer games like SimCity and Civilization can autogenerate plausible, coherent terrain that looks like it has a geological history. Rivers run from mountains to the sea. Coastlines are plausible. Plains, grasslands, deserts, and hills aren't checkered randomly on the map but cluster with plausible transitions. Of course, this is simple, befitting simple games with players who care little about strict geological plausibility. But it's easy to imagine more careful programming by more powerful designers that does a better job, including integrating fossil records and geological layers. If done well enough, there might be no inconsistency or incoherence. Potentially, before finalizing, a sophisticated plausibility and coherence checker could look for and repair any mistakes.
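To make the idea concrete, here is a minimal sketch in Python of the generate-check-repair loop I have in mind. Everything about it is invented for illustration -- the map size, the smoothing rule, the "rivers must reach the sea" constraint -- but it shows how autogenerated terrain can be checked for coherence and repaired before launch:

```python
import random

SIZE = 32  # toy map; real simulators would presumably work at vastly finer grain

def generate_heightmap(size=SIZE):
    """Random heights, then smoothing passes, so that terrain clusters
    plausibly rather than being checkered randomly across the map."""
    h = [[random.random() for _ in range(size)] for _ in range(size)]
    for _ in range(4):  # each pass averages neighbors, giving smooth transitions
        h = [[sum(h[(r + dr) % size][(c + dc) % size]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9
              for c in range(size)] for r in range(size)]
    return h

def trace_river(h, r, c):
    """A river follows the steepest descent from its source."""
    path = [(r, c)]
    while True:
        neighbors = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]
        nr, nc = min(neighbors, key=lambda p: h[p[0]][p[1]])
        if h[nr][nc] >= h[r][c]:  # stuck in a local basin
            return path
        r, c = nr, nc
        path.append((r, c))

def coherent(h, path, sea_level=0.45):
    """Plausibility check: the river must actually reach the sea."""
    r, c = path[-1]
    return h[r][c] < sea_level

def repair(h, path):
    """Repair pass: carve the basin deeper -- the analogue of the
    coherence checker fixing a mistake before finalizing."""
    r, c = path[-1]
    h[r][c] -= 0.1

heights = generate_heightmap()
source = max(((r, c) for r in range(SIZE) for c in range(SIZE)),
             key=lambda p: heights[p[0]][p[1]])  # the river rises at the highest peak
river = trace_river(heights, *source)
while not coherent(heights, river):  # seed, check, repair until consistent
    repair(heights, river)
    river = trace_river(heights, *source)
```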
I see no reason in principle that human memories, newspaper stories, and the rest couldn't be coherently seeded in a similar way. If my memory is seeded first, then my wife's memory will be constrained to match. If the November 1 news stories are seeded first, then the October 31 stories will be constrained to match. Big features might be seeded first -- like a geological simulation might start with "mountain range here" -- and then details articulated to match.
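Schematically, the same first-seeded-wins idea might look like this. Again a toy sketch -- the ledger, the event keys, and the promise details are all invented for illustration -- but it shows how whichever memory is seeded first fixes the shared facts, and every later overlapping memory is constrained to match:

```python
import random

# Shared ledger of already-seeded facts; later seeds must respect it.
ledger = {}

def seed_memory(person, event_key, generate):
    """Seed one person's memory of an event. If the event is already in
    the ledger, reuse the established facts; otherwise generate them
    freely and record them as a constraint on everyone else."""
    if event_key not in ledger:
        ledger[event_key] = generate()  # the "big feature," seeded first
    memory = dict(ledger[event_key])    # details constrained to match
    memory["point_of_view"] = person    # only the perspective may vary
    return memory

def promise_details():
    return {"promise": random.choice(["dinner out", "fix the gate"]),
            "when": "yesterday evening"}

mine = seed_memory("me", ("promise", "me", "wife"), promise_details)
hers = seed_memory("wife", ("promise", "me", "wife"), promise_details)
assert mine["promise"] == hers["promise"]  # no detectable conflict
```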
Naturally, this would be extremely complicated and expensive! But we are imagining a society of simulators who can simulate an entire planet of eight billion conscious humans, and all of the many, many physical interactions those humans have with the simulated environment, so we are already imagining the deployment of huge computational power. Let's not underestimate their capacity to meet the Seeding Challenge by rendering the memories and records coherent.
This approach to the Seeding Challenge gains plausibility, I think, by considering the resource-intensiveness of the alternative strategy of creating a deep history. Suppose the simulators want a start date of midnight last night. Option 1 would be to run a detailed simulation of the entire Earth from at least the beginning of human history. Option 2 would be to randomly generate a coherent seed, checking and rechecking for any detectable inconsistencies. Even though generating a coherent seed might be expensive and resource intensive, it's by no means clear that it would be more expensive and resource intensive than running a fully detailed simulated Earth for thousands of years.
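For what it's worth, here is a back-of-the-envelope version of that comparison, with frankly invented numbers. Nothing hangs on the particular values; the point is only structural -- Option 2 can come out orders of magnitude cheaper even if each seeded record costs vastly more than one simulated person-step:

```python
# All quantities below are hypothetical, chosen only to illustrate the comparison.
PEOPLE = 8e9
SECONDS_OF_HISTORY = 10_000 * 365 * 24 * 60 * 60  # ~10,000 years at 1-second resolution
COST_PER_PERSON_STEP = 1                           # arbitrary compute unit

option1 = PEOPLE * SECONDS_OF_HISTORY * COST_PER_PERSON_STEP  # full simulated history

RECORDS_PER_PERSON = 1e6        # memories, documents, artifacts per person
COST_PER_SEEDED_RECORD = 1e3    # generation plus coherence checks, 1000x a sim step
option2 = PEOPLE * RECORDS_PER_PERSON * COST_PER_SEEDED_RECORD  # coherent seed

print(f"full history: {option1:.1e} units; coherent seed: {option2:.1e} units")
# full history: 2.5e+21 units; coherent seed: 8.0e+18 units
```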
I conclude that Chalmers' argument against short-historied simulations does not succeed.
The Boundaries Challenge: Can a Simulation Be City-Sized in an Apparently Large World?
I have also suggested that a simulation could easily just be you and your city. Stipulate a city that has existed for a hundred years. Its inhabitants falsely believe they are situated on a large planet containing many other cities. Everyone and everything in the city exists, but everything stops at the city's edge. Anyone who looks beyond the edge sees some false screen. Anyone who travels out of the city disappears from existence -- and when they return, they pop back into existence with false memories of having been elsewhere. News from afar is all fake.
Chalmers' objection is similar to his objection to short-past simulations. How are the returning travelers' memories generated? If someone in the city has a video conversation with someone far away, how is that conversation generated? The most obvious solution again seems to be to simulate the distant city the traveler visited and to simulate the distant conversation partner. But now we no longer have only a city-sized simulation. If the city is populous with many travelers and many people who interact with others outside the city, to keep everything coherent, Chalmers argues, you probably need to simulate all of Earth. Thus, a city-sized simulation faces a Boundaries Challenge structurally similar to the short-past simulation's Seeding Challenge.
The challenge can be addressed in a similar way.
Rendering travelers' memories coherent is a task structurally similar to rendering the memories of newly created people coherent. The simulators could presumably start with some random, plausible seeds, then constrain future memories by those first seeds. This would of course be difficult and computationally expensive, but it's not clear that it would be more difficult or more expensive than simulating a whole planet of interacting people just so that a few hundred thousand or a few million people in a city don't notice any inconsistencies.
If the city's inhabitants have real-time conversations with others elsewhere, that creates a slightly different engineering challenge. As recent advances in AI technology have vividly shown, even with our very limited early 21st century tools, relatively plausible conversation partners can easily be constructed. With more advanced technology, presumably even more convincing conversation partners would be possible -- though their observations and memories would need to be constantly monitored and seeded for coherence with inputs from returning travelers, other conversation partners, incoming news, and so on.
Chalmers suggests that such conversation partners would be simulations -- and thus that the simulation wouldn't stop at the city's edge after all. He's clearly right about this, at least in a weak sense. Distant conversation partners would need voices and faces resembling the voices and faces of real people. In the same limited sense of "simulation", a video display at the city's edge, showing trees and fields beyond, simulates trees and fields. So yes, the borders of the city will need to be simulated, as well as the city itself. Seeming-people in active conversation with real citizens will in the relevant sense count as part of the borders of the city.
But just as trees on a video screen need not have their backsides simulated, so also the conversation partners needn't continue to exist after the conversation ends. And just as trees on a video screen needn't be as richly simulated as trees in the center of the city, so also distant conversation partners needn't be richly simulated. They can be temporary shells, with just enough detail to be convincing, and with new features seeded on demand only as necessary.
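One way to picture such a shell, as a toy sketch (the class and its features are pure invention): detail comes into existence only when first observed, is cached so that every later observation stays consistent, and the shell itself can be discarded when the conversation ends, leaving only its cached features behind in the coherence records:

```python
import random

class Shell:
    """A distant conversation partner: detail exists only on demand.
    Once a feature has been observed, it is cached so that later
    observations -- by anyone -- remain consistent with it."""
    def __init__(self, name):
        self.name = name
        self._features = {}  # nothing exists yet but a name

    def observe(self, feature, possibilities):
        if feature not in self._features:  # seeded only when first needed
            self._features[feature] = random.choice(possibilities)
        return self._features[feature]     # thereafter, stable

    def dispose(self):
        """After the call ends, the shell ceases to exist; only its cached
        features persist, as constraints on future seeding."""
        return self._features

partner = Shell("conversation partner in a distant city")
shirt = partner.observe("shirt color", ["red", "blue", "green"])
assert partner.observe("shirt color", ["red", "blue", "green"]) == shirt
constraints = partner.dispose()  # kept for coherence, not as a person
```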
The Boundaries Challenge for simulated cities introduces one engineering challenge not faced by short-history whole-Earth simulations: New elements need to be introduced coherently in real time. A historical seed can be built slowly and checked over patiently as many times as necessary before launch. But the city's boundaries will need to be updated constantly. If generating coherent conversation partners, memories, and the like is resource intensive, it might be challenging to do it fast enough to keep up with all the trips, conversations, and news reports streaming in.
Here, however, the simulators can potentially take advantage of the fact that the city's inhabitants are themselves simulations running on a computer. If real-time updating of the boundary is a challenge, the simulators can slow down the clock speed or pause as necessary, while the boundaries update. And if some minor incoherence is noticed, it might be possible to rewrite citizens' memories so it is quickly forgotten.
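Schematically, with stub functions standing in for the real, enormously complex machinery (all invented for illustration), the trick is just that the city's clock doesn't advance until the boundary content for the next tick exists -- so from the inside, no delay is ever noticed:

```python
def run_tick(state):
    """Advance the in-city simulation by one tick (stub)."""
    state["t"] += 1

def boundary_ready(state):
    """Has boundary content been generated up through the current tick? (stub)"""
    return state["boundary_t"] >= state["t"]

def generate_boundary(state):
    """Expensive: seed coherent travelers, news, and conversation
    partners at the city's edge (stub)."""
    state["boundary_t"] += 1

state = {"t": 0, "boundary_t": 0}
while state["t"] < 100:
    while not boundary_ready(state):
        generate_boundary(state)  # the city's clock is effectively paused here;
                                  # inhabitants experience no gap at all
    run_tick(state)
```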
So although embedding a city-sized simulation in a fake world is probably more complicated than generating a short-past simulation with a fake history, ultimately my response to Chalmers' objections is the same for both cases: There's no reason to suppose that generating plausible, coherent inputs to the city would be beyond the simulators' capacities, and doing so on the fly might be much less computationally expensive than running a fully detailed simulation of a whole planet with a deep history.
Related:
Schwitzgebel, Eric (2017). “1% Skepticism”, Noûs, 51, 271-290.
Schwitzgebel, Eric (2024). “Let’s Hope We’re Not Living in a Simulation”, Philosophy & Phenomenological Research, online first: https://onlinelibrary.wiley.com/doi/10.1111/phpr.13125.
Chalmers, David J. (2024). “Taking the Simulation Hypothesis Seriously”, Philosophy & Phenomenological Research, online first: https://onlinelibrary.wiley.com/doi/10.1111/phpr.13122.
Hi Eric. I don't know anything about this area of philosophy, but if an individual or group of individuals is advanced enough to simulate me and my city, in the detail that I actually encounter, well, that seems insanely impressive. No? Any such beings would be wildly more advanced than we are. So, why think we can speculate with any reliability about how they do it?
At one point, you wrote "Naturally, this would be extremely complicated and expensive!" Would it, though? Maybe it would almost be child's play for them. Again, if my actual experience is the result of simulation, then it seems to me that the powers behind it are so freakin' advanced that we would be silly to even speculate how they do it, or how hard it was for them.
I suppose I'm missing something obvious here!
thanks, eric, for these very interesting thoughts. i wrote some partial thoughts in reply a little while ago. time is passing and these are unfinished but maybe they're better than nothing!
one initial reaction is that it would be great to know more about how the "seeding" process works. i take it that what you have in mind is something like scripting, e.g. scripting a history for the world in as much detail as necessary to support the experiences of a few core simulated people. i suspect that the easiest method of large-scale scripting is via simulation, but presumably there are other ways, more akin to what a novelist or a screenwriter does.
the big downside of scripting compared to simulating is that simulations support all sorts of counterfactuals (if this had happened, this would have happened) whereas non-simulation scripting doesn't. perhaps this doesn't matter too much in scripting a full history for the world. but it will be tricky for a scenario with ongoing interaction between sims and scripted elements. this is perhaps clearest if the sims are non-deterministic, but the same goes if they are just practically unpredictable. a sim can perform multiple actions, each affecting the scripted world in different ways (except perhaps in special cases, e.g. scripting beyond the solar system, but i'm focusing on smaller-than-earth simulations here).
so presumably the script will have to be updated in an ongoing moment-by-moment way depending on sims' actions. any wholly scripted beings interacting with sims will have to be updated in moment-by-moment interaction. again by far the easiest way to do this reliably (consistent with laws, etc) will be by some sort of simulation -- maybe simulation in the head of the scriptwriters but this isn't really different in principle. maybe you have some different vision of how the interaction between simulation and scripting will go, in which case it would be good to hear it!
another idea, perhaps compatible with full simulation or with simulation plus scripting, is a tiered model where the level of detail can fall off fast: some core people and places (tier 1) are modeled in much detail, the other people and places they interact with (tier 2) are modeled in medium detail (perhaps just scripted), the other people and places that the tier-2 people and places interact with (tier 3) are modeled just in sketchy detail, and so on. so perhaps NYC and the people in it are in tier 1, our family and friends outside the city (including eric!) and places we've visited are at tier 2, and so on. in another megalomaniacal version, tier 1 contains only me, and the other tiers extend from there according to interactions with me.
but now: is it really an open possibility for me that my family are just tier-2 models? when i zoom with eric, is it really plausible that his responses are produced by a script or a tier-2 simulation? his behavior is detailed and impressive, where tier-2 models presumably produce less impressive behavior. the same goes for books, music, and art that i've consumed by people outside tier 1. i don't think it's really plausible that these were produced by tier 2 processes. maybe the idea is that when the tier-2 models interact with tier-1 models (e.g. they zoom with me or even write a book or a blog post that i read), we raise their capacities to tier 1, but the rest of the time they're modeled at the less impressive tier 2? this gets tricky. i've never read "war and peace" but i've read a lot about it. is it just the cliffs notes version that exists right now, until one day i go to read it? or more naturally is it written already, at least in order to help generate the text about it that i've already read? worse, if half the people in the world have interacted with someone in NYC, then presumably they will all have to have been operating at tier-1 level for at least some of their lives. suddenly the tiered model starts to look pretty expensive!