11 Comments
Bryan Frances

Hi Eric. I don't know anything about this area of philosophy, but if an individual or group of individuals is advanced enough to simulate me and my city, in the detail that I actually encounter, well, that seems insanely impressive. No? Any such beings would be wildly more advanced than we are. So, why think we can speculate with any reliability about how they do it?

At one point, you wrote "Naturally, this would be extremely complicated and expensive!" Would it, though? Maybe it would almost be child's play for them. Again, if my actual experience is the result of simulation, then it seems to me that the powers behind it are so freakin' advanced that we would be silly to even speculate how they do it, or how hard it was for them.

I suppose I'm missing something obvious here!

Eric Schwitzgebel

I think this is a reasonable perspective. In my 2024 paper, I articulate three responses to conjectures about the size and duration of simulations. The first is to embrace radical ignorance, as you do here. If we take this approach, then (contra Chalmers) we shouldn't be confident that the sim is large and long-enduring; such confidence is incompatible with radical ignorance.

The second is to treat "I am in a large, long-enduring world" as a "hinge" proposition in Wittgenstein's/Coliva's sense. But hinges should be revisable (e.g., "no one has ever walked on the Moon" was once treated as a hinge proposition). Learning that one is in a simulation is exactly the type of condition that should prompt the revisability of hinges about the size of the world.

The third is to attempt best guesses about the capacities and motives of simulators. That's Chalmers' approach and what I try here. For example, it seems a likely guess (not certain, of course, but not 50/50) that it would be more expensive to run a simulation for thousands of years than for ten minutes. Chalmers tries to justify a (weakly) anti-skeptical response to the simulation hypothesis on such plausibility grounds; I am meeting him in his own territory.

Bryan Frances

It's been a life-long dream for me to be associated with the phrase "radical ignorance". Working towards that goal every damn day!

David Chalmers

thanks, eric, for these very interesting thoughts. i wrote some partial thoughts in reply a little while ago. time is passing and these are unfinished but maybe they're better than nothing!

one initial reaction is that it would be great to know more about how the "seeding" process works. i take it that what you have in mind is something like scripting, e.g. scripting a history for the world in as much detail as necessary to support the experiences of a few core simulated people. i suspect that the easiest method of large-scale scripting is via simulation, but presumably there are other ways, more akin to what a novelist or a screenwriter does.

the big downside of scripting compared to simulating is that simulations support all sorts of counterfactuals (if this had happened, this would have happened) whereas non-simulation scripting doesn't. perhaps this doesn't matter too much in scripting a full history for the world. but it will be tricky for a scenario with ongoing interaction between sims and scripted elements. this is perhaps clearest if the sims are non-deterministic, but the same goes if they are just practically unpredictable. a sim can perform multiple actions, each affecting the scripted world in different ways (except perhaps in special cases, e.g. scripting beyond the solar system, but i'm focusing on smaller-than-earth simulations here).

so presumably the script will have to be updated in an ongoing moment-by-moment way depending on sims' actions. any wholly scripted beings interacting with sims will have to be updated in moment-by-moment interaction. again by far the easiest way to do this reliably (consistent with laws, etc) will be by some sort of simulation -- maybe simulation in the head of the scriptwriters but this isn't really different in principle. maybe you have some different vision of how the interaction between simulation and scripting will go, in which case it would be good to hear it!

another idea, perhaps compatible with full simulation or with simulation plus scripting, is a tiered model where the level of detail can fall off fast: some core people and places (tier 1) are modeled in much detail, the other people and places they interact with (tier 2) are modeled in medium detail (perhaps just scripted), the other people and places that the tier-2 people and places interact with (tier 3) are modeled just in sketchy detail, and so on. so perhaps NYC and the people in it are in tier 1, our family and friends outside the city (including eric!) and places we've visited are in tier 2, and so on. in another megalomaniacal version, tier 1 contains only me, and the other tiers extend from there according to interactions with me.

but now: is it really an open possibility for me that my family are just tier-2 models? when i zoom with eric, is it really plausible that his responses are produced by a script or a tier-2 simulation? his behavior is detailed and impressive, where tier-2 models presumably produce less impressive behavior. the same goes for books, music, and art that i've consumed by people outside tier 1. i don't think it's really plausible that these were produced by tier 2 processes. maybe the idea is that when the tier-2 models interact with tier-1 models (e.g. they zoom with me or even write a book or a blog post that i read), we raise their capacities to tier 1, but the rest of the time they're modeled at the less impressive tier 2? this gets tricky. i've never read "war and peace" but i've read a lot about it. is it just the cliffs notes version that exists right now, until one day i go to read it? or more naturally is it written already, at least in order to help generate the text about it that i've already read? worse, if half the people in the world have interacted with someone in NYC, then presumably they will all have to have been operating at tier-1 level for at least some of their lives. suddenly the tiered model starts to look pretty expensive!

David Chalmers

i meant to say in the last sentence: suddenly the tiered model starts to look pretty expensive as well as pretty extensive!

Eric Schwitzgebel

Fair enough. But if the comparison is a detailed "tier-1" simulation of the entire planet for thousands (or billions) of years, I'd guess it's much less expensive and extensive.

Eric Schwitzgebel

Thanks so much for this thoughtful and detailed engagement, Dave! I think the one-city case is more challenging than the short-history case, so let's do the short-history case first. By "seeding" in a short-history case I meant something similar to what SimCity or Civilization does: Create a plausible environment that looks as if it had a long history. Geological strata can be layered plausibly and scattered with fossils. Buildings can be clustered into towns, cities, and rural areas, and people can be started with well-enough matching fake memories. Of course, this would be very resource intensive, especially cross-checking to ensure consistency. But compared to running a detailed simulation of the whole world for billions of years, it seems likely to be less resource intensive (to the extent we can guess about such things at all). (If we can't guess, that's fine, but if our incomprehension of the rules of such simulations is that terrible, then confidence that, if we're in a simulation, it's a large one is probably unwarranted.) Once the plausible past is constructed, there's no need for scripting. Everything can then just run forward in a fully detailed simulation.

In the city case, I was thinking something like your tier 1-2-3 story. Tiers 2 and 3 could either be scripted or simulated in a less rich way (say, a futuristic/better LLM for textual interactions like the present one). You're right that the behavior of the citizens will be practically unpredictable, and as a result, interaction partners will need to be upgraded as needed in tier when they become relevant. Someone who arrives in the city might be upgraded from Tier 2 to Tier 1 as they cross the border. Someone in a distant city might be upgraded from Tier 3 to Tier 2 when their specific behavior becomes relevant to a citizen (e.g., if you contact a travel agent in Vienna in preparation for a trip). But then others can be downgraded: The travel agent might fall back to Tier 3 when further interaction is unlikely; the visitor might fall back to Tier 2 when they leave town. Again, all this would be very complicated and resource-intensive, but plausibly less so than full Tier 1 simulation of every person and event in every remote part of the world if the simulators' target of interest is just New York City or Riverside.

Perfect consistency might be difficult to attain, but the simulators plausibly could have at least four ways to deal with that: (1.) pausing time to run consistency checks, (2.) rewinding to save points if an inconsistency gets out of hand, (3.) the general forgetfulness, failures of observation, and change-blindness of humans, and (4.) just tolerating a certain amount of inconsistency (our memories don't completely match, scientific studies don't always get the same results, eyewitnesses and history texts contradict each other, etc.).

Mike Smith

"And if some minor incoherence is noticed, it might be possible to rewrite citizens' memories so it is quickly forgotten."

This is the point that I think most of these discussions overlook. It may be that the easiest way to cover any gaps or flaws in the simulation is to simply make us think they're not there, or just incapable of recognizing them for what they are. Changing episodic memories of a call with someone in a different city might be far easier than actually trying to create a facade for us to interact with. In some cases, it might be enough to just give us the feeling that it happened.

We also have a tendency to assume we'd be cognitively complete in these simulations. Depending on what the goals of the simulation are, we may not be. But as long as we have the introspective sense of being a complete mind, we may be constitutionally incapable of realizing our fragmentary nature.

Interesting thoughts, as always Eric!

Eric Schwitzgebel

Yes! I didn't land hard on this point in this post, since I thought I could defend against Chalmers adequately on other grounds, but I wholeheartedly endorse this. One thing I think Dennett nicely taught us is how ignorant we can be of gaps and changes in our experience and thoughts, without noticing them.

Kenny Easwaran

I think that if the things outside the boundaries aren't coherently generated, then there's a natural sense in which it's just the city, or just the recent past, that is real. But if the simulators are able to set up a simulated city in such a way that any interaction with things supposedly outside the city occurs in a perfectly coherent way, *as if* there were a world outside the city, then it might be natural to say that that outside world really is real! At least, all interactions with it are effectively the same as interactions with real things. It seems somehow reminiscent of the "holographic universe" idea some physicists have proposed, which says that the apparent three dimensions of space are just a projection from a two-dimensional boundary condition.

https://en.m.wikipedia.org/wiki/Holographic_principle

Eric Schwitzgebel

I'm inclined to disagree. Since our interactions with things outside the simulation are very limited, things could have a very impoverished existence and still be "as if" real, to us. Consider the possibility that only the Solar System is simulated and beyond the Oort Cloud is basically a giant TV screen projecting images as if we were embedded among a trillion galaxies. It would be as if those galaxies were real, but they would be real only in a very impoverished sense. Similarly, I'd suggest, if one's distant conversation partners only exhibit active behavior when one is actually conversing with them, and otherwise survive merely as data poised, and occasionally updated, to interact with you again in the future.
