18 Comments

Nice piece! I think your solution ultimately commits you to rejecting formal theories in general and not just formal decision theory: to treating them all as nothing more than useful models (at best). I think that's correct. And I'd suggest ignoring debates over which horn of the quadrilemma has the fewest problems and instead asking the Kant-inspired question of what it tells us about the mind that no formal model is capable of capturing its intended target perfectly.


Yes, I’m fine with that and sympathetic with Kant on this issue (while skeptical about his system building). The Weirdness of the World is to a substantial extent about that.


Interesting, but why "The Weirdness of the World" and not "The Weirdness of our Minds?"


Both!


I'm not sure I agree with this, because there are at least four other places where these games break down, long before you have to start questioning the underlying logic.

1) Finite player resources. For a lot of games, if you factor in that a player starts with finite resources, then the infinite payouts just disappear. If player A has a losing streak of length N, then she runs out of money, cannot play any more, and cannot realise the potentially infinite winnings. Under these conditions, a player who keeps playing is eventually ruined, because a losing streak will always turn up.

2) One-shot vs. many-shot. This is an extreme version of finite player resources, where the player only gets one opportunity to play the game. In this scenario, you can calculate the finite expected downside vs. the still-infinite expected upside. And the rational decision includes considering what level of downside a player can accept. The finding that we should be willing to pay anything to play the St. Petersburg game, for example, implies that we have to be willing to accept unlimited downside, which we are not. (And cannot be: even if you're Sam Altman, the very maximum downside you can accept is the loss of the entire world, which is big, but not infinite.)

3) Interactions with other systems. If you won too much money in St Petersburg, the weight of all the bills they print to give you your winnings would cause the formation of a black hole, thus negating your ability to leave the casino and spend it. That's a silly example, but an obvious way to point out that infinities don't work. More realistically, when the prizes get too big, the casino cannot pay them, so the long tail of the probability distribution does not exist (see the sketch after this list). This problem doesn't go away if you use utils rather than dollars. Utils still have to be cashed out in some form, and whatever form that is, there aren't infinite quantities of them.

4) Certainty about future actions. Contracts all contain force majeure clauses because when certain things happen, all bets are off. The sun explodes. The universe collapses. The casino is shut down by the Gaming Commission. Someone typed up the rules of the game wrong. The person offering you the prize is not the real devil, just a Cartesian demon. All of these uncertainties mean that you can't be 100% sure of anything, so the calculations fail.
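
Here's a minimal sketch of points 1 and 3, with my own toy numbers and the usual payoff scheme assumed (the prize doubles with each flip until the first tail): once prizes are capped at what anyone could actually pay out or collect, the infinite tail disappears and the expected value of the game is finite and modest.

```python
# A toy calculation, not from the post: St. Petersburg with doubling payoffs,
# except that prizes are capped at what can actually be paid out.

def capped_st_petersburg_ev(cap: float) -> float:
    """Expected payout when the first tail on flip k pays min(2**k, cap)."""
    ev, k = 0.0, 1
    while 2 ** k < cap:
        ev += 0.5 ** k * 2 ** k      # each uncapped term contributes exactly $1
        k += 1
    ev += 0.5 ** (k - 1) * cap       # everything from flip k onward pays the cap
    return ev

for cap in (1e6, 1e12, 1e21):        # a casino, a country, a small planet
    print(f"cap ${cap:.0e} -> fair ticket price ~ ${capped_st_petersburg_ev(cap):.2f}")
```

On those assumptions, even a planet-sized cap of $10^21 only brings the fair ticket price to about $70: the long tail does all the work, and the long tail is exactly the part that can never be cashed out.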

In cases where we do have sufficient certainty of outcomes, it seems clear to me that humanity absolutely does go and bet in the St Petersburg casino. Megaprojects like big dams, power grids, and nuclear submarines represent huge bets placed on future outcomes with built-in uncertainty, but where we have calculated clearly that the expected return is positive. (Advanced modern surgery is another, where we pay many utils in pain to allow ourselves to be cut open, on the understanding that we'll win back many utils in extended life.) The problem with the casino is not that we wouldn't go, or that we can't use decision theory to calculate the odds. The problem is that all the philosophers' imagined casinos are not real. Where there is a real betting opportunity, like an engineering project or a medical option, we'll go, and decision theory seems to stand up fine.


I do agree with part of this. The setup is unrealistic and we shouldn't casually assume that the game would actually work as described. But I don't think the Hoover Dam is evidence one way or another. It wasn't built on anything approaching a 1/10^21 chance of a good result!


I dunno, I think dams do quite nicely generate a series of increasing payoffs up to "infinity" - based on how long they last. We invest a certain amount in their construction (pay to play the game); if they last less than their intended service life (typically decades), then they'll generate less payout than our investment; if they last longer, then they'll generate more payout. There is a real but very small chance that the dam will last 1,000 years; a real but much smaller chance that it will last 1,000,000 years, and keep generating value all that time.

Another, perhaps more relevant, example might be CERN. That was a huge investment with a truly unknown but potentially massive payout in terms of knowledge.


Thanks for this! I'm sympathetic to the view you outline (treating formal decision theory as a model, not a definition of rationality). I'm no expert here, but reading the examples of where it breaks down, I can't help but think these are cases where the utility has been misspecified.

For example, multiplying years to live by 10 isn't going to yield a tenfold increase in utility; there are diminishing returns. If the utility gain is decreasing with each step, there may be a value for which the expected utility of going one more step isn't positive.
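
To make that concrete, here's a toy sketch with made-up numbers (not the post's exact devil game): suppose each round offers a 1-in-1,000 chance of losing everything in exchange for multiplying your remaining happy years by 10, and suppose utility is bounded, say u(y) = y / (y + 100). Then there is a definite round at which playing again stops being worth it.

```python
# Hypothetical numbers throughout: a 1-in-1000 risk of ending up with nothing
# per round, a x10 multiplier on happy years if you survive, and a bounded
# utility u(y) = y / (y + H) with H = 100 years.

P_LOSS = 0.001     # hypothetical per-round chance of losing everything
H = 100.0          # hypothetical "half-saturation" point of utility, in years

def u(years: float) -> float:
    return years / (years + H)

years, round_no = 50.0, 1            # hypothetical starting stock of happy years
while True:
    eu_stay = u(years)
    eu_play = (1 - P_LOSS) * u(10 * years) + P_LOSS * u(0.0)
    verdict = "accept" if eu_play > eu_stay else "decline"
    print(f"Round {round_no}: {verdict} at {years:,.0f} years "
          f"(stay {eu_stay:.4f} vs play {eu_play:.4f})")
    if eu_play <= eu_stay:
        break
    years *= 10
    round_no += 1
```

With these numbers the offer is worth taking for the first few rounds and stops being worth it around the fifth; change the risk or the utility curve and the stopping point moves, but with any bounded, increasing utility there is one.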

I'm not sure strange behavior with infinities worries me too much with these frameworks; I'm not sure what an infinite utility really means.


Right -- that's why "utils" is often better for these paradoxes than dollars or years of happy life. But then you need to buy into the utils framework. But I guess you need to buy into something like that anyway if you're going to try to compare across different types of valued things, which is necessary if decision theory is to be perfectly general. On infinite utility: Well, imagine any arbitrarily large finite utility. It's more than that! Of course, that might not help at all.


In my mind, needing to buy into utils is the weak point of the framework--I think utils are a really useful model but are a simplification that breaks down at the edges because we're not totally unified beings. But I think for the paradoxes, it feels realistic to just "cap" utils at some point. I don't think you can just say "multiply utils by 10" and always have that be coherent; there's only so much utility something can have to a limited being like me.


I agree with all of that!


I think it’s perfectly reasonable to say that formal decision theory isn’t a great tool to use a lot of the time.

But I think your view has to reject not just formal decision theory, but the idea that there even is a meaningful concept of better and worse that obeys something like classical logic. Maybe that’s fine - maybe it’s like the sorites, where there are a million very similar concepts, each of which breaks a different one of the inferences!

But I think the problem is caused by the idea that there are predicates and relations that either hold or not and that obey classical logic, more than by formal decision theory per se.


Thanks for the comment, Kenny! I love your work on these issues. Maybe you're right that at root the problem lies in classical logic applied to better and worse. I'm also a pragmatist logical pluralist, so I'm fine with that implication. I think vagueness breaks classical logic but also that none of the formal replacements are entirely adequate with perfect generality. My thinking on vagueness and formal logic is similar to my thinking about enormous values and decision theory: There's no good formal way to handle all possible cases, and thus so much the worse for perfectly general formal theories.


I’ve definitely come around to that view on vagueness and the liar! Not sure I’m there yet when it comes to decision theory, and the role of theory more generally.


Is "sprites" an autocorrect for sorites? (I'm really hoping it isn't, and that there is a fully worked-out philosophical theory of how there are millions of little sprites flitting about causing our inferences to break down.)


Yes it’s an autocorrect! Sorry to disappoint.


In the devil example, it doesn't seem that I need to accept any of the four horns if I have declining marginal utility. (This is an old solution to the related St. Petersburg paradox.) If, given a choice between (a) a 50% chance of 1 year and a 50% chance of 10^100 years and (b) a 50% chance of 0 years and a 50% chance of (10^100)+1 years, you would prefer (a) rather than be indifferent, then you can make the "intuitive" choice in good decision-theoretic conscience.
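
A quick way to check this, under an assumed strictly concave utility (square root here, though any strictly concave function behaves the same way): the guaranteed first year in (a) is worth far more than the extra year tacked onto 10^100 in (b).

```python
# A small check of the declining-marginal-utility point, with an assumed
# concave utility u(y) = sqrt(y). Decimal precision is set high enough
# that 10**100 and 10**100 + 1 remain distinguishable.
from decimal import Decimal, getcontext

getcontext().prec = 150

def u(years: Decimal) -> Decimal:
    return years.sqrt()

N = Decimal(10) ** 100
eu_a = (u(Decimal(1)) + u(N)) / 2      # (a): 50% one year, 50% 10^100 years
eu_b = (u(Decimal(0)) + u(N + 1)) / 2  # (b): 50% nothing,  50% 10^100 + 1 years

print(eu_a > eu_b)   # True
print(eu_a - eu_b)   # about 0.5: losing the sure first year costs far more
                     # utility than the extra year at 10^100 adds back
```

So on any such curve, preferring (a) is just the higher-expected-utility choice.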


It's perhaps a little easier to get the problem going with utils instead of years of happy life. If we do it in years of happy life, you end up with a kind of sorites problem, where you need to draw the line at some point saying "even that small additional risk is not worth it for ten more years". But yeah, maybe that line can be justified if years have declining utility.
