I feel like there are even more obvious arguments against credence-based weighting.
1) People don't think like that. An individual human faced with an individual robot will come to a judgment about whether that robot is conscious/deserving of moral consideration. People generally don't hold "Schrödinger's consciousness" views of the entities around them (though I have to admit, we do seem to place people we've never met in a separate category that is less black-and-white).
2) The law can't work that way. Faced with a situation like a man punching a robot or a robot punching a man, a judge and jury will have to make a decision on whether the robot is responsible for its own actions or not. They can't make 50-50 decisions; there either is liability or there isn't. And the problem of legal precedent leads to...
3) *That* example from history. It's not such a bad thing to learn the lessons of history, and when you look at just how bad the 3/5 of a person thing turned out to be, it seems like a good reason to avoid trying it again.
I still agree with your conclusion (in theory, though there's little hope of it being implemented in practice), just for different reasons.
Thinking about how little hope there is of your design policy being respected: just look at how the new AIs have been constructed, by averaging out *human* language. AI designers are almost entirely concerned with making AI in our own image. This is understandable because we're the best and only example of intelligence that we know; but it also means that they're guaranteed to look a bit like us even when they're not actually anything like us.
Thanks, Phil. Your observations make sense to me!
Has any philosopher defended the assumption that it is possible to meaningfully assign numerical credences to competing philosophical views? (By "philosophical," I mean something like "not empirically or computationally resolvable.")
It seems obvious to me that this sort of talk is never going to get off the ground, but perhaps I'm missing something.
I think lots of philosophers do think this way, including me. I can't think of a defense of the idea of credences *specifically* for philosophical views (though there might be a defense out there). But whatever grounds there are for thinking that most of our beliefs are held with varying degrees of confidence, and whatever grounds there are for modeling that confidence with credences, I don't see why philosophical views would be excepted. For example, I incline toward materialism about the mind, but not strongly, so maybe with credence 55%. But I don't think substance dualism is the most plausible alternative, so my credence in it is probably around 5%.
I realize many philosophers think that way, which is why I'm asking the question. Having seen so many philosophers talking about credences in a way that strikes me as obviously problematic, I'm wondering what it is that I'm missing.
>whatever grounds there are for modeling that confidence with credences, I don't see why philosophical views would be excepted.
I suppose it depends on what grounds you have for modeling confidence with credences, but here is one possible explanation for why it can make sense to have credences for some sorts of beliefs but not others:
- For empirical questions, we can use Bayes' Theorem to update credences based on evidence, so when people have divergent opinions about the appropriate credence to assign to a given proposition, we expect them to converge as more evidence is gathered (see the sketch after this list). (There are potential regress problems here, but in practice it mostly seems to work out fine, so maybe we don't need to worry about them too much?)
- For certain types of non-empirical questions, we can think in terms of reference classes (e.g. I have a 10% credence that the googolth digit of pi is 2, because I assume that digit belongs to a reference class of digits, about one tenth of which seem to be equal to 2. So I have a sort of pseudo-empirical justification for assigning credences.)
- For other types of non-empirical questions, there is no obvious reference class. If there is disagreement about the appropriate credence to assign a proposition, there is no way to resolve the disagreement. Many philosophical questions fall into this category, but also some non-philosophical questions: for example, I can state in vague terms that I think the Riemann hypothesis is "probably true," but what would it mean to assign it a credence of 92% vs. 97%?
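For the first bullet, here's a minimal sketch of what that convergence looks like (Python; the coin-bias hypothesis, the 0.9/0.1 priors, and the 0.7 vs. 0.5 likelihoods are all made-up illustrative numbers, not anything from the discussion above):

```python
import random

# Toy hypothesis H = "this coin lands heads 70% of the time", versus the
# alternative "it's a fair coin". Two people start with very different
# credences in H and both update on the same flips via Bayes' Theorem.

def update(credence_in_h, heads):
    """One Bayes step: posterior credence in H after observing one flip."""
    p_flip_given_h = 0.7 if heads else 0.3     # likelihood under H
    p_flip_given_fair = 0.5                    # likelihood under the fair-coin alternative
    numerator = p_flip_given_h * credence_in_h
    return numerator / (numerator + p_flip_given_fair * (1 - credence_in_h))

random.seed(0)
flips = [random.random() < 0.7 for _ in range(200)]   # the coin really is biased

credence_a, credence_b = 0.9, 0.1                     # sharply divergent priors
for i, heads in enumerate(flips, start=1):
    credence_a = update(credence_a, heads)
    credence_b = update(credence_b, heads)
    if i % 40 == 0:
        print(f"after {i:3d} flips: {credence_a:.3f} vs {credence_b:.3f}")
# Both columns climb toward 1.0: the shared evidence swamps the
# disagreement baked into the priors.
```

Nothing in this settles the regress worries, of course; it just shows concretely what "converging as more evidence is gathered" amounts to when there is shared evidence to update on.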
What it might mean could be something like this. Suppose we found out that some mathematician had definitively established whether the Riemann hypothesis is true or not, and experts agreed that the proof was solid. What odds would you lay on a bet that it comes out true? Of course, not everyone is game for explicit betting of that sort, but we make all kinds of wagers in life implicitly; they're just harder to describe formally.
Another angle is to rank your confidence in things. If there's something else you're 95% confident of (e.g., that a 20-sided die will not come up 20), are you more or less confident than that about the Riemann hypothesis?
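The bookkeeping behind both of those moves is easy to write down. Here's just a sketch of the standard credence-to-odds translation, with function names and a $100 stake that are my own hypothetical choices:

```python
def fair_odds(credence):
    """Odds in favor implied by a credence, e.g. 0.95 -> 19.0 (19:1 on)."""
    return credence / (1 - credence)

def max_fair_price(credence, payout=100.0):
    """The most you'd pay for a ticket that pays `payout` if the proposition
    turns out true (and nothing otherwise), given your credence."""
    return credence * payout

# The d20 benchmark: credence that a 20-sided die will NOT come up 20.
print(fair_odds(19 / 20))                  # 19.0, i.e. 19:1 on

# The two candidate credences for the Riemann hypothesis mentioned above.
for p in (0.92, 0.97):
    print(p, round(fair_odds(p), 1), max_fair_price(p))
# 0.92 means you'd pay up to $92 for a $100 ticket (11.5:1 on);
# 0.97 means you'd pay up to $97 (about 32.3:1 on).
```

Whether your dispositions to bet are really fine-grained enough to distinguish those two prices is, I take it, part of what's at issue.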
Right, the wager is one obvious way to assign a meaning to credence talk. But I don't think you can lay a wager on the Riemann hypothesis itself, because any actual wager needs to resolve within a defined time period. What you can lay a wager on is propositions like "there will be a proof of the Riemann hypothesis, accepted by >95% of appropriately specified experts, within x years" or "there will be a counterexample found within y years." But that's not the same as wagering on the truth or falsity of the Riemann hypothesis itself.
This is an even bigger problem for typical philosophical problems, which don't usually resolve at all. You can place a wager on something like "In the 2039 PhilPapers survey, 95% of philosophers of mind will say they accept (some tolerably precisely formulated philosophical position)," but you will lose.
TLDR:
(1) For typical empirical questions, you can have credences because you expect the questions to resolve as true or false.
(2) For some empirical and mathematical questions, assigning credences is somewhat conceptually problematic, because you don't expect them to resolve in any reasonable finite amount of time (so standard wager semantics can't be applied).
(3) For philosophical questions, assigning credences is highly problematic, because you don't expect them to resolve at all (so no conceivable wager semantics can be applied).