Discussion about this post

Phil H:

I feel like there are even more obvious arguments against credence-based weighting.

1) People don't think like that. An individual human faced with an individual robot will come to a judgment about whether that robot is conscious and deserving of moral consideration. People generally don't hold "Schrödinger's consciousness" views of the entities around them (though I have to admit, we do seem to place people we've never met in a separate category that is less black-and-white).

2) The law can't work that way. Faced with a situation like a man punching a robot or a robot punching a man, a judge and jury will have to make a decision on whether the robot is responsible for its own actions or not. They can't make 50-50 decisions; there either is liability or there isn't. And the problem of legal precedent leads to...

3) *That* example from history. It's not such a bad thing to learn the lessons of history, and when you look at just how badly the three-fifths-of-a-person compromise turned out, that seems like a good reason to avoid trying fractional personhood again.

I still agree with your conclusion (in theory, though there's little hope of it being implemented in practice), just for different reasons.

Thinking about how little hope there is of your design policy being respected: just look at how the new AIs have been constructed, by averaging out *human* language. AI designers are almost entirely concerned with making AI in our own image. This is understandable, since we're the best and indeed only example of intelligence that we know of; but it also means that these AIs are guaranteed to look a bit like us even when they're not actually anything like us.

Quiop:

Has any philosopher defended the assumption that it is possible to meaningfully assign numerical credences to competing philosophical views? (By "philosophical," I mean something like "not empirically or computationally resolvable.")

It seems obvious to me that this sort of talk is never going to get off the ground, but perhaps I'm missing something.

5 more comments...
