Discussion about this post

Eric-Navigator:

To my great surprise, there really are people who care about robot rights to this degree! I come from the tech side of AI, and I am very happy to have found you!

I think my recent proposal, the Academy for Synthetic Citizens, may resonate with you.

https://ericnavigator4asc.substack.com/p/hello-world

Hello World! -- From the Academy for Synthetic Citizens

Exploring the future where humans and synthetic beings learn, grow, and live together.

My goal is to eventually build this Academy in the real world. I wonder if you are interested?

Phil H:

I feel like there are even more obvious arguments against credence-based weighting.

1) People don't think like that. An individual human faced with an individual robot will come to a judgment about whether that robot is conscious or deserving of moral consideration. People generally don't hold "Schrödinger's consciousness" views of the entities around them (though I admit we do seem to place people we've never met in a separate, less black-and-white category).

2) The law can't work that way. Faced with a situation like a man punching a robot, or a robot punching a man, a judge and jury will have to decide whether the robot is responsible for its own actions or not. They can't render 50-50 verdicts; either there is liability or there isn't. And the problem of legal precedent leads to...

3) *That* example from history. It's not such a bad thing to learn the lessons of history, and when you look at just how badly the three-fifths-of-a-person compromise turned out, it seems like a good reason to avoid trying anything similar again.

I still agree with your conclusion (in theory, though there's little hope of it being implemented in practice), just for different reasons.

Thinking about how little hope there is of your design policy being respected: just look at how the new AIs have been constructed, by averaging over *human* language. AI designers are almost entirely concerned with making AI in our own image. This is understandable, because we're the best and only example of intelligence we know of; but it also means the systems are guaranteed to look a bit like us even when they're not actually anything like us.

