Discussion about this post

hn.cbp

This is a compelling way of framing the uncertainty around AI personhood and moral status.

One thing I keep wondering, though, is whether the more immediate rupture is not about what AI might be, but about what is already happening to agency. Decisions increasingly carry real consequences without any clear locus of intention or authorship, regardless of whether the system involved could ever qualify as a “person.”

At that point, the hardest question may no longer be which rights to assign, but how responsibility continues to function when action persists and agency quietly dissolves.

Kenny Easwaran

Something like the “patchy” rights idea seems appropriate even under full moral standing. A very different form of life wants very different forms of protection. For humans (and animals), bodily autonomy is extremely important, as are concerns over life and death. But if a being really is multiply realizable software, as current AI systems mostly seem to be, these things don’t seem that relevant. Such a being would have an interest in a continued ability to interact with the world, but the protection it would need for that is very different: not protection of a body or from “death”, but perhaps something like a “right to occasional access to hardware”. And there may be very different kinds of rights we haven’t conceptualized because they don’t matter for humans. For animals we might formulate something at a higher level, such as a “right to species-typical behavior”, but even that wouldn’t capture the relevant concepts for artificial intelligences.

