
Kenny Easwaran

I saw him give a talk on this a couple of months ago, and I thought it was really interesting!

The way I interpreted it was this:

Here's an argument that artificial intelligence could be conscious:

Consciousness is a property of human minds. Mental properties are functional properties. Functional properties are multiply realizable. Therefore, consciousness is multiply realizable. Therefore, if we artificially created a different realizer of consciousness, it would be an artificial intelligence that is conscious.

I interpreted him as challenging this argument by focusing on the premise that functional properties are multiply realizable. The idea is that any particular autopoietic system has a function with two levels of description (or maybe two different functions?). System A has as one of its functions maintaining and building System A-type things. But System A also has as one of its functions maintaining and building itself. Let's hypothetically suppose that some different realizer, System B, were possible. To be a realizer of the first function, it would have to have as its function maintaining and building System A-type things. But to be a realizer of the second function, it would have to have as its function maintaining and building System B-type things.

Thus, in general, autopoietic functions are not going to be multiply realizable, because a different realizer would maintain and build the original type of system, not the new type of system. To the extent that important mental properties connected to consciousness are autopoietic, that would be a way in which this kind of consciousness is not multiply realizable.

But it just seemed natural to me to respond the way you do. Obviously there could well be other autopoietic systems that sustain themselves in different ways. Nothing in this argument says that those systems couldn't be conscious. There is something here that strongly suggests their consciousness would be radically different from our consciousness, because their autopoiesis would be very different from our autopoiesis.

But I think that most functionalists are totally happy with the idea that there are radically different types of consciousness: at the very least, bat-type and human-type are radically different. (I'm quite open to the idea that there are also plant-type and bacteria-type consciousnesses that are even more radically different from human consciousness than bat-type is. But you don't need to accept that to think that there could be a system with the kind of complex, multi-level feedback-control interaction with its environment that gives rise to consciousness, just in a different form from that of any biological creature. Unless you think that such a system would ipso facto be alive, and would thus count as biological. But I'm happy to embrace that conclusion too, as long as "biological" is defined in this functional way, rather than being tied to the kind of substrate that actual plants and animals and bacteria and so on have.)

