9 Comments
Jul 18 · Liked by Eric Schwitzgebel

Another wonderful piece.

Your point could even apply to some organic, let's go with semi-sentient, mimics.

For instance, I (S2) could recite your piece (S1) word for word. Yet its meaning, and the ability to create it (F), are far beyond me.

author

You’re too modest, but yes!

Jul 13 · Liked by Eric Schwitzgebel

This makes me wonder if babies count as consciousness mimics. For a long time, they're engaged in mimicry without understanding the underlying meaning of their actions. Ironically, that makes the idea of consciousness emerging from mimicry seem *more* plausible to me.

Perhaps combining this with some Douglas Hofstadter loopiness? Maybe my mind is a conceptual network that uses previous thoughts as prompts to “predict and generate” my next thought.

I hope this is vaguely coherent. I’m probably out of my depth.

author

It’s coherent, but I’m inclined to distinguish mimicry from imitation. Linguistic imitation normally involves conscious understanding as part of the explanation.

Jul 12 · Liked by Eric Schwitzgebel

Nice article, Eric. I like a quote from Christof Koch which I think hits home on this topic. It went something like "just because you can simulate weather on a computer, it doesn't mean you will actually create a storm." I believe too many people are conflating the simulated with the real. At the end of the day, computers are based on a von Neumann architecture relying on 1s and 0s, rather than on actual physics. It is an abstraction, full stop. There is an important distinction here which gets lost too often.

Jul 11 · Liked by Eric Schwitzgebel

I think in the mimic, we'd have to say that F is not present, wouldn't we? For two reasons:

1) A knowledge problem: if there is both similarity of superficial features and the same underlying feature, we couldn't know whether S2 existed "because of" S1 or the underlying F.

2) From our knowledge of evolution: S2 would evolve differently depending on whether it's a mimicry adaptation or a visible-sign-of-F adaptation.

In nature, the model and mimic groups tend to be quite well-defined, don't they? Because they're different species. And all the examples of mimics that I can think of off the top of my head don't have the underlying feature F. E.g. brightly coloured mimic snakes are not poisonous; moths with eye patterns are not large; leaf insects are not inedible leaves...

Oh, carnivorous plants do sometimes genuinely supply some sugar to the visiting insects, as regular flowers do. Would that count? Are carnivorous plants mimics? Or are they just otherwise exploiting the insects' desire for sugar? Not sure.

The lack of natural evolution in AIs would seem to be a problem for this model. If the analogy is drawn from the natural world, the fact that AIs aren't products of that world could mean that the situation is not, in fact, as analogous as it looks.

Jul 11 · Liked by Eric Schwitzgebel

In any particular instance of this argument form, the place I'd be interested in pushing is the question of which of three things is going on: whether S2 is selected purely to mimic S1, which has some important connection to F1; whether some F2 is instead selected to mimic F1, producing S2 as a side effect analogous to S1; or whether S and F are so entangled that selection on S is ipso facto at least partly selection on F.

The relationship between warning colors and actual poison is a clean one where it's clear that most of the selection really is on S itself, because S and F are themselves only conventionally linked by a separate evolutionary process.

But in cases where a tree grows sweet fruits to mimic the sweetness of other fruits in the area that some helpful animal likes to eat (and incidentally spread the seeds of), in mimicking the sweetness, the tree will almost certainly incidentally mimic the underlying feature (presence of simple carbohydrates) that the animal actually cares about. (This isn't completely certain - Miracle Berry (https://en.wikipedia.org/wiki/Synsepalum_dulcificum) seems to have found a way to mimic the sweetness without actually providing sugar! And with modern chemistry, we discovered things like aspartame and sucralose that do it too.)

I was thinking about signs of sexual attractiveness as other potential examples of phenomena in this area. I think that one of the points of "costly signaling" theory is that it's very hard to mimic the tail of a peacock without mimicking the underlying fitness it is a sign of. Human mimics of sexual attractiveness range along a spectrum - things like cosmetics might be literally only skin deep, but things like going to the gym to develop a sexually attractive physique do at least something for the strength-related traits it signals (though there's also a distinction between exercises that do more for strength and ones that do more for show).

With language ability, it's a little harder to say. I believe there are some opera singers who get really good at singing their lines with a native accent even without learning the language, but I think it's very difficult for a human actor to learn to do this in a non-singing role, let alone to mimic a back-and-forth conversation without having the understanding. Obviously, since humans have the capacity for back-end understanding, this is going to change the relative difficulty of mimicking the surface feature without mimicking the full understanding, compared to mimicking the surface feature by mimicking the full understanding. But for views on which consciousness just *is* a certain kind of information-processing, it might be very natural to think that the information-processing required for fluent conversation is so closely related to the information-processing required for consciousness that it's very difficult to select for just one without the other.

author

Thanks for this terrific comment, Kenny. The fruit case sounds similar to what biologists call Müllerian mimicry, where species converge on a similar signal to efficiently convey the same message (e.g., a less common but actually toxic butterfly having the same color pattern as a more common toxic butterfly). Another comparison is a child learning a novel word, imitating the phonetic structure of the adult’s use, but in a way that involves actually understanding the referent (i.e., that the new object actually is a “blicket”).

There are three ways of thinking about such cases in our model of mimicry, which you gesture at. One is to think of it as imitation rather than mimicry: The imitator aims to duplicate the actual F and not just the S1. Another is to take the aim to be creating the S2, where it happens that the way to do it is to create an F. A third is when the F is actually present and an S2 is created to efficiently signal it (the Müllerian case).

So I think I just agree with the examples you give. If AI language involves any of that, either (1.) it isn’t mimicry, or (2.) it is mimicry, but a further explanation potentially reveals that behind the mimicry is real understanding. The force of the Mimicry Argument isn’t that mimics couldn’t be conscious, just that when mimicry is present, it defeats the quick inference from S1 to F, and further argument is required.

deleted · Jul 10 · Liked by Eric Schwitzgebel
Comment deleted
author

Yep!
