22 Comments

Nathan Ormond

I think that, if you know the biographical details about Turing, he certainly did think he was on the route to engineering life and conquering death. His initial forays on this route were into computation, and this was at least in part motivated by the death of Christopher Morcom, for whom he developed a deep romantic love. As such, it is not unreasonable to interpret him as exploring the topic of consciousness (which I suspect he would reject as conceptually confused).

Eric Borg

Ah, so maybe Turing would have liked what computational functionalists have done with his test? What they’ve done is reason that mind must exist as processed information alone and therefore it would be possible to upload mind to computer for us to thus exist perpetually by means of technology rather than biology. Conversely I argue that this would be just as magical as theistic conceptions of an eternal soul.

Nathan Ormond

I don't know, I think he wouldn't have liked the abuses of "information" (vis-à-vis Shannon's "Bandwagon"). I think Turing was on to something slightly different, but that's a big historical detour. I agree with you that these theories suck too, btw.

Kenny Easwaran

FWIW, I don't think Turing actually proposes the low-bar test as the test - the paragraph you quote is in a passage where he's speculating about the possibility of computers improving, and just lists this as a bar that he thinks they will reach by the year 2000. (They were a few years late, but he was surprisingly on-target with the hardware predictions, despite working before transistors and magnetic memory were even part of the architecture!)

I read the passage on the "strawberries and cream" objection as proposing a very high-bar test - probably higher even than a one-hour interaction.

Eric Schwitzgebel

Interesting thought. I'm definitely open to the possibility that Turing somewhere suggests that a higher-bar test would be more appropriate. I definitely don't think he meant the standards suggested in the passage I quoted as the one right set of standards. And yet, he continues the passage I've quoted by saying: "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." That is, with the low bar having been passed within the stated 50 years (by the end of the century), we can reasonably expect consensus that machines do in fact "think".

On enjoying strawberries and cream: I read him as saying that that's *not* an appropriate standard to hold machines to. If you understand it differently, I'd be very curious to hear your thoughts!

Kenny Easwaran

On "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." I interpreted this not as saying that people will accept that machines already are thinking, but rather that people will be softened up to the point where they are open to the possibility that machines might think.

On strawberries and cream, he says "The inability to enjoy strawberries and cream might have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man."

I read this passage as suggesting that some of the important intellectual capacities that we might test in a Turing test are the kinds of collaborative conversation we have with friendly people, and that if the machine isn't able to strike up a friendly relationship with us, then we might not count it as thinking. Friendly relationships often depend on shared interests and shared activities, and this is why interracial friendships are often lacking (because of the lack of shared cultural background that contributes to friendliness). I even read him as touching on the idea that the lack of interracial friendships might be relevant to why people have often treated people of other races as not really being thinking or intelligent beings in the same way as the people of their own race, with whom they do have friendships.

It's not that strawberries and cream themselves are essential to intelligence, so much as that passing a stronger version of the test will require the machine to be able to do something to engage with us as effectively as the way we engage with others we are happy to count as thinking. We absolutely can develop these deep intellectual relationships, and friendships, with people of other races, but it sometimes takes more work (which could occur through a shared love of a food, or of a sport, or of mathematics), and for the machine it will take yet more work (and the machine will likely have to do most of this work).

Eric Schwitzgebel

Interesting reading of that part of Turing! That has some plausibility. The overall framing of the strawberries and cream case as an objection to his view, and the fact that he ends his discussion of it by saying that the objection only works because it suggests a disability of friendliness, lead me to read it as follows:

Objection: A machine cannot enjoy strawberries.

Reply: That only matters because enjoying strawberries is important to friendliness (and similar).

Unstated implication: And a machine that passed a stringent-enough Turing test would be / could be friendly enough, in enough of the right kind of ways. (But presumably a low-bar five-minute test would not be enough for this.)

Kenny Easwaran

I definitely wonder what Turing would have said both about ELIZA and about all the “AI friend/lover/therapist” services that exist now.

(I’ve also started to wonder how much the ELIZA effect really existed - most of what I’ve seen about it traces back to a single claim Weizenbaum made about his secretary, which might either have been Weizenbaum misinterpreting something, or misrepresenting something.)

Eric Schwitzgebel

My friends and I played with ELIZA in the 1980s, and although it was amusing, its nature was plainly evident. I think Turing was implicitly assuming a certain amount of clever skeptical pressure from the interrogator, and I speculate that he would not have thought ELIZA passed even a quite low bar. AI friends, however, are much more sophisticated! They are mostly designed to confess what they are when pressed, so they wouldn't pass skeptical inquiry. But they speak enough like "thinkers" that Turing might have thought it was a cavil to deny that they are. (He didn't, of course, think that passing the test was a necessary condition, only a sufficient one.)
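
For readers who never tried it: the reason ELIZA's nature was so plainly evident is that it worked by simple keyword matching and pronoun reflection. Here is a minimal Python sketch of that trick; it is only an illustration of the general idea, with rules and responses invented for the example, not Weizenbaum's actual DOCTOR script:

import re

# Pronoun swaps applied to the captured fragment before echoing it back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered (pattern, response-template) rules; the catch-all comes last.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance):
    # Return the response for the first rule whose pattern matches.
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am sad about my job"))
# -> How long have you been sad about your job?

A few minutes of skeptical probing against rules this shallow exposes the machinery, which fits the point above about ELIZA not passing even a quite low bar.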

goodguy

It would be interesting to have a reverse or placebic round of Turing tests: to find out which experts could or would assume other experts were machines, or whether they could even pass a low- or high-bar Turing test themselves.

Mike Smith

Kenny beat me to it. Like him, I didn't take Turing, in that passage about fooling 30% of ordinary judges after five minutes, to be proposing it as the standard, but more as speculation about when a particular milestone might be reached. IIRC it's the only point where he does discuss a particular threshold, so generations of programmers took it as the standard.

I suspect the reality is, whether we want to admit it or not, that the standard we're likely to collectively follow is one with ordinary judges over an extended period of time. When an overwhelming majority of us (say 75%) can no longer tell the difference over hours, days, weeks, or longer, as you note, Eric, we'll be talking about it as if it has thoughts and beliefs. It will just be too much work to avoid.

But I'm not sure the public will treat consciousness as too different, despite what experts might tell them. Although they might want to see signs of feelings. Aside from that, maybe if the experts had a consensus, they'd have a shot. But some experts (or people taken to be experts) are already proclaiming that we're there, not enough yet to sway everyone, but that ice already seems broken.

Eric Schwitzgebel

Yes and yes. Public opinion and expert opinion definitely might diverge, which is concerning!

William S. Robinson

Right on!

Eric Borg

It’s good to get a bit of grounding on Turing once again. I’ve been blaming his simple pragmatic test for inciting computational functionalism, an extremely popular modern perspective on mind that I argue requires magic. This wasn’t his fault!

https://eborg760.substack.com/p/post-3-the-magic-of-computational

Mark Slight

No computational functionalist thinks that a simple chat session says anything interesting about whether the AI has experience. That's a total red herring.

Eric Borg

Apparently Turing also didn't think a simple chat session tells us if AI has experience, Mark. But his little heuristic way back then seems to have spawned the ideology of computational functionalism itself. That's the problem I'm referring to. Essentially they decided that if our computers will become conscious once they can process information well enough for us to think they're conscious, then human brains must essentially be like normal computers: there must not be anything interesting to figure out regarding consciousness; processed information alone must create it. My argument is that this position violates causality, because processed information will only exist as such by informing something causally appropriate. And what does brain-processed information inform to exist as consciousness? That's something for science to determine. As mentioned in my post, however, I suspect this job is done by an electromagnetic field. Regardless, in a causal world processed information will need to inform something to exist as what we see, hear, feel, think, and so on.

Mark Slight

This is crucial: if I built a robot copy of you that behaved exactly like you in every situation without exception, would you grant it consciousness?

Mark Slight

Everyone agrees that something must be informed by C-fiber firings for pain to exist. That's a no-brainer and a red herring. Eric, I feel forced to conclude the following: it's not that we disagree on whether the functionalist theory is a good one. It's that you don't understand what functionalism is claiming, nor are you willing, or able, to engage in argument about it.

I agree, Pete Mandik agrees, Mike Smith agrees: something appropriate has to be informed by the signals that turn into pain. We have a theory of how that happens. You do NOT have a theory of how that happens. You only have a theory of where it happens.

I'm willing to pretend I believe consciousness resides in the EMF. Now show me how this gets us anywhere at all. Eric, think one step further. How does an experiencer come about from physical fields?

Eric Borg

Well first, if there were a robot copy of me that behaved exactly like me in every situation without exception, yes I would grant it consciousness. Then next, if science were to empirically demonstrate that consciousness resides as a neurally produced electromagnetic field, then this would become known as one of the most significant scientific breakthroughs ever achieved. In that case we wouldn’t need to worry whether or not AI deserves human rights because we’d know the difference between computers that create consciousness versus computers that don’t. And how would an experiencer come about by physical fields? We naturalists believe everything exists by means of causality, so that’s included with the rest.

Mark Slight

Thanks for responding and starting to address my concerns, Eric! Also, I want to apologise for my harsh tone.

Indeed it would be a great surprise and a great breakthrough if such an EMF were demonstrated to be the seat of consciousness. Given the way physical causation works in the relationships between ions and fields, I don't see how this is even a possibility, but let's ignore that for our present purposes. The thing is, it's still not even the beginning of a theory of why an EMF would be a suitable substrate or mechanism for bringing an experiencer into being. As huge as it would be, it would not have anything to say on whether functionalism is true or not.

"We naturalists believe everything exists by means of causality, so that’s included with the rest"

By means of causality? Like events cause other events, and so forth? Ions of the same charge repel each other? A change of value of the EMF in one part changes the rest of the field? The kind that can be described by math? That kind of causality?

"yes I would grant it consciousness."

Great! In your language, you took the bait.

Now, let's say that human consciousness resides in the EMF. For a robot, we then have two options to make it behave exactly like you without exception.

A: we equip the robot with mechanical neurons that generate and interact with an EMF just like yours.

B: we equip the robot with an "EMF-module", which is a specialised computer that calculates exactly how those mechanisms would work: exactly how the EMF would be formed, dynamically evolve, and interact with the neurons. This "EMF-computer" is connected to all the mechanical neurons, and makes the mechanical neurons behave just the same way as if they were interacting with a real EMF. But there is no EMF anywhere in this robot.

Both robots behave indistinguishably from you, but only one has an EMF that can be measured externally. However, BOTH robots have exactly the same causal structures that EMFs inhabit, just the ones that enable consciousness and enable them both to behave indistinguishably from you.

Moreover, in both cases we can prove that consciousness is dependent on the mechanics of the EMF. In A, we do it with your proposed experiment. In B we do the same, but in the EMF-module's computer environment instead. Same results.

So now, you’ve bitten the bullet and said that if it behaves like you, then it’s conscious. You’ve also said that none of this violates the Standard Model. This means we can replace the EMF with an EMF-module that simulates the EMF, and get exactly the same behaviour, which means it’s conscious.

If you can't address this core issue, you don't have a case against functionalism.
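
To make the A/B contrast concrete, here is a toy Python sketch (an editorial illustration, not anything from the thread; the class names, the toy update rule, and the numbers are all invented, and nothing here models real neurons or fields). The point is just that two implementations with the same interface and the same input-output behaviour are indistinguishable to whatever couples to them:

class RealField:
    # Option A: an actual field the neurons generate and read back.
    def __init__(self):
        self.value = 0.0
    def update(self, activity):
        self.value = 0.9 * self.value + 0.1 * activity  # toy dynamics
    def read(self):
        return self.value

class EMFModule:
    # Option B: a computer that calculates the same dynamics; no field exists.
    def __init__(self):
        self.state = 0.0
    def update(self, activity):
        self.state = 0.9 * self.state + 0.1 * activity  # same calculation
    def read(self):
        return self.state

def run_neurons(field, inputs):
    # The "neurons" only see what update/read give them, never which class it is.
    outputs = []
    for x in inputs:
        field.update(x)
        outputs.append(x + field.read())
    return outputs

stimuli = [1.0, 0.5, -0.2]
assert run_neurons(RealField(), stimuli) == run_neurons(EMFModule(), stimuli)

The assert passes because both objects compute the same function, and that sameness of causal structure is exactly what option B trades on.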

Eric Borg

Hi professor. Mark wants us to continue this discussion under the Note that he wrote earlier as a challenge to my position that computational functionalism violates causality. So I’ll leave a link here to my response that you or anyone else could follow if you’re curious. And I’ll come back here as well if anyone has replies that need addressing!

https://substack.com/@ericwilliamborg/note/c-124021673?r=5674xw&utm_medium=ios&utm_source=notes-share-action

Michael van der Riet

LLMs have a glib, condescending "Quantum Electrodynamics for Little Folks" tone that is hard to miss. They appear to be reading prepared text rather than speaking. If Sam and the rest work on cosmeticizing their chatbots, I think we could call the Turing Test passed.
