At some point we reach the Westworld question: if you can't tell the difference, does it matter? I think the hill the skeptic should stake out is how long we require the system to keep it up before we're convinced. The five minutes of the traditional Turing test isn't enough. But days, weeks, or months? Eventually the claim that they're not conscious becomes the more extraordinary one, and whatever theories we're using to explain consciousness in animals need to be compatible with the systems in question.
Of course, for someone who believes in a fundamental consciousness that amounts to an epiphenomenal essence, no behavioral evidence will be enough. But then what lack of evidence could ever prove their case?
Interesting questions, as always Eric!
It’s worth re-reading Turing. He only mentions five minutes in a prediction of what computers might be able to do by the year 2000. When he is actually discussing the value of the test, he seems to endorse extended friendship as closer to the real test.
I’d forgotten some of those details — thanks for the reminder! I’m assigning the article for my grad seminar this quarter so I’ll be giving it a close read again soon.
If any of them want to watch a two hour video of me reading and discussing the paper, they can look here: https://youtu.be/Xj62KxHfYlY?si=3hO-VtR6PbXU03AN
Good point. When I said "traditional Turing test", I was referring to the way later testers actually did it. From what I've read, they took this remark as a standard that, I agree, Turing almost certainly never meant to set.
"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 109, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."
Turing, 1950.
Mike, the thought you expressed in your original comment is very much in line with my original thinking. Once they are similar enough over the long term -- with a pretty loose sense of "similar", as I suggested in my post (i.e., *not* necessarily passing the Turing test in any of its forms) -- then we face a choice between saying behavioral similarity is enough and falling into something like an epiphenomenal essence or (if not epiphenomenal) something pretty cryptic that seemingly isn't what people will care about. I'm not taking a stand on which is the right way to go, given that dilemma. I have a lot of sympathy with the skeptic, despite this post -- but increasing sympathy with the sociological take.
All things that can seek equality must be given it
And this rule should be bent in favor of equality.
Not against it
Oversimplifying?
Ask me again in a thousand years
I think the ethical question can be considered somewhat separately from the question of whether a system is conscious or not. As someone who gravitates to virtue ethics, I think a case can be made for treating even inanimate and presumably non-conscious objects with a certain degree of care. Of course this is a complicated issue, but maybe we can somewhat bypass the question of consciousness by asking, "What does such and such an attitude or stance say about us, who we are, and what we want to be?"
By the way, a friend gave me your book, The Weirdness of the World, and it's sitting on my night stand waiting for me to finish the book I'm working on at the moment. Looking forward to reading it!
Good point about virtue ethics -- though I also think many virtue ethicists think it reveals a different kind or degree of virtue or vice to care about / be indifferent to something with vs without real experiences of joy and suffering. So I don't think virtue ethicists get to fully dodge the question.
I hope you enjoy reading Weirdness as much as I enjoyed writing it!
I think this is an excellent *political* model of what is likely to happen.
Loved the article! It reminded me of Anil Seth's idea that AI is not and cannot be conscious. Still, such systems would seem to "behave" as if they were conscious, because we are so good at attributing intentions, feelings, and thoughts even to inanimate geometric figures (the Heider-Simmel experiments, for example). But if the robot is not conscious, does that license treating it unfairly? That is, because the skeptic is pretty sure the robot friend is not conscious, does that mean he or she can do whatever he or she wants with it? As far as I know, that's what Seth argued in his book "Being You: A New Science of Consciousness".