New Article (Yay!) on How to Continue to Be Puzzled about Robot/A.I. Consciousness
How can you tell whether a robot, or some other A.I. system, has conscious experiences (i.e., phenomenal consciousness: whether there's "something it's like" to be that system)?
The question matters because conscious experience, or at least certain types of conscious experience, is the most valuable thing in the universe. A planet devoid of all consciousness -- mere stones and algae, say -- is missing something amazing that Earth has: joy, sadness, insight, inspiration, wonder, imagination, relief, longing, understanding, sympathy... all of which, I assume, have experiential components. If we build robots who genuinely possess consciousness, as opposed to being mere empty machines (so to speak), we will have succeeded in something wondrous: the creation of a new type of experiential entity, with a new range of experiential capacities. Such entities will deserve our solicitude and care, perhaps even more care than we owe to human strangers, given the obligations that attach to being their creators.
The question confronts us with one of the most difficult problems in all of science: how to detect the presence or absence of conscious experience in entities very different from us.
[the android Data from Star Trek, testifying in a 24th-century court trial concerning his status as a conscious entity with rights]
Now there are two views on which the question is easy. According to panpsychism, consciousness is ubiquitous, so even currently existing robots are conscious, or contain consciousness, or participate in some cosmic consciousness. At the opposite extreme, according to biological views of consciousness (especially John Searle's view), no artificially constructed, non-biological system could ever be conscious, no matter how sophisticated it seemed to outside observers. Both views are extreme, so let's set them aside (despite their merits).
With those extremes excluded, we are still left with a wide range of middling views about robot consciousness -- all the way from very liberal views, on which we are already close to creating robot consciousness, to very conservative views, on which the possibility might require radically new technologies of the far distant future. Even among moderate views, positions differ on what's necessary for consciousness like ours (the right kind of integrated information? the right kind of "global workspace"? higher-order self-monitoring?).
These debates show no sign of subsiding in the foreseeable future. So it would be nice if we could have a relatively theory-neutral test of robot consciousness -- a test that at least most moderately inclined theorists of consciousness could agree was diagnostic, despite continuing disputes about underlying theory.
The most famous relatively theory-neutral test is the Turing Test, according to which a machine counts as "thinking", or (adapting to the present case) "being conscious", if its verbal outputs are indistinguishable from those of an ordinary adult human. Unfortunately, the Turing Test has at least three crucial limitations:
First, some entities that most of us would agree have conscious experiences, such as babies and dogs, fail the test.
Second, the test relies exclusively on patterns of external behavior, and so it assumes the falsity of any theory of consciousness on which consciousness depends on internal mechanisms separable from outward behavior (which, in fact, probably describes most current theories).
Third, currently existing chatbots already come close to passing it despite, on most moderate views of consciousness, not being conscious. This suggests that the test is liable to "cheating strategies" in which a machine could pass by superficial imitation of human linguistic patterns.
In her 2019 book, Artificial You, Susan Schneider comes to the rescue, proposing two purportedly theory-neutral tests of robot consciousness.
One is the AI Consciousness Test (ACT), a version of the Turing Test designed to limit cheating strategies by preventing the machine from having access to textual data on human discussions of consciousness. The ACT also focuses on the machine's responses to philosophical questions concerning consciousness (life after death, soul swapping, etc.). Schneider and her collaborator on this test, Edwin Turner, hope that with the right kinds of restrictions and a focus on questions concerning consciousness, the machine would speak like a human only if it had genuine introspective access to real conscious experiences.
Schneider's second test is the Chip Test, which involves gradually replacing parts of your brain with silicon (or other artificial) chips, then introspecting. If you introspectively detect that consciousness remains vividly present, you can infer that the silicon chips support consciousness. If you introspectively detect a decline in consciousness, you can infer that the chips do not adequately support consciousness.
So now, to the new article promised in the title of this post.
Back in 2017 or 2018, my awesome undergraduate student David Billy Udell became fascinated with these issues. He decided to write an honors thesis about them, focusing on Schneider's two tests (familiar from her presentations in popular media and then later from an advance draft of her book that she kindly shared). David finished his thesis in 2019, then went off to graduate school in philosophy at CUNY. Together, we revised his critique of Schneider into a journal article, finally published last weekend.
His/our fundamental criticism is that neither test is as theory-neutral as it might seem. In other words, the tests have an "audience problem". Liberals about A.I. consciousness will probably think the tests are unnecessary or too stringent; skeptics and conservatives will probably think they aren't stringent enough. The tests are thus only a partial advance, helpful to a limited range of theorists whose doubt or skepticism is of exactly the right sort to be addressed by the specifics of the tests. In short, the ACT remains open to cheating/mimicking strategies, despite Schneider and Turner's efforts, and the Chip Test relies on an awkward combination of skepticism about the purported introspections of fully-chipped robots and uncritical acceptance of the tester's purported introspections after partially replacing their brain with chips.
For the full critique, see the official published version in the Journal of Consciousness Studies or the final manuscript version here.
What a pleasure to see David's work now in print in a prominent journal -- hopefully the start of a magnificent academic career!