The Nesting Problem for Theories of Consciousness
In 2016, Tomer Fekete, Cees Van Leeuwen, and Shimon Edelman articulated a general problem for computational theories of consciousness, which they called the Boundary Problem. The problem extends to most mainstream functional or biological theories of consciousness, and I will call it the Nesting Problem.
Consider your favorite functional, biological, informational, or computational criterion of consciousness, criterion C. When a system has C, that system is, according to the theory, conscious. Maybe C involves a certain kind of behaviorally sophisticated reactivity to inputs (as in the Turing Test), or maybe C involves structured meta-representations of a certain sort, or information sharing in a global workspace, or whatever. Unless you possess a fairly unusual and specific theory, probably the following will be true: Not only will the whole animal (alternatively, the whole brain) meet criterion C. So also will some subparts of the animal and some larger systems to which the animal belongs.
If there are relatively functionally isolated cognitive processes, for example, they will also have inputs and outputs, and integrate information, and maybe have some self-monitoring or higher-order representational tracking -- possibly enough, in at least one subsystem, if the boundaries are drawn just so, to meet criterion C. Arguably too, groups of people organized as companies or nations receive group-level inputs, engage in group-level information processing and self-representation, and act collectively. These groups might also meet criterion C.[1]
Various puzzles, or problems, or at least questions immediately follow, which few mainstream theorists of consciousness have engaged seriously and in detail.[2] First: Are all these subsystems and groups conscious? Maybe so! Maybe meeting C truly is sufficient, and there's a kind of consciousness transpiring at these higher and/or lower levels. How would that consciousness relate to consciousness at the animal level? Is there, for example, a stream of experience in the visual cortex, or in the enteric nervous system (the half billion neurons lining your gut), that is distinct from, or alternatively contributes to, the experience of the animal as a whole?
Second: If we want to attribute consciousness only to the animal (alternatively, the whole brain) and not to its subsystems or to groups, on what grounds do we justify denying consciousness to subsystems or groups? For many theories, this will require adjusting, or at least refining, criterion C, or alternatively defending a general "exclusion postulate" or "anti-nesting principle" that specifically forbids nested levels of consciousness.
Suppose, for example, that you think that, in humans, consciousness occurs in the thalamocortical neural loop. Why there? Maybe because it's a big hub of information connectivity around the brain. Well, the world has lots of hubs of complex information connectivity, both at smaller scales and at larger scales. What makes one scale special? Maybe it has the most connectivity? Sure, that could be. If so, then maybe you're committed to saying that connectivity above some threshold is necessary for consciousness. But then we should probably theorize that threshold. Why is it that amount rather than some other amount? And how should we think about the discontinuity between systems that barely exceed the threshold and systems that barely fall short?
Or maybe instead of a threshold, it's a comparative matter: Whenever systems nest, whichever has the most connectivity is the conscious system. But that principle can lead to some odd results. Or maybe it's not really C (connectivity, in this example) alone but C plus such-and-such other features, which groups and subsystems lack. Also fine! But again, let's theorize that. Or maybe groups and subsystems are also conscious -- consciousness happens simultaneously at many levels of organization. Fine, too! Then think through the consequences of that.[3]
My point is not that these approaches won't work or that there's anything wrong with them. My point is that this is a fundamental question about consciousness which is open to a variety of very different views, each of which brings challenges and puzzles -- challenges and puzzles which philosophers and scientists of consciousness, with a few exceptions, have not yet seriously explored.
--------------------------
Notes
[1] For an extended argument that the United States, conceived of as an entity with people as parts, meets most materialist criteria for being a conscious entity, see my essay here. Philip Pettit also appears to argue for something in this vicinity.
[2] Giulio Tononi is an important exception (e.g., in Oizumi, Albantakis, and Tononi 2014 and Tononi and Koch 2015).
[3] Luke Roelofs explores a panpsychist version of this approach in his recent book Combining Minds, which was the inspiration for this post.