Comparing the Major Theories of Consciousness

Ned Block
New York University, New York, New York

Abstract

This article compares the three frameworks for theories of consciousness that are taken most seriously by neuroscientists: the view that consciousness is a biological state of the brain, the global workspace perspective, and an account in terms of higher order states. The comparison features the “explanatory gap” (Nagel, 1974; Levine, 1983), the fact that we have no idea why the neural basis of an experience is the neural basis of that experience rather than another experience or no experience at all. It is argued that the biological framework handles the explanatory gap better than do the global workspace or higher order views. The article does not discuss quantum theories or “panpsychist” accounts according to which consciousness is a feature of the smallest particles of inorganic matter (Chalmers, 1996; Rosenberg, 2004). Nor does it discuss the “representationist” proposals (Tye, 2000; Byrne, 2001a) that are popular among philosophers but not neuroscientists.

Three theories of consciousness

Higher Order

The higher order approach says that an experience is phenomenally conscious only in virtue of another state that is about the experience (Armstrong, 1978; Lycan, 1996a; Byrne, 1997; Carruthers, 2000; Byrne, 2001b; Rosenthal, 2005a). This perspective comes in many varieties, depending on, among other things, whether the monitoring state is a thought or a perception. The version to be discussed here says that the higher order state is a thought (“higher order thought” is abbreviated as HOT) and that a conscious experience of red consists in a representation of red in the visual system accompanied by a thought in the same subject to the effect that the subject is having the experience of red.

Global Workspace

The global workspace account of consciousness was first suggested by Bernard Baars (1988) and has been developed in a more neural direction by Stanislas Dehaene, Jean-Pierre Changeux, and their colleagues (Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006). The account presupposes a neural network approach in which there is competition among neural coalitions involving both frontal and sensory areas (Koch, 2004), the winning coalitions being conscious. Sensory stimulation causes activations in sensory areas in the back of the head that compete with each other to form dominant coalitions (indicated by dark elements in the outer rings in figure 77.1). Some of these dominant coalitions trigger central reverberations through long-range connections to frontal cortex, setting up activations that help to maintain both the central and peripheral activations.

Figure 77.1 Schematic diagram of the global workspace. Sensory activations in the back of the brain are symbolized by dots and lines in the outside rings. Dominant sensory neural coalitions (dark lines and dots) compete with one another to trigger reverberatory activity in the global workspace (located in frontal areas) in the center of the diagram. The reverberatory activity in turn maintains the peripheral excitation until a new dominant coalition wins out.

The idea that some brain areas control activations and reactivations in other areas is now ubiquitous in neuroscience (Damasio & Meyer, 2008), and a related idea is widely accepted: one instance of reciprocal control is that in which workspace networks in frontal areas control activations in sensory and spatial areas (Curtis & D’Esposito, 2003). In thinking about the account, it is useful to distinguish between suppliers and consumers of representations. Perceptual systems supply representations that are consumed by mechanisms of reporting, reasoning, evaluating, deciding, and remembering, which themselves produce representations that are further consumed by the same set of mechanisms.
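To make the supplier/consumer picture concrete, here is a minimal toy sketch in Python. It illustrates only the architecture just described, not Dehaene and Changeux's actual neuronal simulation: the coalition contents and strengths, the winner-take-all competition rule, and the particular consumer mechanisms are simplified stand-ins invented for this example.

```python
# Toy sketch of the global workspace architecture described above.
# Illustration only: coalition strengths, the winner-take-all rule,
# and the consumer mechanisms are invented stand-ins.

from dataclasses import dataclass


@dataclass
class Coalition:
    content: str      # a perceptual representation, e.g. "leftward motion"
    strength: float   # summed activation of the sensory coalition


def compete(coalitions):
    """Sensory coalitions in the back of the head compete;
    the strongest becomes the dominant coalition."""
    return max(coalitions, key=lambda c: c.strength)


def broadcast(winner, consumers):
    """Global broadcasting: the dominant coalition's content is made
    available, without further processing, to every consuming mechanism
    (reporting, reasoning, evaluating, deciding, remembering). The
    frontal reverberation that maintains the activation is omitted."""
    return {name: consume(winner.content) for name, consume in consumers.items()}


consumers = {
    "report": lambda content: f"I see {content}",
    "memory": lambda content: ("stored", content),
    "decide": lambda content: ("act on", content),
}

coalitions = [
    Coalition("leftward motion", strength=0.9),  # dominant: gets broadcast
    Coalition("faint tone", strength=0.2),       # weak: stays local
]

print(broadcast(compete(coalitions), consumers))
```

On this toy picture, only the winning coalition's content reaches the consumers; the weak coalition never leaves its sensory module, which is the sketch's analogue of remaining unconscious.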
Once perceptual information is “globally broadcast” in frontal cortex, it is available to all cognitive mechanisms without further processing. According to this account, phenomenal consciousness just is global broadcasting. Although the global workspace account is motivated and described in part in neural terms, the substantive claims of the model abstract away from neuronal details. Nothing in the model depends on the electrochemical nature of actual neural signals; the architectural aspects of the model can just as easily be realized in silicon-based computers as in protoplasm. In this respect, the global workspace theory of consciousness is a form of what philosophers call functionalism (Block, 1980), according to which consciousness is characterized by an abstract structure that does not include the messy details of neuroscience.

Another functionalist theory of consciousness is the integrated information theory (Tononi & Edelman, 1998), according to which the level of consciousness of a system at a time is a matter of how many possible states it has at that time and how tightly integrated its states are. This theory has a number of useful features—for example, retrodicting that there would be a loss of consciousness in a seizure, in which the number of possible states drops precipitously (Tononi & Koch, 2008). Unfortunately, such predictions would equally follow from an integrated information theory of intelligence (in the sense of the capacity for thought, as in the Turing test of intelligence), since intelligence also drops in a seizure. Consciousness and intelligence are on the face of it very different things. We all understand science fiction stories in which intelligent machines lack some or all forms of consciousness. And on the face of it, mice or even lower animals might have phenomenal consciousness without much intelligence. The separation of consciousness and cognition has been crucial to the success of the scientific study of consciousness. In a series of papers that established the modern study of consciousness, Crick and Koch (1990, 1998) noted in particular that the basic processes of visual consciousness could be found in nonprimate mammals and were likely to be independent of language and cognition. Although its failure to distinguish consciousness from intelligence cripples the current prospects of the integrated information theory as a stand-alone theory of consciousness, I will mention it at the end of the article in a different role: as an adjunct to a biological theory.
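By way of illustration, the two quantities the integrated information theory appeals to can be given crude empirical proxies: joint entropy for the size of the state repertoire, and total correlation (the sum of the marginal entropies minus the joint entropy) for how tightly the parts are integrated. The sketch below is a stand-in in the spirit of the theory, not Tononi's actual measure of integrated information, and the "awake" and "seizure" state sequences are invented for illustration.

```python
# Crude proxies for the integrated information theory's two quantities:
# joint entropy as the size of the state repertoire, total correlation
# as integration. Not Tononi's phi; sample data invented for illustration.

from collections import Counter
from math import log2


def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())


def repertoire_and_integration(states):
    """states: a list of tuples, one component per subsystem."""
    joint = entropy(states)
    marginal_sum = sum(
        entropy([s[i] for s in states]) for i in range(len(states[0]))
    )
    return joint, marginal_sum - joint


# Waking-like regime: many distinct joint states across two subsystems.
awake = [(0, 1), (1, 0), (1, 1), (0, 0), (1, 0), (0, 1)]
# Seizure-like regime: the repertoire collapses to a single joint state.
seizure = [(1, 1)] * 6

print(repertoire_and_integration(awake))    # large repertoire
print(repertoire_and_integration(seizure))  # repertoire drops to zero
```

The "seizure" regime shows the retrodiction mentioned above: when nearly all units fire in lockstep, the number of possible states, and hence this proxy measure, collapses. Notice that nothing in the computation mentions consciousness rather than intelligence, which is the problem just raised.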
The Biological Theory

The third of the major theories is the biological theory, the theory that consciousness is some sort of biological state of the brain. It derives from Democritus (Kirk, Raven, & Schofield, 1983) and Hobbes (1989), but was put in modern form in the 1950s by Place (1956), Smart (1959), and Feigl (1958). (See also Block, 1978; Crane, 2000; Lamme, 2003.)

I will explain it using as an example the identification of the visual experience of (a kind of) motion with a brain state that includes activations of a certain sort in area MT+ of the visual cortex. Although this identification is useful as an example, we can expect that any such theory of visual experience will be superseded.

Visual area MT+ reacts to motion in the world, different cells reacting to different directions. Damage to MT+ can cause loss of the capacity to experience this kind of motion; MT+ is activated by the motion aftereffect; and transcranial magnetic stimulation of MT+ disrupts these aftereffects and can also cause motion “phosphenes” (Zihl, von Cramon, & Mai, 1983; Britten, Shadlen, Newsome, & Movshon, 1992; Heeger, Boynton, Demb, Seidemann, & Newsome, 1999; Kammer, 1999; Cowey & Walsh, 2000; Kourtzi & Kanwisher, 2000; Huk, Ress, & Heeger, 2001; Rees, Kreiman, & Koch, 2002; Théoret, Kobayashi, Ganis, Di Capua, & Pascual-Leone, 2002).

However, it is important to distinguish between two kinds of MT+ activations, which I will call nonrepresentational activations and representational activations. Some activations in the visual system are very weak, do not “prime” other judgments (that is, do not facilitate judgments about related stimuli), and do not yield above-chance performance on forced-choice identification or detection (that is, they do not allow subjects to perform above chance on a choice of what the stimulus was or even whether there was a stimulus at all). On a very liberal use of the term “representation,” in which any neural activation that correlates with an external property is a representation of it (Gallistel, 1998), one might nonetheless call such activations of MT+ representations, but it will be useful to be less liberal here, describing the weak activations just mentioned as nonrepresentational. (The term “representation” is very vague and can be made precise in different, equally good ways.) However, if activations of MT+ are strong enough to be harnessed in subjects’ choices (at a minimum in priming), then we have genuine representations. (See Siegel, 2008, for a discussion of the representational contents of perceptual states.) Further, there is reason to think that representations in MT+ that also generate feedback loops to lower areas are at least potentially conscious representational contents (Pascual-Leone & Walsh, 2001; Silvanto, Cowey, Lavie, & Walsh, 2005). (For a dissident anti-feedback-loop perspective, see Macknik & Martinez-Conde, 2007.)

Of course, an activated MT+, even with feedback to lower visual areas, is not all by itself sufficient for phenomenal consciousness. No one thinks that a section of visual cortex in a bottle would be conscious (Kanwisher, 2001). What makes such a representational content phenomenally conscious? One suggestion is that active connections between cortical activations and the top of the brain stem constitute what Alkire, Haier, and Fallon (2000) call a “thalamic switch.” There are two important sources of evidence for this view. One is that the common feature of many if not all anesthetics appears to be that they disable these connections (Alkire & Miller, 2005). Another is that the transition from the vegetative state to the minimally conscious state (Laureys, 2005) involves these connections.
However, there is some evidence that the “thalamic switch” is an on switch rather than an off switch (Alkire & Miller, 2005) and that corticothalamic connections are disabled as a result of the large overall decrease in cortical metabolism (Velly et al., 2007; Alkire, 2008; Tononi & Koch, 2008)—which itself may be caused in part by the deactivation of other subcortical structures (Schneider & Kochs, 2007). Although this area of study is in flux, the important philosophical point is the three-way distinction among (1) a nonrepresentational activation of MT+, (2) an activation of MT+ that is a genuine visual representation of motion, and (3) an activation of MT+ that is a key part of a phenomenally conscious representation of motion. The same distinctions can be seen in terms of the global workspace theory as the distinction among (1) a minimal sensory activation (the gray peripheral activations in figure 77.1), (2) a peripheral dominant coalition (the black peripheral activations in figure 77.1), and (3) a global activation involving both peripheral and central activation (the circled activations in figure 77.1 that connect to the central workspace). The higher order account focuses on the distinction between a visual representation and a conscious visual representation (2 versus 3), the latter being a visual representation accompanied by a higher order thought to the effect that the subject has it.

Here are some points of comparison among the three theories. According to the biological account, global broadcasting and higher order thought are what consciousness does rather than what consciousness is. That is, one function of consciousness on the biological view is to promote global broadcasting, and global broadcasting in some but not all cases can lead to higher order thought. Further, according to the biological view, both the global workspace and higher order thought views leave out too many details of the actual working of the brain to be adequate theories of consciousness. Information in the brain is coded electrically, then transformed to a chemical code, then back to an electrical code, and it would be foolish to assume that this transformation from one form to another is irrelevant to the physical basis of consciousness. From the point of view of the biological and global workspace views, the higher-order-thought view sees consciousness as more intellectual than it is; but from the point of view of higher-order-thought accounts, the biological and global workspace accounts underestimate the role of cognition in consciousness. The global workspace and higher-order-thought accounts are sometimes viewed as superior to the biological account in that the biological account allows for the possibility that a subject could have a phenomenally conscious state that the subject does not know about (Block, 2007a, 2007b). And this is connected to the charge that the biological account—as compared with the other accounts—neglects the connection between phenomenal consciousness and the self (Church, 1995; Harman, 1995; Kitcher, 1995).

The higher order and global workspace accounts link consciousness to the ability to report it more tightly than does the biological view. On the higher-order-thought view, reporting is just expressing the higher order thought that makes the state conscious, so the underlying basis of the ability to report comes with consciousness itself.
On the global workspace account, what makes a representational content conscious is that it is in the workspace, and that just is what underlies reporting. On the biological account, by comparison, the biological machinery of consciousness has no necessary relation to the biological machinery underlying reporting, and hence there is a real empirical difference among the views that each side seems to think favors its own view (Block, 2007b; Naccache & Dehaene, 2007; Prinz, 2007; Sergent & Rees, 2007). To evaluate and further compare the theories, it will be useful to appeal to a prominent feature of consciousness, the explanatory gap.

The explanatory gap

Phenomenal consciousness is “what it is like” to have an experience (Nagel, 1974). Any discussion of the physical basis of phenomenal consciousness (henceforth just consciousness) has to acknowledge the “explanatory gap” (Nagel, 1974; Levine, 1983): nothing that we now know, indeed nothing that we have been able to hypothesize or even fantasize, gives us an understanding of why the neural basis of the experience of green that I now have when I look at my screen saver is the neural basis of that experience as opposed to another experience or no experience at all. Nagel puts the point in terms of the distinction between subjectivity and objectivity: the experience of green is a subjective state, but brain states are objective, and we do not understand how a subjective state could be an objective state or even how a subjective state could be based in an objective state.

The problem of closing the explanatory gap (the “Hard Problem,” as Chalmers, 1996, calls it) has four important aspects: (1) we do not see a hint of a solution; (2) we have no good argument that there is no solution that another kind of being could grasp or that we may be able to grasp at a later date (but see McGinn, 1991); so (3) we cannot conclude that the explanatory gap is intrinsic to consciousness; and (4) most importantly for current purposes, recognizing the first three points requires no special theory of consciousness. All scientifically oriented accounts should agree that consciousness is in some sense based in the brain; once this fact is accepted, the problem arises of why the brain basis of this experience is the basis of this one rather than another one or none, and it becomes obvious that nothing now known gives a hint of an explanation.

The explanatory gap was first brought to the attention of scientists through the work of Nagel (1974) and Crick and Koch (Crick, 1994; Crick & Koch, 1998). Many would argue that the candid recognition of what we do not understand played an important role in fueling the incredible wave of research that still engulfs us. How do the three theories account for the explanatory gap?

The HOT view says that consciousness of, say, red is a matter of three ingredients: a higher order thought, a representation with the content red, and an aboutness relation between the first and the second. According to the HOT perspective, each of these ingredients can exist individually without any consciousness. We have unconscious thoughts, we have unconscious representations of red (for example, subliminal representations), and unconscious states can be, unconsciously, about other states. According to the HOT theory, if a subject has an unconscious representation of red and then forms an unconscious thought about the representation of red, the representation of red automatically becomes conscious.
Of course, in some trivial sense of “conscious” we might decide to call that representation of red conscious, meaning only that there is a higher order thought about it; but if the HOT theory is about consciousness in the full-blooded sense in which for a state to be conscious is for there to be something it is like to be in that state, there is a fundamental mystery for the HOT view.

It may seem that this is just the explanatory gap in a new form, one appropriate to the HOT theory, but that assertion is a mistake. Consider the prime order thought (POT) view, which says that thoughts about thoughts about thoughts . . . are always conscious so long as the number of embeddings is prime. There is a puzzle of the POT view’s own making of why a prime number of embeddings creates consciousness, but that puzzle is not the real explanatory gap. The real explanatory gap is the problem of why the neural basis of a conscious state with a specific conscious quality is the neural basis of that conscious quality rather than another or nothing at all. The real explanatory gap does not assume any specific theory except the common basis of all scientific approaches in the 21st century: that conscious qualities have a brain basis. The problem for the HOT perspective is that it is built into the view that putting together ingredients that are not in themselves conscious (thought, aboutness, and representation) automatically yields consciousness. The most neuroscience can do is explain thought, explain aboutness, and explain representation. But there is no reason to expect—and it is not part of any HOT perspective—that neuroscience will find some magic glow that occurs when those things combine.

The fact that the HOT theory cannot recognize the real explanatory gap makes it attractive to people who do not agree that there is an explanatory gap in the first place—the HOT theory is a kind of “no consciousness” theory of consciousness. But for those who accept an explanatory gap (at least for our current state of neuroscientific knowledge), the fact that the HOT theory does not recognize one is a reason to reject the HOT theory. The HOT theory is geared to the cognitive and representational aspects of consciousness, but if those aspects are not the whole story, the HOT theory will never be adequate to consciousness.

This very short argument against the HOT approach also applies to the global workspace theory, albeit in a slightly different form. According to the global workspace account, the answer to the question of why the neural basis of my experience of red is the neural basis of a conscious experience is simply that it is globally broadcast. But why is a globally broadcast representation conscious? This is indeed a puzzle for the global workspace theory, but it is not the explanatory gap, because it presupposes the global workspace theory itself, whereas the explanatory gap (discussed previously) does not. The most neuroscience can do for us, according to the global workspace account, is explain how a representation can be broadcast in the global workspace, but the task will still remain of explaining why global broadcasting, however realized, is conscious. In principle, global broadcasting could be realized in an electronic system rather than a biological system, and of course the same issue would arise. So that issue cannot be special to the biological realization of mind.
The biological account, by contrast, fits the explanatory gap—indeed, I phrased the explanatory gap in terms of the biological account, asking how we can possibly understand how consciousness could be a biological property. So the biological account is the only one of the three major theories to fully acknowledge the explanatory gap. From the point of view of the HOT and global workspace theories, their task concerning the explanatory gap is not to show how they can accommodate it but rather to explain away our impression that there is one. One such attempt will be considered in the next section. There is a fine line between acknowledging the explanatory gap and surrendering to dualism, as also discussed in the next section.

The explanatory gap and dualism

Dualism is the view that there is some aspect of the mind that is not physical (Chalmers, 1996). It comes in many varieties, but the issues to be discussed do not depend on any specific variety.

Let us start with a historical analogy (Nagel, 1974). A pre-Socratic philosopher would have no way of understanding how heat could be a kind of motion or how light could be a kind of vibration. Why? Because the pre-Socratic philosopher did not have the appropriate concept of motion—namely, the concept of kinetic energy and its role—or of vibration—namely, the concepts involved in the wave theory of light—that would allow an understanding of how such different concepts could pick out the same phenomenon. What is a concept? A concept is a mental representation usable in thought. We often have more than one concept of the same thing. The concept “light” and the concept “electromagnetic radiation of 400–700 nm” pick out the same phenomenon. What the pre-Socratic philosopher lacks is an appropriate concept of light and an appropriate concept of vibration (concepts that require a whole theory). What is missing for the pre-Socratic is not just a theoretical definition but an understanding of what things are grouped together from a scientific point of view. We now realize that ripples in a pond, sound, and light are all phenomena of the same kind: waves. And we now realize that burning, rusting, and metabolizing are all cases of oxidation (Churchland, 2002), but the pre-Socratics, given their framework in which the basic categories were fire, earth, air, and water, would have had no way to grasp these facts. One upshot is that if superscientists of the future were to tell us what consciousness is, we probably would not have the conceptual machinery to understand them, just as the pre-Socratic would not have the conceptual machinery to understand that heat is a kind of motion or that light is a kind of vibration.

Armed with this idea, we can see how to steer between the explanatory gap and dualism. What we lack is an objective neuroscientific concept that would allow us to see how it could pick out the same phenomenon as our subjective concept of the experience of green. And we can expect that we do not even have the right subjective concept of the experience of green, since we are not sure what subjective phenomena truly should be grouped together. The resolution of the apparent conflict between the explanatory gap and physicalism is that subjectivity and objectivity can be seen as properties of concepts rather than properties of the states that the concepts are concepts of.
This idea, that we can see arguments that apparently indicate ontological dualism—that is, a dualism of objects or substances or properties—as really arguments for conceptual dualism, stems from Nagel (1974) and Loar (1990/1997) and is sometimes called New Wave physicalism (see Horgan & Tienson, 2001).

Another way of seeing the point is to consider Jackson’s (1982) famous thought experiment concerning Mary, a neuroscientist of the distant future who knows everything there is to know about the scientific basis of color experience but has grown up in a black-and-white environment. When she sees red for the first time, she learns what it is like to see red, despite already knowing all the scientific facts about seeing red. Does this show that the fact of what it is like to see red is not a scientific fact? No, because we can think of what Mary learns in terms of her acquiring a subjective concept of a state that she already had an objective concept of. Imagine someone who already knows that Lake Michigan is filled with H2O but learns something new: that Lake Michigan is filled with water. What this person learns is not a new fact but a new piece of knowledge, involving a new concept, of a fact the person already knew. Similarly, Mary acquires new knowledge, but that new knowledge does not go beyond the scientific facts that she already knew about, and so it does not support any kind of dualism. (This line of thought is debated in Block, 2006; White, 2006.) Importantly, this line of reasoning does not do away with the explanatory gap but rather reconceives it as a failure to understand how a subjective and an objective concept can pick out the same thing.

These points about different concepts of the same thing have sometimes been used to try to dissolve the explanatory gap (Papineau, 2002). The idea is that the false appearance of an explanatory gap arises from the gap between a subjective concept of a phenomenally conscious state and an objective concept of the same state. But note: I can think the thought that the color I am now experiencing as I look at an orange (invoking a subjective concept of orange) is identical to the color between red and yellow (invoking an objective concept of orange). But this use of the two kinds of concepts engenders no explanatory gap. Thus far, the score is biological theory 1, HOT and global workspace 0. But the competition has not yet encountered the heartland of the HOT theory.