Consciousness-of

It is very often (but not always—Dretske, 1993) assumed that a conscious state is a state that one is conscious of being in (Lycan, 1996a). I am willing to agree in order to focus on other matters. The HOT theory has an attractive explanation of this claim, because consciousness-of can be cashed out as being the object of a HOT. However, there are two other accounts of why a conscious state is one that one is conscious of being in, and from the viewpoint of the biological theory and the global workspace theory these accounts are preferable to the HOT account.

The deflationary account (Sosa, 2003) says that all there is to being conscious of one's experience is the triviality that in having an experience, one experiences it, just as one smiles one's smile and dances one's dance. Consciousness-of in this sense is to be firmly distinguished from attending to one's experience (Burge, 2006). One can have a conscious experience of red, and that experience can have whatever awareness comes with conscious experience, even in the absence of top-down attention to it (Koch & Tsuchiya, 2007).

Another rival to the higher order account of why a conscious state is one that one is conscious of is the same order account, in which a conscious pain is reflexive in that it is about itself. That is, it has a content that turns back on itself, and that is what makes a pain a state one is conscious of. This view had its beginnings in Aristotle (Caston, 2002) and was later pursued by Brentano (1874/1973). (See Burge, 2006; Kriegel & Williford, 2006.) Either the deflationary account or the same order account can be adopted by advocates of the biological view and the global workspace view, so I see no real advantage for the HOT view here.

Further problems for the HOT theory

I argued that the HOT theory cannot recognize an explanatory gap, but my argument was oversimple because it neglected a crucial distinction between two types of HOT theories. The kind of HOT theory that cannot recognize an explanatory gap is the ambitious HOT theory of phenomenal consciousness, which analyzes phenomenal consciousness in terms of higher order thought. But there is also a modest and therefore innocuous form of the HOT theory that says only that, in addition to phenomenal consciousness, there is another kind of consciousness, higher order consciousness. Phenomenal consciousness is one thing, and higher order consciousness is another. The modest form can recognize an explanatory gap for phenomenal consciousness. The modest account is suggested by Lycan's remark, "I cannot myself hear a natural sense of the phrase 'conscious state' other than as meaning 'state one is conscious of being in'" (Lycan, 1996b). As Lycan recognizes, what one can and cannot "hear" leaves the theoretical options open. The modest account is tantamount to a verbal claim—that there is a sense of the term "conscious" (distinct from "phenomenal consciousness") that has a higher order meaning—and does not dictate that there is no explanatory gap. The very short argument against the HOT theory (that it does not recognize an explanatory gap and so is false) is an argument only against the ambitious form of the HOT theory. In the rest of this section, I will explain some other problems with the ambitious HOT theory that also do not apply to the modest version.
The first thing to realize about the HOT theory, in both its ambitious and modest forms, is that it needs considerable qualification. Suppose I consciously infer that I am angry from my angry behavior, or—in a slightly different kind of case that need not involve conscious inference—I am aware of my anger in noticing my angry fantasies. In these cases we would not say that the anger is thereby conscious. Further, Freudians sometimes suppose that a subject can unconsciously recognize his own desire to, for example, kill his father and marry his mother, along with the need to cloak that desire in a form that will not cause damage to the self. But we would not say that in virtue of such an unconscious HOT (one that cannot readily become conscious) about it, the desire is therefore conscious! These examples concerning what we would say suggest that a HOT about a state is not something we regard as sufficient for the state to be conscious.

Defenders of the HOT theory introduce complications in the HOT theory to try to avoid these counterexamples. Rosenthal (2005a) says that S is a conscious state if and only if S is accompanied by a thought to the effect that the subject is in S that is arrived at without inference or observation of which the subject is conscious. The qualifying phrase ("without inference or observation of which the subject is conscious") avoids the problems posed by conscious observation of angry fantasies and conscious inference by stipulating that HOTs arrived at by conscious observation and inference are not sufficient for consciousness. (Another stipulation that I will not describe is supposed to handle the Freudian issue.) Suppose that as a result of biofeedback training I come to have noninferential, nonobservational knowledge of states of my liver (Block, 1995). Since we would not count the state of the liver as conscious in virtue of the HOT about it, Rosenthal (2000b, p. 240) further stipulates that only mental states can be conscious. What if I have a HOT about my future or past mental state? Rosenthal (2000b, p. 241) further stipulates that a thought about a state makes it conscious only when one thinks of the state as present to oneself. As Bertrand Russell noted in an often-quoted passage (1919, p. 71), "The method of 'postulating' what we want has many advantages; they are the same as the advantages of theft over honest toil." Honest toil is not required if the HOT view is understood as a modest account, since stipulation is not a problem in a stipulated sense of a term, but ad hoc stipulation is a problem if we take the HOT view as an ambitious account, especially as an empirical theory of consciousness.

A second class of issues concerns the "mismatch problem": the possibility of a mismatch in content between a sensory representation and the accompanying HOT. What phenomenally conscious quality does an experience have if a HOT to the effect that one has a dull throbbing pain in the toe is accompanied not by any representation of toe damage but instead by a visual representation of red—or by no sensory representation at all? If the sensory representation determines the conscious quality all by itself, the contents of HOTs are irrelevant here, and if here, why not elsewhere? And if the HOT determines the conscious quality without the sensory representation, then the contents of sensory representations are irrelevant—so what is the difference between thinking you have a delightful experience and actually having one (Byrne, 1997; Neander, 1998; Balog, 2000; Rey, 2000; Levine, 2001)?
Of course, new sophistication in one's HOTs, as when one learns to recognize different wines, can cause a corresponding differentiation in the sensory states that the HOTs are about, but HOTs are not always causally self-fulfilling (if only!), and in any case, causal self-fulfillment does not answer the constitutive question of what the difference is between thinking you have an experience of a certain sort and actually having one. Rosenthal (2000b; 2000a; 2005b, pp. 217–219) claims that a HOT is sufficient for a conscious state even without any sensory representation that the HOT is about. But suppose I have a sharp pain that causes a HOT to the effect that I have a sharp pain, through the normal processes by which pains often cause metacognitions about them. And suppose that by chance I also have a qualitatively different sharp pain (one pain is a bit sharper than the other) that produces no HOT at all. The content of the HOT—that I have a sharp pain—does not distinguish between the two pains, even though by any ordinary standard it is about one of them but not the other. If the HOT theory follows common sense, saying that one pain is conscious but the other is not, it is hard to see how that (partly causal) way of cashing out aboutness could be compatible with the claim that a HOT to the effect that I am in pain could be a conscious pain on its own, without any sensory representation.

A third class of issues concerns children. If you have seen and heard a circumcision, you may find it difficult to doubt that it hurts. Relevant evidence: newborns who are circumcised without anesthesia or analgesia are more stressed by later vaccination even 6 months later (Taddio, Goldbach, Ipp, Stevens, & Koren, 1995). My point is not that you should be totally convinced of phenomenal consciousness in early infancy, but rather that you should be convinced that there is a better case for phenomenal consciousness in infancy than there is for those instances of phenomenal consciousness being accompanied by higher order thought. One point against higher order thought in infancy is that frontal cortex, the likely neural home of thought about thought (Stone, Baron-Cohen, & Knight, 1998), is immature in infancy. Gazzaniga, Ivry, and Mangun (2002, pp. 642–643) discuss two sources of evidence that areas of the brain that specialize in sensory and motor function develop significantly earlier than areas responsible for thinking. One source of evidence derives from autopsy results on human brains from age 28 weeks after conception to 59 years of age. The result, diagrammed in figure 77.2, is that auditory synaptic density peaks at about 3 months (and probably likewise for synaptic density in other sensory areas), whereas the association areas of the frontal cortex peak at about 15 months. Similar results derive from PET imaging, which measures glucose metabolism in different parts of the brain.

[Figure 77.2: Relative synaptic density of auditory and frontal cortex. Conceptual age is age from conception. The peak at the left of roughly 3 months (postnatal) reflects a high number of auditory synapses relative to frontal synapses. (From Gazzaniga, Ivry, & Mangun, 2002.)]

As infants become more mature, our confidence in their phenomenal consciousness increases, as does our confidence in their capacity for higher order thought. However, it continues to be doubtful that phenomenally conscious states are always accompanied by higher order thoughts. Children even up to age 3–4 have difficulty thinking about their own states of mind. For example, Alison Gopnik and her colleagues (Gopnik & Graf, 1988) used a tube that was open at both ends and contained a window that could be open or closed.
The child would be asked either to look in the window or to reach into the side and identify a common object, for example, a spoon. Then, with the apparatus taken away, the child was asked how he or she knew the spoon was in the tube. The children were nearly random in their answers, probably because, as Gopnik has pointed out in a series of papers (see Gopnik, 2007), they have difficulty attending to and thinking about their own representational states. Marjorie Taylor and her colleagues have compared "source amnesia" for representational states of mind with that for skills (Esbensen, Taylor, & Stoess, 1997). For example, some children were taught to count in Japanese, whereas other children were taught the Japanese word for "three." Children were much less likely to be able to name the source of their representational state than the source of their counting skill. (For example, "You just taught me" in answer to the skill question versus "I've always known" in answer to the representational state question.) The source amnesia results apply most directly to conscious intentional states rather than conscious perceptual states, but to the extent that perceptual states are representational, they may apply to them as well.

Older autistic children who clearly have phenomenally conscious states also have problems attending to and thinking about representational states of mind (Baron-Cohen, 1995; Charman & Baron-Cohen, 1995). Will a defender of the ambitious HOT theory tell us that these autistic children lack phenomenal states? Or that, contrary to the evidence, they do have HOT states? I emphasize that it is difficult for young children and autists to think about representational states of mind—but not impossible. Indeed, children as young as 13 months can exhibit some ability to track others' beliefs (Onishi & Baillargeon, 2005; Surian, Caldi, & Sperber, 2007). In the case of false belief, as in many other examples of cognition, a cognitive achievement is preceded by a highly modular and contextualized analog of it, one that partly explains the development of the cognitive achievement. My point is not that metacognition in all its forms is impossible in young children and autists but that, at all ages, our justification for attributing conscious states exceeds our justification for attributing metacognitive states.

Although the empirical case against the higher-order-thought point of view is far from overwhelming, it is strong enough to make salient the question of what the advantages of the ambitious higher-order-thought theory of consciousness actually are (as contrasted with the advantages of the modest version, to which none of these points apply). But how do we know whether a version of the HOT theory is ambitious or modest? One way to tell is to ask whether, on that theory, a phenomenally conscious state—considered independently of any HOT about it—is something that is bad or good in itself.
For example, Carruthers (1989, 1992) famously claimed that because pains in dogs, cats, sheep, cattle, pigs, and chickens are not available to be thought about, they are not felt and hence not anything to be concerned about; that is, they are states with no moral significance. (Carruthers later, in 1999, took a different view, on the grounds that frustration of animal desires is of moral significance even though the pains themselves are not.) I turn now to related issues about the self that may seem to go against the biological view.

The self

The biological view may seem at a disadvantage with respect to the self. Since Hume (1740/2003) described the self as "that connected succession of perceptions," many (Dennett, 1984; Parfit, 1984) have thought about persons in terms of integrated relations among mental states. The global workspace view seems well equipped to locate consciousness as self-related, given that broadcasting in the global workspace is itself a kind of integration. And the HOT view at least requires the integration of one state being about another. By contrast, it looks as if, on many views of the biological basis of a conscious state (Block, 1995), it could exist without integration, and this point has resulted in accusations of scanting the self (Church, 1995; Harman, 1995; Kitcher, 1995). One response would be to favor a biological neural basis of consciousness that itself involves integration (Tononi & Edelman, 1998; Tononi & Koch, 2008). But it is worth pointing out that phenomenal consciousness has less to do with the self than critics often suppose.

What is the relation between phenomenal consciousness and the self? We could raise the issue by thinking about pain asymbolia, a syndrome in which patients have pain experiences without the usual negative affect (Aydede, 2005): they do not seem to mind the pain. In this syndrome, patients sometimes describe the pains as painful for someone else, and perhaps they are right, given the pain's unusual lack of connection to the subject's emotions, planning, and valuation. Here is a question about such a dissociation syndrome: If such a subject thinks about the painfulness of such a pain (as opposed to its merely sensory aspect), is the painfulness thereby phenomenally conscious? It would seem not, suggesting that the kind of integration supplied by HOTs is not actually sufficient for consciousness.

Here is another conundrum involving the relation between phenomenal consciousness and the self. In many experiments, activation in the fusiform face area at the bottom of the temporal lobe has been shown to correlate with the experience of a face. Now, injury to the parietal lobe often causes a syndrome called visuospatial extinction. If the patient sees a single object, the patient can identify it, but if there are objects on both the right and the left, the patient claims not to see one—most commonly the one on the left. However, two fMRI studies (Rees et al., 2000; Rees, Wojciulik, et al., 2002) have shown that in patient GK, when GK claims not to see a face on the left, his fusiform face area lights up almost as much as when he reports seeing the face. One possibility is that the original identification of the fusiform face area as the neural basis of face experience was mistaken. But another possibility is that the subject genuinely has a face experience that he does not know about and cannot know about. Wait—is that really a possibility?
Does it even make sense to suppose that a subject could have an experience that he does not and cannot know about? What would make it his experience? The question about GK can be answered by thinking about the subject's visual field. We can answer the question of what the visual field is by thinking about how it is measured. If you look straight ahead, hold a rod out to the side, and slowly move it forward, you will be able to see it at roughly 100° from the forward angle. If you do the same coming down from the top, you will see it at roughly 60°, and if you do it coming up from the bottom, you will see it at roughly 75°. More accurately, the visual field is measured with points of light or gratings. Thus the visual field is an oval, elongated to the right and left, and slightly larger on the bottom. The Humphrey Field Analyzer HFA-II-i can measure your visual field in as little as 2 minutes. The United Kingdom has a minimum visual field requirement for driving (60° to the side, 20° above and below); U.S. states vary widely in their requirements (Peli & Peli, 2002). I mention these details to avoid skepticism about whether the visual field is real.

The visual field can help us think about GK. If GK does genuinely experience the face on the left that he cannot report, then it is in his visual field on the left side and as such has relations to other items in his visual field, some of which he will be able to report. The fact that it is his visual field shows that it is his experience. I caution the reader that this discussion concerns the issue of whether it makes sense to describe GK as having an experience that he cannot know about; it does not constitute any evidence for his actually having an experience that he cannot know about (but see Block, 2007a).

A second point about the relation between phenomenal consciousness and the self is that self-related mental activities seem inhibited during intense conscious perception. Malach and his colleagues (Goldberg, Harel, & Malach, 2006) showed subjects pictures and audio clips with two different types of instructions. In one version, subjects were asked to indicate their emotional reactions as positive, negative, or neutral. In another version (in which the stimuli were presented much faster), subjects were asked to categorize the stimuli, for example, as animals or not. Not surprisingly, subjects rated their self-awareness as high in the introspective task and low in the categorization task. And this testimony was supported by fMRI results showing that the introspective task activated an "intrinsic system" that is linked to judgments about oneself, whereas the categorization task inhibited the intrinsic system, activating instead an extrinsic system that is also activated when subjects viewed clips from Clint Eastwood's The Good, the Bad, and the Ugly. Of course, this result does not show that intense perceptual experiences are not part of a connected series of mental states constituting a self, but it does suggest that theories that bring any sense of self into phenomenal experience are wrongheaded. Malach's result disconfirms the claim that a conscious visual experience consists in a perceptual state causing a thought to the effect that I myself have a visual experience (Rosenthal, 2005a).

Machine consciousness

The global workspace account lends itself particularly well to the idea of machine consciousness. There is nothing intrinsically biological about a global workspace. And the HOT view also is friendly to machine consciousness.
If a machine can think, and if it can have representational contents, and if it can think about those contents, then it can have conscious states, according to the HOT view. Of course, we do not know how to make a machine that can think, but whatever difficulties are involved in making a machine think, they are not difficulties about consciousness per se. (However, see Searle, 1992, for a contrary view.) By comparison, the biological theory says that only machines that have the right biology can have consciousness, and in that sense the biological account is less friendly to machine consciousness. Information is coded in neurons by electrical activations that travel from one part of a neuron to another, but in the most common type of transfer of information between neurons, that electrical coding is transformed into a chemical coding (by means of neurotransmitters), which transfers the information to another neuron, where the coding of information is again electrical. On the biological view, it may well be that this transformation of the coding of information from electrical to chemical and back to electrical is necessary to consciousness. Certainly it would be foolish to discount this possibility without evidence.

As should be apparent, the competitors to the biological account are profoundly nonbiological, having more of their inspiration in the computer model of the mind of the 1960s and 1970s than in the age of the neuroscience of consciousness of the 21st century. (For an example, see McDermott, 2001.) As Dennett (2001, p. 234) confesses, "The recent history of neuroscience can be seen as a series of triumphs for the lovers of detail. Yes, the specific geometry of the connectivity matters; yes, the location of specific neuromodulators and their effects matter; yes, the architecture matters; yes, the fine temporal rhythms of the spiking patterns matter, and so on. Many of the fond hopes of opportunistic minimalists [a version of computationalism: NB] have been dashed: they had hoped they could leave out various things, and they have learned that no, if you leave out x, or y, or z, you can't explain how the mind works." Although Dennett resists the obvious conclusion, it is hard to avoid the impression that the biology of the brain is what matters to consciousness—at least the kind we have—and that observation favors the biological account.

Acknowledgments

I am grateful to Susan Carey, Peter Carruthers, Christof Koch, David Rosenthal, and Stephen White for comments on an earlier version.

