Consciousness, accessibility, and the mesh between psychology and neuroscience

Ned Block
Department of Philosophy, New York University, New York, NY 10003
ned.block@nyu.edu

Abstract: How can we disentangle the neural basis of phenomenal consciousness from the neural machinery of the cognitive access that underlies reports of phenomenal consciousness? We see the problem in stark form if we ask how we can tell whether representations inside a Fodorian module are phenomenally conscious. The methodology would seem straightforward: Find the neural natural kinds that are the basis of phenomenal consciousness in clear cases– when subjects are completely confident and we have no reason to doubt their authority– and look to see whether those neural natural kinds exist within Fodorian modules. But a puzzle arises: Do we include the machinery underlying reportability within the neural natural kinds of the clear cases? If the answer is “Yes,” then there can be no phenomenally conscious representations in Fodorian modules. But how can we know if the answer is “Yes”? The suggested methodology requires an answer to the question it was supposed to answer! This target article argues for an abstract solution to the problem and exhibits a source of empirical data that is relevant, data that show that in a certain sense phenomenal consciousness overflows cognitive accessibility. I argue that we can find a neural realizer of this overflow if we assume that the neural basis of phenomenal consciousness does not include the neural basis of cognitive accessibility, and that this assumption is justified (other things being equal) by the explanations it allows.

Keywords: access consciousness; accessibility; change blindness; consciousness; mind/body problem; NCC; phenomenal consciousness; refrigerator light illusion; reportability; unconscious; vegetative state; working memory

© 2008 Cambridge University Press

1. Introduction

In The Modularity of Mind, Jerry Fodor argued that significant early portions of our perceptual systems are modular in a number of respects, including that we do not have cognitive access to their internal states and representations of a sort that would allow reportability (Fodor 1983; see also Pylyshyn 2003; Sperber 2001). For example, one representation that vision scientists tend to agree is computed by our visual systems is one which reflects sharp changes in luminosity; another is a representation of surfaces (Nakayama et al. 1995). Are the unreportable representations inside these modules phenomenally conscious? Presumably there is a fact of the matter. But since these representations are cognitively inaccessible and therefore utterly unreportable, how could we know whether they are conscious or not?

It may seem that the appropriate methodology is clear in principle even if very difficult in practice: Determine the natural kind (Putnam 1975; Quine 1969) that constitutes the neural basis of phenomenal consciousness in completely clear cases– cases in which subjects are completely confident about their phenomenally conscious states and there is no reason to doubt their authority– and then determine whether those neural natural kinds exist inside Fodorian modules. If they do, there are conscious within-module representations; if they don’t, there are not. But should we include the machinery underlying reportability within the natural kinds in the clear cases? Apparently, in order to decide whether cognitively inaccessible and therefore unreportable representations inside modules are phenomenally conscious, we have to have decided already whether phenomenal consciousness includes the cognitive accessibility underlying reportability. So it looks like the inquiry leads in a circle.
I will be calling this problem “the methodological puzzle of consciousness research.” The first half of this article is about the methodology of breaking out of this circle. The second half brings empirical evidence to bear on actually breaking out of it, using the principle that, other things being equal, a mesh between psychology and neuroscience is a reason to believe the theory that leads to the mesh.

NED BLOCK is Silver Professor in the Departments of Philosophy and Psychology, and in the Center for Neural Science, at New York University. He was formerly Chair of the MIT Philosophy Program. He is a Fellow of the American Academy of Arts and Sciences and a recipient of fellowships from the Guggenheim Foundation, the National Endowment for the Humanities, the American Council of Learned Societies, and the National Science Foundation. He is a past President of the Society for Philosophy and Psychology, a past Chair of the MIT Press Cognitive Science Board, and past President of the Association for the Scientific Study of Consciousness. The Philosophers’ Annual selected his papers as one of the “ten best” in 1983, 1990, 1995, and 2002. The first of two volumes of his collected papers came out in 2007.

2. Two illustrations

Before giving a more precise statement of the methodological puzzle, I’ll give two illustrations that are intended to give the reader a feel for it. Nancy Kanwisher and her colleagues (Kanwisher 2001; Tong et al. 1998) have found impressively robust correlations between the experience of faces and activation at the bottom of the temporal lobe, usually in the subject’s right hemisphere, in what they call the “fusiform face area.” One method that has been used to investigate the neural basis of face perception exploits a phenomenon known as “binocular rivalry” (see Koch 2004, Ch. 16).
Presented with a face stimulus to one eye and a house stimulus to the other, the subject experiences a face for a few seconds, then a house, then a face, and so on. Examination of the visual processing areas of the brain while the face/house perceptual alternation is ongoing has found stronger shifts with the percept in the fusiform face area than in other areas. The fusiform face area lights up when subjects are experiencing seeing a face and not when subjects are experiencing seeing a house, despite the fact that the stimuli are unchanging. The fusiform face area also lights up when subjects imagine faces (O’Craven & Kanwisher 2000). In highly constrained experimental situations, observers viewing functional magnetic resonance imaging (fMRI) recordings are 85% accurate in telling whether subjects in a scanner are seeing faces or houses (Haynes & Rees 2006). However, Rafi Malach and his colleagues (Hasson et al. 2004) have been able to get similar results from free viewing of movies by correlating activations in a number of subjects (see also Bartels & Zeki 2004). There has been some dispute as to what exactly the fusiform face area is specialized for, but these issues can be put aside here. (See Grill-Spector et al. 2006, 2007; Kanwisher 2006, 2007; Tsao et al. 2006.)

No one would suppose that activation of the fusiform face area all by itself is sufficient for face-experience. I have never heard anyone advocate the view that if a fusiform face area were kept alive in a bottle, activation of it would determine face-experience– or any experience at all (Kanwisher 2001). The total neural basis of a state with phenomenal character C is itself sufficient for the instantiation of C. The core neural basis of a state with phenomenal character C is the part of the total neural basis that distinguishes states with C from states with other phenomenal characters or phenomenal contents,¹ for example the experience as of a face from the experience as of a house.
(The core neural basis is similar to what Semir Zeki [Zeki 2001; Zeki & Bartels 1999] has called an essential node.) So activation of the fusiform face area is a candidate for the core neural basis– not the total neural basis– for experience as of a face (see Block 2005; Chalmers 2000; Shoemaker 1981). For purposes of this target article, I adopt the physicalistic view (Edelman 2004) that consciousness is identical to its total neural basis, rather than John Searle’s view that consciousness is determined by but not identical to its neural basis (McLaughlin 1992; Searle 1992). The issue of this article is not physicalism versus dualism, but rather, whether consciousness includes the physical functions involved in the cognitive accessibility that underlies reportability.

What is the total minus core neural basis? That is, what is the neural background required to make a core neural basis sufficient for a phenomenally conscious experience? There is some evidence that there is a single neural background of all experience involving connections between the cortex and the upper brain stem, including the thalamus (Churchland 2005; Laureys 2005; Llinás 2001; Llinás et al. 1998; Merker 2007; Tononi & Edelman 1998). This background can perhaps be identified with what Searle (2005) calls the “unified conscious field.” Perhaps the most convincing evidence is that disabling connections to the thalamus seems to be the common core of what different general anesthetics do (Alkire & Miller 2005). Although Merker (2007) does not make the distinction between core and total, he presents evidence that children born pretty much without a cortex can have the conscious field with little or nothing in the way of any conscious contents: that is, they have the total without much in the way of core neural bases. Nancy Kanwisher (2001) and Dan Pollen (2003; in press) argue that activation of areas of the brain involved in spatio-temporal binding is required for perceptual phenomenology.
Of course some states that have phenomenology, for example, emotions and thoughts, are not experienced as spatially located. But Kanwisher and Pollen may be right about temporal aspects of visual experience. Further, Antonio Damasio (1999) and Pollen argue that all experience requires a sense of self, partly based in the posterior parietal lobe. If true, this would be part of the background.

At the risk of confusing the reader with yet another distinction, it is important to keep in mind the difference between a causal condition and a constitutive condition. For example, cerebral blood flow is causally necessary for consciousness, but activation of the upper brainstem is much more plausibly a constitutive condition, part of what it is to be conscious. (What does “constitutive” mean? Among other things, constituent: Hydrogen is partially constitutive of water, since water is composed of hydrogen and oxygen.) The main issue of this article is whether the cognitive access underlying reportability is a constitutive condition of phenomenal consciousness.

Here is the illustration I have been leading up to. There is a type of brain injury which causes a syndrome known as visuo-spatial extinction. If the patient sees a single object on either side, the patient can identify it, but if there are objects on both sides, the patient can identify only the one on the right and claims not to see the one on the left (Aimola Davies 2004). With competition from the right, the subject cannot attend to the left. However, as Geraint Rees has shown in two fMRI studies of a patient identified as “G.K.,” when G.K. claims not to see a face on the left, his fusiform face area (on the right, fed strongly by the left side of space) lights up almost as much as when he reports seeing the face (Driver & Vuilleumier 2001; Rees et al. 2000, 2002b). Should we conclude that G.K. has face experience that– because of lack of attention– he does not know about?
Or that the fusiform face area is not the whole of the core neural basis for the experience as of a face? Or that activation of the fusiform face area is the core neural basis for the experience as of a face, but that some other aspect of the total neural basis is missing? How are we to answer these questions, given that all these possibilities predict the same thing: no face report? I will use the phrase “core neural basis of the experience” instead of Francis Crick’s and Christof Koch’s “NCC,” for neural correlate of consciousness. Mere correlation is too weak. At a minimum, one wants the neural underpinnings of a match of content between the mental and neural state (Chalmers 1998; Noë & Thompson 2004).

3. The puzzle

The following is a principle that will be appealing to many (though it is not to me): Whatever it is about a state that makes it unreportable would also preclude its being phenomenally conscious. We can call this the Phenomenally Conscious → Reportable Principle, or for short, the Phenomenal → Reportable Principle. But how could we test the Phenomenal → Reportable Principle? If what we mean by a “direct” test is that we elicit reports from subjects about unreportable states, then a direct test will always be negative. And it might seem that there could not be an indirect test either, for an indirect test would have to be based on some direct method, that is, a method of investigating whether a state is phenomenally conscious independently of whether it is reportable– a method that apparently does not exist. Here is a brain-oriented version of the point: Suppose empirical investigation finds a neural state that obtains in all cases in which a phenomenally conscious state is reportable. Such a neural state would be a candidate for a core neural basis.
Suppose, in addition, that we find that the putative core neural basis is present sometimes when the state is unreportable because mechanisms of cognitive access are damaged or blocked. Would that show the existence of unreportable phenomenal consciousness? No, because there is an alternative possibility: that we were too quick to identify the core neural basis. Perhaps the supposed core neural basis that we identified is necessary for phenomenal consciousness but not quite sufficient. It may be that whatever it is that makes the state unreportable also makes it unconscious. Perhaps the cognitive accessibility mechanisms underlying reportability are a constitutive part of the core neural basis, so that without them, there cannot be a phenomenally conscious state. It does not seem that we could find any evidence that would decide one way or the other, because any evidence would inevitably derive from the reportability of a phenomenally conscious state, and so it could not tell us about the phenomenal consciousness of a state which cannot be reported. So there seems to be a fundamental epistemic (i.e., having to do with our knowledge of the facts rather than the facts themselves) limitation on our ability to get a complete empirical theory of phenomenal consciousness. This is the methodological puzzle that is the topic of this article.

Note that the problem cannot be solved by giving a definition of “conscious.” Whatever definition one offers of this and other terms, the puzzle can be put in still other terms– there would still be the question: does what it is like to have an experience include whatever cognitive processes underlie our ability to report the experience? The problem does not arise in the study of, for example, water.
On the basis of the study of the nature of accessible water, we can know the properties of water in environments outside our light cone– that is, in environments that are too far away in space and time for signals traveling at the speed of light to reach us. We have no problem in extrapolating from the observed to the unobserved, and even the unobservable, in the case of water, because we are antecedently certain that our cognitive access to water molecules is not part of the constitutive scientific nature of water itself. In homing in on a core neural basis of reportable episodes of phenomenal consciousness, we have a choice about whether or not to include the aspects of those neurological states that underlie reportability within the core neural basis. If we do, then unreportable phenomenally conscious states are ruled out; if we do not, unreportable phenomenally conscious states are allowed. Few scientifically minded people in the twenty-first century would suppose that water molecules are partly constituted by our cognitive access to them (Boghossian 2006), but few would be sure whether phenomenal consciousness is or is not partly constituted by cognitive access to it. It is this asymmetry that is at the root of the methodological puzzle of phenomenal consciousness.

This issue– whether the machinery of cognitive accessibility is a constitutive part of the nature of phenomenal consciousness– is the focus of this target article. I will not mention evidence concerning inaccessible states within Fodorian modules, or whether G.K. has face experience, but I do claim to show that the issue of whether the cognitive accessibility underlying reportability is part of the constitutive nature of phenomenal consciousness can be resolved empirically, and that we already have evidence for a negative answer. I now turn to a consideration of reportability, but first I want to mention one issue that will not be part of my discussion.
Readers are no doubt familiar with the “explanatory gap” (Levine 1983; Nagel 1974) and the corresponding “hard problem” of phenomenal consciousness (Chalmers 1996): the problem of explaining why the neural basis of a given phenomenal quality is the neural basis of that phenomenal quality rather than some other phenomenal quality or none at all. No one has any idea what an answer would be, even a highly speculative answer. Is the explanatory gap an inevitable feature of our relation to our own phenomenology? Opinions differ (Churchland 1994; McGinn 1991). I argue that we can make at least some progress on solving the methodological puzzle even without progress in closing the explanatory gap. I have been talking about consciousness versus reportability, but reportability is not the best concept to use in thinking about the puzzle.

4. Cognitive accessibility versus reportability

Empirical evidence about the Phenomenal → Reportable Principle seems unobtainable, but that is an illusion: that principle is clearly false, even though another closely related principle is problematic. If a locked-in subject loses control of the last twitch, all mental states can become unreportable. There has been progress in using electrodes implanted in the brain and, less intrusively, electroencephalographic (EEG) technology to enable patients to communicate with the outside world. But if the patient is not trained with these technologies before the total loss of control of the body, these technologies may not work. (See the articles on this topic in the July 2006 issue of Nature.) There is a distinct problem with the Phenomenal → Reportable Principle, namely that a person who is not paralyzed may lose all ability to produce or understand language, and so not have the language capacity required for reporting.
In some forms of this syndrome (profound global aphasia), subjects clearly have phenomenal states– they can see, they have pain, and they can make clear what they want and don’t want in the manner of a pre-linguistic child– but they are totally without the ability to report in any non-extended sense of the term. (Come to think of it, the same point applies to pre-linguistic children and animals.) And if an aphasic also had locked-in syndrome, the unfortunate conjunctively disabled person would be doubly unable to report conscious states. But there is no reason to think that conscious states would magically disappear. Indeed, given that aphasia is fairly common and locked-in syndrome, though infrequent, is not rare, no doubt there have been such conjunctive cases. Of course there can be nonverbal reports: giving a thumbs-up and shaking one’s head come to mind. But not every behavioral manifestation of cognitive access to a phenomenal state is a report, except in an uninterestingly stretched version of the term. Reportability is a legacy of behaviorism that is less interesting than it has seemed. The more interesting issue in the vicinity is not the relation between the phenomenal and the reportable, but rather the relation between the phenomenal and the cognitively accessible.

Adrian Owen and colleagues (Owen et al. 2006) report that a patient who, at the time of testing, satisfied the criteria for a vegetative state responded to requests to imagine a certain activity in a way indistinguishable from normal patients on an fMRI scan. Her premotor cortex was activated upon being asked to imagine playing tennis, and her parahippocampal place area was activated on being asked to imagine walking through rooms in her house. Paul Matthews objected that the brain activity could have been an associative response to the word “tennis,” but Owen counters that her response lasted 30 seconds– until he asked her to stop (Hopkin 2006).
In an accompanying article in Science, Lionel Naccache insists on behavioral criteria for consciousness. He says, “Consciousness is univocally probed in humans through the subject’s report of his or her own mental states,” and notes that Owen and colleagues “did not directly collect such a subjective report” (Naccache 2006b). But the evidence is that the patient is capable of an intentional act, namely, the act of imagining something described. That should be considered no less an indication– though of course a fallible indication– of consciousness than an external behavioral act. As an editorial in Nature suggests, instead of “vegetative state” we should say “outwardly unresponsive” (Nature, Editorial 2006).

In the rest of this article, I will be talking about cognitive accessibility instead of reportability. Reportability is a behavioristic ladder that we can throw away. In previous papers (Block 1995b; 2001; 2005), I have argued that there can be phenomenally conscious states that are not cognitively accessible. (I put it in terms of phenomenal consciousness without access consciousness.) But I am mainly arguing for something weaker here. Cognitive accessibility could be a causally necessary condition of phenomenal consciousness without being a constitutive part of it. Bananas constitutively include CH2O molecules but not air and light. Still, without air and light, there could be no bananas– they are causally necessary. The focus here is on whether accessibility is constitutively necessary to phenomenal consciousness, not whether it is causally necessary.

5. Why the methodological puzzle matters

I will mention two ways in which it matters whether we can find out whether phenomenal consciousness includes cognitive accessibility.
First, if we cannot get evidence about this, we face a fundamental limit in empirical investigation of the neural basis of phenomenal consciousness– we cannot tell whether the putative core neural basis we have found is the neural basis of phenomenal consciousness itself or the neural basis of phenomenal consciousness wrapped together with the cognitive machinery of access to phenomenal consciousness. Second, there is a practical and moral issue having to do with assessing the value of the lives of persons who are in persistent vegetative states. Many people feel that the lives of patients in the persistent vegetative state are not worth living. But do these patients have experiences that they do not have cognitive access to? It is not irrational to regard a rich experiential life– independently of cognitive access to it– as relevant to whether one would want the feeding tube of a loved one removed.

6. Phenomenal consciousness and Awareness

We may suppose that it is platitudinous that when one has a phenomenally conscious experience, one is in some way aware of having it. Let us call the fact stated by this claim– without committing ourselves on what exactly that fact is– the fact that phenomenal consciousness requires Awareness. (This is awareness in a special sense, so in this section I am capitalizing the term.) Sometimes people say Awareness is a matter of having a state whose content is in some sense “presented” to the self, or having a state that is “for me” or that comes with a sense of ownership or that has “me-ishness” (as I have called it; Block 1995a).

Very briefly, three classes of accounts of the relation between phenomenal consciousness and Awareness have been offered. Ernest Sosa (2002) argues that all there is to the idea that in having an experience one is necessarily aware of it is the triviality that in having an experience, one experiences one’s experience, just as one smiles one’s smile or dances one’s dance.
Sosa distinguishes this minimal sense in which one is automatically aware of one’s experiences from noticing one’s experiences, which is not required for phenomenally conscious experience. At the opposite extreme, David Rosenthal (2005) has pursued a cognitive account in which a phenomenally conscious state requires a higher-order thought to the effect that one is in the state. That is, a token experience (one that can be located in time) is a phenomenally conscious experience only in virtue of another token state that is about the first state. (See also Armstrong 1977; Carruthers 2000; and Lycan 1996 for other varieties of higher-order accounts.) A third view, the “Same Order” view, says that the consciousness-of relation can hold between a token experience and itself. A conscious experience is reflexive in that it consists in part in an awareness of itself. (This view is discussed in Brentano 1874/1924; Burge 2006; Byrne 2004; Caston 2002; Kriegel 2005; Kriegel & Williford 2006; Levine 2001, 2006; Metzinger 2003; Ross 1961; Smith 1986.)

The same order view fits both science and common sense better than the higher-order view. As Tyler Burge (2006) notes, to say that one is necessarily aware of one’s phenomenally conscious states should not be taken to imply that every phenomenally conscious state is one that the subject notices or attends to or perceives or thinks about. Noticing, attending, perceiving, and thinking about are all cognitive relations that need not be involved when a phenomenal character is present to a subject. The mouse may be conscious of the cheese that the mouse sees, but that is not to say that the mouse is conscious of the visual sensations in the visual field that represent the cheese, or that the mouse notices or attends to or thinks about any part of the visual field.
The ratio of synapses in sensory areas to synapses in frontal areas peaks in early infancy, and likewise for relative glucose metabolism (Gazzaniga et al. 2002, pp. 642–43). Since frontal areas are likely to govern higher-order thought, low frontal activity in newborns may well indicate a lack of higher-order thoughts about genuine sensory experiences. The relevance of these points to the project of the target article is this: the fact of Awareness can be accommodated by either the same order view or the view in which Awareness is automatic, or so I will assume. Hence, there is no need to postulate that phenomenal consciousness requires cognitive accessibility of the phenomenally conscious state. Something worth calling “accessibility” may be intrinsic to any phenomenally conscious state, but it is not the cognitive accessibility that underlies reporting.

The highly ambiguous term “conscious” causes more trouble than it is worth, in my view. Some use the term “conscious” so as to trivially include cognitive accessibility. To avoid any such suggestion, I am from here on abandoning the term “phenomenal consciousness” (which I think I introduced [Block 1990; 1992]) in favor of “phenomenology.” In the next section, I discuss the assumption underlying the methodological puzzle, and in the section after, how to proceed if we drop that assumption.

7. Correlationism

Correlationism says that the ultimate database for phenomenology research consists in reports which allow us to find correlations between phenomenal states and features, on the one hand, and scientifically specifiable states and features– namely, neural states and features– on the other. These reports can be mistaken, but they can be shown to be mistaken only on the basis of other reports with which they do not cohere. There is no going beyond reports.
One version of correlationism is stated in David Papineau’s (2002) Thinking about Consciousness, in which he says:

If the phenomenal property is to be identical with some material property, then this material property must be both necessary and sufficient for the phenomenal property. In order for this requirement to be satisfied, the material property needs to be present in all cases where the human subjects report the phenomenal property– otherwise it cannot be necessary. And it needs to be absent in all cases where the human subjects report the absence of the phenomenal property– otherwise it cannot be sufficient. The aim of standard consciousness research is to use these two constraints to pin down unique material referents for phenomenal concepts. (Papineau 2002, p. 187)

Consider, for example, what an adherent of this methodology would say about patient G.K., mentioned earlier. One kind of correlationist says we have misidentified the neural basis of face experience, and so some aspect of the neural basis of face experience is missing. That is, either the activation of the fusiform face area is not the core neural basis for face experience, or, if it is, then in extinction patients some aspect of the total neural basis outside the core is missing. Another kind of correlationist does not take a stand on whether G.K. is having face experience, saying that we cannot get scientific evidence about it. So there are two versions of correlationism. Metaphysical correlationism– the first version just mentioned– says that there is (or can be) an answer to the sort of question I have raised about G.K., and that answer is no. The metaphysical correlationist thinks that the cognitive access relations that underlie the subject’s ability to report are a part of what constitutes phenomenology, so there could not be phenomenology without cognitive accessibility (Papineau 1998). Epistemic correlationism says that G.K.
might be having face experience without cognitive accessibility, but that the issue is not scientifically tractable. According to epistemic correlationism, cognitive accessibility is intrinsic to our knowledge of phenomenology but not necessarily to the phenomenal facts themselves. Epistemic correlationism is more squarely the target of this article, but I will say a word about what is wrong with metaphysical correlationism. Why does the metaphysical correlationist think G.K. cannot be having face experience? Perhaps it is supposed to be a conceptual point: that the very concepts of phenomenology and cognitive accessibility make it incoherent to suppose that the first could occur without the second. Or it could be an empirical point: the evidence (allegedly) shows that the machinery of cognitive accessibility is part of the machinery of phenomenology. I have discussed the conceptual view elsewhere (Block 1978; 1980).

The neuroscientists Stanislas Dehaene and Jean-Pierre Changeux (2004) appear to advocate epistemic correlationism. (References in the passage quoted are theirs, but in this and other quotations to follow, citations are in the style of this journal.) Dehaene and Changeux write:

We shall deliberately limit ourselves, in this review, to only one aspect of consciousness, the notion of conscious access … Like others (Weiskrantz 1997), we emphasize reportability as a key property of conscious representations. This discussion will aim at characterizing the crucial differences between those aspects of neural activity that can be reported by a subject, and those that cannot. According to some philosophers, this constitutes an “easy problem” and is irrelevant to the more central issues of phenomenology and self-awareness (e.g., Block 1995b).
Our view, however, is that conscious access is one of the few empirically tractable problems presently accessible to an authentic scientific investigation. (Dehaene & Changeux 2004, pp. 1145–1146)

Kouider et al. (2007) say: "Given the lack of scientific criterion, at this stage at least, for defining conscious processing without reportability, the dissociation between access and phenomenal consciousness remains largely speculative and even possibly immune to scientific investigation" (p. 2028). (Access-consciousness was my term for approximately what I am calling "cognitive accessibility" here.) In a series of famous papers, Crick and Koch (1995) make use of what appears to be metaphysical correlationism. They argue that the first cortical area that processes visual information, V1, is not part of the neural correlate of phenomenology because V1 does not directly project to the frontal cortex. They argue that visual representations must be sent to the frontal cortex in order to be reported and in order for reasoning or decision-making to make use of those visual representations. Their argument in effect makes use of the hidden premise that part of the constitutive function of visual phenomenology is to harness visual information in the service of the direct control of reasoning and decision-making that controls behavior. Jesse Prinz (2000) argues for the "AIR" theory of attended intermediate representations. The idea is that "consciousness arises when intermediate-level perception representations are made available to working memory via attention." Because of the requirement of connection to working memory, this is a form of metaphysical correlationism. David Chalmers (1998) endorses epistemic correlationism. He says:

Given the very methodology that comes into play here, we have no way of definitely establishing a given NCC as an independent test for consciousness.
The primary criterion for consciousness will always remain the functional property we started with: global availability, or verbal report, or whatever. That's how we discovered the correlations in the first place. 40-hertz oscillations (or whatever) are relevant only because of the role they play in satisfying this criterion. True, in cases where we know that this association between the NCC and the functional property is present, the NCC might itself function as a sort of "signature" of consciousness; but once we dissociate the NCC from the functional property, all bets are off. (Chalmers 1998)

Victor Lamme (2006) gives the example of the split-brain patient who says he does not see something presented on the left, but nonetheless can draw it with his left hand. There is a conflict between normal criteria for conscious states. Lamme says that "preconceived notions about the role of language in consciousness" will determine our reaction, and that there is no objective truth about which view is right. He argues for "letting arguments from neuroscience override our intuitive and introspective notion of consciousness," using neuroscientific considerations to motivate us to define "consciousness" as recurrent processing, in which higher areas feed back to lower areas, which in turn feed forward to the higher areas again, thereby amplifying the signal. He doesn't claim the definition is correct, just that it is the only way to put the study of consciousness on a scientific footing. Although Lamme does not advocate correlationism in either its metaphysical or epistemic forms, his view depends on the idea that the only alternative to epistemic correlationism is neurally based postulation.
Often philosophers– Hilary Putnam (1981) and Dan Dennett (1988; 1991) come to mind– argue that two views of the facts about consciousness are "empirically indistinguishable"– and then they in effect conclude that it is better to say that there are no such facts than to adopt epistemic correlationism. One example is Putnam's thought experiment: We find a core neural basis for some visual experience, but then note that if it occurs in the right hemisphere of a split-brain patient, the patient will say he doesn't see anything. If we restore the corpus callosum, the patient may then say he remembers seeing something. But we are still left with two "empirically indistinguishable" hypotheses: that the hypothesis of the core neural basis is correct and so the memory is veridical, and, alternatively, that the memory is false. I will give an empirical argument that we can achieve a better fit between psychology and neuroscience if we assume that the perspectives just described are wrong.

8. An alternative to epistemic correlationism

The alternative I have in mind is just the familiar default "method" of inference to the best explanation, that is, the approach of looking for the framework that makes the most sense of all the data, not just reports (Harman 1965; Peirce 1903, Vol. V, p. 171). The reader may feel that I have already canvassed inference to the best explanation and that it did not help. Recall that I mentioned that the best explanation of all the data about observed water can give us knowledge of unobserved– even unobservable– water. I said that this approach does not apply straightforwardly to phenomenology. The reasoning that leads to the methodological puzzle says that inevitably there will be a choice about whether to include the neural basis of cognitive access within the neural basis of phenomenology. And that choice– according to this reasoning– cannot be made without some way of measuring or detecting phenomenology independently of cognitive access to it.
But we don't have any such independent measure. As I noted, there is a disanalogy with the case of water, since we are antecedently certain that our access to information about water molecules is not part of the natural kind that underlies water molecules themselves. But we are not certain (antecedently or otherwise) about whether our cognitive access to our own phenomenology is partly constitutive of the phenomenology. Without antecedent knowledge of this– according to the reasoning that leads to the methodological puzzle– we cannot know whether whatever makes a phenomenal state cognitively inaccessible also renders it non-phenomenal. Here is the fallacy in that argument: The best theory of all the data may be one that lumps phenomenology with water molecules as things whose constitutive nature does not include cognitive access to them. To hold otherwise is to suppose– mistakenly– that there are antecedent views– or uncertainties in this case– that are not up for grabs. Perhaps an analogy will help. It might seem, offhand, that it is impossible to know the extent of errors of measurement, for any measurement of errors of measurement would have to be derived from measurement itself. But we can build models of the sources of measurement error and test them, and if necessary we can build models of the error in the first-level models, and so on, stopping when we get a good predictive fit. For example, the diameter of the moon can be measured repeatedly by a number of different techniques, the results of which will inevitably vary about a mean. But perhaps the diameter of the moon is itself varying? The issue can be pursued by simultaneously building models of sources of variation in the diameter itself and models of error in the various methods of measurement. Those models contain assumptions which can themselves be further tested.
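The measurement-error point above can be illustrated with a toy simulation (a minimal sketch; the technique names, sample sizes, and noise levels are invented for illustration, not taken from any actual lunar data). An "analyst" is given only noisy readings from two measurement techniques and, by modeling each technique as unbiased with its own noise level, recovers both an estimate of the quantity and an estimate of each technique's error– without any error-free access to the quantity itself.

```python
import random
import statistics

random.seed(0)

# Ground truth, hidden from the "analyst" below. The value 3474.8 km is
# the moon's approximate diameter; the noise levels are made up.
TRUE_DIAMETER = 3474.8
NOISE = {"technique_A": 5.0, "technique_B": 20.0}

# Simulate 500 repeated measurements per technique.
data = {
    name: [random.gauss(TRUE_DIAMETER, sigma) for _ in range(500)]
    for name, sigma in NOISE.items()
}

# The analyst's model: each technique is unbiased with its own noise.
# Pooling all readings estimates the diameter; the spread within each
# technique estimates that technique's measurement error.
estimated_diameter = statistics.fmean(
    reading for readings in data.values() for reading in readings
)
estimated_error = {
    name: statistics.stdev(readings) for name, readings in data.items()
}

print("estimated diameter:", round(estimated_diameter, 1), "km")
for name, err in sorted(estimated_error.items()):
    print(name, "estimated error:", round(err, 1), "km")
```

The analyst's model assumptions (unbiasedness, constant noise) are themselves testable against further data– for instance, by checking whether each technique's readings scatter symmetrically about the pooled mean– which is the iterative testing of models the text describes.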
The puzzle of how it is possible to use measurement itself to understand errors of measurement is not a deep one. As soon as one sees the answer, the problem of principle falls away, although it may be difficult to build the models in practice. I do not believe that the same is true for the methodological puzzle. One reason is the famous “explanatory gap” that I mentioned earlier. There may be reasonable doubt whether the method of inference to the best explanation can apply in the face of the explanatory gap. A second point is that with the demise of verificationism (Uebel 2006), few would think that the nature of a physical magnitude such as length or mass is constitutively tied to our measurement procedures. The mass of the moon is what it is independently of our methods of ascertaining what it is. But verificationism in the case of consciousness is much more tempting– see Dan Dennett’s “first person operationism” (Dennett 1991) for a case in point. Lingering remnants of verificationism about phenomenology do not fall away just because someone speaks its name. The remainder of this article will describe evidence that phenomenology overflows cognitive accessibility, and a neural mechanism for this overflow. The argument is that this mesh between psychology and neuroscience is a reason to believe the theory that allows the mesh. The upshot is that there are distinct mechanisms of phenomenology and cognitive accessibility that can be empirically investigated.