What does Kornblith mean when he talks about ‘responsiveness to reasons’? He doesn’t really say, but it’s clear that he doesn’t mean responsiveness to reasons qua reasons, but, rather, just that the behaviour in question conforms to, or is consistent with, the reasons one has. That is why he can be so confident that the plover can be responsive to reasons. However, we don’t typically think that beliefs are so easily had. Once again, the example of breathing explains why. When my carbon dioxide levels are high and I respond algorithmically by breathing more deeply and frequently, I am, in the relevant sense, responding to the reasons I have. But something can be a breathing, living organism without having any beliefs at all (consider a kitten born in a persistent vegetative state). However, even if we ignored this difficulty and granted Kornblith the claim that the plover has not merely ‘informational states’ about its environment, but beliefs about it, it still wouldn’t be enough. As we have already seen, with our example of the habitual depressive, content-sensitive transitions between judgements are not sufficient for reasoning.

5. Non-reflective reasoning in humans

So, we haven’t been given a characterization of reasoning that, while being purely first-order, predicts the intuitively correct extension for the concept. And we haven’t been shown a clear-cut instance of reasoning in non-human animals. But what about reasoning in adult humans? Clearly, human beings are capable of reasoning in a way that the Taking account captures perfectly; and they often do. In such reasoning, everything is fully explicit and self-conscious: the thinker knows what premises he is reasoning from; he takes those premises to support a given conclusion, C; he takes them to do so because they bear a certain relation to C; and, finally, he infers C because of all these facts. Call this a fully reflective case of reasoning.

Much human reasoning isn’t fully reflective in this sense.
It seems automatic, unlaboured and unreflective. Doesn’t that show that although Taking might correctly characterize some special instances of human reasoning, it doesn’t correctly characterize its essence? Even while Taking might be contingently involved in some reasoning, it can’t be required for reasoning. Kornblith writes:

Ben believed, when he first got into the car that he could get home by turning left on Main Street. When he got to Main Street, however, he found out that there was construction blocking the route. Seeing the construction had the effect of changing his belief about how he might get home: he no longer believed that turning left on Main would do the trick. Is Ben’s change in belief rational only if he also, first, thinks about what he believes, and second, thinks about his desire for consistent and coherent belief, and finally, in light of all this, changes his beliefs in light of the previous reflections? (46)

Two considerations blunt the effectiveness of this sort of point. Consider, first, a case in which you learn some set of rules explicitly – how to operate the elevator in your building, for example – and, for a while, follow them explicitly. After a while, though, operating the elevator may become automatic, unlaboured and unreflective: you step in and press the appropriate buttons, all the while with your mind on other things. That doesn’t mean that your previous grasp of the relevant rules is playing no role in guiding your behaviour. It’s just that the guidance has gone ‘tacit’. In cases where, for whatever reason, the automatic behaviour gets disrupted, you will find yourself trying to retrieve the rule that is guiding you and to formulate its requirements explicitly once again. If you ask me to give you an account of tacit guidance, I could say a few things, but perhaps not much that’s very helpful.
What we can’t doubt, it seems to me, is the reality and ubiquity of the phenomenon, which is why we need to be thinking a lot more about tacit guidance than we have. Once we recognize the reality of tacit guidance, I see no reason to insist that whenever a rule provides tacit guidance, it must also at some point have provided explicit guidance. In some cases it is possible, indeed likely, that a rule’s guidance should always have been tacit. That would appear to be the case with the rules that guide our linguistic behaviour, our moral behaviour or, for that matter, our reasoning. Because we know about automation and about tacit guidance, no example of the type that Kornblith invokes can do much to refute reflective accounts of reasoning. For all that’s been shown so far, then, Taking may still be correct as an account of reasoning, provided we allow that states of taking may guide our inferential behaviour tacitly.

It is natural to ask how exactly the relation between the fully reflective case and the relatively automatic ordinary case should be conceived. What I want to say is that the fully reflective case is the Platonic Form of reasoning. It is the ideal form of reasoning, one that is only approximated by ordinary reasoning, in much the way in which the Platonic form of a circle is only approximated by ordinary circles. We depart from ideal reasoning because (i) it is a costly, laborious process that we often can’t cognitively afford; (ii) in various cases we don’t know exactly which principles of reasoning we are operating with and it would be a matter of some difficulty to uncover them; and, finally, (iii) for a wide variety of purposes, it works just fine to use automated procedures and shortcuts.

Why should the fully reflective case be thought of as the ideal, with the tacit case a mere approximation?
Why shouldn’t we rather think that the tacit case is one sort of perfectly good kind of reasoning, and the fully reflective case a different kind of reasoning, each with its own virtues and defects? Something like this alternative picture of the relation between the two types of reasoning is becoming entrenched in the suggestion that we have two different sorts of system for reasoning, the so-called System 1 and System 2 (see Kahneman 2011). The reason for resisting this bifurcated picture derives from the fact that the aim of all reasoning is always the same: to help us figure out what we have most reason to believe. Furthermore, on internalist ways of thinking about what you have most reason to believe, you have most reason to believe something only if it is possible for you to figure out, by reflection alone, that you have most reason to believe it. And that in turn requires that, in the ideal case, all of the factors that are relevant to assessing your reasoning should be open to reflective view.

6. Normativity, responsibility and control

This completes my discussion of Kornblith’s positive case for claiming that no correct account of reasoning need have reflective elements. Despite what he claims, he has not shown that there is some purely first-order condition that is such that, when it’s added to a causal transition between informational states, or even to a causal transition between judgements, what results is a clear-cut case of reasoning. Nor has he exhibited a clear-cut case of reasoning that doesn’t involve a reflective element. For all that he has shown, something like Taking is required if a causal transition is to be reasoning. Later on in the article, I will look at Kornblith’s negative case for claiming that no correct account of reasoning need have reflective elements. Before doing so, though, I want to consider whether the issue that divides Kornblith and me is purely terminological.
Let’s suppose I am right that the plover does not reason and that content-sensitive transitions between judgements are not sufficient for reasoning. Why does that matter? Suppose we introduced a new word, ‘schmeasoning’, which is such that it applies both to reasoning properly so-called and to merely content-sensitive transitions. What important differences between the plover’s behaviour and adult human reasoning would such a manoeuvre obscure? Which word has a better chance of ‘carving epistemology at its joints’?

The main features that would be obfuscated by such a conflation would be reasoning’s normative features. You can be held responsible for the way you reason; and you can be blamed for having reasoned badly and praised for having reasoned well. What do these observations imply about the nature of reasoning? What constraints do they place on our account of the nature of reasoning? Obviously, we are here in quite treacherous territory, something analogous to the question of free will within the cognitive domain. We cannot hope to discuss the issue here in full generality, let alone settle it. But unless you are a sceptic about responsibility, you will think that there are some conditions that distinguish between mere mechanical transitions and those cognitive transitions towards which you can adopt a participant reactive attitude, to use Strawson’s famous expression. And what we know from reflection on cases, such as those of breathing and the habitual depressive, is that mere transitions between mental states – no matter how content-sensitive they may be – are not necessarily processes for which one can be held responsible. It is only if there is a substantial sense in which they are transitions that a thinker performed that he/she can be held responsible for them. That is the fundamental reason why such transitions cannot, in and of themselves, amount to reasoning.
If you insisted on using a single word both for reasoning properly so-called and for merely content-sensitive transitions, we would have to point out that you would have to adopt entirely different sets of normative attitudes towards the two types of processes, despite calling them by the same name.

You might think that the case of the plover shows that this train of thought is completely wrongheaded. For isn’t it clear that, although the plover doesn’t control its ‘inferences’, and so can’t be held responsible for them, those transitions are still subject to substantive normative assessment? 4 After all, we can regard the plover as moving from given premise propositions to a given conclusion proposition. And for any such set of propositions, we can always assess whether the conclusion follows from the premises, in a suitably broad sense of ‘follows’ that includes inductive arguments. So, of course we can assess the plover’s inference as good or bad, as done well or badly, despite the plover’s not having control over it.

This argument is mistaken. To assess whether a conclusion follows from a set of premises is to assess only the validity of the argument represented by the reasoning; it is not an assessment of the reasoning itself. You don’t count as having reasoned well just because your conclusion follows from your premises. As we all now know, Fermat’s Last Theorem (FLT) does follow from the Peano axioms. But if you inferred FLT directly from those axioms, you wouldn’t count as having reasoned well unless you were also in a position to provide a chain of deductions from the Peano axioms to FLT that proceeded in steps that are ‘small enough’ (what we call a ‘proof’). (I will be suggesting in a moment that no normal human adult could so much as infer FLT from the Peano axioms, let alone do so well.) What does ‘small enough’ mean? The exact theoretical explication of this notion is still unclear (see Schechter m.s.).
But a reasonable hypothesis, one that confirms Taking, is that the steps have to be small enough for a thinker to be in a position to take them to be valid (or to have an impression of them as valid).

Indeed, this very constellation of facts explains not only why certain inferences would be bad, even if valid, but also why certain inferences seem impossible, even though valid. Suppose Goldbach’s Conjecture to be true. Suppose someone were to claim to infer Goldbach directly from the Peano axioms, without the benefit of any intervening ‘small’ deductions. We wouldn’t believe him. It’s not just that we think he couldn’t be justified in making such a transition. More than that, we feel that no such transition could be an inference to begin with, at least for creatures like ourselves. No one like us is in a position to take in the relation between the premises and the conclusion (we make allowances for Ramanujan). By contrast, any transition whatsoever could be hard-wired in. One could even imagine a creature in which the transition from the Peano axioms to FLT or to Goldbach is hard-wired in as a basic transition. What explains this discrepancy between a transition that could simply be wired in and one that could be the result of inference? A tempting answer is that merely wired-in transitions can be anything you like because there is no requirement that the thinker be in a position to approve the transition; they can just be programmed in; by contrast, inferential transitions must be driven by a recognition (or at least an impression) on the thinker’s part that the premises support the conclusion.
Another aspect of reasoning that indicates the intimate connection between reasoning and reflective control is that, even with respect to inferences which we regard as most automatic and unreflective, we take ourselves to have the capacity to reflect on them, to evaluate the principle of reasoning that they presuppose and to change our allegiance to that principle if we see fit to do so. We have the freedom to change the principles of reasoning that we employ; that is why we can be held responsible for them. But this capacity requires second-order abilities: to reflect on what one’s rules of inference are; to evaluate whether they are correct; and to amend them if need be.

Kornblith doesn’t want to deny that we have such a capacity. But he does want to deny that our capacity to modify our principles of inference requires second-order capacities. He seems to think that the plover is just as capable of changing its manner of reasoning as we are. And he believes that this is illustrated by the way the plover adjusts to the established harmlessness of a potential predator in modifying its broken-wing display. As I said above, this claim seems to me completely unwarranted. I see no case for saying that the plover is modifying its ‘reasoning’, as opposed to saying that it already had in place a complex algorithm that called for it to do distinct things under distinct input conditions.

This contrasts starkly with the capacities that we credit adult humans with. A human being may reflect on the fact that all his life he has been relying on an unrestricted version of Modus Ponens (MP); he may believe that he has come up with some decisive counter-examples to this principle in certain sorts of case (e.g. when conditionals are embedded in larger conditionals, to pick a realistic case); resolve henceforth never to reason with unrestricted MP ever again; and stand a chance of fulfilling that resolution if he is careful enough.
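The kind of counter-example alluded to here, involving conditionals embedded in larger conditionals, is plausibly of the sort made famous by Vann McGee; the following schematic sketch is supplied as an illustration and is not drawn from the text itself:

```latex
% Unrestricted Modus Ponens licenses, for any $P$ and $Q$:
% from $P$ and $P \rightarrow Q$, infer $Q$.
% In McGee-style cases, the consequent $Q$ is itself a conditional:
\[
\frac{P \qquad P \rightarrow (R \rightarrow S)}{R \rightarrow S}
\]
% One can apparently accept both premises while rationally rejecting the
% conclusion. In McGee's 1980-election example: $P$ = a Republican wins;
% $R$ = Reagan does not win; $S$ = Anderson wins. The polls made both
% premises credible, yet the conclusion `if Reagan does not win, Anderson
% will' was incredible, since the likely non-Reagan winner was Carter.
```

Whether such cases are genuinely decisive against unrestricted MP is, of course, contested; the point in the text requires only that a reflective thinker could come to believe they are and revise his practice accordingly.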
Each of these normative differences between adult human reasoning and the plover’s transitions is well explained by Taking-style accounts but seems inexplicable on purely first-order accounts. As far as I’m concerned, these normative features are essential to reasoning. An account that cannot vindicate them cannot be correct. However, even if you insisted on introducing a single word that covered both the plover’s mechanical transitions and human reasoning, you would have to agree that it is only appropriate to adopt a participant reactive attitude towards human reasoning and not towards the plover’s transitions.

7. Kornblith against accounts that incorporate reflection

Kornblith doesn’t worry about much of this, or about providing an adequate first-order account, because he believes he’s got something on the order of a proof that no account incorporating a reflective element could work. The ‘proof’, of course, centres on the threat that a reflective requirement on reasoning will lead to a vicious regress. How does Kornblith argue for this? He first maintains that on the most straightforward way of appealing to higher-order resources the result will be nothing short of a vicious regress:

… one particularly simple version of the view that reasoning requires second-order beliefs about reasons leads straightforwardly to an infinite regress. If one holds that my belief that p is a reason for believing that q, then trouble is just around the corner. This belief about reasons must surely play a role in my reasoning, for the whole point of this account is to distinguish between cases in which one merely has a belief that plays no such role, and the cases in which beliefs are genuinely involved in reasoning. If the higher-order belief is merely a by-stander to reasoning it cannot do the work of distinguishing between genuine cases of reasoning and cases in which one has a reason but is not moved by it.
But once the second-order belief about reasons must itself play an active role in reasoning, the requirement that reasoning involve a higher-order appreciation of one’s reasons comes into play again. So now one needs a (third-order) belief … and so on. (45)

Now, I know a thing or two about regress arguments: I’ve made a career out of them. And there is no doubt that a danger of this sort looms for second-order accounts of reasoning. I myself have pressed various versions of such an objection to such accounts (Boghossian 2014). But the friend of reflection-based accounts of reasoning has many options for evading such a regress.

Let’s grant for the moment Kornblith’s assumption that the second-order element would have to be an explicit belief, as opposed, for example, to something more akin to an intuition or (as I am inclined to favour) a tacit belief; and let’s agree that if this belief is to be of any use, it has to ‘have a role’ in the reasoning. Still, it wouldn’t follow that the role it’s got to have has to be that of another premise. It would be fatal to the view, of course, if its role were that of another premise, because then the relation to the conclusion would have to be inferential and a regress would be launched. However, the obvious alternative is that the belief in question has the role not of another premise but of a background enabling condition. There is nothing obviously incoherent in the idea that a set of propositions (or beliefs) count as the premise bases for a given conclusion only when that conclusion is partly explained by a certain sort of second-order belief. Crispin Wright (2001) put the point very well when he said in response to one of my attempts at a regress argument:

It is clear how the simple internalist must reply to Boghossian.
To staunch her view against all threat of Carrollian regress, she must insist that recognition of the validity of a specific inference whose premises are known provides a warrant to accept a conclusion not by providing additional information from which the truth of the conclusion may be inferred, but in a direct manner … . (79–80)

Wright is talking here about justification by inference rather than inference itself; but the point can be adapted to the topic that we are concerned with. The possibility that Wright is describing here is even more evident if we construe the second-order element as an intuition rather than a belief. Elijah Chudnoff (2013) has a particularly interesting recent treatment of this option, one in which he attempts to answer my Lewis Carroll-style objections.

In some recent work (2014), I’ve explored a different way of evading a regress objection to accounts of reasoning that incorporate reflective elements. In that work, I consider the possibility that a thinker’s ‘taking it that p supports q’ is a tacit belief that guides the thinker’s inferential behaviour in the way in which a thinker’s rule of inference guides his behaviour. Indeed, the view I explore is that the two states are at root the same. Since I believe that there are independent reasons for us to take this sort of non-inferential rule-guidance as primitive (Boghossian 2008), that gives me reason to think that we get the notion of taking p to support q for free. This is obviously not the time at which to look into the details of such a view. I mention all this merely to point out that there is no easy regress argument with which to dispatch all accounts of inference that incorporate reflective elements. 5

8. Conclusion

I understand the powerful motives that impel Kornblith to shy away from reflective accounts of central philosophical phenomena, like reasoning.
Regress problems loom for such accounts, and they also seem both phenomenologically oversophisticated and unfair to children and non-human animals. And if one has been antecedently seduced by the charms of a broadly naturalistic worldview, the case for seeking purely causal first-order accounts can seem overwhelming. But just as it won’t do to deny the reality of consciousness simply because it’s difficult to understand, so it won’t do to deny the reality of reasoning.

1 Talk given at an ‘Author Meets Critics’ session at the Eastern APA in December 2014. I am grateful to my co-symposiasts, Hilary Kornblith and Declan Smithies, and to our chair Meg Schmidt, for useful discussion both during and after the session, and also to the members of the audience.

2 ‘Reflection’ can be used in a variety of ways and I’m not sure Kornblith is always consistent. I will use it to stand for any form of higher-order thought, regardless of the process that leads up to it. In particular, I don’t intend for it to designate a process of ‘reflecting on’ one’s thoughts.

3 I have stated the inputs and outputs of reasoning in terms of acceptances rather than beliefs, to allow room for the fact that one can reason from suppositions, imperatives, and so forth. For the sake of concreteness, I will follow Kornblith in working with the case of beliefs or judgements; but everything I say should apply quite generally.

4 Smithies, in his response to Kornblith, also highlights our responsibility for reasoning when it is under our reflective control, although he sides with Kornblith in allowing for the possibility of unreflective reasoning in non-human animals, and thereby shares the burden of giving an adequate first-order account of reasoning.

5 My thinking on the content of the rules guiding reasoning and on the exact relation between the taking state and rule-guidance has evolved since Boghossian (2014). I hope to explain some of this in greater detail elsewhere.

