Dilating and contracting arbitrarily

David Builes1, Sophie Horowitz2, Miriam Schoenfield3

1 Massachusetts Institute of Technology
2 University of Massachusetts Amherst
3 The University of Texas at Austin

Correspondence
David Builes, Massachusetts Institute of Technology. Email: dbuiles@princeton.edu
Sophie Horowitz, University of Massachusetts Amherst. Email: shorowitz@umass.edu
Miriam Schoenfield, The University of Texas at Austin. Email: miriam.schoenfield@austin.utexas.edu

Abstract
Standard accuracy-based approaches to imprecise credences have the consequence that it is rational to move between precise and imprecise credences arbitrarily, without gaining any new evidence. Building on the Educated Guessing Framework of Horowitz (2019), we develop an alternative accuracy-based approach to imprecise credences that does not have this shortcoming. We argue that it is always irrational to move from a precise state to an imprecise state arbitrarily; however, it can be rational to move from an imprecise state to a precise state arbitrarily.

KEYWORDS
accuracy, ambiguity aversion, comparativism, educated guesses, formal epistemology, imprecise probability, the Principal Principle

1 INTRODUCTION

Suppose you have no idea whether P. You’re completely clueless. Can you rationally move from a state of uncertainty about P to a state in which you’re opinionated about P, without a change in your evidence? What about the reverse? Suppose you start out with a pretty firm opinion about P. Can you rationally move from your opinionated state to a state of uncertainty just for kicks? Many people have the intuition that it’s always (epistemically) irrational to revise one’s beliefs without a change in one’s evidence. But why would this be? Plausibly, it’s fine for our preferences to shift for arbitrary reasons. Why not our belief states?
There are many ways to address the question of what would make arbitrary shifts in one’s opinions irrational, and which answers will be satisfying will depend on what general epistemological framework one is working within. Here we’ll be thinking about the question from an accuracy-based perspective. In other words, we’ll be assuming that the requirements of epistemic rationality are grounded in a concern with being accurate, which, in the broadest sense, can be thought of as a concern with “getting things right” – having one’s belief state in some sense match, or approximate, the way the world really is.

If we assume that rational agents’ belief states are representable by precise probability functions, then there is a straightforward answer to the question of why a rational agent won’t shift belief states arbitrarily. According to the most popular ways of thinking about credal accuracy, an agent with a certain probability function will regard any alternative probability function as worse, from the point of view of accuracy, than her own.1 Shifting from one probability function to another without new evidence will therefore look like a bad idea from the perspective of an agent who wants her belief state to be accurate. However, it’s plausible that in some cases – perhaps cases in which our evidence is ‘incomplete’ or ‘non-specific’ – we lack any precise credence (consider, for example, the proposition that one of the authors of this paper is currently wearing a striped shirt).

NOÛS. 2022;56:3–20. © 2020 Wiley Periodicals LLC. wileyonlinelibrary.com/journal/nous
Consideration of such cases has led some philosophers to argue that we sometimes adopt and/or ought to adopt attitudes that are ‘imprecise’ and hence better represented by a set of credence functions (a ‘representor’) rather than a single one (more on how these sets are generated later).2 But once imprecise credal states enter the mix, the story about whether accuracy considerations permit arbitrary doxastic shifts becomes significantly more complicated. This story is the topic of our paper.

To get clearer about the question we’ll be addressing, it will be helpful to introduce some terminology. We’ll think of precise states as represented by either single credence functions or, when convenient, sets containing a single credence function. Imprecise states are represented by sets containing more than one credence function.3 We’ll say that an agent dilates when she moves (without a change in evidence) from a precise state, p, to an imprecise state M that has p as a member. We’ll say that an agent contracts when she moves (without a change in evidence) from an imprecise M to a precise p which is a member of M. Our question is the following:

Question: Are there accuracy-based reasons to avoid dilating? Are there accuracy-based reasons to avoid contracting?

We’ll be arguing for:

Answer: There are accuracy-based reasons to avoid dilating, but there are no accuracy-based reasons to avoid contracting.

In defending our answer we’ll be assuming a particular view about what having an imprecise credal state amounts to. This view is based on comparativism about subjective probability more broadly: the view that we should understand subjective probabilities as representations of facts concerning the agent’s comparative confidence ordering. We’ll be thinking of agents with imprecise probabilities as those whose comparative confidence ordering is incomplete (more details on this picture later).
Here’s the plan: We’ll first explain why the standard epistemic utility theory framework (EUT) permits both dilation and contraction, and why this result is problematic. We’ll then show that an alternative to EUT, Sophie Horowitz’s educated-guess framework, forbids dilation. This is a point in its favor. However, as we’ll see, like EUT, the educated guess framework permits contraction. We will explain why the permissibility of contraction makes sense, at least from an accuracy perspective, and we’ll conclude by discussing the epistemological significance of the asymmetry between dilating and contracting that we’re defending in this paper.

2 EUT PERMITS DILATING

In recent literature, accuracy considerations have been brought to bear on questions about imprecise credences using epistemic utility theory (EUT).4 The central tool in EUT is that of a scoring rule, which intuitively measures the “distance” between one’s credences and the truth at any given possible world. Scoring rules are also interpreted as measures of epistemic value: higher credences in truths are better than lower credences in truths, and lower credences in falsehoods are better than higher credences in falsehoods. By using an appropriate scoring rule, one can give all sorts of interesting decision-theoretic arguments for different rational requirements.

EUT was developed in the context of precise probabilities. How should it be extended to account for imprecise probabilities? There are two ways one might measure the accuracy of an imprecise credal state {c1…cn} on EUT. First, one might assign an imprecise credal state (at a given world) a number, representing its accuracy at that world: let’s call that the “numerical approach”. Alternatively, one might assign an imprecise credal state some non-numerical score, for example a set of numbers: let’s call that the “non-numerical approach”.
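Before evaluating these two approaches, it may help to have a concrete scoring rule in view. The Brier (quadratic) score is a standard choice in the epistemic utility literature, though nothing in what follows depends on any particular rule; this sketch is our own illustration, not something the paper commits to:

```python
def brier_score(credences, world):
    """Brier inaccuracy of a precise credence function at a world.

    credences maps propositions to values in [0, 1]; world maps the
    same propositions to truth values. Lower scores are better: high
    credence in truths and low credence in falsehoods both shrink
    the squared distance from the truth.
    """
    return sum((credences[p] - (1.0 if world[p] else 0.0)) ** 2
               for p in credences)

# Higher credence in a truth scores better (i.e., lower):
print(brier_score({"heads": 0.9}, {"heads": True}))  # ~0.01
print(brier_score({"heads": 0.5}, {"heads": True}))  # 0.25
```

On such a measure, moving your credence in a truth upward always improves (lowers) your score; that is the sense in which a scoring rule treats accuracy as epistemic value.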
A natural way to spell out the non-numerical approach would be to use the set {A(c1)…A(cn)}, where c1…cn are the precise credence functions in the agent’s representor, and A(c1)…A(cn) are the accuracy scores of those precise credence functions at a given world. In this section we’ll explain why we think both the numerical and non-numerical approaches permit dilation, and why we take this to be a problematic result.

Let’s start with the numerical approach. Results discussed in Mayo-Wilson and Wheeler (2016), building on Seidenfeld, Schervish, and Kadane (2012), show that given certain plausible constraints on an accuracy measure, for any precise state, p, there is an imprecise state M, such that, in every world, M and p are equally accurate.5 This suggests that from an accuracy perspective, if you are in state p, there is no reason not to dilate to M. After all, if you dilate to M you’re guaranteed to do just as well as you would by staying at p, no matter what the world is like. So the numerical approach can’t motivate a prohibition on dilating. It also can’t motivate a prohibition on contraction, for a similar reason: for any imprecise state, there’s a precise state that’s guaranteed to be equally accurate.

Can the non-numerical approach do better? We think not. If accuracy scores are given by non-numerical objects, like sets, for example, then to figure out whether dilating is permitted, we need a more complicated account of how to compare the accuracy of different belief states. Here’s one proposal: extending the supervaluationist picture that infuses much of the literature on imprecise credences, we could say that the accuracy of A is less than the accuracy of B if and only if, for every ai in A and bj in B, the accuracy of ai is less than the accuracy of bj. On such a picture, there will never be accuracy-based reason to prefer adopting a precise credence function p over an imprecise state M, which contains p, or vice versa.
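This claim can be checked mechanically. Here is a toy sketch of the supervaluationist comparison just defined (our own illustration: we use a quadratic measure of inaccuracy for the precise members, a single proposition, and a three-membered imprecise state as a stand-in for something like [0,1]; nothing turns on these choices):

```python
def inaccuracy(credence, truth):
    # Quadratic inaccuracy of a single precise credence in one proposition.
    return (credence - (1.0 if truth else 0.0)) ** 2

def less_accurate(A, B, truth):
    # Supervaluationist comparison: state A (a set of precise credences)
    # is less accurate than state B iff every member of A is strictly
    # less accurate than every member of B.
    return all(inaccuracy(a, truth) > inaccuracy(b, truth)
               for a in A for b in B)

p = [0.1]            # an initial precise state
M = [0.1, 0.5, 0.9]  # an imprecise state containing 0.1 and 0.9
q = [0.9]            # a precise state contained in M

for truth in (True, False):
    # Neither direction of the strict comparison ever holds between a
    # state and an imprecise state containing it, so neither dilating
    # (p to M) nor contracting (M to q) is ever ruled out.
    assert not less_accurate(p, M, truth) and not less_accurate(M, p, truth)
    assert not less_accurate(q, M, truth) and not less_accurate(M, q, truth)
```

The assertions hold whichever way the world turns out: because M contains p (and q), the state shared between them blocks strict dominance in both directions.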
For if M contains p, then there is some credence function in M (namely p!) which is no less accurate than p in every world. It follows, then, that if you have p, there is no accuracy-based reason to avoid dilating to some M that contains p as a member.6 (There is also no reason, if you start off in M, to avoid contracting to p.) There are certainly other ways to think about and compare accuracy scores if accuracy is measured non-numerically. But we are doubtful that other ways of measuring can solve the present problem; as Schoenfield (2017, “Imprecision-2”) and Berger and Das (2020, Proposition 4) show, for any such measure satisfying certain plausible constraints, at least some instances of dilating will be permitted.7

The supervaluationist, non-numerical method leads to especially worrisome consequences because of the following feature: on this approach, the accuracy of an imprecise credence is no better or worse than the accuracy of any of the precise probability functions it contains. This seems to permit believers to jump around between precise credal states willy-nilly. Consider:

Delia: Delia’s credence that there is intelligent life on other planets is only 0.1. (This is supported by her evidence as well.) But she thinks: “it sure would be fun to believe that there was life on other planets!” Alas, as somebody highly committed to rationality and accuracy she knows she can’t just hop from one precise credence to another… But then Delia remembers that she has another option: imprecise credences! Consulting some results from epistemic utility theory, Delia decides that there is no accuracy-based reason not to adopt [0,1]. So she does. Then, consulting epistemic utility theory again, she decides that there is no accuracy-based reason not to adopt a precise credence. She adopts a precise credence of 0.9 that there is intelligent alien life and enjoys her new belief state very much.
On the non-numerical approach we’ve been sketching, each of these transitions is rational. But it’s preposterous to think that one can rationally move from 0.1 to 0.9 without a change in evidence, simply by stopping at [0,1] along the way!8

There is also a more general, unattractive consequence of any theory that permits dilating. That is, if dilating is permitted, there can be no requirement to conform to the Principal Principle. If we only care about accuracy, and if [0,1] is no less accurate than 0.5 in every world, how could we be required to adopt 0.5 credence that a fair coin will land heads, rather than [0,1]?9 The Principal Principle is one of the least controversial principles governing the rationality of credences, and Delia’s reasoning seems like a paradigmatic instance of irrationality. But no version of EUT that meets widely accepted plausible constraints on an accuracy measure can explain why an agent interested in accuracy would want to maintain the credences recommended by the Principal Principle, and some otherwise attractive elaborations of EUT will vindicate Delia’s reasoning. The source of both of these problems is that EUT permits dilation.

We take these considerations to pose a serious challenge to EUT, and to motivate considering an alternative framework for thinking about the accuracy of imprecise credal states. In what follows, we will introduce a new way of assessing the accuracy of imprecise credences, building on the guessing-based framework for accuracy developed in Horowitz (2019). According to this view, credences are accurate or inaccurate in virtue of the all-or-nothing ‘guesses’ that they license. Using this simple and intuitive framework, Horowitz argues that one’s credences should satisfy the axioms of probability, given some very natural norms governing educated guesses.
Horowitz also derives a version of Immodesty, which is roughly the claim that a rational agent should expect her own (precise) credences to do better, accuracy-wise, than any other (precise) credences.10 A major advantage of this approach, over EUT, is that it does not require us to attach accuracy values to credences, or to aggregate the values of an agent’s credences in different propositions. In other words, the educated guess framework (hereafter EGT) – unlike EUT – does not include a scoring rule. Although, like EUT, Horowitz’s framework is based on the thought that norms of rationality can be explained and motivated by the aim of accuracy, that aim is not captured by thinking of belief states as a whole as scoring better or worse on some accuracy scale. EUT, on the other hand, standardly does work with aggregate scores of an agent’s entire belief system. This aspect of the view is what seems to justify unsavory “tradeoffs” between true and false beliefs in different propositions. Some epistemologists have rejected EUT for this reason, and have taken this objection to EUT as a reason to reject the aim of accuracy altogether. However, if the real problem is aggregation, these epistemologists should feel free to embrace accuracy-first epistemology with EGT instead. Furthermore, the scoring rules commonly used in EUT build in controversial assumptions that some epistemologists deny.11 So avoiding scoring rules carries some significant advantages for EGT.

In our expansion of EGT, we will preserve these advantages, as well as developing one more: its treatment of dilation. We’ll first show how the guessing framework can be generalized to apply to imprecise credences. We will then argue that, unlike EUT, EGT can explain the irrationality of dilation. Finally, we will go on to argue that, perhaps surprisingly, there are important differences between dilation and contraction.
While there are good accuracy-based reasons to avoid dilation, there is no accuracy-based reason to avoid contraction.

3 EDUCATED GUESSING AND ACCURACY

First, some background on EGT: following Horowitz, we assess the accuracy of credences by looking at the educated guesses that our credences license. We can think of educated guesses as answers to forced choice questions, perhaps given various suppositions. An agent does well with respect to a guess if her guess is true, and poorly if her guess is false. (“Well” and “poorly” are not defined in numerical terms, here; using this approach, we are just meant to think of ourselves as desiring, for each question we might encounter, that we answer it correctly.) As one might imagine, different credal states license different guesses: If an agent is more confident in P than Q, for example, and asked to guess either P or Q, then she is licensed to guess P (and not licensed to guess Q).12

Forced choice questions can also be posed under various suppositions. One might be asked: Supposing it rains tomorrow, guess between: (A) Ali will bring an umbrella, and (B) Ben will wear rain boots. In response to this kind of question, if an agent is more confident of A than B conditional on its raining tomorrow, she is licensed to guess A (and not licensed to guess B). If she is not more confident of either A or B conditional on its raining tomorrow, she is in a state that licenses either guess. Since our guesses can be true or false, our credences can get things right or wrong by licensing true or false guesses. So, we can understand an agent’s credences as being accurate insofar as the guesses they license are true, and inaccurate insofar as the guesses they license are false. (Following Horowitz, we will assess the accuracy of suppositional guesses only if the supposition is true; otherwise, their accuracy is undefined.)
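For precise credences, the licensing rule just described can be sketched as a small function (the name and encoding are our own, purely for illustration; suppositional questions would use conditional credences in the same way):

```python
def licensed_guesses(credence, options):
    """Return the guesses licensed by a precise credence function
    (a dict from propositions to numbers) in a forced choice among
    the given options.

    A guess is licensed iff no offered alternative enjoys strictly
    higher credence; ties license either answer.
    """
    top = max(credence[o] for o in options)
    return {o for o in options if credence[o] == top}

cr = {"rain": 0.7, "snow": 0.2, "wind": 0.7}
print(licensed_guesses(cr, ["rain", "snow"]))  # {'rain'}
print(licensed_guesses(cr, ["rain", "wind"]))  # tie: either guess is licensed
```

The agent's credences then count as accurate insofar as the guesses this function returns come out true at the world in question.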
4 EDUCATED GUESSES FOR IMPRECISE CREDENCES

To use the guessing framework to motivate norms about imprecise credences, we need to first give an account of which guesses are licensed when our credences are imprecise. That’s the aim of this section. (Precise credences are easy: if you are asked to guess between various options, you are licensed to guess exactly those options in which your credence is highest.) Which guesses are licensed if you are in an imprecise doxastic state? To answer that, it will be helpful to first step back and ask what an imprecise doxastic state is.

All we have said so far is that an imprecise doxastic state is one in which it makes sense to ascribe a set of probability functions to you – a representor – rather than just a single probability function. But what is it about you that determines the need for more than one probability function in your representor – and what determines which probability functions are included?

Our view is based on comparativism about subjective probability.13 Comparativism is based on the intuitive thought that while numerical probabilities represent belief states, there’s nothing about our belief states that mandates a unique numerical representation. In other words, there’s nothing “0.69-ish” about my degree of confidence in P, beyond the fact that 0.69 can serve as an adequate representation of my degree of confidence within a particular representational system. But 69, for example, or 732.6 for that matter, would work just as well, provided the system was structured in the right way. In other words, the comparativist thinks that what makes it the case that I have a credence of 0.69 in P is really a structural fact about how my degree of confidence in P is related to my degrees of confidence in other propositions.14
In this spirit, we hold that the probability functions in an agent’s representor are all and only the ones that are compatible (in the sense defined below) with the agent’s comparative confidence judgments.15 So if, for example, an agent is more confident in P than she is in Q, every probability function in her representor will assign greater credence to P than to Q. If she is more confident in P given Q than she is in P given ∼Q, every probability function in her representor will assign greater conditional credence to (P|Q) than to (P|∼Q).
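On this comparativist picture, a representor can be thought of as the result of filtering candidate probability functions by the agent’s comparative judgments. A minimal sketch of the unconditional case (the encoding of judgments as ordered pairs is our own illustrative choice; conditional judgments would be handled analogously with conditional probabilities):

```python
def compatible(prob, judgments):
    """A probability function (a dict from propositions to numbers)
    is compatible with an agent's comparative confidence judgments
    iff it assigns strictly greater probability to P than to Q
    whenever the agent is more confident in P than in Q.
    """
    return all(prob[p] > prob[q] for (p, q) in judgments)

def representor(candidates, judgments):
    # All and only the candidate functions compatible with the
    # agent's comparative judgments.
    return [c for c in candidates if compatible(c, judgments)]

# The agent is more confident in P than in Q, and that is all:
judgments = [("P", "Q")]
candidates = [{"P": 0.9, "Q": 0.1},
              {"P": 0.6, "Q": 0.4},
              {"P": 0.3, "Q": 0.7}]
print(representor(candidates, judgments))  # keeps the first two
```

Because the agent's comparative ordering is incomplete, more than one function survives the filter, and it is this surviving set, not any single member, that represents her doxastic state.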