The absolute prioritarian’s claim that moral and prudential value come apart may raise additional worries for anyone motivated not just by the idea that only personal facts have intrinsic value (no distributional good) but also that all value must be personal: value must be value for someone. Egalitarianism holds that there is impersonal value, namely, the (dis)value of (in)equality. Absolute prioritarianism also appears to hold that there is impersonal value, since moral value isn’t value for anyone (this is what it is for v and u to come apart): there is something good about someone’s increase in welfare that is not good for her or anyone else.Footnote 32 By contrast, relative prioritarianism holds that all value is personal – the only thing that is valuable is a person’s well-being. (One might object: don’t the weights that attach to each person’s well-being imply a commitment to impersonal value, namely the contribution of each person’s well-being to the overall good? No. Weights are merely the extent to which each personal value figures into the overall good. All of the views mentioned here, even utilitarianism, are committed to weights – they are typically just equal weights – so relative prioritarianism is committed to impersonal value in virtue of giving more weight to the worse-off only if the other views are also committed to impersonal value in virtue of giving equal weight to the worse-off.)

7.2. Responses to worries about relative prioritarianism
Might relative prioritarianism suffer from different worries, from which absolute prioritarianism is immune? One worry is that giving more weight to the worse-off amounts to thinking that some people matter more than others; another is that failing to take account of absolute position is morally objectionable. I will consider each in turn.

The first worry is that by weighting some people’s interests more strongly than others, we thereby hold that some people matter more than others, contrary to a commitment to treating people the same. One might think: to treat people the same is to weigh their interests the same, that is, to give facts about each person’s well-being identical weight in determining the overall good. In reply, there is a weaker sense of treating people the same which I think better captures the morally relevant concept: to take someone’s well-being into account in the same way whether that someone is Ann, Bob or Cecil. For example, the distribution {Ann, 200; Bob, 100; Cecil, 300} should be just as good as the distribution {Ann, 100; Bob, 300; Cecil, 200}. This sense of same-treatment is often referred to as anonymity – the identities of the individuals can be swapped without changing the overall value – and all of the views in this paper (including relative prioritarianism) adhere to it. I contend that anonymity, not the stronger identical-weight principle, captures the kind of same-treatment ethicists should be concerned with. If this is right, then how much weight to give to each individual cannot be settled by considerations about treating people the same: all of the views here treat people the same in this sense.

Let us move on to the second worry. Here is a standard sort of intuition that seems to support absolute prioritarianism. Improvements to the utility of someone who is at a very low absolute utility level because she is constantly hungry or in excruciating pain should weigh more heavily than improvements to the utility of someone who is at a much higher absolute level, both in the single-person case – we have more reason to help the person in the first scenario than in the second – and in a multi-person case – we have even stronger reason to help the person over others in the first scenario than we have to help the person over others in the second scenario, even if these others are all better-off.Footnote 33

These intuitions are compelling. However, we have to be careful to distinguish between the idea that some given change I can make in the world (say, giving someone a sandwich) provides more utility when the recipient is at a low utility level, and the idea that some given increment of utility provides more moral value. The relative prioritarian will, of course, say that it is more morally valuable to give a sandwich to someone when he suffers from constant hunger than to give him a sandwich when he has enough to eat: the former increases his utility much more (it is more prudentially valuable).

To make trouble for the relative prioritarian, one must say that it is more morally valuable than it is prudentially valuable to improve the lot of the person at the low level. (Or that it is less morally valuable than it is prudentially valuable to improve the lot of the person at the high level.) Focus on the difference that having a daily sandwich makes to a person with constant hunger – it does a lot to improve his well-being. Now imagine what change would have to take place in the world to improve this person’s well-being by the same amount, if he started out in the position of having enough to eat but a low-paying job and a minor ailment: curing his ailment and drastically increasing his pay, perhaps? (To avoid confounding intuitions, it is important to make the morally motivated stranger better off than the subject in both cases.) However you answer this question, you must pick a change that would be as prudentially motivating for the subject as the daily sandwich would be when he suffers from constant hunger. To report my own thoughts: when I arrive at a change that genuinely seems prudentially equivalent (which must be a very large change), it is hard to maintain that the change in the first case has more moral value than the change in the second. Indeed, if the morally motivated stranger defers completely to the subject’s own reasons for improving his well-being in each case, then he has by definition equal moral reason to help him in each case.

The situation doesn’t appear to be different if we add in a person who is better off than our subject in both cases. When I am deciding whether to help our subject or this better-off person by a given amount, then if I should give our subject a daily sandwich when he is constantly hungry (rather than help the better-off person by the given amount), I should also cure his minor ailment and drastically increase his pay when he is not food-insecure, or whatever the equivalent change is (rather than help the better-off person by the given amount).

What will make a difference, according to the relative prioritarian, is if there are two people, one who is food-insecure and one who has a low-paying job and a minor ailment, and I am choosing between giving the first person a daily sandwich and giving the second one the prudentially equivalent change. In this case, the relative prioritarian will say that we have to prioritize the first person – that helping him by a fixed amount contributes more to the total good.

So far, I claim, these verdicts aren’t counterintuitive. If we are considering a single-person case, we don’t have more reason to help a single badly-off person than he has to help himself, nor less reason to help a single better-off person than he has to help himself. But if we are considering multi-person cases, we have more reason to help a badly-off person by a given amount rather than a better-off person by that same amount. At the very least, these claims stem from a coherent way to aggregate the good.

If one is not convinced, the relative prioritarian may still have a leg to stand on. Recall that relative prioritarianism makes two claims: that moral value should be prudential value, and that the worse-off should get more moral weight. My particular concern has been to show that these two claims together provide a rival way to satisfy the prioritarian commitments, and that they constitute an independently compelling philosophical view. But there is actually a third alternative, which is to agree with the absolute prioritarian that moral value diminishes marginally in prudential value and to agree with the relative prioritarian that the worse-off get more moral weight; call this position hybrid prioritarianism. So if one is on board with giving the worse-off more weight, but finds the above verdicts counterintuitive on the grounds that we have stronger moral reasons to help the absolutely worse-off subject, then one could adopt hybrid prioritarianism.

It should be obvious how to define such a position: instead of taking a rank-weighted average of utility values (as in relative prioritarianism), take a rank-weighted average of moral values, where the latter is a concave function of utility (as in absolute prioritarianism).Footnote 34
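To make the definition concrete, here is a minimal computational sketch, assuming the square root as the concave moral-value function and purely illustrative rank weights; neither choice is taken from the text:

```python
import math

def hybrid_value(utilities, weights=(3, 2, 1), v=math.sqrt):
    """Hybrid prioritarianism (sketch): a rank-weighted average of moral
    values v(u), where v is concave in utility. The first weight attaches
    to the worst-off, the second to the next-worst, and so on.
    Both the weights and v are hypothetical choices, for illustration."""
    ranked = sorted(utilities)  # worst-off first
    return sum(w * v(u) for w, u in zip(weights, ranked)) / sum(weights)

# Both sources of priority operate at once: the worse-off get more weight
# by rank, and a fixed utility gain yields more moral value at low levels.
print(hybrid_value([100, 200, 300]))
```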

Similarly, one can combine relative prioritarianism with a position such as sufficientarianism, the view that certain needs get lexical priority – for example, that if some people are below the poverty line, then it contributes nothing to the good to improve the lot of people who are above the poverty line. Sufficientarianism itself is incomplete, since one still needs to say, within the group of individuals whose needs get lexical priority, how an improvement for each contributes to the total good: is it better to improve the lot of worse-off individuals ‘below the line’ by some utility value or to improve the lot of better-off individuals ‘below the line’ by some different utility value?Footnote 35 Again, this is a substantive question, and relative prioritarianism may provide an attractive answer. In short: taking into account the kinds of ‘absolute’ considerations that other views are interested in needn’t preclude also taking into account considerations of relative priority.
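As a sketch of one way such a combination might be implemented (the threshold, the rank weights and the lexical tie-breaking structure here are all hypothetical, not the paper’s own proposal):

```python
def suff_relative_value(utilities, line, weights=(3, 2, 1)):
    """Sufficientarianism combined with relative prioritarianism (sketch):
    compare distributions first by the rank-weighted good of those below
    the sufficiency line, and only then by the rank-weighted good of
    everyone. Python tuples compare lexicographically, which mirrors the
    lexical priority of needs below the line."""
    def rank_weighted(us):
        ranked = sorted(us)  # worst-off first
        ws = weights[:len(ranked)]
        return sum(w * u for w, u in zip(ws, ranked)) / sum(ws)
    below = [u for u in utilities if u < line]
    below_value = rank_weighted(below) if below else float('inf')
    return (below_value, rank_weighted(utilities))

# Hypothetical poverty line of 10: the second distribution wins because it
# does better by those below the line, whatever happens above it.
a = suff_relative_value([5, 40, 60], line=10)
b = suff_relative_value([8, 20, 30], line=10)
print(a < b)  # True
```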

8. Whose good should we bring about?

I already mentioned a key feature that distinguishes relative from absolute prioritarianism: relative prioritarianism denies strong separability, i.e. it holds that whether the overall good is higher when Ann and Bob have some utility amounts or when they have some different utility amounts depends on what Cecil has. To make it concrete: {ANN, 125; BOB, 200; CECIL, 150} may be better than {ANN, 100; BOB, 300; CECIL, 150}, but {ANN, 100; BOB, 300; CECIL, 0} may be better than {ANN, 125; BOB, 200; CECIL, 0}, even though the two comparisons involve the same utilities for Ann and Bob. In the first comparison, Ann’s welfare makes a much higher contribution to the overall good than Bob’s, because she is the worst-off and Bob is the best-off; but in the second comparison Ann’s welfare makes only a somewhat higher contribution, because she is in the middle and Bob is the best-off. In this section, I will consider whether this should be thought of as a reason to favour absolute prioritarianism, and I will close with some remarks about how the good relates to what to do.
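This reversal can be checked numerically. Below is a minimal sketch, assuming purely illustrative rank weights of 6, 2 and 1 for the worst-off, middle and best-off positions; any weighting sufficiently tilted toward the worst-off produces the same pattern:

```python
def rank_weighted(utilities, weights=(6, 2, 1)):
    """Relative-prioritarian value (sketch): weight utilities by rank,
    worst-off first. The weights are hypothetical, for illustration."""
    return sum(w * u for w, u in zip(weights, sorted(utilities)))

# With Cecil at 150, the (Ann 125, Bob 200) split is better...
print(rank_weighted([125, 200, 150]) > rank_weighted([100, 300, 150]))  # True
# ...but with Cecil at 0, the (Ann 100, Bob 300) split is better,
# because Ann is now in the middle rather than worst-off.
print(rank_weighted([100, 300, 0]) > rank_weighted([125, 200, 0]))      # True
```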

I’ve already explained why the relative prioritarian rejects strong separability: if Ann is worse-off than Bob, then whether she is the worst-off or the middle will make a difference to the relative contribution of Ann’s and Bob’s welfare to the total good, and Cecil’s welfare determines where Ann and Bob fall in the relative ordering. So there is nothing inconsistent about denying strong separability and still holding that individual welfare is the only thing of value – that relational properties aren’t themselves valuable. Nonetheless, let me raise a challenge to the denial of strong separability.

The challenge is a version of Parfit’s ‘divided world’ example (Parfit 1991: 87–88, 99–100). Imagine the world contains only Ann, Bob and Cecil, but that Ann and Bob do not know of Cecil’s existence, and vice versa. Whether Cecil is at 150 or 0 will make a difference to whether it is better for Ann to be at 125 and Bob 200 (option 1), or for Ann to be at 100 and Bob 300 (option 2). But how could this make a difference, if Cecil is wholly detached from Ann and Bob?

To see how the relative prioritarian would reply, notice first that overall good is the good of a particular set of people – overall good is indexed to the people in a distribution. Thus, there are two ways in which one or the other option could be better: it could be better for the group consisting of Ann and Bob, or it could be better for the group consisting of Ann, Bob and Cecil.

According to relative prioritarianism, there will be a single answer to the question of whether option 1 or option 2 is better – contains more overall good – for the group consisting of Ann and Bob. Cecil’s well-being does not make a difference to which option is better for Ann and Bob, which is as it should be. (One view about what to do is to hold that we should decide between alternatives by considering which alternative is relative-prioritarian better for the group consisting of only the people involved, e.g. that we should do what is better for Ann and Bob. As it turns out, this is equivalent to a view known as the competing claims view.Footnote 36)

Cecil’s well-being will make a difference to which option is better for Ann and Bob and Cecil, that is, for the group consisting of all three. But I claim that there is nothing amiss here either, because Cecil is himself part of this group.

One might worry: how can Cecil’s well-being make a difference to which option is better for all three, when Cecil fares the same under each option? The answer is that his well-being makes a difference to the overall effect of Ann’s and Bob’s well-being on the whole. If we take seriously the idea that the overall good (of a group of people) is more strongly related to the well-being of its worse-off member than its best-off member, then it is unsurprising that changes in the well-being of one member can influence how the well-being of other members affects the overall good – a person whose well-being was at some point only weakly determinant of the overall good can become more strongly determinant of the overall good, in virtue of her now becoming the worst-off member.

The cost of relative prioritarianism, then, isn’t really that it gives the wrong verdict in divided world cases, since if one likes the verdict that Cecil’s well-being doesn’t make a difference to the good when he is unaffected, we can get this verdict by specifying that we are talking about the good for Ann and Bob. The real cost – if there is one – is that good (and hence the ordering of options) will always be relativized to a set of people. Unlike views that accept strong separability, the relative prioritarian view might give a different answer about which of two things increases the good more, when we are talking about the good of a smaller group or the good of a larger group which contains that smaller group. Thus, when we are answering the question of what to do, we have to ask: from the point of view of increasing the good of which set of people?

But this is just a general question we face in other ways in practical ethics. One is part of a family, a neighbourhood, a country, and humanity, and sometimes the good for each of these groups conflicts: what’s good for the members of my family might be bad for the citizens of my country, and what’s good for the citizens of my country might be bad for humanity. (Even though, in this case, there are no conflicts between various parts of a group – between Ann and Bob on the one hand and Cecil on the other – there is still a conflict between what is good for the smaller group and what is good for a larger group because the individuals occupy different positions in those groups and thus contribute differentially to their overall good.) And thus when making a moral decision, we must figure out whose good is the one to pay attention to.

We are left, then, with an additional question in distributive ethics: not just ‘what should I do to bring about the good?’, but ‘whose good should I be concerned with bringing about?’. We could hold that we should bring about the good of those whose interests are at stake in this particular choice; or the good of our present society; or the good of present and future people anywhere; or the good of humanity as a whole. I leave this question for further discussion.

9. Conclusion

Absolute prioritarianism holds that an individual’s utility depends only on how good things are for her; that relational properties have no intrinsic value; and that nonetheless the measure of goodness in a society is spread-averse. Absolute prioritarianism adheres to these three claims by holding that we maximize average or total moral value, where moral value comes apart from utility (and, indeed, diminishes marginally in utility). It therefore holds that the good of those who are worse-off in an absolute sense matters more than the good of those who are better-off in an absolute sense.

If one is convinced by these three claims, however, there is another alternative: we can hold that overall good is a weighted average of utility, weighted towards those who are worse-off in a relative sense. Surprisingly, we can hold that the relatively worse-off matter more while still holding that an individual’s utility depends only on how good things are for her (it does not depend on relational properties) and that relational properties have no intrinsic value. In particular, we can hold that the overall good in a society is more sensitive to the good of the worse-off than to the good of the better-off.

There are thus three different reasons for holding that distributions in which utility is more spread out are worse, keeping average utility fixed: because inequality is bad in itself (egalitarianism), because differences in utility matter more the worse off an individual is in an absolute sense (absolute prioritarianism), or because the relatively worse off get more weight (relative prioritarianism). I think that relative prioritarianism captures how we should think about overall good; but whether or not I’ve convinced you that it is the correct ethical view, it should be clear that relative prioritarianism is a serious contender.

Acknowledgements
This work was greatly improved by comments from Marc Fleurbaey, Johann Frick, Liz Harman, Niko Kolodny, Sven Neth, Kian Mintz-Woo, Tom Parr, Philip Pettit, Stephen White, and three anonymous referees; and by discussions at Cal State LA, UC Berkeley, the Formal Epistemology Workshop, the Princeton Workshop in Normative Philosophy, the Princeton Workshop in Philosophy and Economics, Harvard University, the Princeton University Center for Human Values, the University of Pittsburgh, UNC Chapel Hill, Duke University, Wayne State University, and Rutgers University. This work was supported by a Laurance S. Rockefeller Fellowship from the Princeton University Center for Human Values.

Appendix 1. Aggregation Rules
Let a society consist of n individuals, and let X be the set of consequences. Define:

A population distribution $e = \{1, x_1; \ldots; n, x_n\}$ maps individuals $i$ to consequences $x_i$.

$u_i(x_i)$ maps consequences $x_i$ to real numbers, and represents the utility that individual $i$ gets from $x_i$ (we will use $u_i$ as shorthand when the consequence is clear).

Let individuals be grouped into (mutually exclusive and exhaustive) groups $G_1, \ldots, G_m$ such that every individual in group $G_j$ receives a consequence with utility value $u_{G_j}$, and let $p(G_j)$ map groups $G_j$ to $[0, 1]$. $p$ represents the proportion of the population that is in $G_j$.

A proportion distribution $d = \{p(G_1), u_{G_1}; \ldots; p(G_m), u_{G_m}\}$ maps each group of size $p(G_j)$ to consequences with utility $u_{G_j}$.

A few things to note about this latter definition: (1) each individual in $G_j$ needn’t receive the same consequence, as long as each receives a consequence with the same utility value; (2) the groups needn’t be the same for different distributions; and (3) for each distribution, there will typically be many equivalent ways to group individuals – all of the rules here will give the same result for each grouping.

A population aggregation rule assigns a numerical value to each population distribution e, such that of two population distributions, the one with the greater numerical value is better.

A proportion aggregation rule assigns a numerical value to each proportion distribution d, such that of two proportion distributions, the one with the greater numerical value is better.

When we are dealing with a fixed population, as we are in this paper, every population rule gives rise to a proportion rule that will produce an equivalent ranking. The rules below are stated in terms of their ‘equivalent’ population and proportion versions.

1. Utilitarianism

The population version of utilitarianism is given by:

$$U_{total}(e) = \sum_{i=1}^{n} u_i(x_i)$$

The proportion version of utilitarianism is given by:

$$U_{average}(d) = \sum_{j=1}^{m} p(G_j)\, u_{G_j}$$
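A minimal sketch of the two versions and their agreement for a fixed population (the representations and function names are mine, not the paper’s):

```python
def u_total(e):
    """Population version of utilitarianism: e maps each individual to her utility."""
    return sum(e.values())

def u_average(d):
    """Proportion version: d is a list of (proportion, utility) pairs."""
    return sum(p * u for p, u in d)

e = {'Ann': 100, 'Bob': 300, 'Cecil': 200}   # population distribution
d = [(1/3, 100), (1/3, 300), (1/3, 200)]     # equivalent proportion distribution
# For a fixed population of n people, U_total = n * U_average,
# so the two versions rank distributions identically.
print(u_total(e), 3 * u_average(d))  # 600 600.0
```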

2. Egalitarianism

The population version of egalitarianism is given by:

$$E_{total}(e) = \sum_{i=1}^{n} u_i(x_i) - Q_{total}(e)$$

where $Q_{total}$ maps population distributions $e$ to real numbers and is a measure of how spread out $e$ is.

The proportion version of egalitarianism is given by:

$$E_{average}(d) = \sum_{j=1}^{m} p(G_j)\, u_{G_j} - Q_{average}(d)$$

where $Q_{average}$ maps proportion distributions $d$ to real numbers and is a measure of how spread out $d$ is. (To make these rules equivalent for a fixed population, define a suitable $Q_{average}$ for each $Q_{total}$.)
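For concreteness, here is a sketch of the proportion version with variance standing in for $Q_{average}$; the definition above leaves the choice of spread measure open, so this is only one familiar possibility:

```python
def e_average(d):
    """Proportion version of egalitarianism (sketch). d is a list of
    (proportion, utility) pairs; variance is a stand-in for Q_average."""
    mean = sum(p * u for p, u in d)
    q = sum(p * (u - mean) ** 2 for p, u in d)  # one possible Q_average
    return mean - q

even = [(0.5, 190), (0.5, 210)]
wide = [(0.5, 100), (0.5, 300)]
print(e_average(even) > e_average(wide))  # True: same mean, less spread is better
```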

3. Prioritarianism (‘Absolute prioritarianism’)

The population version of prioritarianism is given by: Footnote 37

$$P_{total}(e) = \sum_{i=1}^{n} v(u_i(x_i))$$

where $v(u)$ maps real numbers (utility values) to real numbers (moral values).

The proportion version of prioritarianism is given by:

$$P_{average}(d) = \sum_{j=1}^{m} p(G_j)\, v(u_{G_j})$$

where, again, $v(u)$ maps real numbers (utility values) to real numbers (moral values).

For (absolute) prioritarianism, $v$ can be taken to be strictly concave, or weakly concave but not linear.
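A minimal sketch, taking the square root as one familiar example of a strictly concave $v$ (my choice, for illustration only):

```python
import math

def p_total(utilities, v=math.sqrt):
    """Population version of absolute prioritarianism (sketch): sum the
    moral values v(u), where v is strictly concave in utility."""
    return sum(v(u) for u in utilities)

# The same 100-utility gain adds more moral value at a lower absolute level:
base = [100, 400]
help_worse_off  = p_total([200, 400]) - p_total(base)
help_better_off = p_total([100, 500]) - p_total(base)
print(help_worse_off > help_better_off)  # True
```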

4. Rank-weighted utilitarianism (‘Relative prioritarianism’)

From a population distribution e, define an ordered population distribution $\bar{e} = \{1, \bar{x}_1; \ldots; n, \bar{x}_n\}$ that reorders individuals from worst-off according to that distribution (lowest utility value) to best-off according to that distribution (highest utility value), so that $\bar{u}_1 \leq \bar{u}_2 \leq \ldots \leq \bar{u}_n$.

Note that $\bar{e}$ needn’t be unique, because there may be ties, but any $\bar{e}$ derived from the same e will yield the same value for the rule below.

From a proportion distribution d, define an ordered proportion distribution $\bar{d} = \{p(G_1), u_{G_1}; \ldots; p(G_m), u_{G_m}\}$ that reorders groups from worst-off according to that distribution (lowest utility value) to best-off according to that distribution (highest utility value), so that $u_{G_1} \leq u_{G_2} \leq \ldots \leq u_{G_m}$.

Again, $\bar{d}$ needn’t be unique, but any $\bar{d}$ derived from the same d will yield the same value for the rules below.

The population version of rank-weighted utilitarianism is given by:

$$W_{total}(e) = \sum_{k=1}^{n} c_k\, \bar{u}_k$$

where $c$ is a mapping from positive integers to (non-negative) real numbers, and $c_k$ represents the weight that the $k$th-worst individual gets (d’Aspremont and Gevers 2002: 471).

When these weights are non-negative and non-increasing, i.e. $c_k \geq 0$ and $c_k \geq c_{k+1}$ for all $k$, then $W_{total}$ defines the generalized Gini family (d’Aspremont and Gevers 2002: 471). Footnote 38

The proportion version of rank-weighted utilitarianism is given by:

$$W_{average}(d) = \sum_{j=1}^{m} \left[ I\!\left(\textstyle\sum_{k=j}^{m} p(G_k)\right) - I\!\left(\textstyle\sum_{k=j+1}^{m} p(G_k)\right) \right] u_{G_j}$$

where $I(p)$ measures the ‘importance’ or ‘weight’ of the interests of the top $p$-portion of individuals, with $I(0) = 0$. In the main text, I’ve used $w(p(G_i))$ to stand in for the expression in square brackets.

The proportion version of rank-weighted utilitarianism is equivalently given by:

$$W_{average}(d) = \sum_{j=1}^{m} I\!\left(\textstyle\sum_{k=j}^{m} p(G_k)\right) \left(u_{G_j} - u_{G_{j-1}}\right)$$

where $I(p)$ again measures the ‘importance’ or ‘weight’ of the interests of the top $p$-portion of individuals and $u_{G_0}$ is defined to be 0. (Note that $I\!\left(\sum_{k=j}^{m} p(G_k)\right)$ is the importance of the portion of individuals in groups $G_j$ or higher.)

For relative prioritarianism, $I$ can be taken to be strictly convex, or weakly convex but not linear.

For a fixed population, the proportion version of rank-weighted utilitarianism, with the constraint that $I$ is weakly convex, gives the same ordering as the population version of rank-weighted utilitarianism with the generalized Gini constraints. (The same holds for strict convexity if the Gini inequalities are strict.) We can see this by setting $c_k = I\!\left(\frac{n-k+1}{n}\right) - I\!\left(\frac{n-k}{n}\right)$.
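To illustrate the correspondence, here is a minimal computational sketch for a fixed population of equal-sized groups, assuming the purely illustrative convex importance function $I(p) = p^2$; the function names and representation are mine:

```python
def w_total(utilities, c):
    """Population version (sketch): weight the k-th worst utility by c[k-1],
    with the weights non-negative and non-increasing (generalized Gini)."""
    return sum(ck * u for ck, u in zip(c, sorted(utilities)))

def w_average(utilities, I):
    """Proportion version (sketch) for n individuals of proportion 1/n each:
    the k-th worst gets weight I((n-k+1)/n) - I((n-k)/n), the importance of
    the top portion including her minus the importance of those above her."""
    ranked = sorted(utilities)
    n = len(ranked)
    return sum((I((n - k) / n) - I((n - k - 1) / n)) * u
               for k, u in enumerate(ranked))

I = lambda p: p ** 2  # hypothetical strictly convex importance function
us = [100, 300, 200]
c = [I((3 - k) / 3) - I((2 - k) / 3) for k in range(3)]  # weights derived from I
print(w_total(us, c), w_average(us, I))  # the two versions agree
```

Convexity of $I$ makes the derived weights non-increasing from worst-off to best-off, which is why the proportion version with convex $I$ lands inside the generalized Gini family.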

Appendix 2. Aggregation and spread aversion
All of the definitions and results in this section pertain to proportion distributions.

We can define spread aversion in utility formally, in two equivalent ways.

Define a Pigou–Dalton transfer to be one that removes utility of size a from an individual (or a group of size p) and adds utility of size a to a worse-off individual (group of size p), such that the latter individual (group) remains no better off after the transfer. Footnote 39 Then we are strictly spread-averse if we think that a Pigou–Dalton transfer always makes a distribution strictly better, and we are weakly spread-averse if we think that a Pigou–Dalton transfer always makes a distribution weakly better (as-good-as-or-strictly-better).

Define a mean-preserving spread of d to be a distribution with the same mean utility as d and which can be obtained by a series of steps which consist in taking some proportion from the centre of the distribution and adding it to each tail, while preserving its mean value. Footnote 40 Then, we are strictly spread-averse if we think a mean-preserving spread always makes a distribution strictly worse (or if we think a mean-preserving ‘contraction’ – the inverse of a spread – always makes a distribution strictly better), and we are weakly spread-averse if we think a mean-preserving spread always makes a distribution weakly worse (a mean-preserving contraction always makes a distribution weakly better).

These definitions are equivalent. It is easy to see that a Pigou–Dalton transfer is a mean-preserving contraction. It is also easy to see that each step in a mean-preserving contraction is equivalent to a series of Pigou–Dalton transfers, and a series of such steps is thus equivalent to a series of Pigou–Dalton transfers. That our aggregation rule is spread-averse in utility will be our formal interpretation of spread aversion.
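A quick numerical check (a sketch with hypothetical non-increasing rank weights): a Pigou–Dalton transfer raises the value assigned by a rank-weighted average, illustrating spread aversion.

```python
def rank_weighted_avg(utilities, weights=(3, 2, 1)):
    """Rank-weighted average, worst-off first, with non-increasing
    (hypothetical) weights: a spread-averse aggregation rule."""
    ranked = sorted(utilities)
    return sum(w * u for w, u in zip(weights, ranked)) / sum(weights)

before = [100, 200, 300]
after  = [150, 200, 250]  # Pigou-Dalton transfer: 50 moves from best-off to worst-off
print(rank_weighted_avg(after) > rank_weighted_avg(before))  # True
```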

