This post arises out of a comment on the previous entry.
The following is excerpted from William MacAskill's dissertation 'Normative Uncertainty'.
Susan and the Medicine - II
Susan is a doctor, who faces three sick individuals, Greg, Harold and Harry.
Greg is a human patient, whereas Harold and Harry are chimpanzees. They all
suffer from the same condition.
She has a vial of a drug, D. If she administers all
of drug D to Greg, he will be completely cured, and if she administers all of the drug
to the chimpanzees, they will both be completely cured (health 100%). If she
splits the drug between the three, then Greg will be almost completely cured
(health 99%), and Harold and Harry will be partially cured (health 50%). She is
unsure about the value of the welfare of non-human animals: she thinks it is
equally likely that chimpanzees’ welfare has no moral value and that
chimpanzees’ welfare has the same moral value as human welfare. And, let us
suppose, there is no way that she can improve her epistemic state with respect to
the relative value of humans and chimpanzees.
Using numbers to represent how good each outcome is: Susan is certain that
completely curing Greg is of value 100 and that partially curing Greg is of value
99. If chimpanzee welfare is of moral value, then curing one of the chimpanzees
is of value 100, and partially curing one of the chimpanzees is of value 50.
Her three options are as follows:
A: Give all of the drug to Greg
B: Split the drug
C: Give all of the drug to Harold and Harry
Finally, suppose that, according to the true moral theory, chimpanzee welfare is
of the same moral value as human welfare and that therefore, she should give all
of the drug to Harold and Harry. What should she do?
According to (some ethical theory) both A and C are appropriate options,
but B is inappropriate. But that seems wrong. B seems like the appropriate option,
because, in choosing either A or C, Susan is risking grave wrongdoing. B seems like the
best hedge between the two theories in which she has credence. But if so, then any
metanormative theory according to which what it’s appropriate to do is always what it’s
maximally choice-worthy to do according to some theory in which one has credence
(including some ethical theory called MFT, MFO, and variants thereof) is false.
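MacAskill's claim that B is the best hedge can be checked directly by computing expected choice-worthiness under Susan's 50/50 credences. The sketch below is a minimal illustration using only the values given in the quoted case; the variable names are mine, not MacAskill's.

```python
# Choice-worthiness of each option under the two theories Susan
# entertains, with her credences: 0.5 that chimpanzee welfare has
# no moral value, 0.5 that it has the same value as human welfare.
credences = {"no_chimp_value": 0.5, "equal_value": 0.5}
values = {
    "A": {"no_chimp_value": 100, "equal_value": 100},  # all to Greg
    "B": {"no_chimp_value": 99,  "equal_value": 199},  # split: 99 + 50 + 50
    "C": {"no_chimp_value": 0,   "equal_value": 200},  # all to the chimps
}

expected = {
    opt: sum(credences[t] * v for t, v in vals.items())
    for opt, vals in values.items()
}
print(expected)  # {'A': 100.0, 'B': 149.0, 'C': 100.0}

# B maximizes expected choice-worthiness even though no theory in
# which Susan has credence ranks it first -- which is why MFT-style
# views deem it inappropriate.
```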
Moreover, this case shows that one understanding of the central metanormative
question that has been given in the literature is wrong. Jacob Ross seems to think that
the central metanormative question is “what ethical theories are worthy of acceptance
and what ethical theories should be rejected,” where Ross defines acceptance as
follows: 'to accept a theory is to aim to choose whatever option this theory would
recommend, or in other words, to aim to choose the option that one would
regard as best on the assumption that this theory is true. For example, to accept
utilitarianism is to aim to act in such a way as to produce as much total welfare
as possible, to accept Kantianism is to aim to act only on maxims that one could
will as universal laws, and to accept the Mosaic Code is to aim to perform only
actions that conform to its Ten Commandments' (Ross 2006, 743).
The above case shows that this cannot be the right way of thinking about things.
Option B is wrong according to all theories in which Susan has credence: she is certain
that it’s wrong. The central metanormative question is therefore not about which first-order
normative theory to accept: indeed, in cases like Susan’s there’s no moral theory
that she should accept. Instead, it’s about which option it’s appropriate to choose.
What mistake is the author making here? He thinks people should maximize expected utility under uncertainty, even when that uncertainty extends to catastrophic consequences. This is not the case. What they should do, what portfolio managers do, indeed what Evolution does, is 'minimize regret'.
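The 'minimize regret' alternative can be made concrete for Susan's case. The sketch below (my illustration, not the author's actual calculus) builds the regret table for the two theories Susan entertains, using the values from the quoted case, and shows that minimax regret selects option B:

```python
# Choice-worthiness of each option under the two theories Susan
# entertains (values taken from the quoted case).
values = {
    "A": {"no_chimp_value": 100, "equal_value": 100},  # cure Greg
    "B": {"no_chimp_value": 99,  "equal_value": 199},  # split: 99 + 50 + 50
    "C": {"no_chimp_value": 0,   "equal_value": 200},  # cure both chimps
}
theories = ["no_chimp_value", "equal_value"]

# Regret of an option under a theory: its shortfall from the best
# option according to that theory.
best = {t: max(v[t] for v in values.values()) for t in theories}
regret = {
    opt: {t: best[t] - values[opt][t] for t in theories}
    for opt in values
}

# Minimax regret: pick the option whose worst-case regret is smallest.
max_regret = {opt: max(r.values()) for opt, r in regret.items()}
choice = min(max_regret, key=max_regret.get)

print(max_regret)  # {'A': 100, 'B': 1, 'C': 100}
print(choice)      # B
```

Choosing A or C risks a regret of 100 (grave wrongdoing under one of the theories); B caps regret at 1, which is exactly the hedging intuition.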
The author is aware of this possibility, but dismisses it in a footnote: 'One could say that, in Susan's case, she should accept a theory that represents a hedge between the two theories in which she has credence. But why should she accept a theory that she knows to be false? This seems to be an unintuitive way of describing the situation, for no additional benefit.' The answer here is that the first-order normative theory which fulfills 'regret minimization' is the one which maximizes her welfare given her preferences, be they altruistic or otherwise. It also has a lot of other neat properties: for example, it can give rise to a Parrondo game, a combination of losing games which is winning, because MWUA (multiplicative weights update algorithm) regret-minimization strategies are higher entropy, as well as guarding more effectively against catastrophic risk.
A normative theory deals with things like guilt and remorse, as well as the satisfaction of having done the right thing at high personal cost. Regret minimization is a desirable quality in a first-order normative theory and, under the author's scheme, such a theory must always exist, though it may not be known. Thus 'normative uncertainty' is a mere artifact. We have normative certainty about the regret-minimizing first-order normative theory: it represents the best we can do, all things considered, though we don't know its details. We may use some calculus which draws on other first-order normative theories to arrive at an approximation to the true regret-minimizing theory (though not the calculus MacAskill prescribes, because it isn't Muth Rational); but this does not make it a second-order theory. It is first-order, simply.
'Metanormativity' is a delusion. It is the sort of hysteresis effect that arises when a theory is not Muth Rational, i.e. when agents are constrained not to do what all would agree is the best thing to do. It is not 'economic' because it is not ergodic.
In a future post, I hope to put flesh on the bare bones of the following intuition:
Regret minimization by means of the multiplicative weights update algorithm is Muth Rational because it preserves diversity. It can easily be incorporated into a first order theory such that 'overlapping consensus' prescriptivity is, so to speak, built in. There is absolutely no good reason why scarce resources should be diverted from doing good into studying false theories which mischievously claim that some people and organizations with expert knowledge who are doing good aren't as 'effective' as some hare-brained scheme invented by an ignorant academic without any expert knowledge.
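The diversity-preservation claim about the multiplicative weights update algorithm can be seen in a minimal, self-contained sketch. The function name and the example losses below are illustrative assumptions of mine, not anything from the post: weights decay exponentially with cumulative loss but never hit zero, so no 'expert' (here, read: no first-order theory) is ever eliminated.

```python
import math

def mwua(losses, eta=0.5):
    """Multiplicative weights update (full-information experts setting).

    losses: list of rounds, each a list with one loss in [0, 1] per expert.
    Returns (final normalized weights, the algorithm's cumulative
    expected loss).
    """
    n = len(losses[0])
    w = [1.0] * n                     # uniform (maximally diverse) prior
    total_loss = 0.0
    for round_losses in losses:
        s = sum(w)
        p = [wi / s for wi in w]      # current mixture over experts
        total_loss += sum(pi * li for pi, li in zip(p, round_losses))
        # Exponential down-weighting: weight decays with cumulative
        # loss but never reaches zero -- diversity is preserved.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    s = sum(w)
    return [wi / s for wi in w], total_loss

# Two experts whose losses alternate symmetrically: neither is ever
# eliminated, and the final weights remain maximally diverse.
rounds = [[0.0, 1.0], [1.0, 0.0]] * 10
weights, loss = mwua(rounds)
print(weights)  # [0.5, 0.5] -- full diversity retained
```

Because each weight is `exp(-eta * cumulative_loss)`, the algorithm hedges across all experts in proportion to their track record rather than committing to a single 'favourite theory', which is the formal analogue of the hedging option B in Susan's case.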