Thursday 6 June 2019

Allan Gibbard & why 'intuitions', 'judgments', and 'coherence' are all worthless

There is a view, popularized by Parfit, that an ethical 'reason' is a non-naturalistic 'fact' known to us by intuition. If this is so, then it would seem that armchair philosophy might be useful. It could pick out a set of intuitions which are consistent with each other. Rational people may be persuaded 'to buy the whole set' because it represents a 'reflective equilibrium' and in this way Public Morality might be advanced and Normative considerations might achieve a coherent and prescriptive representation in Public Policy.

Allan Gibbard, in his Tanner lectures, unintentionally gives a good example of why 'intuitions' are delusive if we live under Knightian Uncertainty. It follows that 'coherence' and Public Justification theory are equally mischievous. Any 'reflective equilibrium' they jointly give rise to will be the unreflecting imbecility we take for granted from our great and good.

He says-
 Decision theorists in the classical Bayesian tradition work to formulate what it means to be consistent in one’s policies for action, and then derive surprisingly strong results from the conditions they lay down. This tradition stems from the work of, among others, L. J. Savage, who rediscovered a way of thinking that had been developed by F. P. Ramsey toward the end of his short life. If a way of making choices satisfies the Savage conditions (or conditions in a like vein), as it turns out, then it is as if one were maximizing an expectation of value. It is as if, that is to say, one had numerical degrees of credence and numerical evaluations of the possible outcomes, and acted to maximize expected value as reckoned in terms of these evaluations and degrees of credence.
In other words, if we are foolish enough to believe that all possible states of the world are known, then we will do very foolish things because our 'intuitions' would mislead us. We would be acting as if we could maximize 'expected value' when all that we can do- poor, ignorant, weak creatures that we are- is minimize the regret we feel for acting under the impulse of a delusive hubris.
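To see how little this 'as if' result buys us, spell it out. Here is a minimal sketch in Python- every state, credence, and payoff below is invented, which is precisely the point:

```python
# A minimal sketch (all credences and payoffs invented) of the Savage
# "as if" result: a coherent chooser acts as if maximizing expected value.

states = {"rain": 0.3, "shine": 0.7}      # the ASSUMED-exhaustive state space
payoffs = {
    "umbrella":    {"rain": 5, "shine": 3},
    "no_umbrella": {"rain": 0, "shine": 6},
}

def expected_value(act):
    return sum(p * payoffs[act][s] for s, p in states.items())

best = max(payoffs, key=expected_value)
print(best, expected_value(best))   # no_umbrella, 4.2
```

The computation is only as good as the assumption that the state list is exhaustive. Under Knightian Uncertainty there are states you cannot even enumerate, so there is nothing for these 'expectations' to range over.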

Gibbard then gives an example first formulated by Richard Zeckhauser.
You are forced to play Russian roulette, but you can buy your way out. What is the most you would be willing to pay, the question is, to remove the bullet, reducing your chance of shooting yourself from one in six to zero?
Nothing, if you are a man of conviction. Russian roulette is wrong. If you are not a man of character and will suffer no remorse or regret for your cowardice, then the answer is everything you have or could borrow. Of course, you might start with a much lower offer. But the most you could offer is the most you could and would pay.
Or that’s the first question; once you answer it, we ask a more complex one. You are instead, it turns out, forced to play a worse version of Russian roulette, with four bullets in the six chambers. What’s the most you would pay, the question now is, to remove one of the four bullets?
Still everything you have.
In particular, is it more or less than before?
It's the same.
Most people answer less.
But the most you would pay is still the same. If a gangster is pointing a gun at you and playing mind games, you ultimately hand over everything you have and beg him to give you a nice clean shot to the head, rather than take his time with you.

This is the case even if you are a coward and a poltroon. You may feel some regret that you didn't play a manly part from the outset, but at least you die without illusions or stupid 'intuitions' grounded in a demonstrably false 'decision theory'.
But you should pay more, goes an argument from orthodox decision theory. This problem is equivalent, after all, to a two-stage problem, as follows: In the first stage, you are forced to play with three bullets and no chance to buy yourself out. In the second stage, if you survive, you are forced to play with two bullets, but you can pay to remove both. The amount to pay in the second case, then, is anything you would pay to remove both of two bullets if they are the only two bullets—surely more than to remove one sole bullet. This case and others like it have been staples of debate on the foundations of decision theory, and ways out of this conflict of intuitions have been proposed. The first thing to note, though, is that the intuitions in conflict are strong.
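To make the orthodox claim concrete, the arithmetic can be spelled out. A minimal sketch- the square-root utility function and the wealth level are my assumptions, not Gibbard's:

```python
# A minimal sketch of the orthodox expected-utility arithmetic.
# Assumptions (mine, not Gibbard's): utility of wealth w is sqrt(w)
# if you live, 0 if you die, and initial wealth is 100.

def u(w):
    return w ** 0.5

W = 100.0

# One bullet in six: paying x empties the pistol (certain survival);
# refusing leaves a 5/6 chance of living with wealth intact.
# Indifference: u(W - x) = (5/6) * u(W)
x = W - ((5 / 6) * u(W)) ** 2          # invert sqrt analytically

# Four bullets in six: paying y removes one (survive with prob 3/6);
# refusing leaves 2/6.
# Indifference: (3/6) * u(W - y) = (2/6) * u(W)
y = W - ((2 / 3) * u(W)) ** 2

print(f"max payment to go from 1 bullet to 0: {x:.1f}")   # ~30.6
print(f"max payment to go from 4 bullets to 3: {y:.1f}")  # ~55.6
```

So the expected-utility maximizer pays more to go from four bullets to three than from one bullet to none, and any contrary intuition stands accused of 'incoherence'.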
No. The 'intuitions' are silly. There's a guy with a gun playing mind games with you. He wants you to have 'intuitions'- that is illusions- so he can have some fun mentally torturing you before getting down to some Hannibal Lecter type physical nastiness.

Decision theory is misleading here because it assumes there is no Knightian Uncertainty. All possible states of the world are known. But this is not what is happening when you are confronted by a sociopathic sadist with a gun. You suddenly realize that your intuitions are useless. You can't begin to imagine the sort of mental and physical torture this nutter is going to put you through. You must give up 'Expected value maximization' for a 'Regret Minimizing' strategy.

In this case, you need to cheat the nutter out of his fun. So start praying loudly to the Great God Boozah saying 'Blessed be this vessel of thine, Great God Boozah, who will now torture and kill me in fulfillment of the prophecy of the Nicaraguan horcrux of my neighbor's cat, so that my sins are expiated in this life and I may join thee in Paradise for ever and ever.'

It is important that you pray to some God you just invented. If you mention any known Deity, there's a chance this guy believes he is acting on that Deity's orders. The last thing you need is to have to listen to his twisted theology as he tortures and kills you. Thus, the safer bet is to have a wholly idiosyncratic ontological dysphoria. Indeed, if ethical reasons are non-naturalistic then the safer course is to have your own unique variety of ontologically dysphoric reason. Thus when the nutter reveals he is the Archangel Michael, you nod and say 'exactly as was predicted. Thou art the Vessel by which the Great God Boozah gets to finally beat Yahweh at mahjong thus proving His superiority to all the Celestial Beings! All hail, Boozah- the eternal and merciless!'

 If the nutter lets you go- well and good. If he tortures and kills you- hang on to the belief that the pain you are suffering is punishment for all your sins on this Earth. Focus on the gates of paradise which are swinging open for you. Forgive your assailant with your last breath. Die without regretting your last action on earth. Your simulated 'ontological dysphoria' has spared you suffering and cheated your assailant out of his sadistic pleasure.
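The contrast with expected-value maximization can be made mechanical. A minimal sketch of the minimax-regret rule- the payoff table is invented, with the nutter either a pure sadist or a pragmatist who can be bought off:

```python
# A minimal sketch of the regret-minimizing choice rule (minimax regret).
# No credences are assigned to the nutter's states of mind, so nothing
# hangs on having enumerated every state. All payoffs are invented.

payoffs = {
    "beg_and_bargain": {"sadist": -10, "pragmatist": 2},
    "pray_to_Boozah":  {"sadist": 0,   "pragmatist": 0},
}
states = ["sadist", "pragmatist"]

def regret(act, state):
    # how much worse this act does than the best act, in this state
    best_here = max(payoffs[a][state] for a in payoffs)
    return best_here - payoffs[act][state]

def worst_regret(act):
    return max(regret(act, s) for s in states)

choice = min(payoffs, key=worst_regret)
print(choice, worst_regret(choice))   # pray_to_Boozah, 2
```

The point of the rule is that it never asks for credences about a mind you cannot model; it only guards against kicking yourself.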

Gibbard takes a different view.
Is removing two bullets worth more than removing one, if in each case you thereby empty the pistol? Surely. Does anything matter, in these choices, but chance of surviving and how poor you will be if you do? Not much; those seem the predominant considerations. It doesn’t matter, then, whether you must play the four-bullet game or the two-stage game, since they involve choice among the same chances of death. Does it matter if you choose at the start of the two-stage game what to pay if you survive the first stage, or decide once it turns out you have survived the first stage? Clearly not. Orthodox decision theory goes against intuition for this case, but any alternative to orthodoxy will violate one of the strong intuitions I just voiced.
Those 'strong intuitions' were utterly foolish. There's a guy with a gun who is playing mind-games with you.  He might deliberately let you win at Russian roulette and then, just when you are mopping your brow with relief, shoot you in the stomach so you die slowly and painfully.

You need to play a regret minimizing strategy. This may actually be 'Hannan Consistent' in a certain sense. Die like a man, with clear eyes and no silly 'intuitions' or illusions. It may be the most valuable legacy you can leave- the one act which redeems a wasted life.
The constraints in classical decision theory that do the real work are all exemplified in the argument I just gave, and so at least for cases like this one, if the argument is good, then classical decision theory is pretty well vindicated.
But the argument is bad. Knightian Uncertainty exists. We can't maximize expected value because we don't know all possible states of the world, nor indeed can we fully specify any state of the world, nor can we calculate its likelihood. The best we can do is adhere to a, hopefully, eusocial, Regret-minimizing strategy which may feature unique 'ontological dysphoric' values.
I myself am convinced that what we gain in intuitiveness when we depart from the orthodox views in decision theory in cases like this is less than what we lose. 
It may pay a guy who teaches this shite to hold this conviction. But it is a waste of time for everybody else.
I’ll be assuming without further argument, then, that the constraints of decision theory are ones of consistency in action, or something close to it. Whether they are full-fledged matters of consistency is a tricky question, and so I’ll use the word coherence. Why, though, does coherence in plans for action matter—especially when they are plans for wild contingencies that we will never face, like being forced to play any of several versions of Russian roulette?
Hannan consistency is regret minimizing. It makes sense for us to look at catastrophic outcomes and consider how much regret we might experience if we blithely ignored them so as to 'maximize' expected value. Furthermore, there is a social aspect to our actions. What if everybody did as we do? To give an example, suppose you work for a pension fund trading derivatives (i.e. some linear combination of Arrow-Debreu securities). Since these ignore Knightian Uncertainty, they could become 'weapons of financial mass destruction'. You may still want to maximize your earnings but if everybody does so, your own pension pot may be wiped out along with your job. Surely, it would be better to be mindful of the regret you might feel if hubris gets the better of you now? How will you feel about yourself if your 'intuitions' turn out to be sophomoric delusions?
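For the technically minded, Hannan consistency is a precise property of sequential decision rules: average regret against the best fixed action in hindsight tends to zero, whatever sequence of losses the world serves up. A minimal sketch using multiplicative weights- the actions, horizon, and random losses are all invented for illustration:

```python
# A minimal sketch (toy losses, nothing from Gibbard) of a
# Hannan-consistent learner: multiplicative weights / Hedge.
# Its average regret against the best FIXED action in hindsight
# tends to zero no matter what sequence of losses arrives.

import math
import random

actions = ["hedge", "speculate"]             # hypothetical trading postures
T = 10_000
eta = math.sqrt(math.log(len(actions)) / T)  # tuned so average regret vanishes

random.seed(0)
weights = {a: 1.0 for a in actions}
my_loss = 0.0
cum_loss = {a: 0.0 for a in actions}

for t in range(T):
    z = sum(weights.values())
    probs = {a: weights[a] / z for a in actions}
    losses = {a: random.random() for a in actions}  # an adversary could pick these
    my_loss += sum(probs[a] * losses[a] for a in actions)
    for a in actions:
        cum_loss[a] += losses[a]
        weights[a] *= math.exp(-eta * losses[a])

avg_regret = (my_loss - min(cum_loss.values())) / T
print(f"average regret per round: {avg_regret:.4f}")  # small, shrinks like 1/sqrt(T)
```

No claim is made about knowing the states of the world in advance; the rule simply hedges across actions and learns from whatever actually happens.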
With questions of fact, the problem with inconsistency is that when a set of beliefs is inconsistent, at least one of the beliefs is false.
Not necessarily. The set of beliefs may include both first-order and second-order, type-theoretic, propositions. The inconsistency may be internally resolvable such that 'univalent foundations' obtain.
I’m suggesting that we think of ought questions, in the first instance, as planning questions.
But 'planning questions' are judged solely on the basis of actual outcomes. A coherent plan which is not carried out is worse than an incoherent plan which is carried out to some good effect.
Answers to them may in the end count as true or false, but we don’t start our treatment with talk of truth and falsehood and help ourselves to these notions in our initial theorizing.
Initial theorizing is ultimately useless unless some actual beneficial activity occurs. But critiquing that action is more fruitful than arguing the toss about what motivated it.
With incoherent plans, I accept, the oughts we accept in having those plans can’t all be true, but that isn’t at the root of what’s wrong.
At the root of what's wrong is that this talk of planning is not linked to actions and their outcomes. It is just empty verbiage.
So, indeed, what is wrong with incoherent plans?
The same thing as what is wrong with coherent plans- if they don't result in beneficial actions.
As a first approximation, I can say, incoherent plans can’t all be carried out. If I plan to be here today and also plan to be on top of Mount Kenya, believing that I can’t be both places on the same day, my beliefs and plans are inconsistent.
Nonsense! Suppose Elon Musk phones you and says 'I plan to transport you on my hypersonic jet to the top of Mount Kenya today', and you reply that you plan to be at home watching Netflix and put down the phone. You may still, as a prudential measure, keep a warm jacket with your passport and credit cards handy, just in case Musk goes through with his crazy plan.

When making plans, it is 'regret minimizing' to guard against eventualities you think are infeasible.
Either, then, my belief that I can’t be in both places is false, or one of my plans I won’t carry out no matter what choices I make.
Suppose you didn't know it was Elon Musk on the phone. Further suppose that he did not mention his hypersonic jet. All he says is 'you will be standing on top of Mount Kenya this very day.' You may still keep your warm jacket handy- just in case. You genuinely believe that the thing is impossible but this belief is not wholly untrue. Rather, your true belief is that no one would think it worthwhile to transport you by hypersonic jet to an African mountaintop.

Speaking generally, our beliefs are not fully specified and correspond to statements of likelihood which are seldom wholly false.
Some of the plans I’ll be talking about in the next lecture, though, are wild contingency plans that I’ll never be in a position to carry out anyway. I might talk about such wild plans as a plan for what to prefer for the contingency of being Brutus on the Ides of March.
Why is this a wild plan? It might be a question you are asked at an audition. 'You are Brutus on the Ides of March. Which type of phony baloney British accent will you choose to sport?' If your preference is for Dick Van Dyke Mockney, you don't get cast. On the other hand, if your preference is for Dame Edna Everage- a star is born.
And some of the states of mind that can be coherent or not with others won’t be simple plans but constraints on plans and beliefs—that, for instance, I plan to pay more, if forced to play Russian roulette, to empty the pistol of two bullets than of one. The problem with inconsistent plans is that there is no way they can be realized in a complete contingency plan for living.
There is also no way for a consistent plan to be realized as a 'complete contingency plan for living' in a world where Knightian Uncertainty obtains because the state of the world differs in multiple ways from what you previsioned.
For each full contingency plan one might have, something in the set will rule it out.
But this is also true of states of the world. You will have to improvise.
Or more completely, we’d have to talk about inconsistent beliefs, plans, and constraints. If a set of these is inconsistent, there’s no combination of a full contingency plan for living and a full way that world might be that fits. And judgments get their content from what they are consistent with and what not.
But these types of judgments are as worthless as 'intuitions'. They are armchair garrulity, nothing more. True, if ours were not a Knightian universe and we were omniscient creatures, then this would not be the case. However, there would be no need for Language or Philosophy or Judgments or Intuitions. We would be as Leibnizian monads synchronized according to a pre-established harmony.
