Robert Aumann's startling 'Agreeing to Disagree' paper opens with the claim that if two people have the same priors, and their posteriors for an event are common knowledge, then those posteriors must be equal- even though they may be based on quite different information.
As a student, hearing of Aumann's result a couple of years after this paper was published, I was greatly intrigued by the questions it raised. Would it mean two people with the same priors- relating to arson and perjury- would end up with equally roasted posteriors because their pants were on fire?
Strangely, the answer our Professor gave us was- no, Aumann agreement has no such implication. But, since you brought up the subject- certain students might wish to invest in a pair of trousers rather than display their hairy legs in fishnet stockings. You are not a Law student. Kindly take off that horsehair wig. Nobody finds it amusing. Rag Week ended long ago.
So what does sharing Bayesian priors actually involve?
The answer is that both parties have exactly the same view of what counts as informative about a specific event which is assumed to be random. This may happen by magic or by the intervention of the God of Occasionalism. It can't happen for any species which evolved by natural selection.
'If dogs are cats then fish are bicycles' is a true statement not because fish are bicycles but because dogs are not cats. Ex falso quodlibet. If something false is taken to be true then anything at all is equally true.
Posteriors are common knowledge if two people trust each other completely and feel that they think in exactly the same way. But this is not knowledge, it is a Belief. It has no link to Public Justification. If I believe cats are dogs and you believe cats are dogs and we see a cat and we both say simultaneously- 'what a cute little doggie!'- then our posteriors are equal but we are still both wrong. If one of our Mums intervenes and says- 'that is not a doggie. It is a cat. It says miaow. Dogs say woof woof!'- then our posteriors change. Our attitude to what we consider informative changes. My reaction may be 'Mummy is a liar. I will never trust her again. Oooh, there's a big red doggie coming towards me. It wants to play. I will run out to meet it'- after which I get run over by the big red bus.
Your reaction may be quite different. You may say 'This Mummy is not a liar. Not everything is a doggie. It may be a cat. It may be a big red bus which will run me over and kill me. Fuck having Bayesian priors. I'll just outsource information acquisition to smart people. Maybe I'll even go to school and learn stuff. That way I won't get run over by big red doggies which are actually buses.'
Hanson & Cowen take a different view-
A robust result is that honest truth-seeking agents with common priors should not knowingly disagree. Typical disagreement seems explainable by a combination of random belief influences and by priors that tell each person that he reasons better than others. When criticizing others, however, people seem to uphold rationality standards that disapprove of such self favoring priors. This suggests that typical disagreements are dishonest. We conclude by briefly considering how one might try to become more honest when disagreeing.

A very robust result is that cats are dogs and big red doggies won't run you over if you run towards them because they are actually buses. Unfortunately, human beings aren't particularly robust after they have been run over. They tend to leave large chunks of their viscera sticking to the road.
Honest truth-seeking agents, if the product of evolution, should never believe they have the same priors. A married couple on honeymoon may think they share common priors with respect to a particular type of event and lo! their expectations create reality. After a few years go by, however, they will be disabused of the notion that their agreement to get married was based on common priors with respect to a certain type of event. Their 'common knowledge' regarding it will soon turn out to be a case of mutual fraud. 'I thought you liked that thing I do?!' 'I only put up with it so as to get the thing over with!' Such is the unravelling of 'common knowledge'. Indeed, evolution has a good reason for ensuring this outcome.
What is true of marriage is true of any sort of partnership or joint enterprise. 'Common Knowledge', or utmost good faith and trust, is a useful delusion. In the long run it is bound to unravel. There is a hadith to the effect that 'If there is Perfect Agreement amongst you regarding His Vision, you will see His Visage as you see the full moon in the sky.' Precisely for this reason a permanent Veil between us and the Creator must exist. Agreement can't be, oughtn't to be, perfect. Else were Apocalypse.
Hanson and Tyler are aware that assuming common priors is problematic. Yet they write-
One of Aumann’s assumptions, however, does make a big difference. This is the assumption of common priors, i.e., that agents with the same information must have the same beliefs. While some people do take the extreme position that priors must be common to be rational, others take the opposite extreme position, that any possible prior is rational. In between these extremes are positions that say that while some kinds of priors and prior differences are rational, other kinds are not. Are typical human disagreements rational? Unfortunately, to answer this question we would have to settle this controversial question of which prior differences are rational. So in this paper, we consider an easier question: are typical human disagreements honest? To consider this question, we do not need to know what sorts of differing priors are actually rational, but only what sorts of differences people seem to think are rational.
So the Aumann agreement theorem is silly. It is like Arrow's theorem. It says 'if dogs are cats then this is a really cool theorem'.
Is there a way of demarcating beliefs from information? Of course there is- provided dogs are cats.
Suppose you see two people quarreling. If their dispute is connected to their economic interest or psychological well-being, you are likely to consider the dispute to have some rational basis. Furthermore, there may be a mutually beneficial cooperative solution.
However if they are arguing over something which doesn't matter in the slightest to them personally, your best bet would be to try to change the conversation. Get them talking about a mutual enemy.
It would be utterly pointless to try to get two people, who are in violent disagreement about something which doesn't matter a jot to either, to agree to some protocol-bound type of rational argument. What would be the point? You have increased the dimensionality of the decision space. Agenda control gains salience. Instead of just arguing about something pointless, the two are now arguing about how arguments should proceed. This will soon cause them to doubt each other's good faith.
Consider the following- I agree to sell my comic book collection to you for a certain price. While conducting this transaction we get into an argument about who would win in a fight between Dracula and Spiderman. You say Dracula will hypnotise Spiderman and sodomise him. I point out that Renfield likes eating spiders and so his gnawing at our hero will cause Spidey to be distracted from Dracula's hypnotic gaze. Anyway, Spiderman does not get sodomised in Marvel comics. D.C maybe coz Batman & Robin were totally AC/DC, but not on Stan Lee's watch. No way. No how.
This silly argument should be terminated by any rational observer. He should mention the Green Lantern movie and we can agree it sucked ass big time. What would be wholly irrational would be to start arguing about a proper rational basis on which to decide the underlying issue. Why? I will soon enough accuse you of making a bad faith argument. This will cause me to cast aspersions on your character. You will call me a wanker and withdraw from the sale we had previously agreed to on the grounds that I'd probably jizzed over half of the panels in my comic book collection. Instead of a mutually beneficial deal going through, we now shun each other and decline to transact business. At the margin, Social Welfare declines. A Pareto gain has been lost and this may have deleterious dynamic effects.
Rationality isn't about arguing about how rational discussion should proceed. Even less is it about junk social science regarding what people think is or isn't rational. Rationality is about doing sensible things. It isn't about saying 'if dogs are cats then my cool new theorem must be true.'
Hanson & Tyler offer this simple parable. Will it reveal Aumann's theorem to be meaningful or will it show that Academics got shite for brains?
Let us see-
Imagine that John hears a noise, looks out his window and sees a car speeding away. Mary also hears the same noise, looks out a nearby window, and sees the same car. If there was a shooting, or a hit-and-run accident, it might be important to identify the car as accurately as possible. John and Mary’s immediate impressions about the car will differ, due both to differences in what they saw and how they interpreted their sense impressions. John’s first impression is that the car was an old tan Ford, and he tells Mary this.

Why would John taint the testimony of a fellow witness in this way? If he suspects a crime he reports the matter to the police. He may mention that Mary was a possible witness but that he has not spoken to her so as not to taint her testimony. The Police are happy. John and Mary- even if they differ- will make credible witnesses on the stand.
Mary’s first impression is that the car was a newer brown Chevy, but she updates her beliefs upon hearing from John.

John should say 'I thought it was an old tan Ford'. Mary should say 'it looked like a newish brown Chevy'. The prosecutor can then show a picture of a car which is neither a Chevy nor a Ford and which appears tan to John and brown to Mary. Both can confirm that this is the car they saw. The Defence will try to challenge their testimony to create 'reasonable doubt'- but they may also pressure their client, if guilty, to cop a plea.
That's how the Justice system works. John and Mary comparing notes till they both decide they saw a large black man driving a BMW helps nobody at all.
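To be fair, the 'updating' Hanson & Cowen have in mind is nothing more exotic than Bayes' rule. Here is a minimal sketch- every prior and likelihood below is invented, nothing is taken from their paper- of how John's impression might shift once he treats Mary's report as evidence:

```python
# Minimal sketch of John updating on Mary's report (all numbers invented).
# John's prior over what the car was, reflecting his own glimpse.
john_prior = {"old tan Ford": 0.60, "newer brown Chevy": 0.25, "something else": 0.15}

# Hypothetical likelihoods: how probable it is that a witness like Mary would
# report "newer brown Chevy" if each hypothesis were actually true.
p_mary_says_chevy = {"old tan Ford": 0.2, "newer brown Chevy": 0.7, "something else": 0.3}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalised = {h: john_prior[h] * p_mary_says_chevy[h] for h in john_prior}
total = sum(unnormalised.values())
john_posterior = {h: p / total for h, p in unnormalised.items()}

for hypothesis, prob in john_posterior.items():
    print(f"{hypothesis}: {prob:.2f}")   # Ford drops to ~0.35, Chevy rises to ~0.51
```

The whole shift is driven by the likelihoods John assigns to Mary's report- that is, by his attitude to what counts as informative- which is precisely the thing that cannot simply be assumed to be shared.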
Hanson and Tyler, however, have got the bit between their teeth and are galloping off into never-never land.
Upon hearing Mary’s opinion, John also updates his beliefs. They then continue back and forth, trading their opinions about the likelihood of various possible car features. (Note that they may also, but need not, trade evidence in support of those opinions.) If Mary sees John as an honest truth-seeker who would believe the same things as Mary given the same information then Mary should treat John’s differing opinion as indicating things that he knows but she does not. Mary should realize that they are both capable of mistaken first impressions. If her goal is to predict the truth, she has no good reason to give her own observation greater weight, simply because it was hers. Of course, if Mary has 20/20 eyesight, while John is nearsighted, then Mary might reasonably give more weight to her own observation. But then John should give her observation greater weight as well. If they can agree on the relative weight to give their two observations, they can agree on their estimates regarding the car. Of course John and Mary might be unsure who has the better eyesight. But this is just another topic where they should want to combine their information, such as knowing who wears glasses, to form a common judgment. If John and Mary repeatedly exchange their opinions with each other, their opinions should eventually stop changing, at which point they should become mutually aware (i.e., have “common knowledge”) of their opinions (Geanakoplos and Polemarchakis 1982). They will each know their opinions, know that they know those opinions, and so on. We can now see how agreeing to disagree is problematic, given such mutual awareness.

Would John and Mary really be so callous and irresponsible as to stand up in Court to affirm they saw something they didn't see at all? If the Defence attorney is any good, he will tear into them until they break down and admit they saw Donald Trump's upper body protruding from the sunroof of his limo and Stormy Daniels had her twat firmly wedged around his head and she was giving him a golden shower and then Dracula swooped down and sodomised Spiderman.
People capable of agreeing they saw something they didn't see will also agree that they saw anything else you care to stipulate provided you badger them enough and discover contradictions and inconsistencies in their testimony.
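For readers who want to see what the quoted 'back and forth' actually is: it is the Geanakoplos-Polemarchakis (1982) protocol. The sketch below is a toy version- the states, the uniform common prior, the partitions and the event 'the car was a Ford' are all invented- in which each witness repeatedly announces a posterior, refines on the other's announcement, and the two numbers end up equal, exactly as the theorem says they must once a common prior is assumed:

```python
from fractions import Fraction
from itertools import count

# Toy Geanakoplos-Polemarchakis exchange (all states, partitions and events invented).
states = set(range(1, 10))
prior = {w: Fraction(1, 9) for w in states}   # a common prior, assumed by fiat
ford = {3, 4}                                 # the event "the car was a Ford"
true_state = 1

# Information partitions: the sets of states each witness cannot tell apart.
john = {frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({7, 8, 9})}
mary = {frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8}), frozenset({9})}

def cell(partition, w):
    """The cell of the partition containing state w."""
    return next(c for c in partition if w in c)

def posterior(event, info):
    """P(event | info) under the common prior."""
    return sum(prior[w] for w in event & info) / sum(prior[w] for w in info)

def refine(partition, announced):
    """Split each cell according to what the other witness announced at each state."""
    return {frozenset(w for w in c if announced[w] == v)
            for c in partition for v in {announced[w] for w in c}}

for round_no in count(1):
    john_says = {w: posterior(ford, cell(john, w)) for w in states}
    mary_says = {w: posterior(ford, cell(mary, w)) for w in states}
    print(f"round {round_no}: John says {john_says[true_state]}, Mary says {mary_says[true_state]}")
    new_john, new_mary = refine(john, mary_says), refine(mary, john_says)
    if (new_john, new_mary) == (john, mary):
        break   # the announcements are now common knowledge, and they coincide
    john, mary = new_john, new_mary
```

Drop the common prior and the loop still halts, but the two final numbers need not be equal- which is rather the point being laboured in this post. Nothing in the loop consults the actual car, either: the protocol guarantees agreement, not accuracy.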
Consider the “common” set of all possible states of the world where John and Mary are mutually aware that John estimates the car age to be (i.e., has an “expected value” of) X, while Mary estimates it to be Y. John and Mary will typically each know many things, and so will know much more than just the fact that the real world is somewhere in this common set.

So John and Mary think they saw two different cars in two different possible worlds. They can agree that there is a real world where some wholly different car drove past their respective windows. For some unknown reason, this car in this 'Real world' they have just invented must correspond to whatever they agree on.
Wonderful! Stuff like that goes down all the time. I walk into your office and start urinating. You protest vigorously. I say- 'sorry, in my universe, this is where the Men's room is.' You reply, 'in my Universe your cock is a lot bigger.' I say 'fuck! Where I come from, 5.5 inches is considered ginormous. I'd better get back to my Universe pronto.'
But they do each know this fact, and so they can each consider, counterfactually, what their estimate would be if their information were reduced to just knowing this one fact. (Given the usual conception of information as sets of possible worlds, they would then each know only that they were somewhere in this common set of states.) [Footnote: For more on common knowledge, see Geanakoplos (1994), Bonanno and Nehring (1999) and Feinberg (2000). For a critical view, see Koppl and Rosser (2002). For the related literature on "no-trade" theorems, see Milgrom and Stokey (1982).]

Okay, you and me can now consider, counterfactually, what our estimate of this shite would be if our information were reduced to knowing just one fact- viz. 5.5 inches is Porn Star dimensions.
Among the various possible states contained within the common set, the actual John may have very different reasons for his estimate of X. In some states he may believe that he had an especially clear view, while in others he may be especially confident in his knowledge of cars. But whatever the reason, everywhere in the common set John’s estimate has the same value X. Thus if a counterfactual John knew only that he was somewhere in this common set, this John would know that he has some good reason to estimate X, even if he does not know exactly what that reason is. Thus counterfactual John’s estimate should be X. Similarly, if a counterfactual Mary knew only that she was somewhere in the common set, her estimate should be Y. But if counterfactual John and Mary each knew only that the real world is somewhere in this common set of possible worlds, they would each have exactly the same information, and thus should each have the same estimate of the age of the car. If John estimates the car to be five years old, then so should Mary. This is Aumann's (1976) original result, that mutual awareness of opinions requires identical opinions.

Only if priors are common- that is, only if John and Mary had exactly the same brain and upbringing and education and neural chemistry and physical location and so on. In which case the only possible counterfactual is that they are not one and the same person.
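Since the quoted proof-sketch is rather abstract, here is a toy version one can actually run. Everything in it is invented: a handful of possible worlds, a uniform common prior, a car age in each world, and two information partitions rigged so that each witness's estimate happens to be common knowledge. The 'common set' is built by chasing reachability through the two partitions, and both estimates then coincide with the estimate conditional on that set, just as Aumann's argument requires:

```python
from fractions import Fraction

# Toy check of Aumann's 'common set' argument (every number here is invented).
states = {1, 2, 3, 4, 5, 6}
prior = {w: Fraction(1, 6) for w in states}   # common prior, assumed by fiat
age = {1: 4, 2: 6, 3: 3, 4: 7, 5: 8, 6: 10}   # the car's age in each possible world
true_state = 1

john = [{1, 2}, {3, 4}, {5, 6}]               # John's information partition
mary = [{1, 2, 3, 4}, {5, 6}]                 # Mary's information partition

def cell(partition, w):
    """The cell of the partition containing state w."""
    return next(c for c in partition if w in c)

def estimate(info):
    """Expected car age conditional on an information set, under the common prior."""
    return sum(prior[w] * age[w] for w in info) / sum(prior[w] for w in info)

def common_set(w):
    """The smallest event containing w that both witnesses can 'see' (a cell of the meet)."""
    reached = {w}
    while True:
        grown = set().union(*(cell(john, x) | cell(mary, x) for x in reached))
        if grown == reached:
            return reached
        reached = grown

common = common_set(true_state)
print("common set:", sorted(common))                            # [1, 2, 3, 4]
print("John's estimate:", estimate(cell(john, true_state)))     # 5
print("Mary's estimate:", estimate(cell(mary, true_state)))     # 5
print("estimate given only the common set:", estimate(common))  # 5
# The estimates are common knowledge because each is constant across the common set:
print("John constant:", len({estimate(cell(john, x)) for x in common}) == 1)
print("Mary constant:", len({estimate(cell(mary, x)) for x in common}) == 1)
```

All the work is done by the common prior baked into estimate(). Give John and Mary different priors and the constancy checks can still pass while the two estimates differ- which is the objection being pressed above.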
The same argument applies to any dispute about a claim, such as whether the car is a Ford, which is true in some possible worlds and false in others. As long as disputants can imagine self-consistent possible worlds in which each side is right or wrong, and agree on what would be true in each world, then it should not matter whether the disputed claim is specific or general, hard or easy to verify, or about physical objects, politics, or morality.

Nobody can imagine self-consistent possible worlds where 5.5 inches is ginormous because they would soon have wanked themselves to death or gotten run over by a bus.
Why are Hanson & Tyler saying a human can imagine a self-consistent possible world? If the thing can be imagined it can be made. If it can't be made it isn't possible. But, if it is possible, then if we imagine the right thing hard enough, it can make a path to our world and take us somewhere 5.5 inches will excite universal awe and delight. Why the fuck should we bother with Aumann agreement? A better world awaits us.
Differing priors can clearly explain some kinds of disagreements. But how different can rational priors be? One extreme position is that no differences are rational (Harsanyi 1983, Aumann 1998). The most common argument given for this common prior position is that differences in beliefs should depend only on differences in information.

Can the information set change undetectably and endogenously? If not, why not? How do we rule it out? If we can't rule out this possibility then we may disagree with ourselves instantaneously and ceteris paribus. This means 'Common Knowledge' would have no principle of induction. We could never say that 'what you know I know you know I know' is the same as the next term in the sequence.
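As for the quoted concession that 'differing priors can clearly explain some kinds of disagreements'- it is easy to see how little it takes. In the sketch below (all numbers invented) John and Mary see exactly the same evidence and agree completely on how informative each observation is; only their priors differ, and so do their posteriors:

```python
# Invented illustration: identical evidence and likelihoods, different priors,
# hence different posteriors- no amount of candour about the data closes the gap.
def posterior_ford(prior_ford, likelihoods, observations):
    """Posterior probability that the car was a Ford, after the shared observations."""
    p_ford, p_not = prior_ford, 1 - prior_ford
    for obs in observations:
        p_ford *= likelihoods[obs][0]   # P(obs | Ford)
        p_not *= likelihoods[obs][1]    # P(obs | not Ford)
    return p_ford / (p_ford + p_not)

# Both witnesses agree on exactly how informative each observation is.
likelihoods = {"boxy silhouette": (0.8, 0.4), "loud exhaust": (0.6, 0.5)}
shared_observations = ["boxy silhouette", "loud exhaust"]

print("John:", round(posterior_ford(0.7, likelihoods, shared_observations), 2))  # prior 0.7 -> ~0.85
print("Mary:", round(posterior_ford(0.2, likelihoods, shared_observations), 2))  # prior 0.2 -> ~0.38
```

Whether those differing priors are 'rational' is exactly the question Hanson & Cowen decline to settle, which is why they retreat to asking whether disagreements are 'honest'.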
If John and Mary were witnesses to a crime, or jurors deciding guilt or innocence, it would be disturbing if their honest rational beliefs -- the best we might hope to obtain from them -- were influenced by personal characteristics unrelated to their information about the crime.

This would not be disturbing at all. We can compensate for bias. I may be biased against big black men and a psychologist may discover that I view small white men differently if I become fearful. My testimony is not wholly useless because some fact can be established- viz. that I saw a man, not a beautiful and enticing woman.
What would be highly disturbing, and constitute grounds for a motion for mistrial, is if it were discovered that John and Mary had tainted each other's testimony and, in effect, concocted a story between them.
They should usually have no good reason to believe that the non-informational inputs into their beliefs have superior predictive value over the non-informational inputs into the beliefs of others.

They should, however, believe that they can better serve their own interests and thus should mind their own business till required by a competent authority to do otherwise.
Another extreme position is that a prior is much like a utility function: an ex post reconstruction of what happens, rather than a real entity subject to independent scrutiny. According to this view, one prior is no more rational than another than one utility function is more rational than another. If we think we are questioning a prior, we are confused; what we are questioning is not a prior, but some sort of evidence. In this view priors, and the disagreements they produce, are by definition unquestionable.

In so far as a prior arises out of haecceity, this is the only reasonable view. One can't dissolve existence by talking about it. If we could, I wouldn't live in a world where 5.5 inches is the new definition of needle-dickdom.
What finally is Tyler & Hanson's conclusion? Is it stupider than anything in Aumann? Let us see-
We can, however, use the rationality standards that people seem to uphold to find out whether typical disagreements are honest, i.e., are in accord with the rationality standards people uphold. We have suggested that when criticizing the opinions of others, people seem to consistently disapprove of self-favoring priors, such as priors that violate indexical independence. Yet people also seem to consistently use such priors, though they are not inclined to admit this to themselves or others. We have therefore hypothesized that most disagreement is due to most people not being meta-rational, i.e., honest truth-seekers who understand disagreement theory and abide by the rationality standards that most people uphold. We have suggested that this is at root due to people fundamentally not being truth-seeking. This in turn suggests that most disagreement is dishonest.
Why is this nonsense? The answer is that it is silly to uphold a rationality standard which leaves you more vulnerable to a predator or parasite. Insufficient Reason is the way to go.
To appear to uphold such a standard in the discharge of a particular office may, however, protect you from personal liability. But, in this case, there is already a protocol-bound, 'buck-stopped' decision procedure which may not be rational at all. Rather, it may represent a system of 'artificial reason' wholly unconnected with the 'natural' one. Sir Edward Coke pointed out to 'the wisest fool in Christendom' that the Common Law was such a creature. Coke's Institutes are the foundation of American democracy. Hanson & Cowen can merely micturate upon, not sap or otherwise undermine, those foundations no matter what misology they absurdly indulge in.