Friday, 10 July 2015

How to make Effective Altruism bulletproof without rendering it Silly- Part 1 of Zero

Effective Altruism (E.A) appears to be the easiest Ethical Theory to shoot down since, for any theory of Justified True Belief (J.T.B) necessary for its implementation, every possible course of action can be proven consistent with E.A provided it holds at least one course of action to be unambiguously prescriptive at that particular point in time and space.
This is because, if an action is prescriptive, it must be the case that an associated set of compatible Theories of Justified True Belief has instrumental value. Thus it would be rational to devote scarce resources to promoting interest in and research into that set of J.T.Bs which are consistent with the E.A prescription we have posited as occurring.
However, since, in any prescriptive course of action, a sub-optimal variation in a single step can have greater imperative force than its optimal completion- e.g. in saving a drowning child, we may omit the step of combing its hair so that, when the T.V cameras arrive, it presents a more piteous aspect, thus giving greater imperative force to whatever E.A prescription we are urging- it therefore follows that either
1) Any course of action is consistent with E.A if it is merely a sub-routine.
2) All E.A sub-routines have the same property as its prescriptive 'courses of action'. For example, they must pass the test of protecting the life of another. Thus, no sub-routine could arise such that you let a child drown no matter how much attention and resources doing so might draw to the good cause.

The issue here is that a sorites type problem occurs in demarcating the last safe moment when a particular sub-routine becomes prescriptive.
If no such sorites problem arises then E.A can't be disambiguated from J.T.B. There is no supervenience or multiple realizability with respect to them. Either E.A is just a 'Turing Oracle' for J.T.B's halting problem or it is its own metalanguage and thus can't prove its own consistency or completeness- i.e. it can't show some of its propositions are prescriptive provided at least one isn't. In other words, it can exist, but only outside 'Public Reason' as an apophatic practice of a Mystic or Pietistic type.
This renders it bulletproof but silly.
In my next post I'll outline a method of saving E.A from silliness.

Edit- Well, I would have outlined such a method, for sure, but Waitrose has some real nice Rum marked down so fill your glass and take a shufti at this instead.

The basic problem with E.A is that, whereas an agent probably knows what yields utility to himself and, moreover, is likely to devote a lot of cognitive power to improving outcomes for himself, he does not have equally good information about others. Moreover, there are Preference Revelation and Preference Falsification problems.
At worst, E.A would cause a person to secure a pure economic rent so as to maximise income, and this rent would be associated with a dead-weight loss. He may distribute much of this rent to individuals who are faking or to schemes which are incentive incompatible or have a design flaw. For example, a person who could be a teacher (earning little economic rent) may instead choose to be a monopolist or monopsonist (causing deadweight loss to the economy) and give the money to handicapped children who have actually been maimed by a beggar-king.
Thus, for Hayekian reasons, E.A can't be an optimal information aggregation mechanism. However, it may actually yield more happiness to its practitioners than mindless consumerism.
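The deadweight loss alluded to above can be made concrete with a textbook toy model; the linear demand curve, the numbers, and the function name below are all illustrative assumptions, not anything from the post itself.

```python
# Toy illustration (hypothetical numbers): a monopolist restricts output
# below the competitive level, creating a deadweight-loss triangle.
# Linear inverse demand P = a - b*Q, constant marginal cost c.

def monopoly_deadweight_loss(a, b, c):
    q_comp = (a - c) / b          # competitive output: price = marginal cost
    q_mono = (a - c) / (2 * b)    # monopoly output: marginal revenue = marginal cost
    p_mono = a - b * q_mono       # monopoly price
    # Deadweight loss is the triangle between demand and marginal cost
    # over the units the monopolist withholds from the market.
    return 0.5 * (q_comp - q_mono) * (p_mono - c)

print(monopoly_deadweight_loss(a=10, b=1, c=2))  # 8.0
```

On these made-up numbers the monopolist halves output (4 units instead of 8) and society loses 8 units of surplus that neither the monopolist nor consumers capture- surplus that, on the post's argument, no amount of downstream charity recovers.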

Vivek Iyer highlights an important point- the difficulty of calculating long-term consequences- that others flagged as well. But the nice thing about altruism is that your competition to do pure good isn't exactly intense, so it isn't THAT hard to find some genuine good to do. (Although I've heard from travellers that free mosquito nets tend to get used as fish nets, and quickly destroyed, I must admit.) The trouble with Hayekian reasoning is that the market, especially in developed nations, is largely devoted to meeting the requirements of human sexual selection (see Veblen). So that's not a great way even to find personal happiness. Great way to piss away forests, though.

Joe Thorpe raises a very important point re. sexual selection. It could be argued that mimetic consumption of positional goods raises reproductive success, which in turn entails the notion that one has a duty to one's descendants to evade and avoid taxes. It is a short step to Social Darwinism- red in tooth and claw!
Fortunately, Zahavi's theory re. the handicap principle is actually eusocial across species because it sends a signal which allows 'Aumann correlated equilibria'. Thus when birds engage in flocking behaviour against a predator, every individual benefits- including the predator, which is discouraged from a costly attack.
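The claim that a shared signal can leave everyone better off than independent randomization can be checked in a toy game; the game of Chicken and the particular correlating device below are standard textbook assumptions, not a model of flocking itself.

```python
# Toy check (standard textbook example): in the game of Chicken, a public
# signal that correlates play gives both players a higher expected payoff
# than the mixed Nash equilibrium. Payoffs shown are the row player's;
# the game is symmetric. D = Dare, C = Chicken.
payoff = {('D', 'D'): 0, ('D', 'C'): 7, ('C', 'D'): 2, ('C', 'C'): 6}

# An Aumann correlated equilibrium: a mediator draws one of three outcomes
# with equal probability and privately recommends each player's action.
correlated = {('D', 'C'): 1/3, ('C', 'D'): 1/3, ('C', 'C'): 1/3}
corr_value = sum(p * payoff[a] for a, p in correlated.items())

# Mixed Nash equilibrium of this game: each player Dares with probability 1/3.
q = 1/3
nash_value = sum(
    pr_row * pr_col * payoff[(r, c)]
    for r, pr_row in (('D', q), ('C', 1 - q))
    for c, pr_col in (('D', q), ('C', 1 - q))
)

print(round(corr_value, 3), round(nash_value, 3))  # 5.0 vs 4.667
```

The correlating signal yields 5 per player against 14/3 under independent mixing- a crude analogue of the flock-wide gain from a shared alarm signal.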
Both hunter-gatherer and agricultural societies saw the need for egalitarian distribution of surpluses. A cynic might say Charity is good because it means you aim for a surplus, so in a bad year it is the 'poor' or marginally connected who starve first. However, aiming for a surplus has positive 'externalities' and Knowledge effects. Furthermore, egalitarian rules for surplus distribution change the fitness landscape for a lot of co-evolved public goods- i.e. you get a better golden path. Interestingly, the Japanese Sage, Ninomiya, thought of 'savings' not as a hedge, or as 'consumption smoothing', or in terms of 'time preference' (which is important because Mathematical General Equilibrium theory soon becomes 'anything goes' once hedging or Knightian uncertainty etc. are introduced- i.e. the maths doesn't support Ayn Rand type silliness); instead, Ninomiya saw savings as a voluntary foregoing of luxuries so as to allow others to eat. However, since all humans must be treated as equal, there is an obligation to 'repay virtue'- i.e. the system still needs to be doing golden path savings and building Capital- though this can be discharged collectively.
Ken Binmore's evolutionary game theory approach seems to be moving in this direction. If effective altruism makes people happier then we don't need to commit to either consequentialism or worry that the underlying deontics are probably apophatic- i.e. the rule set has no representation.
For the moment the argument from low hanging fruit makes sense. Of course, one has to be sensitive to the dangers posed by what Timur Kuran calls Preference Falsification & Availability Cascades. However, that's the sort of thing co-evolved systems- as opposed to some substantivist Super Computer- are good at doing.

Forgive my narcissism in replying to my own comment! I wanted to draw attention to different ways to ground E.A and, if not make it 'bullet-proof', at least motivate useful reflection:
1) Mike Munger's notion of the 'euvoluntary' and the evolution of eusocial behavior. Here, mimetics- neglected in Anglo-America- with its cheap 'out of control' computational solutions to co-ordination and matching problems gains salience and grounds a unification with both Continental theories and 'mirror neuron' type research.
2) Baumol's 'superfairness' should be re-examined in the light of Binmore's evolutionary approach. Interestingly, this opens 'Western' discourse to 'Eastern' thought. The Bhagavad Gita, a sacred text for Hindus, for example, is part of a bigger text which stresses the need for the 'Just King' or morally autonomous 'Principal' (as opposed to Agent) to learn Statistical Game Theory to make good decisions.
3) 'Euvoluntary' commitments need to be universalisable such that Hannan consistency- i.e. regret minimization- obtains in a Muth Rational manner.
All three avenues outlined above are currently neglected in E.A discourse though they provide better solutions and better ways forward than anything I've come across in salient apologetics.
Moreover, they are eminently unifiable under the rubric of Co-Evolution and generate powerful 'regulative' concepts or paradigmatic metaphors.
