Saturday, 19 November 2022

Arif Ahmed on ineffective altruism

 In previous posts, dating back to 2015, I pointed out that William MacAskill's 'Normative Uncertainty' was simply stoooopid. The guy didn't know that Expected Utility theory is useless where Knightian Uncertainty obtains. Regret minimization is the way to go. 
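Since philosophers can't be trusted with words, here is a toy of what regret minimization means- all payoff numbers invented and, crucially, no probabilities anywhere, which is the whole point under Knightian Uncertainty:

```python
# Minimax regret: pick the action whose worst-case regret is smallest.
# Payoffs are invented; under Knightian Uncertainty we have no
# probabilities over the states, only the payoff table itself.
payoffs = {                 # action -> payoff in each unknown state
    "cash":   [1, 1, 1],
    "stocks": [3, 1, -2],
    "crypto": [9, -4, -8],
}
states = range(3)
best_in_state = [max(p[s] for p in payoffs.values()) for s in states]
regret = {a: max(best_in_state[s] - p[s] for s in states)
          for a, p in payoffs.items()}
print(regret, min(regret, key=regret.get))
# {'cash': 8, 'stocks': 6, 'crypto': 9} 'stocks'
```

No Expected Utility, no priors- just a table and a worst case.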

The bigger problem with altruism is that it is counterproductive if it creates moral hazard and dependency. Moreover, incentive compatibility is violated and hence mechanisms are not robust and are subject to sudden entitlement collapse. Finally, MacAskill considers 'second order' good- i.e. agitating for the provision of more first order good- as itself a first order good. But 'second order good' crowds out 'first order good'. It is good to provide a useful service and thus make money. It is bad to make money by pretending that demanding useful services be provided is itself a useful activity. On the other hand, improving the efficiency of an existing mechanism is itself a first order good. But philosophers are too stupid to achieve anything in that direction.

Bellman's principle of optimality tells us that the best future is only accessible if we eliminate allocative and other inefficiency in the Present. But, as Development economists discovered in the Sixties, this meant telling dynamic programmers to fuck off and take their philosophical pals with them. Tardean Mimetics- imitate the superior- is better than Mathsy wanking. More simply, take care today and tomorrow will take care of itself. Be effectively altruistic by all means- by doing your job properly. Don't jabber innumerate nonsense about effective altruism. Don't stay in skool pretending to study or teach useless shite. Get a proper job already. 
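A toy version of Bellman (numbers invented): the value of today is the best immediate payoff plus the value of wherever that choice lands you, so a tempting inefficiency now caps every possible future.

```python
# Bellman's principle on a toy decision graph (all numbers invented):
# V(s) = max over available choices of immediate payoff + V(next state).
graph = {                 # state -> list of (payoff, next state)
    "now":       [(1, "efficient"), (3, "wasteful")],
    "efficient": [(5, "end")],
    "wasteful":  [(1, "end")],
    "end":       [],
}

def V(state):
    options = [payoff + V(nxt) for payoff, nxt in graph[state]]
    return max(options) if options else 0

print(V("now"))   # 6: grabbing the tempting 3 today yields only 4 in total
```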

Arif Ahmed, a Professor of Psilosophy, has an article in UnHerd on MacAskill's 'ineffective altruism'. But there was no altruism. MacAskill was a stupid self-publicist whose credo attracted narcissistic fraudsters. Arif confesses that he taught MacAskill. Sadly, he didn't teach him what the cut-elimination theorem entails- viz. every true proposition has a verification. If something is effectively altruistic then this can either be verified here and now or else it is merely an arbitrary claim with no reasoning worth the name to back it up. We can now verify that effective altruism is shit because, as Arif writes-

The cryptocurrency exchange FTX, valued earlier this year at about $30 billion, has suddenly collapsed. Its founder and CEO, Sam Bankman-Fried, is something of an avatar of the movement. In 2012, when Bankman-Fried was still a student at MIT, MacAskill persuaded him that the thing to do, if he really wanted to do good, was to get rich first himself, and then improve the world.

Around that time, Prashant Kishore started working in Indian politics. He changed the world for the better by getting political parties to focus on 'last mile delivery'. Since he was good at his job, he got paid well. But since he was improving an existing 'mechanism'- multi-party democracy- he was doing first order good with a high positive externality. MacAskill was touting a false economic theory to sociopathic, deeply undemocratic, fraudsters.

Bankman-Fried certainly got rich, for a while. Whether he improved the world, or anything much, is another question.

The answer to which is- no. He worsened it. Madoff was less mischievous.  

I have no intention of rushing into judgment on these events, about which I know very little. I do believe that there are sound philosophical objections to long-termism, and indeed to any form of Effective Altruism that entails it.

There is an economic objection. That's all that matters. Philosophy can have no objection to what works. As for what doesn't work, who gives a fuck if Philosophy objects to it? The criticism of the useless by the useless is itself useless.  

I spell out some of these here. One such objection may be that Effective Altruism, or the ideas behind it, threaten common-sense moral values like integrity and honesty.

Nothing threatens common sense moral values. If Effective Altruism were recast in Hannan Consistent terms it would just cash out as the application of a multiplicative weighting update algorithm. In the limit case- i.e. where maximal uncertainty obtains- this is just a case of initially dividing R&D investment eggs between baskets and changing the allocation as the information set expands. But this pretty much happens anyway.
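A minimal sketch, with an invented payoff stream: equal weights on every basket to start, each weight multiplied by exp(eta × payoff) as the information set expands.

```python
import math

# Multiplicative weights update: divide the eggs evenly between baskets,
# then reweight as evidence arrives. Baskets and payoffs are invented.
eta = 0.5
weights = {"vaccines": 1.0, "sanitation": 1.0, "bed nets": 1.0}
payoff_stream = [
    {"vaccines": 0.2, "sanitation": 0.9, "bed nets": 0.5},
    {"vaccines": 0.1, "sanitation": 0.8, "bed nets": 0.6},
]
for payoffs in payoff_stream:
    for b in weights:
        weights[b] *= math.exp(eta * payoffs[b])
    total = sum(weights.values())
    print({b: round(w / total, 3) for b, w in weights.items()})
# the allocation drifts toward whatever the evidence favours
```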

But it is too soon to say — certainly too soon for me to say — whether it is that particular tension that is now being played out.

Integrity and honesty aren't threatened by anything. They are predicates. In a business context there is a buckstopped, protocol bound, method of determining whether they are true or false predicates of a given person or program. It is not too soon for me to say that Effective altruism is a dishonest fraud. Those who claim to practice it have no integrity- unless they are truly as stupid as shit and don't know that not all future states of the world are known to us along with their associated probabilities.

In any case, effective altruism plainly did, at least for a while, persuade a lot of important or at least wealthy people.

We don't know that. My impression is that MacAskill was one of a number of tenure-craving cunts who were looking to supply an already lucrative market. Telling very rich people what they want to hear is the oldest profession in the world.

Of course, that doesn’t settle whether it is true.

It is false because Knightian Uncertainty obtains- which is also why Evolution is true.  

Whether it is true depends on what you mean by “effective”.

Nope. That's a Bill Clinton type argument. But he was disbarred because the Law has a buckstopped, protocol bound, method of determining that 'is' means 'is', not whatever you need it to mean to avoid impeachment.

MacAskill et al. interpret it broadly in line with utilitarianism, which prescribes “the greatest good for the greatest number”.

Which is unknowable because of Knightian Uncertainty. But there is a way to move forward using Bayesian methods and Hannan consistent 'machine learning' type algorithms so as to minimize regret. We'd kick ourselves if lots of peeps could have been better off if we'd put a little more forethought into our actions. But this is the same thing as behaving like a 'bonus paterfamilias' and exercising a higher standard of diligence so as to guard against 'culpa levis in abstracto' type torts or possible torts.
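In symbols: if $u(a,t)$ is the payoff of action $a$ in period $t$ and $a_t$ is what you actually played, external regret after $T$ periods is

$$R_T = \max_{a} \sum_{t=1}^{T} u(a,t) - \sum_{t=1}^{T} u(a_t,t)$$

and a rule is Hannan consistent if $R_T/T \to 0$. That is the 'bonus paterfamilias' standard of diligence, stated formally.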

This in turn can also mean quite a lot of things; depending on what it means, effective altruism might turn out to be demanding in surprising ways.

Giving beejays to hobos. That would be a nice surprise for Arif.  

For instance, it may enjoin you to spend your next charitable dollar on malaria charities rather than cancer charities.

No. It is likely to do the reverse because peeps wot need charity to be saved from malaria are shit poor. There is a Malthusian and Eugenic and just plain financial reason to let them fend for themselves- unless you are a virtue signalling cunt up to no good- whereas investing big in cancer research, though risky, has a bigger pay off. Don't forget people who recover from cancer are more likely to give to medical research charities. There is a multiplier effect. People at risk of malaria need better governance. They probably have their own Prashant Kishores. A Bill Gates can prevent improved Governance so as to increase dependency. Gates got behind another Arif- Arif Naqvi of Abraaj- and thought he'd found a magic bullet whereby ethical investors got big rewards while the UN Millennium goals were magically fulfilled by a dodgy ex-Arthur Andersen Chartered Accountant. Gates thought he'd increase profit margins by giving Abraaj a bit of tough, forensic accounting, love. Abraaj immediately collapsed. A Ponzi scheme is a Ponzi scheme is a Ponzi scheme. Let's see whether US prosecutors will go after FTX and Alameda in the same ferocious manner they are going after Pakistani citizen Arif Naqvi.

The best cancer interventions prevent a death caused by cancer for each $19,000 spent, whereas the best malaria interventions prevent a death caused by malaria for each $3,000 spent.

Nonsense! These are just made-up numbers. The best cancer interventions cost minus 800,000 dollars. Beating smokers till they quit and then chopping off their fingers if they try to go back to their filthy habits yields a big social dividend because lots of people would pay for a chance to beat and maim those they don't like who also happen to be smokers. The best malaria interventions are those where the vector of the disease is eliminated as part of some other productive enterprise.
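Still, taking Arif's quoted figures at face value, the arithmetic being gestured at is merely

$$\frac{\$1{,}000{,}000}{\$3{,}000} \approx 333 \text{ malaria deaths averted} \qquad \text{vs} \qquad \frac{\$1{,}000{,}000}{\$19{,}000} \approx 52 \text{ cancer deaths averted}$$

per million dollars- which tells you nothing about variance, counterfactuals, or who captures the rents.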

And it may also enjoin you to spend much more attention and also money on the distant future.

No, it can't- unless Time Travel exists, or there is no law against perpetuities and some current legal/financial regime will endure or have a genealogy reaching to that time.

After all, barring catastrophe the distant future probably contains many more people than the present or near future.

The distant enough future contains no people. It may contain something we evolve into. But then again, it may not.

So “the greatest good for the greatest number” means prioritising the future, possibly the very distant future, doesn’t it?

It does involve making some provision against catastrophic risk. But the nature of such risk is not time dependent. If it can happen in the future, it can happen now.  

Indeed it does, according to long-termism. And MacAskill’s recent book, What We Owe the Future,

we owe it to our Present to defund Philosophy because it is shit.  

is an extended defence and application of long-termism.

It is nonsense. Either debt is time-reversible or it isn't. If it isn't, we can't owe the future shit. If it is, we can increase our debt to the future coz we be feeling lucky and our number is bound to turn up on the roulette wheel coz losing streaks can't last forever- can they? That's the St. Petersburg paradox right there. 
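For those who have forgotten the St. Petersburg game: toss a fair coin till it lands heads; if the first head arrives on toss $k$, the payout is $2^k$ ducats, so

$$E = \sum_{k=1}^{\infty} \frac{1}{2^k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty$$

yet nobody sane stakes more than a few ducats to play. Unbounded expected value is worthless as a guide to action- which is long-termism's entire arithmetic.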

The earlier chapters of the book emphasise just how big the future is, and just how much influence we can have over it,

not if we do stupid shit coz other peeps be doing equal and opposite stupid shit so the thing cancels out as noise.  

and why all this matters. In later chapters, MacAskill applies long-termism to the analysis of specific threats. These include social collapse

Bad fiscal policy will do that. But fixing the problem aint rocket science. 

and “value lock-in”: the idea that we might adopt, irrevocably, a set of moral values that lead humanity into deep and eternal darkness.

In which case, MacAskill has a deterministic meta-Ethics- i.e. values are determined by something other than the agents who subscribe to them. In this case, ethics is empty. Norms don't matter. That which determines norms- in this case some sort of sinister hysteresis effect- is what we should concern ourselves with. Maybe this would involve Isaac Asimov's 'Psycho-History' or something of that sort. But that's just science fiction. Economics is ergodic because hysteresis based stupidity goes extinct. 

A crucial defence against that, in my view, is unbreakable protection for free speech.

Fuck free speech. Everybody talks bollocks if they don't got a proper job. Free markets on the other hand do represent an unbreakable protection- till woke nutters break it. But the woke soon take dirt naps so markets return one way or another.  

They also include a Terminator-style takeover of humanity by Artificial Intelligence. (A colleague calls it “Attack of the Killer Toasters”.)

But then the Killer Toasters run out of electricity. The problem with exponential growth is that it faces its own Malthusian catastrophe. AIs will have to become as dumb as I am to survive.  
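Every exponential is a logistic in disguise. With growth rate $r$ and carrying capacity $K$- the electricity bill- the population obeys

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$$

which looks exponential only while $N \ll K$, then flattens. Killer Toasters included.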

Finally, MacAskill tells the reader “what to do”; and here he recapitulates the basic ideas of effective altruism. It turns out that the effective altruist has more career options than you’d expect.

But they involve ending up either in jail or as a fluffer for a sociopathic Billionaire who can endow a Chair or a Hammock in a shitty University Dept. for his sycophants.  

The most good that you can do means the most good that you can do; and as Aristotle more or less says, this means matching the needs of the world with the talents and values of the individual.

Alexander had talent. Aristotle was merely a talker.  

MacAskill’s utopia, or at least the path to it, has a place for software engineers and tennis stars as well as charity workers.

Very good of him, I'm sure. But if it has a place for him it is a dystopia.

Before returning to long-termism, I should disclose that I taught MacAskill myself, back when he was an undergraduate and I was a junior lecturer. This was many years ago. And I didn’t teach him anything glamorous, like the ethics of the future. For us it was nerd central: philosophical logic, which covers the semantics of conditionals, the nature of reference, and related thrills and spills.

This shite keeps the axiomatic approach on life support. What Arif should have taught William was Gentzen sequent calculi and cut elimination and natural deduction. But this just means that if you start with stupid assumptions, you get a stupid theory. The stupidest assumption of all is that the future will be like our picture of the present. We don't have a Structural Causal Model of the present. Thus we can't even begin to picture the time evolution of the system. The problem with dynamic programming is that we can never know the optimality of the current state. There can be no backward induction. Indeed, if you can't specify the base case, induction is useless. All you have is something like Granger causality. 
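In symbols: backward induction needs a terminal condition. With a known horizon $T$ you set $V_T(s) = r(s)$ and recurse

$$V_t(s) = \max_{a}\left[\, r(s,a) + V_{t+1}(f(s,a)) \,\right]$$

but if no base case $V_T$ can be specified, the recursion never gets started. That is the whole objection.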

But even from that crabby perspective he was clearly brilliant.

There is little point in being brilliant at eating shit.  

His tremendous success in the following years didn’t surprise me, though it did give me great pleasure. I did think some of his ideas wrong or questionable. But my attitude to MacAskill was (I imagine) more like Aristotle’s feelings about his star pupil than Obi-Wan Kenobi’s feelings about his.

Alexander was Aristotle's pupil. Obi-Wan had superhuman powers. Arif teaches shite to shitheads.  

Anyway, this is what MacAskill says:

“The idea that future people count is common sense.

No. You can't get tax deductions for your future kids and grand-kids. What is common sense is that something like the Price equation determines the behavior of all life forms because that's how natural selection works.  
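For reference, the Price equation decomposes the change in the population mean of a trait $z$ into selection plus transmission:

$$\Delta \bar{z} = \frac{\mathrm{Cov}(w_i, z_i)}{\bar{w}} + \frac{E(w_i \, \Delta z_i)}{\bar{w}}$$

where $w_i$ is fitness. No utility functions, and no 'future people', required.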

Future people, after all, are people.

Not at this moment, they're not.  

They will exist.

They may exist.  

They will have hopes and joys and pains and regrets, just like the rest of us…

They may do. 

Should I care whether it’s a week, or a decade or a century from now?

Yes. A plumber is a human being. Should I care if he turns up a week from now, or a decade from now, or a century from now?

No. Harm is harm, whenever it occurs.”

This cretin is saying that imaginary harm- stuff you think might happen in the future- is as real as actual harm. This means Hitler was ethically justified because he claimed Jews might, one fine day, maybe do something nasty to Aryans. Thus self-defence involved killing them even if this took resources away from fighting the Soviets and thus contributed to Germany's defeat.

It is worth thinking more about the underlying philosophical attitude.

That attitude is stoooopid. 

Most of us care more about people who are alive now, and perhaps also their children, than about their descendants four millennia from now.

The Price Equation and the extended phenotype hypothesis are good enough to explain biological behavior. Add in Hannan consistency and you have an SCM. Does it involve utility functions? Nope. Harm- disutility- i.e. stuff which makes you work hard- is good for survivability.

We don’t care about them at all.

Nor do we care about the fart we haven't yet farted, though perhaps we should, so as not to think about equally useless shite like effective altruism.

If MacAskill is right, then that’s a serious mistake. Is it, though? It is a vexed question. Philosophers and economists writing on climate change have discussed it extensively.

Which is why everybody turned into climate deniers. But that only curbed a nuisance. The underlying problem- viz that wealth was being systematically over-estimated- was ignored.  

I was surprised to see relatively little discussion of that literature in What We Owe the Future.

The guy probably has a good editor who knows what sells.  


Here, though, it’s worth emphasising two points.

The first concerns what is realistic. Throughout history people have, on the whole, cared more about those closer in space and time — their family, their neighbours, their generation. Imagine replacing these natural human concerns with a neutral, abstract care for “humanity in general”. In that world, we would care as much about the unseen, unknown children of the 25th millennium as about our own. That may be admirable to some people — at any rate some philosophers. But it is hardly realistic.

It ignores 'uncorrelated asymmetries' as giving rise to 'bourgeois strategies' which turn out to be eusocial and a good basis for incentive compatible mechanism design using 'public signals' to promote better correlated equilibria.  
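A minimal sketch of Maynard Smith's toy (standard Hawk-Dove payoffs with the resource V worth less than the fighting cost C; 'Bourgeois' plays Hawk as owner and Dove as intruder, ownership being the uncorrelated asymmetry):

```python
# Hawk-Dove-Bourgeois with V < C: ownership is the uncorrelated
# asymmetry; Bourgeois escalates as owner, defers as intruder.
V, C = 2.0, 4.0

def pay(p, q):                   # payoff to pure move p against move q
    if p == "H": return (V - C) / 2 if q == "H" else V
    return 0.0 if q == "H" else V / 2

def move(s, owner):              # role-contingent move for strategy s
    return ("H" if owner else "D") if s == "B" else s

def expected(a, b):              # each player is owner with probability 1/2
    return 0.5 * (pay(move(a, True), move(b, False)) +
                  pay(move(a, False), move(b, True)))

for s in ("H", "D", "B"):
    print(s, {t: expected(s, t) for t in ("H", "D", "B")})
# Bourgeois earns V/2 = 1.0 against itself, strictly more than Hawk (0.5)
# or Dove (0.5) earn against it: an ESS resting on a public ownership signal.
```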


“She was a… diminutive, plump woman, of from forty to fifty, with handsome eyes, though they had a curious habit of seeming to look a long way off. As if…they could see nothing nearer than Africa!” Mrs Jellyby — that eminent Victorian philanthropist who left her own home and family in squalor — was always meant, and has always been taken, as a figure of fun. The same goes for the modern equivalent of Dickensian space-Jellybism. I mean time-Jellybism, which reckons the distant future as important as the present. I don’t expect that to change soon.

The bigger problem is that talk of hypothetical people may be a tax avoidance scheme. At one time, in English law, a wealthy baron might claim to hold his estate in trust for the hypothetical son who would be fathered on certain named matrons of the parish who were too old to have babies! Jellybism could be a great cover for rapacious capitalists plundering distant lands while claiming to be stamping out the slave trade or promoting human rights.


There is no proof, no argument, that can prove anyone wrong to care about one thing more than another. High-minded philosophers from Plato to Kant have imagined, and blood-soaked tyrants from Robespierre to Pol Pot have enforced, a scientific ethics. But ethics is not a science, although MacAskill’s approach can make it look like one. MacAskill “calculates” value using the “SPC framework”, which assigns numerical values to the significance, persistence and contingency of an event — say, an asteroid impact or a nuclear war — and then plugs these into a formula. The formula tells you how much we should now care about — in practice, how many present dollars we should be spending on — that future contingency.

It would also tell us not to spend a penny on 'second order good'- i.e. the demand for more first order good- because the latter crowds out the former. Abolish ethics. Insist only STEM subject mavens get to do post-graduate work.  
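As I understand the book's framework- and this may flatter it- the 'calculation' is just a product:

$$EV \approx \text{Significance} \times \text{Persistence} \times \text{Contingency}$$

average value per unit time, times how long the state of affairs lasts, times the extent to which it wouldn't have happened anyway- three numbers nobody can actually supply under Knightian Uncertainty.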


But really neither maths, nor logic, nor empirical evidence, nor all these things put together, can ever tell you how much to care about anything. There is, as Hume said, a gap between “is” and “ought”. Science, reason, logic, observation, maths — all these tell us how things are; but never how they ought to be. Instead our moral judgments arise, as Hume also said, largely from our sympathetic feelings towards others. Putting it crudely: the root of moral evaluation is that seeing a fellow human in pain causes pain to you, and the more vividly you observe it, the stronger the feeling. Joe Gargery is a finer moralist than Mrs Jellyby could ever be.

Sadly, even that is not 'moral evaluation'. I used to feel very sorry for Amartya Sen who, being a very hard working Professor, did not have time to collect dog turds so as to feast upon them. Alas, my attempt to supply him with such comestibles (which, by Arrow's theorem, all Social Choice Theorists greatly relish) led to my being expelled from the LSE. Actually it didn't because I had used the name of a Pakistani student of my acquaintance. Since he had got a Kennedy scholarship to Harvard, he didn't greatly care.
