Saturday 19 November 2022

Arif Ahmed on ineffective altruism

 In previous posts, dating back to 2015, I pointed out that William MacAskill's 'Normative Uncertainty' was simply stoooopid. The guy didn't know that Expected Utility theory is useless where Knightian Uncertainty obtains. Regret minimization is the way to go. 

The bigger problem with altruism is that it is counterproductive if it creates moral hazard and dependency. Moreover, incentive compatibility is violated and hence mechanisms are not robust and are subject to sudden entitlement collapse. Finally, MacAskill considers 'second order' good- i.e. agitating for the provision of more first order good- as itself a first order good. But 'second order good' crowds out 'first order good'. It is good to provide a useful service and thus make money. It is bad to make money by pretending that demanding useful services be provided is itself a useful activity. On the other hand, improving the efficiency of an existing mechanism is itself a first order good. But philosophers are too stupid to achieve anything in that direction.

Bellman's principle of optimality tells us that the best future is only accessible if we eliminate allocative and other inefficiency in the Present. But, as Development economists discovered in the Sixties, this meant telling dynamic programmers to fuck off and take their philosophical pals with them. Tardean Mimetics- imitate the superior- is better than Mathsy wanking. More simply, take care today and tomorrow will take care of itself. Be effectively altruistic by all means- by doing your job properly. Don't jabber innumerate nonsense about effective altruism. Don't stay in skool pretending to study or teach useless shite. Get a proper job already. 
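
For those who care, here is a minimal sketch of what Bellman's principle actually amounts to- a finite-horizon recursion, with toy payoff numbers entirely my own, in which any inefficiency tolerated at the start contaminates the value of every later period:

```python
# Toy finite-horizon dynamic programme (all numbers hypothetical).
# Bellman's principle: V_t(s) = max_a [ r(s, a) + V_{t+1}(next(s, a)) ].
# The recursion runs backwards from an assumed terminal value, and any
# inefficiency left uncorrected at t = 0 shows up in every later term.

states = ["inefficient", "efficient"]
actions = ["drift", "reform"]

def reward(state, action):
    # Illustrative payoffs: reform costs something now; drifting in an
    # inefficient state earns nothing, ever.
    return {("inefficient", "drift"): 0, ("inefficient", "reform"): -1,
            ("efficient", "drift"): 2, ("efficient", "reform"): 1}[(state, action)]

def transition(state, action):
    return "efficient" if action == "reform" else state

def solve(horizon):
    V = {s: 0 for s in states}          # stipulated terminal values
    policy = {}
    for t in reversed(range(horizon)):
        V_new, policy_t = {}, {}
        for s in states:
            best = max(actions, key=lambda a: reward(s, a) + V[transition(s, a)])
            policy_t[s] = best
            V_new[s] = reward(s, best) + V[transition(s, best)]
        V, policy[t] = V_new, policy_t
    return V, policy

values, plan = solve(horizon=10)
print(values)    # {'inefficient': 17, 'efficient': 20}
print(plan[0])   # {'inefficient': 'reform', 'efficient': 'drift'}
```

Note that the whole exercise presupposes that the transition structure and the terminal values are already known- which, where Knightian Uncertainty obtains, they aren't.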

Arif Ahmed, a Professor of Psilosophy, has an article in Unherd on MacAskill's 'ineffective altruism'. But there was no altruism. MacAskill was a stupid self-publicist whose credo attracted narcissistic fraudsters. Arif confesses that he taught MacAskill. Sadly, he didn't teach him what the cut-elimination theorem entails- viz. every true proposition has a verification. If something is effectively altruistic then this can either be verified here and now or else it is merely an arbitrary claim with no reasoning worth the name to back it up. We can now verify that effective altruism is shit because, as Arif writes-

The cryptocurrency exchange FTX, valued earlier this year at about $30 billion, has suddenly collapsed. Its founder and CEO, Sam Bankman-Fried, is something of an avatar of the movement. In 2012, when Bankman-Fried was still a student at MIT, MacAskill persuaded him that the thing to do, if he really wanted to do good, was to get rich first himself, and then improve the world.

Around that time, Prashant Kishore started working in Indian politics. He changed the world for the better by getting political parties to focus on 'last mile delivery'. Since he was good at his job, he got paid well. But since he was improving an existing 'mechanism'- multi-party democracy- he was doing first order good with a high positive externality. MacAskill was touting a false economic theory to sociopathic, deeply undemocratic, fraudsters.

Bankman-Fried certainly got rich, for a while. Whether he improved the world, or anything much, is another question.

The answer to which is- no. He worsened it. Madoff was less mischievous.  

I have no intention of rushing into judgment on these events, about which I know very little. I do believe that there are sound philosophical objections to long-termism, and indeed to any form of Effective Altruism that entails it.

There is an economic objection. That's all that matters. Philosophy can have no objection to what works. As for what doesn't work, who gives a fuck if Philosophy objects to it? The criticism of the useless by the useless is itself useless.  

I spell out some of these here. One such objection may be that Effective Altruism, or the ideas behind it, threaten common-sense moral values like integrity and honesty.

Nothing threatens common sense moral values. If Effective Altruism were recast in Hannan Consistent terms it would just cash out as the application of a multiplicative weighting update algorithm. In the limit case- i.e. where maximal uncertainty obtains- this is just a case of initially dividing R&D investment eggs between baskets and changing the allocation as the information set expands. But this pretty much happens anyway.
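
To be concrete, here is a minimal sketch of that multiplicative weighting update- the Hedge rule, with invented loss numbers and an arbitrary learning rate- just to show there is nothing philosophical about it:

```python
import math

# Hedge / multiplicative weights - a minimal sketch with invented loss numbers.
# Each "expert" is a basket for the R&D budget; weights start out equal
# (maximal uncertainty) and are shrunk multiplicatively whenever a basket
# disappoints, then renormalised into an allocation.

def hedge(loss_rounds, eta=0.5):
    n = len(loss_rounds[0])
    weights = [1.0 / n] * n                          # divide the eggs equally to begin with
    for losses in loss_rounds:
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        total = sum(weights)
        weights = [w / total for w in weights]       # renormalise
    return weights

# Five rounds of hypothetical losses for three baskets.
rounds = [[0.9, 0.2, 0.5],
          [0.8, 0.3, 0.6],
          [0.7, 0.1, 0.5],
          [0.9, 0.2, 0.4],
          [0.8, 0.3, 0.5]]
print(hedge(rounds))    # the allocation drifts towards the second basket
```

Hannan consistency simply means that the average regret of this allocation, measured against the best fixed basket in hindsight, shrinks to zero as the rounds pile up.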

But it is too soon to say — certainly too soon for me to say — whether it is that particular tension that is now being played out.

Integrity and honesty aren't threatened by anything. They are predicates. In a business context there is a buckstopped, protocol bound, method of determining whether they are true or false predicates of a given person or program. It is not too soon for me to say that Effective altruism is a dishonest fraud. Those who claim to practice it have no integrity- unless they are truly as stupid as shit and don't know that not all future states of the world are known to us along with their associated probability.  

In any case, effective altruism plainly did, at least for a while, persuade a lot of important or at least wealthy people.

We don't know that. My impression is that MacAskill was one of a number of tenure craving cunts who were looking to supply an already lucrative market. Telling very rich people what they want to hear is the oldest profession in the world.

Of course, that doesn’t settle whether it is true.

It is false because Knightian Uncertainty obtains- which is also why Evolution is true.  

Whether it is true depends on what you mean by “effective”.

Nope. That's a Bill Clinton type argument. But he was disbarred because the Law has a buckstopped, protocol bound, method of determining that 'is' means 'is', not whatever you need it to mean to avoid impeachment.

MacAskill et al. interpret it broadly in line with utilitarianism, which prescribes “the greatest good for the greatest number”.

Which is unknowable because of Knightian Uncertainty. But there is a way to move forward using Bayesian methods and Hannan consistent 'machine learning' type algorithms so as to minimize regret. We'd kick ourselves if lots of peeps could have been better off if we'd put a little more forethought into our actions. But this is the same thing as behaving like a 'bonus paterfamilias' and exercising a higher standard of diligence so as to guard against 'culpa levis in abstracto' type torts or possible torts.

This in turn can also mean quite a lot of things; depending on what it means, effective altruism might turn out to be demanding in surprising ways.

Giving beejays to hobos. That would be a nice surprise for Arif.  

For instance, it may enjoin you to spend your next charitable dollar on malaria charities rather than cancer charities.

No. It is likely to do the reverse because peeps wot need charity to be saved from malaria are shit poor. There is a Malthusian and Eugenic and just plain financial reason to let them fend for themselves- unless you are a virtue signalling cunt up to no good- whereas investing big in cancer research, though risky, has a bigger pay off. Don't forget people who recover from cancer are more likely to give to medical research charities. There is a multiplier effect. People at risk of malaria need better governance. They probably have their own Prashant Kishores. A Bill Gates can prevent improved Governance so as to increase dependency. Gates got behind another Arif- Arif Naqvi of Abraaj- and thought he'd found a magic bullet whereby ethical investors got big rewards while the UN Millennium goals were magically fulfilled by a dodgy ex-Arthur Andersen Chartered Accountant. Gates thought he'd increase profit margins by giving Abraaj a bit of tough, forensic accounting, love. Abraaj immediately collapsed. A Ponzi scheme is a Ponzi scheme is a Ponzi scheme. Let's see whether US prosecutors will go after FTX and Alameda in the same ferocious manner they are going after Pakistani citizen Arif Naqvi.

The best cancer interventions prevent a death caused by cancer for each $19,000 spent, whereas the best malaria interventions prevent a death caused by malaria for each $3,000 spent.

Nonsense! These are just made-up numbers. The best cancer interventions cost minus 800,000 dollars.  Beating smokers till they quit and then chopping off their fingers if they try to go back to their filthy habits yield a big social dividend because lots of people would pay for a chance to beat and maim those they don't like who also happen  to be smokers. The best malaria interventions are those where the vector of the disease is eliminated as part of some other productive enterprise. 

And it may also enjoin you to spend much more attention and also money on the distant future.

No, it can't unless Time Travel exists or there is no law against perpetuities and some current legal/financial regime will endure or have a genealogy reaching to that time.

After all, barring catastrophe the distant future probably contains many more people than the present or near future.

A distant enough future contains no people. It may contain something we evolve into. But then again, it may not.

So “the greatest good for the greatest number” means prioritising the future, possibly the very distant future, doesn’t it?

It does involve making some provision against catastrophic risk. But the nature of such risk is not time dependent. If it can happen in the future, it can happen now.  

Indeed it does, according to long-termism. And MacAskill’s recent book, What We Owe the Future,

we owe it to our Present to defund Philosophy because it is shit.  

 is an extended defence and application of long-termism.

It is nonsense. Either debt is time-reversible or it isn't. If it isn't, we can't owe the future shit. If it is, we can increase our debt to the future coz we be feeling lucky and our number is bound to turn up on the roulette wheel coz losing streaks can't last forever- can they? That's the St. Petersburg paradox right there. 
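
For anybody who thinks losing streaks can't last, here is a minimal sketch of the St. Petersburg game- payoff convention and trial count my own- where every term of the expectation contributes one unit, so the 'expected' value diverges, yet any finite run of play stays stubbornly small:

```python
import random

# St. Petersburg game, minimal sketch (payoff convention and trial count mine):
# flip a fair coin until the first head; the payoff is 2 ** (number of flips).
# Each term of the expectation contributes (1/2**k) * 2**k = 1, so the series
# diverges - but simulated play with a finite number of trials stays small.

def play():
    flips = 1
    while random.random() < 0.5:    # tails - keep flipping
        flips += 1
    return 2 ** flips

trials = 100_000
average = sum(play() for _ in range(trials)) / trials
print(average)    # typically a small double-digit number, nowhere near infinity
```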

The earlier chapters of the book emphasise just how big the future is, and just how much influence we can have over it,

not if we do stupid shit coz other peeps be doing equal and opposite stupid shit so the thing cancels out as noise.  

and why all this matters. In later chapters, MacAskill applies long-termism to the analysis of specific threats. These include social collapse

Bad fiscal policy will do that. But fixing the problem aint rocket science. 

and “value lock-in”: the idea that we might adopt, irrevocably, a set of moral values that lead humanity into deep and eternal darkness.

In which case, MacAskill has a deterministic meta-Ethics- i.e. values are determined by something other than the agents who subscribe to them. In this case, ethics is empty. Norms don't matter. That which determines norms- in this case some sort of sinister hysteresis effect- is what we should concern ourselves with. Maybe this would involve Isaac Asimov's 'Psycho-History' or something of that sort. But that's just science fiction. Economics is ergodic because hysteresis based stupidity goes extinct. 

(A crucial defence against that, in my view, is unbreakable protection for free speech.)

Fuck free speech. Everybody talks bollocks if they don't got a proper job. Free markets on the other hand do represent an unbreakable protection- till woke nutters break it. But the woke soon take dirt naps so markets return one way or another.  

They also include a Terminator-style takeover of humanity by Artificial Intelligence. (A colleague calls it “Attack of the Killer Toasters”.)

But then the Killer Toasters run out of electricity. The problem with exponential growth is that it faces its own Malthusian catastrophe. AIs will have to become as dumb as I am to survive.  

Finally, MacAskill tells the reader “what to do”; and here he recapitulates the basic ideas of effective altruism. It turns out that the effective altruist has more career options than you’d expect.

But they involve ending up either in jail or as a fluffer for a sociopathic Billionaire who can endow a Chair or a Hammock in a shitty University Dept. for his sycophants.  

The most good that you can do means the most good that you can do; and as Aristotle more or less says, this means matching the needs of the world with the talents and values of the individual.

Alexander had talent. Aristotle was merely a talker.  

MacAskill’s utopia, or at least the path to it, has a place for software engineers and tennis stars as well as charity workers.

Very good of him, I'm sure. But if it has a place for him, it is a dystopia.

Before returning to long-termism, I should disclose that I taught MacAskill myself, back when he was an undergraduate and I was a junior lecturer. This was many years ago. And I didn’t teach him anything glamorous, like the ethics of the future. For us it was nerd central: philosophical logic, which covers the semantics of conditionals, the nature of reference, and related thrills and spills.

This shite keeps the axiomatic approach on life support. What Arif should have taught William was Gentzen sequent calculi and cut elimination and natural deduction. But this just means that if you start with stupid assumptions, you get a stupid theory. The stupidest assumption of all is that the future will be like our picture of the present. We don't have a Structural Causal Model of the present. Thus we can't even begin to picture the time evolution of the system. The problem with dynamic programming is that we can never know the optimality of the current state. There can be no backward induction. Indeed, if you can't specify the base case, induction is useless. All you have is something like Granger causality. 
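
To see why the base case matters, here is a minimal sketch- a toy optimal-stopping problem with an offer sequence I have invented- in which backward induction only runs because a terminal value has been stipulated:

```python
# Backward induction for a toy optimal-stopping problem (offer sequence invented):
# each period you may sell an asset at the current offer or wait. The recursion
# V_t = max(offer_t, V_{t+1}) only runs because a terminal value is stipulated;
# change that base case and every "optimal" decision upstream changes with it.

offers = [3, 5, 2, 8, 4]              # hypothetical offers, known in advance here

def solve(terminal_value):
    V = terminal_value                # the assumed base case - everything hangs on it
    plan = []
    for offer in reversed(offers):
        if offer >= V:                # selling now beats the continuation value
            plan.append("sell")
            V = offer
        else:
            plan.append("wait")
    plan.reverse()
    return V, plan

print(solve(terminal_value=0))    # (8, ['wait', 'wait', 'wait', 'sell', 'sell'])
print(solve(terminal_value=10))   # (10, ['wait', 'wait', 'wait', 'wait', 'wait'])
```

Change the stipulated base case and every 'optimal' decision upstream changes with it. That is all backward induction can ever give you.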

But even from that crabby perspective he was clearly brilliant.

There is little point in being brilliant at eating shit.  

His tremendous success in the following years didn’t surprise me, though it did give me great pleasure. I did think some of his ideas wrong or questionable. But my attitude to MacAskill was (I imagine) more like Aristotle’s feelings about his star pupil than Obi-Wan Kenobi’s feelings about his.

Alexander was Aristotle's pupil. Obi-Wan had superhuman powers. Arif teaches shite to shitheads.  

Anyway, this is what MacAskill says:

“The idea that future people count is common sense.

No. You can't get tax deductions for your future kids and grand-kids. What is common sense is that something like the Price equation determines the behavior of all life forms because that's how natural selection works.  
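
For the record, the Price equation is just bookkeeping- a selection term plus a transmission term- and a minimal sketch, with fitnesses and trait values invented purely for illustration, computes it in a few lines:

```python
# Price equation, minimal sketch with invented numbers:
#   delta_z_bar = Cov(w, z) / w_bar  +  E[w * delta_z] / w_bar
# where w_i is the fitness of parent i and z_i is its trait value.

w  = [1.0, 2.0, 3.0]            # hypothetical fitnesses (offspring counts)
z  = [0.1, 0.4, 0.9]            # hypothetical parental trait values
dz = [0.0, 0.05, -0.05]         # hypothetical transmission bias per lineage

n = len(w)
w_bar = sum(w) / n
z_bar = sum(z) / n
cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n
transmission = sum(wi * dzi for wi, dzi in zip(w, dz)) / n

delta_z_bar = cov_wz / w_bar + transmission / w_bar
print(delta_z_bar)    # selection term plus transmission term, about 0.125 here
```

No utility function and no view about the welfare of the 25th millennium- just the covariance between fitness and a trait, plus a transmission bias.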

Future people, after all, are people.

Not at this moment, they're not.  

They will exist.

They may exist.  

They will have hopes and joys and pains and regrets, just like the rest of us…

They may do. 

Should I care whether it’s a week, or a decade or a century from now?

Yes. A plumber is a human being. Should I care if he turns up a week from now or a decade from now or a century from now?

No. Harm is harm, whenever it occurs.”

This cretin is saying that imaginary harm- stuff you think might happen in the future- is as real as actual harm. This means Hitler was ethically justified because he claimed Jews might, one fine day, maybe do something nasty to Aryans. Thus self-defence involved killing them even if this took resources away from fighting the Soviets and thus contributed to Germany's defeat.

It is worth thinking more about the underlying philosophical attitude.

That attitude is stoooopid. 

Most of us care more about people who are alive now, and perhaps also their children, than about their descendants four millennia from now.

The Price Equation and the extended phenotype hypothesis are good enough to explain biological behavior. Add in Hannan consistency and you have an SCM. Does it involve utility functions? Nope. Harm- disutility- i.e. stuff which makes you work hard- is good for survivability.

We don’t care about them at all.

Nor do we care about the fart we haven't yet farted, though perhaps we should so as not to think about equally useless shite like effective altruism.

If MacAskill is right, then that’s a serious mistake. Is it, though? It is a vexed question. Philosophers and economists writing on climate change have discussed it extensively.

Which is why everybody turned into climate deniers. But that only curbed a nuisance. The underlying problem- viz that wealth was being systematically over-estimated- was ignored.  

I was surprised to see relatively little discussion of that literature in What We Owe the Future.

The guy probably has a good editor who knows what sells.  

Here, though, it’s worth emphasising two points.

The first concerns what is realistic. Throughout history people have, on the whole, cared more about those closer in space and time — their family, their neighbours, their generation. Imagine replacing these natural human concerns with a neutral, abstract care for “humanity in general”. In that world, we would care as much about the unseen, unknown children of the 25th millennium as about our own. That may be admirable to some people — at any rate some philosophers. But it is hardly realistic.

“She was a… diminutive, plump woman, of from forty to fifty, with handsome eyes, though they had a curious habit of seeming to look a long way off. As if…they could see nothing nearer than Africa!” Mrs Jellyby — that eminent Victorian philanthropist who left her own home and family in squalor — was always meant, and has always been taken, as a figure of fun. The same goes for the modern equivalent of Dickensian space-Jellybism. I mean time-Jellybism, which reckons the distant future as important as the present. I don’t expect that to change soon.

None of this is surprising. There is no proof, no argument, that can prove anyone wrong to care about one thing more than another. High-minded philosophers from Plato to Kant have imagined, and blood-soaked tyrants from Robespierre to Pol Pot have enforced, a scientific ethics. But ethics is not a science, although MacAskill’s approach can make it look like one. MacAskill “calculates” value using the “SPC framework”, which assigns numerical values to the significance, persistence and contingency of an event — say, an asteroid impact or a nuclear war — and then plugs these into a formula. The formula tells you how much we should now care about — in practice, how many present dollars we should be spending on — that future contingency.

But really neither maths, nor logic, nor empirical evidence, nor all these things put together, can ever tell you how much to care about anything. There is, as Hume said, a gap between “is” and “ought”. Science, reason, logic, observation, maths — all these tell us how things are; but never how they ought to be. Instead our moral judgments arise, as Hume also said, largely from our sympathetic feelings towards others. Putting it crudely: the root of moral evaluation is that seeing a fellow human in pain causes pain to you, and the more vividly you observe it, the stronger the feeling. Joe Gargery is a finer moralist than Mrs Jellyby could ever be.

And is it any wonder that our strongest feelings — and so our deepest moral concerns — are for those that we see most often; those most like us; those who inhabit our time, and not the unknown future victims of unknown future catastrophes? And does anyone seriously expect this to change? As Swift said, you could never reason a man out of something he was never reasoned into. As for the time-Jellybys, who would sacrifice God knows how many people to material poverty today for a 1% shot at an interplanetary, interstellar, intergalactic future in 25000AD — though MacAskill is usually more careful — well, if all they ever get is mockery, they will have got off lightly.

The second point is that it’s hardly obvious, even from a long-term perspective, that we should care more about our descendants in 25000AD — not at the expense of our contemporaries. To see this, we can apply a thought-experiment owed to the Australian philosopher Frank Jackson. Suppose you are a senior policeman controlling a large demonstration. You have a hundred officers. You want to distribute them through the crowd to maximise your chances of spotting and extinguishing trouble. There are two ways to do it — you might call them the “Scatter” plan and the “Sector” plan. The Scatter plan is as follows: each officer keeps an eye on the whole crowd. If he spots a disturbance, he runs off to that part of the crowd to deal with it.

The Sector plan is as follows: each officer surveys and controls one sector of the crowd. If she spots a disturbance in her sector, she deals with it. But she only focuses on her own sector. She doesn’t look for trouble in any other sector. And she won’t deal with trouble outside her sector if it arises. What works better? I have described it so abstractly that you couldn’t say. It depends on the details. But the sector plan might work better. If each policeman is short-sighted and slow, each additional unit of attention might be better focused on problems that she can effectively address (those in her sector) rather than the ones that she can’t.

The analogy is obvious. It might be better for everyone in the crowd if each policeman were to concentrate on what was near in space. And it may be better, for everyone in each future generation, if each generation were to concentrate on what is near to it in time. This means you, your community, your children and grandchildren. Each generation would then enjoy the focused attention and concern of its own and the two preceding generations. On the other scheme, the long-termist one, each one gets marginal attention from all preceding generations. And each of those must also think about thousands of other generations. And most of them are inevitably ignorant about the problems facing this one.

Do we, today, think we would be much better off if the monks and barons of 1215 had spent serious time addressing problems they then expected us to face in 2022? — say, how to control serfs on the Moon, or how bodily resurrection could be possible for a cannibal? (Thomas Aquinas gave serious thought to that one.) No: none of that would have done any good. The people of that time were like the short-sighted policemen — and all the better for it. Magna Carta was signed in 1215, and it remains a beacon to the world. But it came about not through the Barons’ high-minded concern for the future, but through their ruthless focus on the present. About one third of the way through What We Owe the Future, there is a passage that clearly illustrates its utopianism. MacAskill writes about the long reflection: “a stable state of the world in which we are safe from calamity and we can reflect on and debate the nature of the good life, working out what the most flourishing society would be”.

He continues:

“It’s worth spending five minutes to decide where to spend two hours at dinner; it’s worth spending months to choose a profession for the rest of one’s life. But civilization might last millions, billions, or even trillions of years. It would therefore be worth spending many centuries to ensure that we’ve really figured things out before we take irreversible actions like locking in values or spreading across the stars.”

If a 400-year ethics seminar appeals to anyone, then I suppose it would be people like me, who make a living out of that kind of thing. But barring pandemic catatonia, even six or seven decades earnestly discussing Mill, Parfit and Sidgwick will leave most of us pining for anything that could raise the temperature or lower the tone — another Nero, say, or the Chuckle Brothers.

More seriously, there may be nothing to “figure out”. Liberty, justice, equality, social cohesion, material well-being – we all care about these things, but we all weight them differently. There is no right answer — there is nothing to “figure out” — about how best to weight them. As if the upshot of all this discussion would be a final, ideal system, which the statesmen-philosophers of tomorrow could impose on their unwilling subjects with a clear conscience. Not that we shouldn’t discuss these things. On the contrary; but let us not pretend that any ethics — even the “mathematical” ethics of Derek Parfit or Will MacAskill — could ever justify much coercion of anyone, ever.
