Monday, 11 August 2025

The imbecility of 'Moral Mathematics'.

Mathematics has had a big impact on Philosophy and 'Moral Science' (which included Economics or 'Political Arithmetic'). Equally, Mathematics fed upon ideas from the Law and Politics- e.g. kategoria meant 'accusation', then the category of offense alleged, and then became the mathematical 'category'. 

We may distinguish between the impact of

1) Pure Mathematics- e.g. Euclid's axiomatic method as inspiring the attempt to reduce morality to a deontic logic, or Umaswati's theory of karmic obstructors which arose out of the mathematics of Permutations and Combinations. 

2) Applied mathematics- e.g. game theory, decision theory, modelling, forecasting etc. Sadly, even if people got the math right, they might mis-specify the problem. Nobody cared because it was a great excuse to write pseudo-mathsy papers of a meaningless or mischievous kind. 

I suppose, we might also speak of the ethical value of mathematical training in that it makes for greater rigour in thought and clarity in expression. Some might say that mathematics is the gateway into a Platonic world of 'forms' or ideas. The practice of mathematics involves letting go of the quotidian, self-interested, world and entering a transcendental realm. 

Is this what Elad Uzan is getting at in his essay for Aeon magazine titled 'Moral mathematics: Subjecting the problems of ethics to the cool quantifications of logic and probability can help us to be better people'?

Sadly no. He is merely talking about some stupid shit non-mathematicians wrote because they were teaching nonsense to imbeciles. 

The relationship between mathematics and morality is easy to think about but hard to understand.

It is difficult to think about math because math is difficult. But mathematical methods are easy to understand. Moreover, we get that you need to be scrupulous and open to criticism if you want to be accepted in the mathematical community. If you tell lies and refuse to admit your mistakes, you will be shunned. There is little point proving the Riemann hypothesis, as D.D. Kosambi was wont to do, if everybody else is laughing at you. 

Suppose Jane sees five people drowning on one side of a lake and one person drowning on the other side. There are life-preservers on both sides of the lake. She can either save the five or the one, but not both; she clearly needs to save the five.

Why? I suppose the motive for saving people is the reputational benefit or reward. Save five people and five are grateful to you. The reward is bigger than if you save just one- unless the dude is a billionaire.  

This is a simple example of the use of mathematics to make a moral decision – five is greater than one, so Jane should save the five.

It isn't a moral decision. There is an expectation of a reward- which may only be reputational or the feeling of having made a big difference. 

Moral mathematics is the application of mathematical methods, such as formal logic

which isn't necessarily mathematical. 

and probability,

such calculations can be 'outsourced'. It is not intrinsic to the activity.  

to moral problems. Morality involves moral concepts such as good and bad, right and wrong.

It may do. It may not. One can have a morality which says 'nothing is good or bad, save thinking makes it so'. It may be this leads to superior conduct or a higher ethical state.  

But morality also involves quantitative concepts, such as harming more or fewer persons,

it may do. It may not. All one can say is that decision theory involves quantitative concepts. A morality may have an associated decision theory. But then again, it may not. Either way, decision theory should be left to smart people who are good at math. They may know or care nothing about morality.  

and taking actions that have a higher or lower probability of creating benefit or causing harm. Mathematical tools are helpful for making such quantitative comparisons.

Mathematical tools are helpful in mathematics. Morality or Economics or Political Science can 'outsource' quantitative decisions to mathematicians.  It is foolish to pretend you are doing 'moral mathematics' when you proudly announce that you have discovered five is a bigger number than one. That's not the sort of thing for which you get the Fields Medal. 

They are also helpful in the innumerable contexts where we are unsure what the consequences of our actions will be.

Mathematics can't help you there. You need to ask somebody with domain expertise.  

Such scenarios require us to engage in probabilistic thinking,

outsource it. You are too stupid to do it yourself.  

and to evaluate the likelihood of particular outcomes. Intuitive reasoning is notoriously fallible in such cases,

because you are stupid. Anyway, if the decision is important enough, the best thing to do is to ask an expert.  

and, as we shall see, the use of mathematical tools brings precision to our reasoning and helps us eliminate error and confusion.

it really doesn't. That's why stupid people remain stupid even if they study a lot of math.  

Moral mathematics employs numbers and equations to represent relations between human lives, obligations and constraints.

Whereas Immoral mathematics employs hookers. 

Some might find this objectionable.

Hookers?  

The philosopher Bernard Williams once wrote that moral mathematics ‘will have something to say even on the difference between massacring 7 million, and massacring 7 million and one.’

It will say the latter number is higher. Then it will ask about hookers. It will be told politely but firmly that moral mathematics does not supply any such things. Try down the hall.  

Williams expresses the common sentiment that moral mathematics ignores what is truly important about morality: concern for human life, people’s characters, their actions, and their relationships with each other.

That isn't important at all. Indeed, the thing is a fucking nuisance. I used to show great concern for Bernard Williams and frequently wrote to him to ask whether Shirley was raping him with a rolled up copy of Ayn Rand's 'the Fountainhead'. His lawyer sent me a cease and desist letter.  

However, this does not mean mathematical reasoning has no role in ethics. Ethical theories judge whether an act is morally better or worse than another act.

No. They say they might get round to doing so but won't because everything turns out to be a 'wedge issue'. You get called a fucking Fascist whichever side of the argument you come down on.  

But they also judge by how much one act is better or worse than another.

No. That would be vulgar- the sort of thing Accountants do. Better just mumble generalities and claim to be really really concerned about the plight of people living far far away.  

Morality cannot be reduced to mere numbers, but, as we shall see, without moral mathematics, ethics is stunted.

No. Ethics is about gaining a better ethos. If you are good at math, do math by all means and you'll be a better person for it. If your big discovery is that five is a bigger number than one, concentrate on learning to tie your own shoe laces.


In this essay, I will discuss various ways in which moral mathematics can be used to tackle questions and problems in ethics, concentrating primarily on the relationship between morality, probability and uncertainty.

It is the same relationship as that between ethics and plumbing. If the toilet is spewing up shit rather than disposing of it, the ethical thing to do is to call in a plumber. Don't start counting the turds and saying 'five turds are more than one turd! I'm just as smart as Terence Tao!'  

Moral mathematics has limitations, and I discuss decision-making concerning the very far future as a demonstrative case study for its circumscribed applicability.

Let us begin by considering how ethics should not use mathematics.

e.g. writing essays as shitty as this one.  

In his influential book Reasons and Persons (1984), the philosopher Derek Parfit

who didn't know any math. He started off as a historian but was too stupid to stick it out. 

considers several misguided principles of moral mathematics. One is share-of-the-total, where the goodness or badness of one’s act is determined by one’s share in causing good or evil. According to this view, joining four other people in saving 100 trapped miners is better than going elsewhere and saving 10 similarly trapped miners – even if the 100 miners might be saved by the four people alone. This is because one person’s share of the total goodness would be to save 20 people (100/5), twice that of saving 10 people. But this allows 10 people to die needlessly. The share-of-the-total principle ignores that joining the four people in saving 100 miners does not causally contribute to saving them, while going elsewhere to save 10 miners does.

This is implausible. The four guys say- we can manage this rescue. Why not go save those 10 miners over there? The decision problem has been mis-specified by a cretin. Why not state the obvious and be done with the matter?
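The mis-specification is easy to exhibit. A minimal sketch, using the numbers from Parfit's own example, comparing the misguided 'share of the total' with actual marginal contribution:

```python
# Parfit's miners case: 'share of the total' vs. marginal contribution.
# The four rescuers can save the 100 miners with or without your help.

def share_of_total(saved, rescuers):
    """The misguided principle: credit = total saved / number of rescuers."""
    return saved / rescuers

def marginal_contribution(saved_with_me, saved_without_me):
    """Lives that would not have been saved but for your joining in."""
    return saved_with_me - saved_without_me

# Option A: join the four at the 100-miner site.
share_a = share_of_total(100, 5)              # 20.0 -- looks impressive
marginal_a = marginal_contribution(100, 100)  # 0 -- the four manage alone

# Option B: go alone to the 10-miner site.
share_b = share_of_total(10, 1)               # 10.0
marginal_b = marginal_contribution(10, 0)     # 10

print(share_a, marginal_a)  # 20.0 0
print(share_b, marginal_b)  # 10.0 10
```

The 'share' ranking reverses the marginal one, which is the whole of Parfit's point- and the reason the obvious answer is to go save the 10.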

Another misguided principle is ignoring small chances.

In other words, it is immoral not to buy a lottery ticket.  Equally, it is immoral to cross the road just in case you get hit by a bus and the sight of your mangled body causes trauma to passers-by. 

Many acts we regularly perform have a small chance of doing great good or great harm. Yet we typically ignore highly improbable outcomes in our moral calculations. Ignoring small chances may be a failure of rationality, not of morality, but it nevertheless may lead to the wrong moral conclusions when employed in moral mathematics. When the outcome may affect very many people, as in the case of elections, the small chance of making a difference may be significant enough to offset the cost of voting.

There is a benefit in voting. It is the warm glow of feeling you have discharged a civic duty. Once again, the decision problem has been mis-specified by a fucking moron.  

Yet another misguided principle is ignoring imperceptible effects. One example is when the imperceptible harm or benefits are part of a collective action. Suppose 1,000 wounded men need water, and it is to be distributed to them from a 1,000-pint barrel to which 1,000 people each add a pint of water. While each pint added gives each wounded man only 1/1,000th of a pint, it is nonetheless morally important to add one’s pint, since 1,000 people adding a pint collectively provides the 1,000 wounded men a pint each. Conversely, suppose each of 1,000 people administer an imperceptibly small electric shock to an innocent person; the combined shock would result in that person’s death. It is very wrong for them to administer their small shocks, despite their individual imperceptibility.

This too is a case of mis-specification. An added pint gives one man a pint, or x men 1/x of a pint each if a 1/x-pint measure is the smallest utensil available for dispensing the water ration. Moreover, since whether water is being added or shocks are being administered can be 'common knowledge', the one action is good and the other is bad because the collective effect is perceptible.

The moral dilemma here arises where we have information that a particular product has some non-zero probability of catastrophic consequences. The Utilitarian consequentialist will be prepared to trade-off 'consumer surplus' against that catastrophic cost. A rigid deontologist might not. Thankfully, both can be told to fuck off. Let grown-ups make the decision. 

So moral mathematics must be attentive to, and seek to avoid, such common pitfalls of practical reasoning.

None have been mentioned. All Elad has shown is that there were British philosophers as stupid as himself. But, they were British and Brits think Professors who aren't 'boffins' should be crazy and useless.  

Moral mathematics must be sensitive to circumstances. Often, it needs to consider extremely small probabilities, benefits or harms.

Welfare Econ can embrace 'regret minimization' rather than expected utility maximization. It hasn't bothered to do so because it doesn't really make any difference to anything.  
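For what it is worth, the regret-minimization rule mentioned above is trivial to state. A sketch, assuming a made-up payoff table over unknown states of the world (no probabilities required- which is the point under Knightian Uncertainty):

```python
# Minimax regret: for each state, regret = (best payoff available in that
# state) - (payoff of the chosen act); pick the act whose worst-case
# regret is smallest.

def minimax_regret(payoffs):
    """payoffs: dict mapping act -> list of payoffs, one per state."""
    n_states = len(next(iter(payoffs.values())))
    best_in_state = [max(p[s] for p in payoffs.values()) for s in range(n_states)]
    worst_regret = {
        act: max(best_in_state[s] - p[s] for s in range(n_states))
        for act, p in payoffs.items()
    }
    return min(worst_regret, key=worst_regret.get)

# Illustrative payoffs for three acts across three states (numbers invented).
payoffs = {"A": [10, 1, 1], "B": [8, 6, 6], "C": [12, 0, 2]}
print(minimax_regret(payoffs))  # 'B' -- the hedged act wins
```

Note that the rule picks the middling, robust act over the act with the highest best case- which is how it differs from expected utility maximization.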

But this conclusion is in tension with another requirement from moral mathematics: it must be practical, which requires that it be tolerant of error and capable of responding to uncertainty.

Welfare Economists do get jobs doing C.B.A, project appraisal, etc. 'Moral Mathematicians' don't even if they have made the amazing discovery that five is a bigger number than one.  


During the Second World War, allied bombers used the Norden bombsight

Nope. The US refused to share them. The RAF's Mark 14 was smaller and easier to use.  

, an analogue computer that calculated where a plane’s bombs would strike, based on altitude, velocity and other variables. But the bombsight was intolerant of error.

They all were. Norden however had been built up as the US's most valuable secret weapon. There was a 'political' aspect to this.  

Despite entering the same (or very similar) data into the Norden bombsight, two bombardiers on the same bomb run might be instructed to drop their bombs at very different times.

They could talk to each other. The fact is, either a crew mastered the tech and hit good targets or they just dropped their bombs and scooted.  The novel 'Catch-22' describes this. 

This was due to small variations in data they entered, or because the two bombsights’ components were not absolutely identical. Like a bombsight, moral mathematics must be sensitive to circumstances, but not too sensitive.

Bombsights genuinely existed. Moral mathematics doesn't exist. There is decision theory. There is O.R. There is welfare econ. But there aint no moral or immoral mathematics.  

This can be achieved if we remember that not all small probabilities, harms and benefits are created equal.

This cretin wasn't created equal to anybody with half a brain.  

According to statistical mechanics, there is an unimaginably small probability that subatomic particles in a state of thermodynamic equilibrium will spontaneously rearrange themselves in the form of a living person. Call them a Boltzmann Person – a variation on the ‘Boltzmann Brain’, suggested by the English astronomer Arthur Eddington in a 1931 thought experiment, intended to illustrate a problem with Ludwig Boltzmann’s solution to a puzzle in statistical mechanics. I can ignore the risk of such a person suddenly materialising right in front of my car. It does not justify my driving at 5 mph the entire trip to the store. But I cannot drive recklessly, ignoring the risk of running over a pedestrian. The probability of running over a pedestrian is low, but not infinitesimally low. Such events, while rare, happen every day. There is, in the words of the American philosopher Charles Sanders Peirce, a ‘living doubt’ whether I will run over a pedestrian while driving, so I must drive carefully to minimise that probability. There is no such doubt about a person materialising in front of my car. The possibility may be safely ignored.

This essay may be safely ignored. There are rules you have to learn before you get a licence to drive. Observing those rules is required by law. No driver is required to calculate probabilities.  

Moral mathematics also helps to explain why events with imperceptible effects, which are significant in one situation, can be insignificant in another. Adding 1/1,000th of a pint of water to a vessel for a wounded person is

impossible. There is no such utensil. You add a pint. The dude gets a pint. True, it may be half a pint or a third of a pint. It will never be a thousandth of a pint. Does Moral Mathematics explain why Elad has written nonsense? No. What explains it is the fact that he studied and now teaches useless, stupid, shite.  

significant if many others also add their share, so the total benefits are significant. But in isolation, when the total amount of water given to a wounded person is 1/1,000th of a pint, this benefit is so small that almost any other action – say, calling an ambulance a minute sooner

Field ambulances transport wounded soldiers to the Red Cross tent. Fetch them water by all means. Don't try to call an Uber for them because Ubers don't drive out to the battlefield.

– is likely to produce a greater total benefit. Conversely, it is very wrong to administer an imperceptibly small electric shock to a person because it contributes to a total harm of torturing a person to death.

What this nutter means is 'administering a small shock is only wrong if it contributes to killing a person'.  

But administering a small electric shock as a prank, as with a novelty electric handshake buzzer, is much less serious, as the total harm is very small. 

Unless the dude smashes in your head.  

Moral mathematics also helps us determine the required level of accuracy for a particular set of circumstances.

No. That is 'exogenous' and depends on technology or available resources.  

Beth is threatened by an armed robber, so she is permitted to use necessary and proportionate force to stop the robbery. Suppose she shoots the robber in the leg to stop him. Even if she uses significantly more force – say, shooting her assailant in both legs – it may be permissible because she is very uncertain about the exact force needed to stop the robber. The risk she faces is very high, so she is plausibly justified in using significantly more force to protect herself, even if it will end up being excessive. By quantifying the risk Beth faces, moral mathematics allows her to also quantify how much force she can permissibly use.

No. Beth's immunity to use force is jurisdiction dependent. In a 'stand your ground' State she can blow the fucker's head off. Not so, sadly, in green and pleasant England. Even if Beth is capable of 'quantifying risk' under such circumstances, nothing is added or subtracted to her level of Hohfeldian immunity. Public policy, however, may be guided by relevant statistics because there the probability is not subjective but 'frequentist'.  

Moral mathematics, then, must be sensitive to circumstances and tolerant of errors grounded in uncertainty, such as Beth’s potentially excessive, but justifiable, use of force.

That is jurisdiction dependent.  

The application of moral mathematics, and indeed of all moral decision-making, is always clouded by uncertainty.

Knightian Uncertainty is ubiquitous- that's why 'regret minimization' is the way to go.  

As Bertrand Russell wrote in his History of Western Philosophy (1945): ‘Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales.’

He meant God. Gödel & Von Neumann had no problem with the big dude upstairs.  


To respond to uncertainty, many fields, such as public policy, actuarial calculations and effective altruism, use expected utility theory, which is one of the most powerful tools of moral mathematics.

It is suboptimal under Knightian Uncertainty.  

In its normative application, expected utility theory explains how people should respond when the outcomes of their actions are not known with certainty.

But there all possible states of the world and their probabilities are known. Knightian Uncertainty is when neither is fully known.  

It assigns an amount of ‘utility’ to each outcome – a number indicating how much an outcome is preferred or preferable – and proposes that the best option is that with the highest expected utility, determined by the calculation of probabilities.

Economists know all this. Philosophers don't.  

In standard expected utility theory, the utility of outcomes is subjective.

No. It is objectively 'revealed preference'. 

Suppose there are two options: winning £1 million for certain, or winning £3 million with a 50 per cent probability. Our intuitions about such cases are unclear.

No. They are clear enough if you know your own degree of risk-aversion.  

A guaranteed payout sounds great, but a 50 per cent chance of an even bigger win is very tempting. Expected utility theory cuts through this potential confusion. Winning £1 million has a utility of 100 for Bob. Winning £3 million has a utility of only 150, since Bob can live almost as well on £1 million as with £3 million. This specification of the diminishing marginal utility of additional resources is the kind of precision that intuitive reasoning struggles with.

Why? Bob sounds like a cautious kind of bloke. He maintains pretty much the same life-style whether he has a million or three million. Our intuition is he will go for the sure thing.  

On the other hand, winning nothing has a negative utility of -50. Not only will Bob win no money, but he will deeply regret not getting the guaranteed £1 million. For Bob, the expected utility of the first option is 100 × 1 = 100. The expected utility of the second option is 150 × 0.5 + (-50) × 0.5 = 50. The guaranteed £1 million is the better option.

So, he is minimizing regret on the basis of what he knows about himself. Charles is not like Bob. He chooses to take the risk of ending up with nothing. He would regret to his dying day not doing so because once you have a million in the bank all you can think about is what a paltry sum it is compared to three million.  

But this has nothing to do with morality. It is merely a matter of individual psychology. We may say 'Bob is risk averse. He gets disutility from being in suspense. Charles is the opposite. He gets utility from gambling.' 

Conversely, suppose Alice has a life-threatening medical condition. An operation to save her life would cost £2 million. For Alice, the utilities of £0 and of £1 million are both 0; neither outcome would save her. But the utility of £3 million is 500 because it will save her life – and make her a millionaire. For Alice, the expected utility of the first option is 0 × 1 = 0. The expected utility of the second option is 500 × 0.5 + 0 × 0.5 = 250. For Alice, a 50 per cent chance of £3 million is better than a guaranteed £1 million. This shows how moral mathematics adds useful precision to our potentially confused intuitive reasoning.

Nonsense! The decision here is between having a chance of living rather than dying. Alice picks life. That has nothing to do with morality. It is simply a fact that people prefer to be alive than dead. 
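Uzan's arithmetic is, at any rate, easy to check. A sketch using his own stipulated utility numbers- nothing here is mathematics, just bookkeeping of preferences already assumed:

```python
# Expected utility with the essay's stipulated utilities and probabilities.
def expected_utility(outcomes):
    """outcomes: list of (utility, probability) pairs."""
    return sum(u * p for u, p in outcomes)

# Bob: diminishing marginal utility of money, plus regret at winning nothing.
bob_sure   = expected_utility([(100, 1.0)])              # 100.0
bob_gamble = expected_utility([(150, 0.5), (-50, 0.5)])  # 50.0

# Alice: only £3m pays for the operation, so £0 and £1m are both worth 0.
alice_sure   = expected_utility([(0, 1.0)])              # 0.0
alice_gamble = expected_utility([(500, 0.5), (0, 0.5)])  # 250.0

print(bob_sure > bob_gamble)      # True -> Bob takes the sure £1m
print(alice_gamble > alice_sure)  # True -> Alice takes the gamble
```

All the work is done by the utilities Bob and Alice are assumed to have- i.e. by their psychology and circumstances, not by morality or by mathematics.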

Many believe that morality is objective.

Because there are actual 'moral clauses' in contracts. The thing is justiciable.  

For this reason, moral mathematics often employs expected value theory, in which the moral utilities of outcomes are objective.

Econ does that. Some philosophers want to emigrate to Econ-land because they think Econ is less boring and useless than their own subject.  

Moral value, in the simplest terms, is how objectively morally good or bad an act is.

to a moron.  

Suppose the millionaires Alice and Bob consider donating £1 million each to either saving the rainforest in the Amazon basin or to reducing global poverty. Expected value theory recommends choosing the option with the highest objective expected moral utility.

No. What Alice or Bob do with their own money is predicted, by that theory, to maximize their expected welfare gain. Alice gives money to save the rain-forest because she gets to meet George Clooney who is the patron of the relevant charity. Bob gives money to the Gates Foundation because he wants to meet Bill Gates and sell him on his new crypto venture.  

Which charity has a higher objective expected moral utility is difficult to determine. But, once determined, Alice and Bob should both donate to it. One of the two options is simply morally better.

This is the 'effective altruism' line. Sadly, it was and is ignorant shite. Charitable donations are part of 'discovery'. Some money gets wasted but lessons are learned. 

Expected utility theory, like expected value theory, is a powerful moral mathematical tool for responding to uncertainty.

No. This shite attracts stupid windbags. Powerful mathematical tools can only be wielded by smart people. Discovering that five is a bigger number than one may be considered a genius move by 'moral mathematicians'. But it isn't really. 

But both theories risk being misapplied due to their reliance on probabilities. Humans are notoriously bad at probabilistic reasoning.

Compared to whom? Dogs? The truth is we outsource important decisions to experts. Any type of reasoning can be crap. What matters is gaining more and more independent sources of verification that a particular strategy will yield a particular outcome.  

There is a tiny probability of winning certain lotteries or of dying in a shark attack, estimated at approximately 1 in a few millions. Yet we tend to overestimate the probabilities of such rare events, because our perception of their probabilities is distorted by things like wishful thinking and fear.

Nonsense! I swim every day and don't fear shark attacks. Nor do I buy lottery tickets. Most people are like me. We don't calculate probabilities for the same reason we don't fix the plumbing. If the thing is worth doing, it is worth outsourcing.  

We tend to overestimate the probability of very good and very bad things happening.

We don't estimate probabilities. We may ask what odds are being offered or consult an expert. Probability is calculated by people trained in that field just as plumbing is done by people trained as plumbers.  

A further mistake is to assign a high probability to an outcome that actually has a lower probability.

What an amazing discovery! It is a mistake to assign a higher number to that which should have a lower number. Did you know that 5 is a bigger number than 1? Do a PhD in Moral Mathematics and you too can learn such amazing facts! 

An example is the gambler’s fallacy: the gambler reasons that if ‘black’ comes up in a game of roulette 10 times in a row, then ‘red’ is bound to be next.

No he doesn't. Don't be silly. 

The gambler wrongly assigns too high a probability to ‘red’. Another mistake is the principle of indifference: in the absence of evidence, we should assign an equal probability to all outcomes. Heads and tails should be assigned a 0.5 probability if one knows nothing about the coin being tossed; these probabilities should be adjusted only if one discovers that the coin is imbalanced.

Does this moron imagine that actual actuaries or accountants or FinTech mavens behave in this manner? The fact is, if the decision is important we outsource it to experts who look at independent information streams. What protocols a professional uses are a matter for his professional body. Some nutter teaching shite can't add value by explaining that 5 is a bigger number than 1.  

Yet another type of mistake is to assign definite values when the values are unclear.

i.e. don't tell lies. Another amazing discovery! 

Consider a very high-value event that has a very low probability. Suppose a commando mission has a very small probability of winning a battle. Whether winning the battle will save 1 million or 10 million lives and whether the probability of the mission’s success is 0.0001 or 0.0000001 are matters of conjecture. The expected moral value of the mission varies by a factor of 10,000: from 0.1 lives saved (0.0000001 × 1,000,000) to 1,000 lives (0.0001 × 10,000,000). So expected value theory might recommend aborting the mission as not worth the life of even one soldier (since 1 > 0.1) or undertaking it even if it will certainly cost 999 lives (since 1,000 > 999).

Sheer nonsense! Commando missions serve a 'discovery' purpose and also have tactical uses- e.g. getting the enemy to waste manpower or get logistically overstretched. There is also the question of morale and P.R. Some British commando operations in the early years of the Second World War served no military purpose.  
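The factor-of-10,000 spread in the quoted passage is at least easy to verify. A minimal sketch using exact fractions- the probabilities and casualty figures are the essay's own conjectures, not data:

```python
# Expected lives saved for the commando mission, over the conjectured ranges.
from fractions import Fraction  # exact rational arithmetic, no float noise

probs = [Fraction(1, 10_000), Fraction(1, 10_000_000)]  # 0.0001 or 0.0000001
lives = [1_000_000, 10_000_000]                         # 1m or 10m lives saved

values = [p * n for p in probs for n in lives]
print(min(values), max(values))   # 1/10 1000
print(max(values) / min(values))  # 10000
```

Which only confirms that a 'calculation' whose inputs are pure conjecture can be made to recommend anything at all.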

Expected value theory is of little help in this case.

No. A smart officer proposing a raid could use this type of calculation if that's what the top brass want. A theory is only as good as the guy using it. Elad is a moron. Not everybody else is.  

It is useful in responding to uncertainty only when the probabilities are grounded in the available data.

No. Uncertainty is itself a driver for coevolved processes which increase functional information in a robust manner. The problem here is that 'naturality' is far to seek. It doesn't matter whether people in this line of work understand this or not. Competition in the field will cause convergence to the regret minimizing solution (which is like the machine learning solution).  


