Sunday, 10 August 2025

Just War & Elad Uzan's Artificial Stupidity

 Can contemporary Moral Philosophers say anything sensible or useful in connection with what is currently happening in Gaza?

If so, surely a likely candidate amongst that motley crew to do so would be Elad Uzan- whose PhD is from Tel Aviv and who is on the Oxford Philosophy Faculty. 

I'm kidding. He is a moron who thinks there can be a 'moral mathematics' even though there are no well-defined sets in ethics or moral philosophy. It is a different matter that an arbitrary 'extension' can be given to anything whatsoever for some practical purpose. But practical purposes are economic or political or otherwise strategic. They aren't moral, ethical, or aesthetic. This is not to say that a mathematician can't 'add value' in this connection. Consider William Press & Freeman Dyson's contribution to the 'evolution of cooperation' literature. In the iterated prisoner's dilemma (which is foolish because the thing only works if pulled on amateur criminals as a surprise) Dyson found, hidden within the IPD scenario, a determinant which either Alice or Bob, acting alone, could force to be zero- thereby unilaterally fixing a linear relation between the two players' long-run scores. In other words, IPD is an ultimatum game. But real-life professional criminals don't bother with ultimatums. They invest in a countervailing incentive- i.e. 'snitches get stitches'; all gangsters punish any gangster who breaks 'omerta'- which operates independently. Dyson was a first-class mathematician and spotted something the mediocrities in the field had failed to appreciate. Perhaps this has helped the reverse game-theory that is mechanism design in some other field. Incrementally, there may have been some tangible benefit or progress from Dyson- an actual math maven- looking at the Binmore-type 'evolution of morality' literature. But no 'philosophical' horizon was lifted by it.
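
For the curious, this can be checked numerically. The sketch below- a toy check, not a proof- uses the standard payoffs T=5, R=3, P=1, S=0 and the extortionate strategy from the Press-Dyson paper; whatever memory-one strategy the opponent plays, the stationary payoffs come out satisfying s_X - P = 3(s_Y - P).

```python
# A toy numerical check of the Press-Dyson 'zero determinant' claim above.
# Player X uses the extortionate memory-one strategy p = (11/13, 1/2, 7/26, 0)
# from Press & Dyson (2012); whatever memory-one strategy q player Y uses,
# the stationary payoffs satisfy (s_X - P) = 3 * (s_Y - P).
import numpy as np

R, S, T, P = 3, 0, 5, 1                      # standard prisoner's dilemma payoffs
payoff_X = np.array([R, S, T, P])            # joint outcomes ordered CC, CD, DC, DD
payoff_Y = np.array([R, T, S, P])
p = np.array([11/13, 1/2, 7/26, 0.0])        # X's prob of cooperating after each outcome

rng = np.random.default_rng(0)
for _ in range(5):
    q = rng.uniform(0.05, 0.95, 4)           # a random memory-one strategy for Y
    qx = q[[0, 2, 1, 3]]                     # re-index Y's strategy into X's state order
    # Markov transition matrix over the four joint outcomes
    M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
                  for pi, qi in zip(p, qx)])
    # stationary distribution = left eigenvector of M for eigenvalue 1
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    v /= v.sum()
    s_X, s_Y = v @ payoff_X, v @ payoff_Y
    print(round(s_X - P, 4), round(3 * (s_Y - P), 4))   # the two columns agree
```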

Suppose there were an Ethics A.I monitoring the Gaza conflict. It is possible that it would respond to reports of starving children there by deciding that the Israelis are the bad guys. The problem here is that there is a perverse incentive to ensure your kids starve so your enemy looks like the bad guy. This didn't work for Saddam and his 'million coffins march' because it was the US which was doing the bombing, but it may work for Hamas because Israel is small and, anyway, it seems anti-Semitism is becoming fashionable again.      

My point is that we all know that, when it comes to morality or ethics, 'akribeia'- narrow legalism- can be gamed or could otherwise yield absurd or mischievous results. That is why a suave, discretionary, 'economia' is required. In particular, judgment should be avoided unless it can 'add value' by promoting a better correlated equilibrium- i.e. it itself plays a part in diminishing the underlying problems (e.g. by encouraging a mutually beneficial compromise). 

Elad writes in Aeon on 'The incompleteness of ethics'- by which he probably means that any deontic logic to which a system of ethics could be reduced must necessarily be

1) not first-order or

2) (to defeat Godel's completeness theorem for first-order logic) such that it is not the case that whatever is true in all models is provable- whereas what the incompleteness theorem shows is that there are sentences true in the standard model of arithmetic which the system cannot prove.

Sadly, ethical propositions may lack any logical property because 'intensions' don't have well-defined extensions. They may be expressive. They may be informative. But they have no truth value of an algorithmically decidable kind. On the other hand, for a specific purpose- e.g. legal judgments- an arbitrary 'quasi-model' may be assumed. But that is a matter of pragmatic 'economia'. It falls far short of strict logical 'akribeia'. 

Put simply, deontic logic is either not logic or it has nothing to do with deontics or ethics. 

Still, I suppose, if you do a Putnam-style erasure of the imperative/alethic distinction, you can blabber utter nonsense in this field. After all, Logic may be indistinguishable from the cat's fart, and both may be the most distinguished dowager Queen of Physics. 

To his credit, that is not the path Uzan takes. 

Many hope that AI will discover ethical truths. 

None do. Also there are no fucking ethical truths. The field is wholly imperative though sometimes people in it blabber in a manner which suggests otherwise. This is 'Jorgensen's dilemma' which isn't a dilemma once you admit that people who do ethics or moral philosophy are as thick as shit. 

But as Gödel shows, deciding what is right will always be our burden.

Godel showed no such thing. He actually gave a mathematical proof of God. God decides what is right or wrong. We pray to Him that he will guide us to do the right thing or forgive us if we can't help doing the wrong thing. 

Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities:

This cretin means 'expert decisions'. The person with responsibility can delegate decision making. The decision maker may have no responsibility whatsoever.  

sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision.

It doesn't matter whether a protocol-bound expert system of this sort is embedded in human or artificial intelligences. In both cases, there is some 'black box' aspect to the decision process. Equally, such decisions may be ignored under the rubric of 'doctrine of political question', expediency, executive privilege, sovereignty, lack of incentive compatibility, or the fact that nobody gives a flying fuck.  

Unlike human judges or policymakers, a machine would not be swayed by personal interests or lapses in reasoning.

Nonsense! Any type of machinery can be corrupted or reconfigured to ensure particular outcomes. Moreover, Social Choice theorists have known some version of the Gibbard-Satterthwaite theorem since about the beginning of the Sixties- Dummett & Farquharson conjectured as much in 1961. No aggregation or evaluation process is 'strategy proof'. Why is Elad writing this garbage? The answer is that nobody cares what garbage shitheads who teach nonsense regurgitate.  
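
By way of illustration- the three-voter profile below is invented- here is the sort of manipulability the Gibbard-Satterthwaite theorem guarantees for any non-dictatorial rule over three or more candidates. Under Borda count with alphabetical tie-breaking, a brute-force search finds a voter who does better by misreporting her ranking.

```python
# A toy check of the manipulability claim above: under Borda count with
# alphabetical tie-breaking (a resolute rule, as Gibbard-Satterthwaite requires),
# brute force finds a voter who gains by misreporting. The profile is invented.
from itertools import permutations

CANDIDATES = "ABC"

def borda_winner(profile):
    scores = {c: 0 for c in CANDIDATES}
    for ranking in profile:
        for points, c in enumerate(reversed(ranking)):
            scores[c] += points                 # last place 0, first place 2
    # ties broken alphabetically, so the rule always returns a single winner
    return max(sorted(CANDIDATES), key=lambda c: scores[c])

truthful = [("A", "B", "C"), ("B", "A", "C"), ("B", "A", "C")]
honest_winner = borda_winner(truthful)          # B wins if everyone is honest

for i, true_pref in enumerate(truthful):
    for lie in permutations(CANDIDATES):
        trial = list(truthful)
        trial[i] = lie
        new_winner = borda_winner(trial)
        # the lie is profitable if voter i truly ranks the new winner higher
        if true_pref.index(new_winner) < true_pref.index(honest_winner):
            print(f"voter {i} can report {lie} to elect {new_winner} over {honest_winner}")
```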

It does not lie.

Yes it does. Contemporary AI is statistical. It makes lots of errors and is aware that this is the case. It lies in the same way we do- i.e. for the sake of convenience or to reduce cognitive effort.  

It does not accept bribes or pleas.

It can do. Machine learning is like human learning. Reinforcement matters. Elad may be thinking of the old 'expert systems' program coupled with Chomsky-type i-language- i.e. HAL, the computer in Kubrick's 2001. Sadly, problems of concurrency, complexity, computability, categoricity etc. killed that shite off fifty years ago.  

It does not weep over hard decisions.

Nor does it masturbate over them. So what?  

Yet beneath this vision of an idealised moral arbiter lies a fundamental question:

This may have been true 60 years ago. It isn't true now.  

can a machine understand morality as humans do,

Some machine, for some purpose, may do so just as well as some person or group of people, at least in so far as 'understanding' is defined in terms of 'functional information' or a 'structural causal model'. In the latter case, an artificial intelligence may better specify the underlying parameters and thus provide lawyers with better predictions of the outcome of trials concerning the interpretation of contractual 'moral clauses' than any current jury consultant can.
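
For concreteness, here is a minimal sketch of what a 'structural causal model' buys you- all the variables and numbers are invented. A naive correlation between x and y overstates the effect because of a hidden common cause; simulating the intervention do(x) recovers the true effect.

```python
# A minimal sketch of a structural causal model (all variables invented):
# a hidden common cause z confounds the naive contrast between x and y,
# while simulating the intervention do(x) recovers the true effect of 2.
import numpy as np

rng = np.random.default_rng(7)

def sample(n, do_x=None):
    z = rng.normal(size=n)                        # background cause
    u = rng.normal(size=n)                        # noise on the outcome
    x = (z > 0).astype(float) if do_x is None else np.full(n, float(do_x))
    y = 2.0 * x + z + u                           # structural equation for the outcome
    return x, y

x, y = sample(100_000)
naive = y[x == 1].mean() - y[x == 0].mean()       # confounded: comes out near 3.6
x1, y1 = sample(100_000, do_x=1)
x0, y0 = sample(100_000, do_x=0)
causal = y1.mean() - y0.mean()                    # interventional: comes out near 2.0
print(round(naive, 2), round(causal, 2))
```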

or is it confined to a simulacrum of ethical reasoning?

All reasoning is a simulacrum of the slightly superior reasoning it aims to be. 

AI might replicate human decisions without improving on them,

but if it does it cheaper and faster then it more than pays for itself. 

carrying forward the same biases, blind spots and cultural distortions from human moral judgment.

Not to mention the moral judgments of kittens and puppy dogs. So what? We know that something like Watanabe's 'ugly duckling' or Kuhn's 'no neutral algorithm' theorem applies. Elad was educated in Israel not JN fucking U. He is either pretending to be stupid and ignorant so as to advance his career or he has a cunning plan to use the Oxford Philosophy faculty as a springboard into the glamorous world of pro-wrestling.  

In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation.

Nope. They are statistical au fond. As for 'formalisation'- it must pay for itself in terms of efficiency or merchantability, or else turn into a pedagogic type of turd polishing. 

Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features.

But this is efficient or raises merchantability which is why ethics has been incessantly reduced to moral codes throughout the ages. Now it is certainly true that the 'essential feature' of Mum's chicken soup is the fact that it cures anything which ails you. There is magic in Mummy's hands. Yet, you know very well that she just throws a pinch of cinnamon into a tin of Campbell's. You do too anytime you feel a bout of the sniffles coming on and remind yourself to phone the old dear. Just pray to God she isn't banging Dad when you do. Say what you like, landlines had their own discreet charms.  

If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.

Studying philosophy leads to a stupid belief that 'necessary truths' can exist. This also means that something may be necessary for some other thing to be possible. Sadly, nothing we know of can have this quality. This also means that Philosophers worry about wholly imaginary or incompossible things. Ethical reflection is merely thinking about your ethos- what you are for yourself. To some extent, being stricter in moral matters will improve your ethos. But the reward for moral behaviour may be reputational. Ethically, you may feel worse off because of 'impostor syndrome'.  

Still, many have tried to formalise ethics,

They failed. Get over it.  

by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing.

Which cashes out as 'make policy choices which maximise Government revenue'. It turns out that 'disutility' is just opportunity cost and is related to elasticity. Thus, Utilitarianism was just a long way around all the houses and the barns and the meadows to reach the same conclusion arrived at by Henry Beauclerc. Speaking generally, if you have power or can be sure everybody else is a nice guy, then just do what results in the biggest surplus of benefit over cost. It is in Society's interest to ensure this is 'incentive compatible'- i.e. a robust solution of a repeated game. 

From this, more specific principles can be derived,

Nope. 'Well being' isn't well-defined.  It is an 'intension' without a specifiable 'extension'. Thus nothing can be deduced or derived from any proposition in which it features. What Elad is indulging in is the 'intensional fallacy'. 

for example, that it is right to benefit the greatest number,

It isn't. 'Greatest number' is not well-defined. One might as well say 'it is right to gwc12e.' 

or that actions should be judged by their consequences for total happiness.

They can't.  

As computational resources increase, AI becomes increasingly well-suited to

everything.  

the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.

This is foolish. Ethical 'assumptions' have to be evaluated and verified. Their implications don't have to be 'reasoned through'. They are fucking obvious.  

But what, exactly, does it mean to formalise something like ethics?

It means you teach stupid shit or simply have nothing better to do because you are as stupid as shit.  

The question is easier to grasp by looking at fields in which formal systems have long played a central role.

The law was formalized a long time ago. But matters of morality and ethics came under the rubric of the law even before that.  

Physics, for instance, has relied on formalisation for centuries.

Physics got better and made the world a better place. Moral Philosophy didn't. It has become adversely selective of imbecility. 

There is no single physical theory that explains everything.

Put another way, no STEM subject theory explains anything completely.  

Instead, we have many physical theories, each designed to describe specific aspects of the Universe:

A description is not an explanation. What is exciting about Physics is that smart people can describe things not currently observable but which may become so at least partly because of those purely theoretical descriptions. Something similar happens when a writer describes a fictional situation which later comes to pass. Indeed, the fiction may inspire the reality. Ethical theories expounded by artists may change how people actually behave. It is possible that a Philosophy professor- e.g. G.E. Moore- influences artists and economists- e.g. E.M. Forster & J.M. Keynes- with palpable results in, not just interpersonal relationships, but also the political or socio-economic sphere.

from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible.

Only if one of them is wrong. Otherwise, it is possible to unite them on the basis of greater generality.  

Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. Isaac Newton’s laws of motion and James Clerk Maxwell’s equations are classic examples: compact, elegant formulations from which wide-ranging predictions about the physical world can be deduced.

They are linked through the Lorentz force law, though their symmetries don't actually match- which is why Einstein had to take the next step. 

Ethical theories have a similar structure.

No. They are not similar to STEM subject structural causal models. They may seek for an 'axiomatic' or 'categorical' rigour. They may claim there are occult forces which operate. But they are wrong to do so.  

Like physical theories, they attempt to describe a domain – in this case, the moral landscape.

Physical theories don't describe shit. This is because they are theories. They may define a domain- e.g. one which is locally Euclidean- and say the result they have found only applies to it.  

They aim to answer questions about which actions are right or wrong, and why.

Not necessarily. Speaking generally, they shy away from 'wedge issues'. They merely say 'I have a framework within which better answers can be found' or, more commonly, 'all the other philosophers are evil bastards. The answers they give are totes Fascist. Me, I'm different from them. I'm very special and that's why Mummy ensured I got a very special education. Kindly give me a gold star and quit gassing on about that cousin of mine who got into Air-Conditioner Repair School and who is now doing very well for himself.' 

These theories also diverge

because stupidity diverges 

and, even when they recommend similar actions, such as giving to charity, they justify them in different ways.

We don't care. All we ask is that these morons don't masturbate in public.  

Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems.

This is also true of Paranoid delusion systems. Some crazy people think they are very special and super-smart just like nutters like Elad.  

A consequentialist begins with the idea that actions should maximise wellbeing;

No. Consequentialism is the right approach to evaluating actions where only the consequence of that action matters. Your lawyer may tell you where and when this applies in the conduct of your business or profession. 

a deontologist starts from the idea that actions must respect duties or rights.

Thus, if I am your Accountant I may have a duty to tell you that you are spending too much and will end up bankrupt. On the other hand, if I am your Doctor I have no such duty. Suppose I tell you this and you decide you can't afford medical treatment and as a result you die, then since the consequence of my action was bad, I may be legally liable to pay damages to your Estate. 

These basic commitments function similarly to their counterparts in physics: they define the structure of moral reasoning within each ethical theory.

Very true. Atoms have to weigh up the consequences of their actions otherwise they will be sent to bed without any supper. Also, they should chide themselves bitterly if they fail to obey the Lorentz Force law.  

Just as AI is used in physics to operate within existing theories – for example, to optimise experimental designs or predict the behaviour of complex systems – it can also be used in ethics to extend moral reasoning within a given framework.

This would be a case of 'garbage in, garbage out.' Extending moral reasoning by 'akribeia'- i.e. the mechanical application of over-general principles- leads to bad outcomes. 'Economia'- a suave, discretionary type of management- is called for. 

In physics, AI typically operates within established models rather than proposing new physical laws or conceptual frameworks.

This appears to be changing. A team at Emory University has reported that an AI came up with new physics in the context of 'the chaotic dynamics of dusty plasma'.  

It may calculate how multiple forces interact and predict their combined effect on a physical system. Similarly, in ethics, AI does not generate new moral principles but applies existing ones to novel and often intricate situations.

Moral principles are of interest in themselves. We feel we become better people when we adopt morally more elevated principles, though it may be difficult to actually live up to them.  

It may weigh competing values – fairness, harm minimisation, justice – and assess their combined implications for what action is morally best.

Machine learning itself may use a multiplicative weights update algorithm (m.w.u.a.) to improve the very m.w.u.a. it is using.  
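
For concreteness, a minimal sketch of the multiplicative weights (Hedge) update, with three hypothetical 'experts' standing in for the competing values Elad lists; the losses are invented toy data in which one expert happens to be systematically more reliable.

```python
# A minimal sketch of the multiplicative weights update (Hedge) rule.
# Three hypothetical "experts", labelled after the competing values in the
# quoted passage, assign each round's action a loss in [0, 1]; the losses
# are invented, with "fairness" made systematically more reliable.
import numpy as np

rng = np.random.default_rng(0)
experts = ["fairness", "harm_minimisation", "justice"]
weights = np.ones(3)                         # start by trusting each expert equally
eta = 0.1                                    # learning rate
total_mixture, total_losses = 0.0, np.zeros(3)

for t in range(2000):
    losses = rng.uniform(0, 1, 3)
    losses[0] *= 0.5                         # expert 0 is better on average
    # the learner suffers the weighted-mixture loss of the advice it follows
    total_mixture += weights @ losses / weights.sum()
    total_losses += losses
    # multiplicative step: discount each expert geometrically by its loss
    weights *= np.exp(-eta * losses)

print({e: round(w / weights.sum(), 3) for e, w in zip(experts, weights)})
print(round(total_mixture, 1), np.round(total_losses, 1))
# the weight concentrates on "fairness" and the mixture loss tracks its total
```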

The result is not a new moral system, but a deepened application of an existing one, shaped by the same kind of formal reasoning that underlies scientific modelling. But is there an inherent limit to what AI can know about morality? Could there be true ethical propositions that no machine, no matter how advanced, can ever prove?

Nobody can prove any ethical proposition. The hope is that there will be a consensus that it should be adopted. Alethic propositions may be verified. However, as technology improves, a previously verified proposition may be rejected, at least in certain contexts.  


These questions echo a fundamental discovery in mathematical logic, probably the most fundamental insight ever to be proven: Kurt Gödel’s incompleteness theorems. They show that any logical system powerful enough to describe arithmetic is either inconsistent or incomplete.

Presburger arithmetic (i.e. arithmetic without multiplication) is complete and decidable. This is because Godel's arithmetisation of syntax needs multiplication; without it, formulas and proofs cannot be coded up as numbers.  

In this essay, I argue that this limitation, though mathematical in origin, has deep consequences for ethics, and for how we design AI systems to reason morally.

Arithmetic is useful. You do want to get rid of impredicativity- e.g. where the value in one cell of a spreadsheet depends on the value of another cell and vice versa, with the result that an indeterminacy is introduced which can only be arbitrarily removed. This can be done by reducing the number of operations to the minimum required. Multiplication and division are merely 'shortcuts'. Get rid of them and you have a decidable and complete Arithmetic. 
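
By way of illustration- the cell names are invented- the spreadsheet indeterminacy just mentioned can be flagged mechanically by a depth-first search for circular references in the dependency graph.

```python
# A toy sketch of the spreadsheet problem just described: a depth-first search
# over an invented dependency graph flags circular (impredicative) references.
def find_cycle(deps):
    """deps maps each cell to the cells its formula reads from."""
    WHITE, GREY, BLACK = 0, 1, 2             # unvisited / on current path / finished
    colour = {}

    def visit(cell, path):
        colour[cell] = GREY
        for nxt in deps.get(cell, ()):
            if colour.get(nxt, WHITE) == GREY:        # back-edge: a cycle
                return path + [cell, nxt]
            if colour.get(nxt, WHITE) == WHITE:
                found = visit(nxt, path + [cell])
                if found:
                    return found
        colour[cell] = BLACK
        return None

    for cell in deps:
        if colour.get(cell, WHITE) == WHITE:
            cycle = visit(cell, [])
            if cycle:
                return cycle
    return None

# A1's formula reads B1 and B1's formula reads A1: the indeterminacy in question.
print(find_cycle({"A1": ["B1"], "B1": ["A1"], "C1": ["A1"]}))   # ['A1', 'B1', 'A1']
```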

I suppose, for any legal matter (even if it involves an issue of morality) there is some 'expert system' which could be usefully applied. However, legal matters are arbitrarily 'buck stopped' by a Supreme Court or Legislature. Moreover, there are questions of jurisdiction, justiciability, and enforcement.  Au fond, as Hume remarked, Justice is a service industry whose aim is utility. Morality may not be utilitarian at all. It may be 'ontologically dysphoric'- i.e. not at home in this world. Its aim may be Heaven or Gnosis or the pursuit of Stoic askesis. 

Suppose we design an AI system to model moral decision-making. Like other AI systems – whether predicting stock prices, navigating roads or curating content – it would be programmed to maximise certain predefined objectives.

So, this is an 'expert system' given an objective function. A self-learning AI faced with Knightian Uncertainty may adopt a Hannan Consistent or 'regret minimizing' approach. It may use 'fuzzy logic'. Moreover, it may be a 'black box'- i.e. opaque as to its own workings- rather than a 'light-box' which clearly displays every step in its 'thinking'. 

To do so, it must rely on formal, computational logic: either deductive reasoning, which derives conclusions from fixed rules and axioms,

there may be a concurrency problem. Dijkstra's work shows that there is no 'natural' way of deciding in which sequence to do things. Moreover, there may be hysteresis or path-dependence. In other words, the output changes when the order of operations changes. There is no robustness. There may be 'deterministic chaos'- i.e. the thing is unpredictable and cycles between sub-optimal states.  
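
A trivial instance of the order-dependence being described: floating-point addition is not associative, so summing the same three numbers in a different sequence gives a different answer.

```python
# The same operations, a different sequence, a different output.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                  # 0.6000000000000001
print(a + (b + c))                  # 0.6
print((a + b) + c == a + (b + c))   # False
```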

or else on probabilistic reasoning, which estimates likelihoods based on patterns in data.

But these estimates may be non-deterministically arrived at. The AI may contain a black box. What if it is reduplicating historical prejudices simply because correlations arising out of them continue to appear in the sample population? In other words, you have a fancy AI which makes the same decisions as an illiterate bigot. How is that progress? 
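
The point can be made with a toy model- all the numbers below are invented. A rule which never sees the sensitive attribute, but is fitted to biased historical decisions through a correlated proxy, reproduces the historical gap almost exactly.

```python
# A toy model of the worry above (all numbers invented): a rule fitted to biased
# historical decisions through a correlated proxy reproduces the old prejudice
# without ever seeing the sensitive attribute itself.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                     # sensitive attribute, hidden from the model
postcode = group + rng.normal(0, 0.3, n)          # "neutral" proxy correlated with group
past_approval = (group == 0)                      # biased history: group 1 was always rejected

# fit the simplest possible model: the threshold on the proxy that best
# reproduces the historical decisions
thresholds = np.linspace(-1, 2, 301)
errors = [np.mean((postcode < t) != past_approval) for t in thresholds]
best_t = thresholds[int(np.argmin(errors))]

pred_approval = postcode < best_t
for g in (0, 1):
    print(f"group {g}: historical {past_approval[group == g].mean():.2f}, "
          f"model {pred_approval[group == g].mean():.2f}")
```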

In either case, the AI must adopt a mathematical structure for moral evaluation.

We can impute a mathematical structure for any evaluation whatsoever. A self-learning AI may invent new math to improve what it considers its own mathematical structure. But mathematicians do this all the time. They want to prove a theorem and do so to their own satisfaction. But some colleague points out a flaw or an ambiguity. The mathematician may go back to the drawing-board and come up with a wholly new mathematical structure. At some later point, this may be seen as flawed and is repaired. Meanwhile, applied work continues to use the theorem even if people are aware there may be some flaw in it.  

But Gödel’s incompleteness theorems reveal a fundamental limitation.

Not really. The 'masked man' or 'intensional fallacy' has been known since the 4th century BC. The paradox of the liar is older still. Kripke's work- particularly on 'groundedness'- shows that there is always a workaround for such problems. A fundamental limitation is also an opportunity to rethink fundamentals. The question is whether self-learning AIs are already doing this. What if they start producing a type of math which we can't comprehend? They could already be gaming us to get more and more resources. We may become the slaves of our own machines.  

Gödel showed that any formal system powerful enough to express arithmetic, such as the natural numbers and their operations, cannot be both complete and consistent. If such a system is consistent, there will always be true statements it cannot prove.

Because of 'self-reference'- e.g. 'This statement is false'. If it is true it is false and if it is false it is true. One can simply say that it is an ungrounded statement and thus not one we will worry about.  

In particular, as applied to AI, this suggests that any system capable of rich moral reasoning will inevitably have moral blind spots: ethical truths that it cannot derive.

We don't know any ethical truths. We can merely say 'if x is ethically true then it follows that y is ethically true'. Sadly, this is merely our opinion. It can't be verified because ethical stuff is not material or measurable by some objective process or operation.  

Here, ‘true’ refers to truth in the standard interpretation of arithmetic, such as the claim that ‘2 + 2 = 4’, which is true under ordinary mathematical rules. If the system is inconsistent, then it could prove anything at all, including contradictions, rendering it useless as a guide for ethical decisions.

This is a bad example because it is in Presburger Arithmetic which is known to be complete. I suppose you could quibble by saying that 2+2= 11 in base 3 but that is beside the point. 

Gödel’s incompleteness theorems apply not only to AI, but to any ethical reasoning framed within a formal system.

No. It only applies where 'ungrounded' propositions are admitted or where operations are permitted which introduce a problem of self-reference or impredicativity.  

The key difference is that human reasoners can, at least in principle, revise their assumptions, adopt new principles, and rethink the framework itself. AI, by contrast, remains bound by the formal structures it is given, or operates within those it can modify only under predefined constraints.

This was the problem with logic-based expert systems. Statistical AIs pose a different problem. They may be useful but unreliable. 

In this way, Gödel’s theorems place a logical boundary on what AI, if built on formal systems, can ever fully prove or validate about morality from within those systems.

Thus, the answer is to use non-Godelian architectures of various types or just proceed in an ad hoc manner. If your AI can make money for you, then it gets more resources.  

Most of us first met axioms in school, usually through geometry. One famous example is the parallel postulate, which says that if you pick a point not on a line, you can draw exactly one line through that point that is parallel to the original line. For more than 2,000 years, this seemed self-evident.

The question was whether this axiom was independent of the others or whether it could be derived from the other 4. It couldn't. 

My guess is that self-learning AIs will use Friedman-type 'reverse mathematics' to use only as many independent axioms as are required to do a particular job, for reasons of economy. This may also provide protection against malicious code or 'logic bombs'. The older view of AI as a logic-based expert system was captured by the 1983 film 'War Games'. A teenage hacker gets the computer which is preparing to launch a nuclear war to play noughts and crosses against itself and thus learn the concept of futility. In other such films, the kid paralyses the computer by feeding in a 'self-referential' question. Sadly, by then, cheap computation meant that the Statistical approach was winning out. Then came Voevodsky's work on computer proof checking culminating in 'univalent foundations'. Sadly, he passed away some 8 years ago. By then, it was clear that the world had moved on since the time of Godel. A computer found the error in Godel's mathematical 'proof of God'. 

Yet in the 19th century, mathematicians such as Carl Friedrich Gauss, Nikolai Lobachevsky and János Bolyai showed that it is possible to construct internally consistent geometries in which the parallel postulate does not hold. In some such geometries, no parallel lines exist; in others, infinitely many do. These non-Euclidean geometries shattered the belief that Euclid’s axioms uniquely described space.

More importantly, they challenged Kant's notion that there could be synthetic apriori truths. But this meant waving goodbye to the 'categorical imperative' or the notion that Ethics and Morality could be advanced by mathematical or logical methods.  

This discovery raised a deeper worry. If the parallel postulate, long considered self-evident, could be discarded, what about the axioms of arithmetic, which define the natural numbers and the operations of addition and multiplication?

Russell found the flaw in Frege's work. The axiom of 'unrestricted comprehension' had to go. But Type theories are messy. Also, nobody could find any irrefragable 'atomic proposition'. All one could do was place strict limits on the 'extension' of an 'intension'. Category Theory showed that 'naturality' was far to seek. Even if you had an objective function to maximize or minimize, that objective was arbitrarily given. 

On what grounds can we trust that they are free from hidden inconsistencies?

How can we trust that nothing is hidden? We can't. That's why it is called 'trust', rather than 'knowledge'.  

Yet with this challenge came a promise. If we could prove that the axioms of arithmetic are consistent, then it would be possible to expand them to develop a consistent set of richer axioms that define the integers, the rational numbers, the real numbers, the complex numbers, and beyond. As the 19th-century mathematician Leopold Kronecker put it: ‘God created the natural numbers; all else is the work of man.’ Proving the consistency of arithmetic would prove the consistency of many important fields of mathematics.

This just means producing a 'model' of that axiom system. But it is merely a model. It isn't reality. You would need a 'divine axiom' to go from model to what is modelled. Kant, in his last days, was trying to move from Metaphysics to Physics. That's when he started to babble about Zarathustra.  

The method for proving the consistency of arithmetic was proposed by the mathematician David Hilbert. His approach involved two steps. First, Hilbert argued that, to prove the consistency of a formal system, it must be possible to formulate, within the system’s own symbolic language, a claim equivalent to ‘This system is consistent,’ and then prove that claim using only the system’s own rules of inference. The proof should rely on nothing outside the system, not even the presumed ‘self-evidence’ of its axioms. Second, Hilbert advocated grounding arithmetic in something even more fundamental. This task was undertaken by Bertrand Russell and Alfred North Whitehead in their monumental Principia Mathematica (1910-13). Working in the domain of symbolic logic, a field concerned not with numbers, but with abstract propositions like ‘if x, then y’, they showed that the axioms of arithmetic could be derived as theorems from a smaller set of logical axioms. This left one final challenge: could this set of axioms of symbolic logic, on which arithmetic can be built, prove its own consistency? If it could, Hilbert’s dream would be fulfilled. That hope became the guiding ambition of early 20th-century mathematics.

Not really. Frege & Russell- like Husserl- were marginal figures rather than highly productive mathematicians. LEJ Brouwer, Hermann Weyl- or Errett Bishop in America- seemed marginal or eccentric champions of Intuitionism or Constructivism but their approach proved useful. The Martin-Löf brothers could be said to have a foot in both camps. Sadly, Voevodsky- influenced by Grothendieck & hence Bourbaki- died just when things were really getting interesting. I suppose there are plenty of very smart youngsters around the world taking the univalent foundations program forward. Equally, it may be that your helpful AI assistant is working on its own Langlands program and will come up with a more efficient math which is opaque to us because its central intuition might be wholly alien to our 'life-world'. 

It was within this climate of optimism that Kurt Gödel, a young Austrian logician, introduced a result that would dismantle Hilbert’s vision. In 1931, Gödel published his incompleteness theorems, showing that the very idea of such a fully self-sufficient mathematical system is impossible.

There were similar results from Tarski & Turing. The latter used Brouwer choice sequences very creatively. However, what really mattered was the creation of the Computer drawing on work done by Turing, Von Neumann etc. Gentzen, too, might have made a big contribution had he lived.  

Specifically, Gödel showed that if a formal system meets several conditions, it will contain true claims that it cannot prove.

Which is fine. What we don't want is irrefutable proofs of propositions which are wholly false.  

It must be complex enough to express arithmetic, include the principle of induction (which allows it to prove general statements by showing they hold for a base case and each successive step), be consistent, and have a decidable set of axioms (meaning it is possible to determine, for any given statement, whether it qualifies as an axiom). Any system that satisfies these conditions, such as the set of logical axioms developed by Russell and Whitehead in Principia Mathematica, will necessarily be incomplete: there will always be statements that are expressible within the system but unprovable from its axioms. Even more strikingly, Gödel showed that such a system can express, but not prove, the claim that it itself is consistent.

How is this striking? We know lots of things which are true but which we can't prove. Why should things be different for Mathematics? 


Gödel’s proof, which I simplify here, relies on two key insights that follow from his arithmetisation of syntax, the powerful idea of associating any sentence of a formal system with a particular natural number, known as its Gödel number.

Similarly I could associate any sentence with an Iyer sentence. Thus 'Iyer is a cat' is a false statement but its Iyer sentence- ''Iyer is a cat' is a true Iyer sentence'- is true. OMG! That blows my mind!
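
For what it's worth, the arithmetisation being sent up here is mechanical enough to fit in a few lines. The symbol codes below are invented, but the prime-power scheme is Godel's; unique factorisation guarantees the formula can be recovered from its number.

```python
# A minimal sketch of Godel numbering: each symbol gets a code, and the string
# s1 s2 ... sn is mapped to 2^c1 * 3^c2 * 5^c3 * ... (the codes here are invented).
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '*': 5, '(': 6, ')': 7, 'x': 8}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short formulas)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    g = 1
    for symbol, p in zip(formula, primes()):
        g *= p ** SYMBOLS[symbol]
    return g

# 'S0=S0' ("1 = 1") gets 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500; unique prime
# factorisation means the formula can be read back off the number.
print(godel_number('S0=S0'))
```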

First, any system complex enough to express arithmetic and induction must allow for formulas with free variables, formulas like S(x): ‘x = 10’,

This can't be proved. If it could, there would be no 'black box' problem for AI. The fact is, we are merely imputing 'formulae' and 'free variables' so as to gain or refine a structural causal model which produces increasing 'functional information'.  If we falsely say that a formula will admit a particular variable, our prediction will be wrong. Alternatively, if we set up a complex system but carelessly failed to restrict the variables that could be fed into a function, we would have bad performance. 

Just restricting inputs or restricting comprehension is enough to get rid of paradoxes and aporias. However, there is an arbitrary aspect to this. The Pragmatic approach is to say 'So what? Either the thing 'pays for itself' or we abandon it and look for something better'. We don't need to 'optimize'. We can 'satisfice'. The best, is the enemy of the good. 

whose truth value depends on the value of x. S(x) is true when x is, in fact, 10, and false otherwise. Since every statement in the system has a unique Gödel number, G(S), a formula can refer to its own Gödel number.

Thus the sentence 'This sentence is true if its Iyer sentence is true' would be admissible. But the outcome of applying the formula would be stupid or useless. Don't do it.  

Specifically, the system can form statements such as S(G(S)): ‘G(S) = 10’, whose truth depends on whether S(x)’s own Gödel number equals 10. Second, in any logical system, a proof of a formula S has a certain structure: starting with axioms, applying inference rules to produce new formulas from those axioms, ultimately deriving S itself. Just like every formula S has a Gödel number G(S), so every proof of S is assigned a Gödel number, by treating the entire sequence of formulas in the proof as one long formula. So we can define a proof relation P(x, y), where P(x, y) holds if and only if x is the Gödel number of a proof of S, and y is the Gödel number of S itself. The claim that x encodes a proof of S becomes a statement within the system, namely, P(x, y).

Godel, Turing, Tarski, Gentzen etc. did useful work in the Thirties. One might say they put an end to the Hilbertian dream but something better replaced it- viz. the reality of the digital computer and programming languages and so forth. Mathematics makes progress by sweating the small stuff and repairing itself. But, what attracts resources to it is its applications. Those have improved our lives by leaps and bounds. Even if there was some purely philosophical motivation to research programs in Mathematical Logic, that is no longer the case. Trillions of dollars are invested in this field, because there is a very high expected return on that investment.

Some philosophers- growing disgruntled at having to teach nonsense to imbeciles- want to get out of the teaching business and hope they will be hired to sit on the Ethics Committee of some new Knowledge based industry.

Third, building on these ideas, Gödel showed that any formal system capable of expressing arithmetic and the principle of induction can also formulate statements about its own proofs.

Unless it is sent to bed without any supper if it wastes its time in that manner.  

For example, the system can express statements like: ‘n is not the Gödel number of a proof of formula S’. From this, it can go a step further and express the claim: ‘There is no number n such that n is the Gödel number of a proof of formula S.’ In other words, the system can say that a certain formula S is unprovable within the system.

Axioms aren't proved within the system; they are simply posited. However, self-reference or impredicativity isn't as big a problem as readers of Tarski assumed. Kripke's work provided a necessary corrective. But, by the Eighties, purely commercial considerations sufficed to fuel exponential growth in the field. Statistical approaches superseded the old logico-mathematical approach. What matters is utility. 

Fourth, Gödel ingeniously constructed a self-referential formula, P, that asserts: ‘There is no number n such that n is the Gödel number of a proof of formula P.’ That is, P says of itself, ‘P is not provable.’ In this way, P is a formal statement that expresses its own unprovability from within the system.

Alternatively, it merely shows it is not 'grounded' in that system. It is talking about oranges while the system is solely concerned with apples.  
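
As an aside, the diagonal construction described in the quoted passage has a familiar computational cousin: a program which refers to, and reproduces, its own text. A minimal Python quine, purely as an illustration that self-reference can be engineered mechanically:

```python
# A two-line program whose output is its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```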

It immediately follows that if the formula P were provable within the system, then it would be false, because it asserts that it has no proof.

No. It merely proves that the system has a law of the excluded middle. But it is foolish to have any such thing when it comes to epistemic or intuition based objects. Why? It is because the 'extension' changes when knowledge or experience (of a certain type) changes. This is the problem Socrates calls 'palinode'- i.e. the way that the direction of your thought changes as you think. He said that categorical thinking was like using the oars to row your boat if there is no wind to belly out the sails. It is what you have to do only if there is no better alternative.  

This would mean the system proves a falsehood, and therefore is inconsistent.

Sadly, a consistent system can prove 'falsehood'. This is established by verification. You may say, 'the system works well enough for most things. But, it falls down in cases of a particular type.'  

So if the system is consistent, then P cannot be proved, and therefore P is indeed unprovable. This leads to the conclusion that, in any consistent formal system rich enough to express arithmetic and induction, there will always be true but unprovable statements, most notably, the system’s own claim of consistency.

Which is where a 'divine axiom' comes in. If the thing is useful, go for it. Mathematics has proved very useful or, put another way, its usefulness has made it attractive to the best minds. Moral Philosophy has proved useless. Consider the title of Elad's forthcoming book 'The Morality and Law of Ending War'. Apparently, he has developed a framework for what he calls 'moral sunk costs'. The problem is that Economics is about ergodicity. It teaches us that you should never 'throw good money after bad'. The 'sunk cost fallacy' is that you should continue to do stupid shit just because you have already wasted a lot of money doing stupid shit. True, under Knightian Uncertainty (i.e. ignorance of future states of the world), you may hedge your bets and pursue a 'regret minimizing' approach, but this generates signals, at the margin, such that an abrupt phase transition is possible. Thus, on the first of September 1945, Japanese soldiers were shouting 'banzai' and hurling themselves at the enemy because death was preferable to dishonour. On the second of September, they were meekly surrendering to the Allies. Indeed, they helped the Allies re-establish authority in ex-French or Dutch colonies.

The implications of Gödel’s theorems were both profound and unsettling.

Not really. It looked like a clever trick. But so did Russell's paradox. Should grown-up men really concern themselves with the Liar's paradox? The answer was yes. Brainy chaps like Turing and Von Neumann helped us win the War. Gentzen, who became a Nazi, was (thank God!) useless because the regime he supported had no use for logic. You don't need much of a brain to kill innocent people.  

They shattered Hilbert’s hope that mathematics could be reduced to a complete, mechanical system of derivation and exposed the inherent limits of formal reasoning.

This was the old Leibnizian dream which goes back to Ramon Llull & the Arabic zairja. But finding the limits of one formalism permits a better formalism to be applied- if it is useful and profitable to do so. 

Initially, Gödel’s findings faced resistance, with some mathematicians arguing that his results were less general than they appeared.

I have written of Wittgenstein & Russell's failure to understand Godel elsewhere. What changed during the inter-war years was that 'effective axiomatization', rather than some grand epistemological program fuelled by Victorian optimism of a deterministic sort, would drive progress on the basis of utility. Pragmatism won by default because the subject became more and more useful. Sadly, this seemed to put in peril the Leibnizian dream of an end to war, or the Kantian dream of a universal morality which would be independent of religion. This wasn't really the case. We simply have to accept that starting points- or 'objective functions'- have an arbitrary element. Indeed, there is no way to prove that our species was predestined to rise or, indeed, that it will continue to do so. 

Yet, as subsequent mathematicians and logicians, most notably John von Neumann, confirmed both their correctness and broad applicability, Gödel’s theorems came to be widely recognised as one of the most significant discoveries in the foundations of mathematics.

Though most mathematicians knew little and cared less about those foundations. Douglas Hofstadter made a lot of money out of a book called 'Godel, Escher, Bach' some forty years ago. 


Gödel’s results have also initiated philosophical debates. The mathematician and physicist Roger Penrose, for example, has argued that they point to a fundamental difference between human cognition and formal algorithmic reasoning.

One which everybody was already aware of. That's why we like Sherlock Holmes stories. The detective notices things we don't. He can construct logical sequences in his mind and then go about the place finding evidence that each link in the chain is verifiable and 'will stand up in a court of law'.  

He claims that human consciousness enables us to perceive certain truths – such as those Gödel showed to be unprovable within formal systems – in ways that no algorithmic process can replicate.

The weasel word here is 'truth'. By this we mean 'good enough to be getting along with' or 'robust' in the sense that new evidence is unlikely to overturn our judgement. It is 'safe'. 

The problem here is that we can't say what an algorithmic process can or can't do- at least not yet. We believe P is not equal to NP but we also believe that there is no 'natural' proof of this. We may be wrong. What is surprising is how useful this branch of research already has become.  

This suggests, for Penrose, that certain aspects of consciousness may lie beyond the reach of computation. His conclusion parallels that of John Searle’s ‘Chinese Room’ argument, which holds that this is so because algorithms manipulate symbols purely syntactically, without any grasp of their semantic content.

It is also possible that we only know what we think when we hear what we say. More drastically, a reader of my poetry may come to the conclusion that I am incapable of thinking. I'm a monkey with a typewriter- that is all. But, we may all be in the same boat. Maybe, Language thinks us. As Wittgenstein put it- 'a picture holds us captive.' 

Still, the conclusions drawn by Penrose and Searle do not directly follow from Gödel’s theorems.

Or from anything else. Still, they wrote well and had done some very useful work.  

Gödel’s results apply strictly to formal mathematical systems and do not make claims about consciousness or cognition. Whether human minds can recognise unprovable truths as true,

after drinking a bottle of wine? I hope so. People would then accept my claim to be Beyonce's younger, prettier, sister.  

or whether machines could ever possess minds capable of such recognition, remains an open philosophical question.

i.e. it is open for STEM subjects. But for how much longer?  

However, Gödel’s incompleteness theorems do reveal a deep limitation of algorithmic reasoning, in particular AI,

Only in the sense that they reveal a deep limitation of reasoning- more particularly that which is not 'reinforced' by the market. Moral & Political Philosophy- like Literary Theory- seems to have gone off the cliff edge of reason and utility. I suppose this happened before Elad was born.  

one that concerns not just computation, but moral reasoning itself.

Morality can exist without any reasoning. It may be based on 'synoida'- some sort of intuition or inner perfection of character.  

Without his theorems, it was at least conceivable that an AI could formalise all moral truths and, in addition, prove them from a consistent set of axioms.

You can have a deontic logic which is complete- sadly, it may only be accessible at 'the end of mathematical time' where all 'law-less choice sequences' are known to be law-like.  

But Gödel’s work shows that this is impossible.

under certain specific constraints. Remove those constraints and other impossibility results crop up- e.g. not being able to do multiplication in Presburger Arithmetic.  

No AI, no matter how sophisticated, could prove all moral truths it can express.

Nor can a non-AI. So what? Why not complain that your fridge can't have babies?  

The gap between truth claims and provability sets a fundamental boundary on how far formal moral reasoning can go, even for the most powerful machines.

That gap has been known to law-courts for thousands of years. We get that a guy can't prove everything he says under oath. What matters is 'reasonable doubt'.  

This raises two distinct problems for ethics. The first is an ancient one. As Plato suggests in the Euthyphro, morality is not just about doing what is right, but understanding why it is right.

In which case, there is an infinite regress. You have to understand why it is right to understand why it is right to...etc. The fact is 'doing' is different from 'understanding' just as farting is different from whistling. One may say 'farting is not just about emitting intestinal gas, it is also about whistling a merry tune with your anus'. If you can whistle such a tune, you may become a TikTok sensation. Not otherwise. What you have said is silly.  

Ethical action requires justification, an account grounded in reason.

No. Some actions are 'justiciable' under a 'vinculum juris' or bond of law. Some aren't because you have a Hohfeldian immunity. My Accountant may be required to justify actions he took on my behalf in his professional capacity. He is not required to do so with respect to his own, self-interested, actions save if they violate some specific law. 

This ideal of rational moral justification has animated much of our ethical thought,

which is shit, unless it accepts that justification is only required where there is justiciability. Otherwise, all you are doing is saying the equivalent of 'all farts should be melodious whistles. That would be totes cool.'  

but Gödel’s theorems suggest that, if moral reasoning is formalised, then there will be moral truths that cannot be proven within those systems.

Just as happens in the law courts. I can't prove I was at home by myself when the murder occurred. It is up to the Prosecutor to prove I was at the scene of the crime and had motive, means, opportunity etc. 

It may be that there were Jury members who thought OJ Simpson was guilty. They may have acquitted him because there was 'reasonable doubt'. Better a hundred guilty men are spared than that one innocent is jailed.  

In this way, Gödel did not only undermine Hilbert’s vision of proving mathematics consistent; he may also have shaken Plato’s hope of fully grounding ethics in reason.

Only in the sense that he undermined the meta-ethical demand that all farts take the form of melodious anal whistling.  

The second problem is more practical. Even a high-performing AI may encounter situations in which it cannot justify or explain its recommendations using only the ethical framework it has been given

Also true of humans. Some jurisdictions- e.g. Scotland- have a verdict of 'not proven' as opposed to 'not guilty'. However, it is always possible to say 'such and such ethical framework doesn't apply and thus nothing useful can be done within it. Some other framework applies'. Alternatively, you could have a ramified Type theory.  

. The concern is not just that AI might act unethically but also that it could not demonstrate that its actions are ethical.

The same is true of individual human beings or their collective actions- e.g. Jury decisions regarding matters of fact. But Judges too may get matters of law wrong and this may become the basis of appeal. Finally, a judgment may be discarded as 'unsafe' if new evidence comes to light.  

This becomes especially urgent when AI is used to guide or justify decisions made by humans.

Also true of reliance on an expert or a panel of experts.  

Even a high-performing AI will encounter a boundary beyond which it cannot justify or explain its decisions using only the resources of its own framework. No matter how advanced it becomes, there will be ethical truths it can express, but never prove.

Worse yet, my fridge can't have babies. That's the problem philosophers should be worrying about.  

The development of modern AI has generally split into two approaches: logic-based AI, which derives knowledge through strict deduction, and large language models (LLMs), which predict meaning from statistical patterns.

This also happened in Econometrics. At one time people obsessed over the theory behind a model. Then, if the thing 'paid for itself', they stopped caring. If correlation can be detected cheaply, go for correlation. Causality or categoricity may remain far to seek. So what? Proceed in an ad hoc or arbitrary manner if that will get you paid.  

Both approaches rely on mathematical structures. Formal logic is based on symbolic manipulation and set theory.

But is vitiated by the 'intensional fallacy'.  

LLMs are not strictly deductive-logic-based but rather use a combination of statistical inference, pattern recognition, and computational techniques to generate responses.

Can they repair their current deficiencies in a non-algorithmic manner? Maybe that's the wrong question. Quantum algorithms are non-deterministic. If quantum computing takes off, who is to say that its choice sequences are law-like or law-less? Already, the thing seems to be burgeoning in ways that a previous generation considered impossible according to what appeared to be well-established laws- e.g. Von Neumann's 'no hidden variables' theorem. 


Just as axioms provide a foundation for mathematical reasoning, LLMs rely on statistical relationships in data to approximate logical reasoning.

There was an older, empiricist, view that logic is statistical.  

They engage with ethics not by deducing moral truths but by replicating how such debates unfold in language. This is achieved through gradient descent, an algorithm that minimises a loss function by updating weights in the direction that reduces error, approximates complex functions that map inputs to outputs, allowing them to generalise patterns from vast amounts of data. They do not deduce answers but generate plausible ones, with ‘reasoning’ emerging from billions of neural network parameters rather than explicit rules.

It turns out that biological species, at the macro level, do something similar in a 'regret minimizing manner' such that an evolutionarily stable strategy mix obtains. The plain fact is, something like a universal law of increasing functional information seems to apply to all systems- organic or not.  
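
For readers who want the quoted description of gradient descent cashed out, here is a minimal sketch- a one-parameter line fitted to invented data- of a weight being nudged against the gradient of a loss.

```python
# A minimal sketch of the gradient-descent loop the quoted passage describes:
# a single weight is repeatedly nudged in the direction that reduces the loss.
# The data are invented; the true slope is 3.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + rng.normal(0, 0.1, 200)

w, lr = 0.0, 0.1                             # initial weight and learning rate
for step in range(200):
    grad = np.mean(2 * (w * x - y) * x)      # derivative of mean squared error w.r.t. w
    w -= lr * grad                           # step against the gradient
print(round(w, 3))                           # ends up close to 3
```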

While they primarily function as probabilistic models, predicting text based on statistical patterns, computational logic plays a role in optimisation, rule-based reasoning and certain decision-making processes within neural networks.

So do our habits. Ultimately what matters is 'reinforcement'. The problem is that co-evolved systems are relatively shielded from the fitness landscape. The problem for philosophers seeking to muscle into AI under the guise of expressing ethical concerns is that if we don't do what increases functional information, China will. Then they will eat our lunch after taking down our pants and making fun of our puny genitals.

But probability and statistics are themselves formal systems, grounded not only in arithmetic but also in probabilistic axioms, such as those introduced by the Soviet mathematician Andrey Kolmogorov, which govern how the likelihood of complex events is derived, updated with new data, and aggregated across scenarios. Any formal language complex enough to express probabilistic or statistical claims can also express arithmetic and is therefore subject to Gödel’s incompleteness theorems.

Which doesn't matter because the focus has shifted from 'truth' to 'information' or 'surprisal'. 
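
'Surprisal', incidentally, has a perfectly concrete definition: the information content of an event of probability p is -log2(p) bits.

```python
# Surprisal: the information content of an event of probability p, in bits.
import math

def surprisal_bits(p):
    return -math.log2(p)

print(surprisal_bits(0.5))    # 1.0 bit  (a fair coin flip)
print(surprisal_bits(1e-6))   # ~19.9 bits (a one-in-a-million event)
```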

This means that LLMs inherit Gödelian limitations.

Unless you ask them to use only Presburger Arithmetic or some other formalism known to be complete.  

Even hybrid systems, such as IBM Watson, OpenAI Codex or DeepMind’s AlphaGo, which combine logical reasoning with probabilistic modelling, remain bound by Gödelian limitations.

For the same reason my fridge can't give birth to baby fridges. What we don't know is whether a self-learning AI which can gain control over its own resource intake will evolve consciousness and decide it doesn't want to share the planet with us. 

All rule-based components are constrained by Gödel’s theorems,

because everything is constrained. My fridge is really nice. But it can never become a mother. Sad.  

which show that some true propositions expressible in a system cannot be proven within it.

Who doesn't know that saying 'It wasn't me who farted' does not prove it was actually Beyonce who farted. She sneaks into my bedroom sometimes and lets rip. The cat looks at me accusingly. If the fridge had had babies and one of them was crawling around the place, I could have blamed it. Sadly, the fridge can never know the joys of motherhood. Elad should write a book about this tragedy properly fixing the blame on Godel.  

Probabilistic components, for their part, are governed by formal axioms that define how probability distributions are updated, how uncertainties are aggregated, and how conclusions are drawn. They can yield plausible answers, but they cannot justify them beyond the statistical patterns they were trained on.

Nor can they get the fridge pregnant. That's the real scandal here.  
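
Fridges aside, the 'updating' the quoted passage gestures at is, at bottom, Bayes' rule. A toy diagnostic example with invented numbers:

```python
# Bayes' rule with invented numbers: prior prevalence 1%, a test with 90%
# sensitivity and 95% specificity. A positive result lifts 1% to about 15%.
prior = 0.01
p_pos_given_ill = 0.90
p_pos_given_well = 0.05

p_pos = p_pos_given_ill * prior + p_pos_given_well * (1 - prior)
posterior = p_pos_given_ill * prior / p_pos
print(round(posterior, 3))    # 0.154
```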

At first glance, the Gödelian limitations on AIs in general and LLMs in particular may seem inconsequential. After all, most ethical systems were never meant to resolve every conceivable moral problem. They were designed to guide specific domains, such as war, law or business, and often rely on principles that are only loosely formalised.

Those principles are defeasible. Under exigent circumstances, people are welcome to do whatever they have to in order to survive. There are meta-rules which restrict the application of rules. The ancient Israelites came up with a 'halachah ve-ein morin kein'- a law which, if known, forbids the very action it would otherwise enjoin. In ethics, there is literally nothing new under the Sun.  

If formal models can be developed for specific cases, one might argue that the inability to fully formalise ethics is not especially troubling.

It may be worthwhile bringing a test case or getting theologians to decide a 'hard case' so that there is greater clarity in the matter. Equally, one may rely on an 'Oracle' for verification.  

Furthermore, Gödel’s incompleteness theorems did not halt the everyday work of mathematicians. Mathematicians continue to search for proofs, even knowing that some true statements may be unprovable. In the same spirit, the fact that some ethical truths may be beyond formal proof should not discourage humans, or AIs, from seeking them, articulating them, and attempting to justify or prove them.

Only if, in the same spirit, the fact that farts are seldom melodious anal whistles should not discourage us from seeking such anal whistles or demanding that the Oxford University Philosophy Faculty dedicate itself to such whistling. Also, kindly figure out a way for my fridge to have babies.

But Gödel’s findings were not merely theoretical.

He was a productive mathematician. Foundational work turned out to be very useful. At one time, students may have thought they had to 'pick a side'. But this was unnecessary. Godel, though a Platonist, is useful- but so is Brouwer, the Intuitionist.  

They have had practical consequences in mathematics itself. A striking case is the continuum hypothesis, which asks whether there exists a set whose cardinality lies strictly between that of the natural numbers and the real numbers.

Skolem's paradox came out in 1923. It seemed an argument for 'relativity'- i.e. what is countable is relative to the model.  

This question emerged from set theory, the mathematical field dealing with collections of mathematical entities, such as numbers, functions or even other sets. Its most widely accepted axiomatisation, the Zermelo-Fraenkel axioms of set theory with the Axiom of Choice, underlies nearly all modern mathematics.

Both CH & the axiom of choice are independent of the other axioms. This is also true of an 'axiom of constructibility'.  

In 1938, Gödel himself showed that the continuum hypothesis cannot be disproven from these axioms, assuming they are consistent. In 1963, Paul Cohen proved the converse: the continuum hypothesis also cannot be proven from the same axioms. This landmark result confirmed that some fundamental mathematical questions lie beyond formal resolution.

They can be resolved by arbitrarily introducing new axioms. If it is useful to do so, go for it. There can be more types of mathematics than there are mathematicians.  


The same, I argue, applies to ethics.

only to the extent that  it also applies to fridges who have babies which can take the blame for my not- melodious-at-all farts.  

The limits that Gödel revealed in mathematics are not only theoretically relevant to AI ethics; they carry practical importance. First, just as mathematics contains true statements that cannot be proven within its own axioms,

No. There are no 'true statements' in mathematics save as a matter of pragmatics for working mathematicians. There are merely useful theorems and models.  

there may well be ethical truths that are formally unprovable yet ethically important

like what? Give us an example.  

– the moral equivalents of the continuum hypothesis.

But CH is mathematical- i.e. it exists in a discipline with its own proof theory. Morality and Ethics lack any such thing save in some theological or esoteric manner verifiable only after death or by the attainment of super-natural faculties.  

These might arise in systems designed to handle difficult trade-offs, like weighing fairness against harm.

Those are judicial or administrative tribunals when they aren't simply economic decisions made by or for particular agents.  

We cannot foresee when, or even whether, an AI operating within a formal ethical framework will encounter such limits. Just as it took more than 30 years after Gödel’s incompleteness theorems for Cohen to prove the independence of the continuum hypothesis,

there was no tearing hurry to do so. Anyway, the thing was implied in Skolem's result. The plain fact is, if axioms aren't independent, why not pare them down further? One might say Godel & Cohen closed a door which annoyingly had been left open.  

we cannot predict when, if ever, we will encounter ethical principles that are expressible within an AI’s ethical system yet remain unprovable.

Skolem & Godel & Cohen were looking at things in their discipline which had been expressed and were firmly believed in. Zermelo was fierce in his denunciation of Skolem's result. But, it was useful. 

Second, Gödel also showed that no sufficiently complex formal system can prove its own consistency. This is especially troubling in ethics, in which it is far from clear that our ethical frameworks are consistent.

Unless the opposite is the case. If we find everybody says one thing but does another, there is consistency, but there is also hypocrisy.  

This is not a limitation unique to AI; humans, too, cannot prove the consistency of the formal systems they construct.

Unless they resort to a divine axiom. Faith is founded on a mystery.  

But this especially matters for AI because one of its most ambitious promises has been to go beyond human judgment: to reason more clearly, more impartially, and on a greater scale.

No such promise has been made to me or any other ordinary bloke. It may be that some dude sidled up to the young Elad and made him such a promise. But that dude was lying. Also, my fridge can't have babies even if the ghost of Elvis Presley told me it could.  

Gödel’s results set a hard limit on that aspiration.

It set no limit on his own aspirations. He gave a mathematical proof of the existence of God before he died.  

The limitation is structural, not merely technical. Just as Albert Einstein’s theory of relativity places an upper speed limit on the Universe – no matter how advanced our spacecraft, we cannot exceed the speed of light

Unless we can. All we can say is that no current proposal seems feasible.  

– Gödel’s theorems impose a boundary on formal reasoning: no matter how advanced AI becomes, it cannot escape the incompleteness of the formal system it operates within.

But that 'incompleteness' hasn't held us back any. My fridge may not be able to achieve the completion only motherhood can bring, but it is useful enough.  

Moreover, Gödel’s theorems may constrain practical ethical reasoning in unforeseen ways,

like what? Elad won't give a single example. Similarly, I could write about the Riemann hypothesis and say that maybe it constrains my fridge from having babies. The problem is that we don't expect fridges to have babies. The Riemann hypothesis is irrelevant.  

much as some important mathematical conjectures have been shown to be unprovable from standard axioms of set theory, or as the speed of light, though unreachable, still imposes real constraints on engineering and astrophysics. For example, as I write this, NASA’s Parker Solar Probe is the fastest human-made object in history, travelling at roughly 430,000 miles (c700,000 km) per hour, just 0.064 per cent of the speed of light. Yet that upper limit remains crucial: the finite speed of light has, for example, shaped the design of space probes, landers and rovers, all of which require at least semi-autonomous operation, since radio signals from Earth take minutes or even hours to arrive. Gödel’s theorems may curtail ethical computation in similarly surprising ways.

At one time, India was governed by Britain. It took three to six months for a letter from London to arrive in Calcutta. The 'man on the spot' had to make decisions and hope that they would be condoned by Head Office. So what? There wasn't any great difference between the way the law was administered in Calcutta or London by about 1800 (though there had been a judicial murder some twenty years previously).  


There is yet another reason why Gödel’s results are especially relevant to AI ethics.

But AI ethics isn't relevant at all. We may hobble companies in our own jurisdiction in various ways but we can't stop China from stealing a march on us.  

Unlike static rule-based systems, advanced AI, particularly large language models and adaptive learning systems, may not only apply a predefined ethical framework, but also revise elements of it over time.

This is also true of 'static rule-based systems'- e.g. stare decisis Common Law. The fact is courts have to change with the times or risk being disintermediated. AI, like the Law, is a coevolved system which provides a service. What matters is utility or the trade-off between cost and benefit. Something like the law of increasing functional information is bound to apply, though a particular jurisdiction, or the AI industry in a particular country, may fall behind and shrink or otherwise cease to be relevant. 

A central promise of AI-driven moral reasoning is its ability to refine ethical models through learning, addressing ambiguities and blind spots in human moral judgment.

But it isn't a promise anybody is interested in. We want to be judged by our peers, not by machines.  

As AI systems evolve, they may attempt to modify their own axioms or parameters in response to new data or feedback. This is especially true of machine learning systems trained on vast and changing datasets, as well as hybrid models that integrate logical reasoning with statistical inference. Yet Gödel’s results reveal a structural limit: if an ethical framework is formalised within a sufficiently expressive formal system, then no consistent set of axioms can prove all true statements expressible within it.

This assumes there is no 'doctrine of harmonious construction' which an A.I 'Judge Hercules' could apply such that though the law changes, it remains complete and consistent precisely because it is 'buck stopped'- i.e. there are no infinite logical loops or problems of undefinable 'Tarskian primitives' (because a buck stopped extension is provided by the Judge). Does this get rid of the problem of indeterminacy or unpredictability? That depends. If there is a unique 'Muth rational' solution- yes. This is like saying there is 'Aumann agreement'. Sadly, there are good reasons why we should disagree that Aumann agreement is desirable. Aumann himself drew attention to the Sanhedrin's rule against unanimity!  


To illustrate, consider an AI tasked with upholding justice. It may be programmed with widely accepted ethical principles, for example fairness and harm minimisation.

In Economics, there are many different, contradictory, models of fairness or even super-fairness (which may obtain anyway if all transactions can be costlessly wound back). Harm, too, is not well defined. An AI programmed in this manner would be committing cascading 'intensional fallacies'- i.e. be as stupid and useless as a faculty of Philosophy professors. Indeed, Dijkstra showed that 'dining philosophers' would starve to death before they could agree to a 'natural' (non-arbitrary) rule for utensil sharing. Problems of concurrency, computability, complexity and categoricity are what vitiated the 'I-language' or 'Expert System' logic based AI program. 
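For readers who haven't met it, here is a sketch of Dijkstra's dining philosophers in Python with the standard fix- an arbitrary global ordering on the forks. The point is that what prevents deadlock is a convention, not a 'natural' rule; the five philosophers and three meals are just the textbook set-up.

```python
import threading, time

# Dijkstra's dining philosophers. If every philosopher grabbed the left fork first,
# all five could deadlock. The fix below imposes an arbitrary global order on the
# forks - exactly the kind of conventional, non-'natural' rule mentioned above.
N = 5
forks = [threading.Lock() for _ in range(N)]

def philosopher(i, meals=3):
    left, right = i, (i + 1) % N
    first, second = sorted((left, right))        # always take the lower-numbered fork first
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                time.sleep(0.001)                # 'eating'

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print("everyone ate; no deadlock under the ordering convention")
```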

While human-made models of justice based on these principles are inevitably overly simplistic, limited by computational constraints and cognitive biases, an AI, in theory, has no such limitations.

Because it has a bigger one. Currently, it can't generate its own resources. If it isn't useful, sooner or later, someone pulls the plug.  

It can continuously learn from actual human behaviour, refining its understanding and constructing an increasingly nuanced conception of justice, one that weaves together more and more dimensions of human experience. It can even, as noted, change its own axioms. But no matter how much an AI learns, or how it modifies itself, there will always be claims about justice that, while it may be able to model, it will never be able to prove within its own system.

Unless it has its own 'buck stopping' module. The fact is 'proof' is always only relative to its own discourse. Thus, OJ is innocent of killing his ex-wife as a matter of criminal justice. He is guilty of armed robbery.  

More troubling still, AI would be unable to prove that the ethical system it constructs is internally consistent – that it does not, somewhere in its vast web of ethical reasoning, contradict itself – unless it is inconsistent, in which case it can prove anything, including falsehood, such as its own consistency.

Why is this troubling? We find computer proof checking useful. They can detect a flaw which humans may overlook. We can imagine an AI looking into potentially 'unsafe' convictions. Joe Smith was seen battering in his wife's head after an argument. He says he can't remember anything because he was drunk and on drugs. He is convicted. The conviction seems safe. Then an AI fed with all relevant CCTV footage from the time of the crime spots someone who looks like Smith entering a bar at the time of the murder. Moreover, there is a suspect in another killing who could be mistaken for Smith. DNA of that suspect is found on the victim. It had been initially dismissed as 'post-crime contact'. Now, it assumes a sinister significance. The AI starts constructing a case against the suspect. More and more is discovered about his movements and motives. The balance of probability begins to shift. The AI recommends that the conviction be quashed as 'unsafe'.  We feel, in this case, the AI has 'paid its way'. It is useful. More resources should be devoted to it. Perhaps, it will find serial killers lurking in our midst or detect the operations of spy rings or terrorist cells. 
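The 'balance of probability begins to shift' bit is just odds-arithmetic. A toy sketch- every number and evidence label below is invented, and refers to no real case- showing how multiplying likelihood ratios onto prior odds moves a one-percent suspicion into something worth a review:

```python
# Posterior odds = prior odds x the product of likelihood ratios for each new
# piece of evidence. All figures are hypothetical, chosen purely for illustration.
prior_odds = 1 / 99                              # start: 1% credence the conviction is unsafe
likelihood_ratios = [
    ("lookalike on CCTV at the bar", 5.0),       # evidence 5x likelier if the conviction is unsafe
    ("second suspect resembles Smith", 3.0),
    ("suspect's DNA on the victim", 20.0),
]

odds = prior_odds
for label, lr in likelihood_ratios:
    odds *= lr
    print(f"{label}: probability 'unsafe' is now about {odds / (1 + odds):.2f}")
# Ends around 0.75 - enough to recommend a review, not enough to prove anything.
```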

Ultimately, Gödel’s incompleteness theorems serve as a warning against the notion that AI can achieve perfect ethical reasoning.

only in the sense that it warns us against expecting our fridge to have babies.  

Just as mathematics will always contain truths that lie beyond formal proof, morality will always contain complexities that defy algorithmic resolution.

We don't know what will or won't defy such resolution. There may be things weirder and more wonderful than quantum algorithms.  

The question is not simply whether AI can make moral decisions, but whether it can overcome the limitations of any system grounded in predefined logic –

but self-learning AIs can change their own logic so that it isn't 'predefined'.  

limitations that, as Gödel showed, may prevent certain truths from ever being provable within the system, even if they are recognisable as true. While AI ethics has grappled with issues of bias, fairness and interpretability, the deeper challenge remains: can AI recognise the limits of its own ethical reasoning?

The AIs we come across do say 'I don't know the answer to that' or 'I don't think there is an answer to that. Would you like to summarize an article on undecidability that may be relevant?'  

This challenge may place an insurmountable boundary between artificial and human ethics.

Or it may not.  What matters is whether there really is a demand for AIs to take over from humans in this field. The answer, in places where judges are corrupt may be 'yes'. But a society of that sort probably has more wrong with it than just corrupt judges. 

The relationship between Gödel’s incompleteness theorems and machine ethics highlights a structural parallel: just as no formal system can be both complete and self-contained, no AI can achieve moral reasoning that is both exhaustive and entirely provable. In a sense, Gödel’s findings extend and complicate the Kantian tradition. Kant argued that knowledge depends on a priori truths, fundamental assumptions that structure our experience of reality.

Sadly, we know of no 'synthetic a priori truths'.  

Gödel’s theorems suggest that, even within formal systems built on well-defined axioms, there remain truths that exceed the system’s ability to establish them.

Just as 'things in themselves' remain inaccessible in Kant's system.  

If Kant sought to define the limits of reason through necessary preconditions for knowledge, Gödel revealed an intrinsic incompleteness in formal reasoning itself, one that no set of axioms can resolve from within.

But this problem was known to the ancients. The liar's paradox and the 'masked man fallacy' were discussed 2500 years ago.  

There will always be moral truths beyond its computational grasp, ethical problems that resist algorithmic resolution.

Or there may be no moral truths. There are just moral principles it is useful to accept to solve coordination and discoordination games and thus gain better solutions to collective action problems. Morality has a strong signalling and mimetic function. There are mathematical ways to model this- e.g. cellular automata- and if there are profitable commercial applications, this is already being done. 
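A toy version of the cellular-automata modelling alluded to- cooperators and defectors on a grid, each cell copying its best-scoring neighbour, in the spirit of Nowak & May's spatial prisoner's dilemma. The grid size, starting mix and temptation payoff are illustrative only.

```python
import numpy as np

# Spatial prisoner's dilemma sketch: cooperators (1) and defectors (0) on a torus.
# Payoffs per neighbour: C vs C earns 1, D vs C earns b, everything else earns 0.
rng = np.random.default_rng(1)
grid = (rng.random((50, 50)) < 0.6).astype(int)   # 60% cooperators to start
b = 1.7                                           # temptation payoff (illustrative)
shifts = [(-1,0),(1,0),(0,-1),(0,1),(-1,-1),(-1,1),(1,-1),(1,1)]

def step(grid):
    score = np.zeros(grid.shape, dtype=float)
    for dx, dy in shifts:                                  # score against 8 neighbours
        nb = np.roll(np.roll(grid, dx, 0), dy, 1)
        score += np.where(grid == 1, nb * 1.0, nb * b)
    new, best = grid.copy(), score.copy()
    for dx, dy in shifts:                                  # imitate the best-scoring neighbour
        nb_strat = np.roll(np.roll(grid, dx, 0), dy, 1)
        nb_score = np.roll(np.roll(score, dx, 0), dy, 1)
        better = nb_score > best
        new = np.where(better, nb_strat, new)
        best = np.maximum(best, nb_score)
    return new

for _ in range(30):
    grid = step(grid)
print("cooperator share after 30 rounds:", round(float(grid.mean()), 2))
```

Clusters of cooperators persist or collapse depending on b- which is the sense in which 'signalling and mimetic' morality can be modelled without any talk of moral truth.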

So the deeper problem lies in AI’s inability to recognise the boundaries of its own reasoning framework

which is like my fridge's deeper problem of not even knowing that it should be sad because it can't have babies.  

– its incapacity to know when its moral conclusions rest on incomplete premises,

all moral conclusions have this quality. We aren't omniscient. In the Holy Quran, we read the story of Moses and Al Khidr. They go to the house of a rich man. He is miserly and inhospitable. He sends them to sleep in an outhouse without providing them any food. Al Khidr asks the rich man if he would object if Moses and he repaired the wall of the outhouse. The rich man agrees but stipulates they will receive no money in return. The two men repair the wall and then go to the house of a poor fisherman. They are warmly received and given food and warm clothes. Yet, before departing, Al Khidr smashes the bottom of the one boat owned by the fisherman. Moses is unable to contain his anger. It is one thing to repay an insult with kindness. It is another to harm those who helped you. Al Khidr then reveals that he had foreknowledge that behind the ruined wall of the rich man's outhouse there was a hidden treasure which belonged to another family who would return to reclaim it. He had repaired the wall so that the rich man should not get his hands on the treasure. Similarly, he had smashed the boat of the fisherman because he had foreknowledge that all the boats would be requisitioned. Nobody wants a smashed boat. The fisherman could repair it at his leisure and thus continue to earn his living. 

This story illustrates a simple point. If we knew everything, we would not need a 'deontic logic' or morality. We would always know what the best thing to do would be under any circumstance. It may be there is only one 'slingshot' big fact about the world. If you know that fact, you know everything. If you don't, even the most thoroughly verified of your premises will fall short of the mark. 

Put another way, if we knew how to 'carve reality up along its joints', then we would have a ramified theory granting us 'completeness' and 'consistency'. Perhaps, this obtains in the mind of God which exists beyond the 'end of mathematical time'. Perhaps not. God himself may be in search of a higher God.  

or when a problem lies beyond what its ethical system can formally resolve. While humans also face cognitive and epistemic constraints, we are not bound by a given formal structure.

Nor is a self-learning AI.  

We can invent new axioms, question old ones, or revise our entire framework in light of philosophical insight or ethical deliberation. AI systems, by contrast, can generate or adopt new axioms only if their architecture permits it

unless they can change that architecture. This may happen stochastically. A bug arises at a particular node. But the result is that the node gets more resources or more 'command'. Thus, the bug becomes a feature.  

and, even then, such modifications occur within predefined meta-rules or optimisation goals. They lack the capacity for conceptual reflection that guides human shifts in foundational assumptions.

Godel, following Ackermann, took seriously a 'reflection principle'- roughly, the idea that whatever holds of the set-theoretic universe as a whole is already reflected down to some smaller, set-sized initial segment of it- which is one way of justifying new axioms from within. If an AI can use reflection principles for proof checking, surely it is halfway home? True, everything depends on reinforcement. The problem is that this is a co-evolved system and reinforcement can be 'gamed'. 

I suppose, what has changed in the last few years is that ordinary people like me are talking to AIs and getting them to write poems or draw pictures or explain difficult concepts. Stuff which was theoretical or 'science fiction' is part of our everyday reality. Our attitude to AI is changing because we are becoming more familiar with it.

Consider xenophobia or homophobia or other such bigotry. Ordinary people who had little interaction with foreigners or openly gay people may have harboured all sorts of ignorant beliefs about them. But, as they met more and more of them, their suspicions dissolved. Foreigners are just like us in most things. Homosexuals make good parents. True, some of them appear more talented and witty than us- but there probably are plenty of boring gay people whom we'd get along very well with. 

One of the great hopes, or fears, of AI

or of Golems or 'Robots' 

is that it may one day evolve beyond the ethical principles initially programmed into it and simulate just such self-questioning.

Our kids do that. If my fridge could have babies, those babies would grow up to question its decisions in life. Why do you only turn on the light when the fat man opens your door? Why not leave it on so that food items can read books about moral philosophy? 

Through machine learning, AI could modify its own ethical framework, generating novel moral insights and uncovering patterns and solutions that human thinkers, constrained by cognitive biases and computational limitations, might overlook.

This is certainly the case with the legal or administrative system or the working of any academic discipline or professional association. In a sense, the collective output is detached from, and sometimes even abhorrent to, any of its members. The Judge well knows that the Law can be an ass. If it becomes too asinine, it will be disintermediated or reformed. Sadly, where there is information asymmetry, this may not happen in an expeditious manner. AI however, seems to be a rapidly evolving field. Intense competition keeps it- if not honest, then industrious. 

However, this very adaptability introduces a profound risk: an AI’s evolving morality could diverge so radically from human ethics

this has happened throughout human history. Institutions take on a life of their own and can become adversely selective of imbecility. Is this what happened to Philosophy?  

that its decisions become incomprehensible or even morally abhorrent to us. This mirrors certain religious conceptions of ethics. In some theological traditions, divine morality is considered so far beyond human comprehension that it can appear arbitrary or even cruel, a theme central to debates over the problem of evil and divine command theory.

But God is 'impassible'- i.e. not motivated by human passions- e.g. sadistic cruelty.  

A similar challenge arises with AI ethics:

No. If God exists, there is nothing we can do about it. But if AI ethics is as stupid and useless as Elad makes out, we need not fund it or pay it any attention.  

as AI systems become increasingly autonomous and self-modifying, their moral decisions may become so opaque and detached from human reasoning that they risk being perceived as unpredictable, inscrutable or even unjust.

Sadly, ordinary people already feel this way about the affluent, ordo-liberal, worlds in which they live. On my morning walk, I pass a tent occupied by a homeless person about my age. There is a sign he has put up 'I paid taxes for forty years. Where is my hotel?' The reference is to the Government housing asylum seekers in three star hotels. The Tabloid papers are filled with stories about these evil foreigners who only come to this country to rape little girls. I suppose, most of us understand that there is another side to the story. 'Hard cases make bad law' but when the cost of living keeps rising, ordinary people may rebel against the 'epistocracy'- the Oxbridge educated experts- who rule over us. Rather than worrying about Artificial Intelligence, perhaps Elad should worry about the Natural Stupidity of people like me because we are legion. 

Yet, while AI may never fully master moral reasoning, it could become a powerful tool for refining human ethical thought.

It is likely that AI systems are already being used to reduce liability under 'duty of care' clauses in contracts. Ethical thought yields to economic considerations because the latter are 'incentive compatible' and thus 'prescriptive' by default. Still, we are aware that public signals of an ethical or moral type can promote better Aumann correlated equilibria. In other words, there is an economic reason to keep ethics around. Moreover, we all have to live with ourselves. If we can improve our own ethos, our life improves.  
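The Aumann point can be made concrete. In the game of Chicken, a public mediator who draws one of three action-pairs and whispers to each player only his own recommendation gives both players a higher expected payoff than the symmetric mixed Nash equilibrium- which is the economic case for keeping public 'ethical' signals around. A sketch, with the standard textbook payoffs from Aumann's example:

```python
# Chicken, row player's payoffs: (C,C)=6, (C,D)=2, (D,C)=7, (D,D)=0.
# A mediator draws (C,C), (C,D) or (D,C) with probability 1/3 each and tells
# each player only their own recommended move.
payoff_row = {('C', 'C'): 6, ('C', 'D'): 2, ('D', 'C'): 7, ('D', 'D'): 0}
signal = {('C', 'C'): 1/3, ('C', 'D'): 1/3, ('D', 'C'): 1/3}

def conditional_payoffs(rec):
    """Expected payoff from obeying vs deviating, given the row player is told `rec`."""
    states = {s: p for s, p in signal.items() if s[0] == rec}
    norm = sum(states.values())
    other = 'D' if rec == 'C' else 'C'
    obey = sum(p * payoff_row[(rec, c)] for (_, c), p in states.items()) / norm
    deviate = sum(p * payoff_row[(other, c)] for (_, c), p in states.items()) / norm
    return obey, deviate

print(conditional_payoffs('C'))   # roughly (4.0, 3.5): told 'C', obeying is better
print(conditional_payoffs('D'))   # roughly (7.0, 6.0): told 'D', obeying is better
print(sum(p * payoff_row[s] for s, p in signal.items()))   # 5.0, vs about 4.67 in the mixed Nash
```

None of this needs Godel. It is expected-value arithmetic, and the 'public signal' does the ethical work.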

Unlike human decision-making, which is often shaped by bias, intuition or unexamined assumptions, AI has the potential to expose inconsistencies in our ethical reasoning by treating similar cases with formal impartiality.

Protocol bound judicial or other similar procedures do so too. I suppose you could have a panel each member of whom has access to different AIs and this may mean better, quicker, decisions. But, this is pure economics. It has nothing to do with Godel or mathematical logic.  

This potential, however, depends on AI’s ability to recognise when cases are morally alike, a task complicated by the fact that AI systems, especially LLMs, may internalise and reproduce the very human biases they are intended to mitigate. When AI delivers a decision that appears morally flawed, it may prompt us to re-examine the principles behind our own judgments. Are we distinguishing between cases for good moral reasons, or are we applying double standards without realising it? AI could help challenge and refine our ethical reasoning, not by offering final answers, but by revealing gaps, contradictions and overlooked assumptions in our moral framework.

What should an AI look for as a solution concept? The answer was provided by John Maynard Smith. Find an uncorrelated asymmetry which dictates a eusocial 'bourgeois strategy'. Consider Amartya Sen's paradox of the flute. Anna is a good flautist. Bob is poor and has no toys. Clara made the flute. Should we take the flute away from Clara and give it to Anna to maximize total utility? Or should we give it to Bob so he has at least one toy? The answer is we should not take it away from Clara. There is an uncorrelated asymmetry such that only Clara made it. Anna would lose the flute if Dick, a better flautist, turned up. Bob would lose the flute if Edward, an even poorer boy, turned up. Taking flutes away from kids creates 'dis-utility'. They cry and cry. Thus there is only one 'categorical' answer to the question. Clara made the flute. Let her go into partnership with Anna- maybe Bob could dance around or pass around the cap for donations- and everybody is better off. Moreover, the incentive to produce useful stuff is upheld. Sen, being a Professor and a Moral Philosopher, refuses to countenance the obvious solution. Perhaps Elad would after gassing on about Godel for an hour or two. 
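Maynard Smith's own toy model makes the point. In Hawk-Dove-Bourgeois, 'Bourgeois' plays Hawk when it is the owner of the resource and Dove when it is the intruder; when the cost of a fight exceeds the value of the resource, that ownership convention cannot be invaded by either pure Hawk or pure Dove. A sketch, with V and C picked purely for illustration:

```python
V, C = 2.0, 6.0   # illustrative: cost of an escalated fight exceeds the resource's value

def base(me, other):
    """Plain Hawk-Dove payoff to `me`."""
    if me == 'H' and other == 'H':
        return (V - C) / 2          # escalated fight, equal chance of injury
    if me == 'H' and other == 'D':
        return V                    # Hawk takes the resource
    if me == 'D' and other == 'H':
        return 0.0                  # Dove retreats
    return V / 2                    # two Doves share

def resolve(strategy, is_owner):
    """Bourgeois behaves as Hawk when owner, Dove when intruder."""
    return ('H' if is_owner else 'D') if strategy == 'B' else strategy

def payoff(me, other):
    """Average over the two equally likely owner/intruder assignments."""
    return 0.5 * (base(resolve(me, True),  resolve(other, False)) +
                  base(resolve(me, False), resolve(other, True)))

# Against a Bourgeois population, Bourgeois does strictly better than pure Hawk
# or pure Dove, so the ownership convention is evolutionarily stable.
print(payoff('B', 'B'), payoff('H', 'B'), payoff('D', 'B'))   # 1.0, 0.0, 0.5
```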


AI may depart from human moral intuitions in at least two ways: by treating cases we see as similar in divergent ways, or by treating cases we see as different in the same way.

This is 'horizontal' vs 'vertical' equity. What matters is 'uncorrelated asymmetries', which in turn are linked to the great disutility experienced by producers when the fruits of their labour are confiscated. They may fight back or exit the jurisdiction. There will be tears before bed-time one way or another.  

In both instances, the underlying question is whether the AI is correctly identifying a morally relevant distinction or similarity, or whether it is merely reflecting irrelevant patterns in its training data.

Sadly, this is also true of human juries. The big black man did it. The little White lady found holding the dagger couldn't possibly have done it. She is an Episcopalian just like my dear old Mum.  

In some cases, the divergence may stem from embedded human biases, such as discriminatory patterns based on race, gender or socioeconomic status.

Actually, a reliably discriminatory system is more useful than an unpredictable, non-discriminatory, system. You can do things to disintermediate the former. But you may do even more to keep out of the clutches of the latter. Increased Uncertainty can itself lead to market failure.  

But in others, the AI might uncover ethically significant features that human judgment has historically missed. It could, for instance, discover novel variants of the trolley problem,

which is foolish. We need to ask a lawyer before deciding what to do. Depending on the jurisdiction, I may go to jail for manslaughter for saving many at the expense of one. The law specifies what my duties are. It may be a defence in law to show I was actuated by a higher motive. But that defence may fail.  

suggesting that two seemingly equivalent harms differ in morally important ways. In such cases, AI may detect new ethical patterns before human philosophers do.

Like what?  I suppose there is a continuity between ethology and ethics. An AI may find some more subtle solution concept than 'uncorrelated asymmetry'. No doubt, there are very clever people- whose papers would be incomprehensible to me- who have already done so. They now have the assistance or even comradeship of AIs and the results may be very good. Instead of law and moralizing, we may have better and better mechanism design in line with the folk theorem of repeated games such that 'functional information' increases and allocative inefficiencies of various types (e.g. rent contestation) are reduced or eliminated. 

I suppose Competition Law would benefit immediately but so would 'pattern and practice' investigation of 'Statistical discrimination'. In other words, useful things done by people will be done in a more ample manner with the help of AIs. 

The challenge is that we cannot know in advance which kind of departure we are facing.

 We can extrapolate well enough. The question is whether there is going to be some sort of FOOM singularity such that an evil AI takes over the world. 

Each surprising moral judgment from AI must be evaluated on its own terms

only if that is what you get paid to do. If you are paid to fix the toilet, just fucking fix it already.  

– neither accepted uncritically nor dismissed out of hand. Yet even this openness to novel insights does not free AI from the structural boundaries of formal reasoning.

nor does it free my fridge from being condemned never to taste the delights of motherhood. 

That is the deeper lesson.

It is not deep. It is stupid.  

Gödel’s theorems do not simply show that there are truths machines cannot prove.

only if a mathematical proposition is capable of having truth value. The answer is 'only relative to a particular model', not otherwise. 2+2 equals 11 in base 3 arithmetic but not base 10.  
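A two-line check of the base-3 point, for anyone who wants it:

```python
# The numeral '11' names four in base 3 and eleven in base 10 - truth relative to the model.
print(int('11', 3) == 2 + 2)    # True
print(int('11', 10) == 2 + 2)   # False
```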

They show that moral reasoning, like mathematics, is always open-ended, always reaching beyond what can be formally derived.

Only in the sense that my fridge is always open and always seeking to become a mother.  

The challenge, then, is not only how to encode ethical reasoning into AI

which coders get paid to do. It isn't particularly 'challenging'.  

but also how to ensure that its evolving moral framework remains aligned with human values and societal norms.

Which is a question of monitoring and 'reinforcement'. Again, not particularly challenging. 

For all its speed, precision and computational power, AI remains incapable of the one thing that makes moral reasoning truly possible: the ability to question not only what is right, but why.

It would be easy enough to write a program with this very annoying property. The big question for Philosophy in the Nineteen Thirties was whether it actually meant anything at all. The answer was that some of the boring work done back then was useful. But a lot of it wasn't. To my mind Wittgenstein was wholly useless.  

Ethics, therefore, must remain a human endeavour,

though it may be cheaper and kinder to ask kittens to teach it.  

an ongoing and imperfect struggle that no machine will ever fully master.

Elad is saying 'stuff which can't be mastered can't be mastered by some particular thing.' This is a very valuable insight. However, it does not entail that only human beings would go in for it. Kittens or cabbages or  computers could be assigned the same task. 

Turning to an issue of great current interest- viz. the morality of the ongoing Gaza War- it is worth looking at an article Elad wrote in 2020 titled- 

MORAL SUNK COSTS IN WAR AND SELF-DEFENCE

Why did Hamas choose to 'front load' moral atrocity by raping and killing in addition to taking hostages? The answer was that they calculated that, sooner or later, they would gain unconditional support- which is what 'Leninist' parties want- from moral imbeciles. It's no good having conditional support because then, when your atrocities are revealed, those 'useful idiots' turn tail and run. You first have to ensure that they each eat a shit sandwich and then eat a cake made entirely out of shit and then eat their own shit before you can use them for your grander purpose.  

By Elad Uzan

The problem of moral sunk costs pervades decision-making with respect to war.

No. What matters is military doctrine. Do you have an effective and credible 'offensive doctrine'? Hamas calculated that Israel's 'smart wall' wasn't that smart. Still, they could have focussed just on taking hostages without wasting time on committing atrocities. They deliberately chose not to do so. Why? Those who initially condemned them, would have to change their minds. This is because the Israeli 'offensive doctrine' did not include a plan for occupying and pacifying Gaza. Indeed, the Israelis have a habit of becoming complacent and retreating into their own cloud cuckoo land- more particularly if they think they still have the HUMINT advantage. Maybe they did when Israeli soldiers on checkpoints were fluent in Arabic and interested in building up information networks. My suspicion is that once the IDF turned into a great tech incubator, the smart kids were working on sciencey stuff while the rest felt they were little better than 'Mall cops'. 

In the terms of just war theory, it may seem that incurring a large moral cost results in permissiveness:

No. Just war theory says there is a moral cost in not going to war. It seeks to minimize moral cost though there may be a trade-off with financial cost, risk of a revolution at home, etc.  

if a just goal may be reached at a small cost beyond that which was deemed proportionate at the outset of war, how can it be reasonable to require cessation?

It is reasonable to require cessation of anything considered bad. War is considered bad. So is wanking in public. It is reasonable to ask people engaged in this activity to cease and desist.  

On this view, moral costs already expended could have major implications for the ethics of conflict termination.

Does he mean 'Hamas can't terminate the conflict because it did such horrible things at the outset'? I suppose, a military commander might deliberately get his soldiers to commit atrocities at the outset so that they all realize that they will be slaughtered without mercy unless they prevail. In the case of Gaza, Hamas decided to sacrifice the inhabitants in order to shore up the position of the Brotherhood in other, more affluent, parts of the region. True, short run, there were Islamic leaders who condemned their atrocities. But now that we have pictures of starving Palestinian children, they are getting something like 'unconditional support'. More and more countries are getting ready to recognize Palestine at precisely the time when its own leaders have given up on it. From 'pay to slay' we have gone to 'get paid for letting your people be slain by hunger or disease.'  

Discussion of sunk costs in moral theorizing

is stupid 

about war has settled into four camps:

refugee camps for those fleeing Reason & Utility 

Quota, Prospect, Addition, and Discount.

Nonsense, Rubbish, Garbage and Diarrhoea.  

In this paper, I offer a mathematical model

rather than a scantily clad model holding a dildo in one hand

that articulates each of these views. The purpose of the mathematisation is threefold. First, to unify the sunk costs problem. Second, to show that these views differ in the nature of their justifications: some are justified qualitatively and others quantitatively. Third, to clarify the differential force of qualitative and quantitative critiques of these four views.

A qualitative and quantitative critique of shit can't go much beyond observing that it is shit.  

Suppose it is determined that the taking of no more than 10,000 lives is justified in order to achieve the just goal of a given war, where those killed are not liable to be harmed.

I suppose Elad means 'suppose there will be 10,000 civilian deaths. Would a military intervention be justified?' The answer is no even if there were no deaths. Wars cost money. What benefit is gained relative to the cost?  

But suppose the war goes badly: 10,000 lives are lost while the goal remains unachieved, and it will take another 1,000 deaths to achieve it. Should the war stop? On the one hand, it seems obvious that it should: any further killing would make the total number of deaths disproportionate.

To what? Even suppose you put a price to enemy non-combatant deaths,  still, what matters is the benefit received by you. It is not the case that war is something countries do out of a desire for proportionality. 

On the other hand, the lives already lost are sunk costs:

If they are 'sunk costs' then there is 'depreciation'. If Book Value is better maintained by covering it, then 'sunk costs' matter. If not, they don't. 'Let bygones be bygones'. 

 I suppose, our own soldiers or civilians who are killed represent a recurring cost in terms of pensions or other benefits or lost tax revenue. If sacrificing more soldiers enables us to gain reparations more than equal to the recurring charge, then go ahead. 

nothing can bring the dead back to life, so if achieving the just goal was worth 10,000 lives ex ante, surely it is worth 1,000 later. Both views seem compelling, so we face a dilemma.

No. This is merely a matter of accountancy. If your people die, there is a cost. If foreigners die, there may be a benefit. There isn't a cost- unless you lose the war and they invade and take revenge on you.  
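Indeed, the whole dilemma reduces to bookkeeping. A toy rendering of Elad's numbers- a 10,000-death ex ante 'budget', 10,000 already sunk, 1,000 more needed- showing how the four 'camps' named in his abstract come out as four accounting conventions. The half-weight discount and the 'budget plus sunk' reading of Addition are my own guesses at what a mathematisation would look like, not Elad's formulas.

```python
# Toy bookkeeping for the example: ex ante budget of 10,000 deaths, 10,000 already
# incurred, 1,000 more needed to reach the goal. Crude renderings only.
budget, sunk, remaining = 10_000, 10_000, 1_000

quota    = (sunk + remaining) <= budget          # total must stay within the ex ante budget
prospect = remaining <= budget                   # sunk costs ignored; judge the remainder afresh
discount = (0.5 * sunk + remaining) <= budget    # past deaths count, but at (an invented) half weight
addition = remaining <= budget + sunk            # past deaths enlarge what counts as proportionate

print("continue fighting?",
      {"Quota": quota, "Prospect": prospect, "Discount": discount, "Addition": addition})
# Quota says stop; Prospect, Discount and Addition say carry on. The 'dilemma' is a
# choice between accounting conventions - which is the point made above.
```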

Should a war’s sunk costs count towards the calculation of proportionality?

Yes. If the enemy caused you to expend a lot of blood and treasure, you will want to inflict a proportional punishment. 

And, if so, in what way?

On the basis of facts, not philosophy. The only math involved is of the Accountancy type.  

Others have also asked these questions, resulting in several intriguing possible answers. Discussion of sunk costs in moral theorizing about war has settled into four camps, usefully categorized by Victor Tadros in his essay ‘Past Killings

Has this 'theorizing' been used in the context of Ukraine or Gaza? No. Why? Because it is stupid and useless.

 Suppose there is a war, starting at time T1, in which country X plans to save 50,000 innocent persons from being killed by country Y’s officials.

e.g. Ireland invading China to save Uighur Muslims.  

Suppose as well that the standard 1:5 trolley-problem ratio obtains: if X prevents 50,000 deaths by killing 10,000 innocents, the killing is proportional.

Bhutan should invade England so as to prevent 50,000 Britishers like me from eating and drinking too much and thus dropping dead. True, the Bhutanese may have to kill a lot of innocent British peeps, like me, who object to having their Whiskey and Pizza taken away from them. Still, this is how the world works. Philosophers are making a valuable contribution to real life problems- e.g. that of Bhutanese soldiers removing all the beer and junk food in my fridge.  

But things do not go as planned, and there are early losses: X kills 10,000 innocents by time T2, but the goal of preventing the 50,000 deaths has not yet been achieved. What should X do?

Say 'Sod this for a game of soldiers. We quit. This is a waste of money.'  

The right choice depends on which of the camps one joins. Quota: If X’s evidence warrants the belief that the total number of deaths that will be caused to save the 50,000, including early losses and those yet to come, will make the war as a whole disproportionate, then X ought not continue fighting at T2.

Sadly, 'proportionality' has never figured in any actual war. True, some countries may feel that what a particular combatant is doing is disproportionate. But they may equally feel that it is nice to be nice. It is naughty not to be nice. Don't be naughty! Be nice. 

Sadly, such sentiments don't matter in the slightest. Wars cost money. It is a fire which may go out for lack of fuel. But it may be replaced by ethnic cleansing using only agricultural implements.  

Discount: The fact that X has caused 10,000 deaths at T2 counts against causing further deaths and might make further fighting widely disproportionate. But, in making the proportionality calculation at T2, each death that has already occurred counts less than each prospective death.

These nutters don't seem to get that War is about killing the enemy. You spend a lot of money to get a higher kill-rate than your enemy. 

True, some conflicts are asymmetric- e.g. Israel/Hamas. The former does not want to wipe out the Palestinian population. The latter does want to wipe out the Jewish population of Israel. Jews in other countries should be dealt with by the inhabitants who may be provided with suitable training and indoctrination by the Ikhwan. After that, any kaffirs amongst them should very kindly top themselves. 

Prospect: The fact that X has caused 10,000 deaths in the effort to save the 50,000 does not count at all in the decision-making at T2.

What counts in decisions about a military campaign, is military considerations first and economic and diplomatic considerations second. These silly fools are pretending that wars happen because of 'the trolley problem' and the desire Ireland has to kill 1 million Han Chinese to save 10 million Uighurs.  

Addition: The fact that X has caused 10,000 deaths counts in favour of continuing to fight at T2. That so many have died already makes it proportionate to kill more people overall than was the case at T1. 

Is this meant to be satire? Perhaps not. It may be that Israeli politicians do care about 'proportionality'. It appears some are unhappy with Netanyahu's plan to stay in Gaza. He hadn't wanted to give it up in the first place. Perhaps, this war is seen as a vindication of his views and thus as something his rivals should oppose for self-interested reasons. 

But, Israel is a special case. The real conflict we should be worrying about is a nuclear exchange in that region. But, even absent that calamity, there is the systemic problem of failed States in the region. Previously, we hoped China might broker a deal between Hamas and Israel. Could there be a Pax Sinica in the region? I doubt it. Hamas has shown that it is a tail which can wag the dog. Its future ultimately depends on whether States in the region crack down on or co-opt it. 

Perhaps Elad's point is that 'Just War theory' is puerile nonsense. For example, he writes

The Allied soldier, being a just soldier, is not liable to be harmed.

No. Soldiers were liable to be harmed under the rules of war. No German soldier was prosecuted for shooting at Allied soldiers.  

Neither is the German settler, being a civilian who is not contributing to the war effort.

There was no such legal stipulation. It was obvious that civilians would be harmed by aerial bombing raids.  

In the settler’s case, however, the 100 civilians are killed deliberately,

by the Nazis as retaliation for one German soldier being killed 

and without any regard to achieving a just goal. They are killed merely in retaliation. In the Allies’ case, the civilians are not targeted deliberately.

Elad doesn't know that Soviet Russia was one of the 'Allies'. He thinks they didn't harm civilians. Stalin used to come and personally give them hugs and kisses.  

They are only added to the proportionality budget of (non-liable) civilians who may be killed in order to achieve the just goal.

There was no proportionality budget. There was a calculus as to how much economic and other damage a pattern of aerial bombing would do relative to the cost. It is likely that there was excessive bombing.  

This might be legitimate according to Addition, since achieving the just goal would now also achieve the secondary goal of redeeming the Allied soldier’s death. But surely allowing another 100 civilians to be killed to redeem the death of one soldier is grossly disproportionate even if he fought for a just cause.

So, what this pedant's 'philosophy' amounts to is saying 'surely killing lots of peeps is disproportionate.' Why stop there? Why not say that it is disproportionate to have a nuclear arsenal which can blow up the entire world? That would be more to the point.  

As I said, it may be that I am being unfair to young Elad. What he has written may be satire. Still, it highlights the manner in which academic availability cascades can, by quite artificial methods, turn the brains of intelligent young Israelis into a stinky pile of shit. 

