Saturday 18 November 2023

Is Tim Williamson stupid?

Tim Williamson has an article in Aeon titled 'patterns of reality'.  Sadly, reality is arbitrary. Nothing has a 'sufficient' or 'necessary' reason. Nor does reason. It is true that we have to make arbitrary guesses so as to get on with doing useful things. But logic has no magical property such that those guesses become less arbitrary or that some grand pattern is revealed.

Consider the following:

Maria is either at home or in the office. She’s not at home. Where is she?

We don't know.  We may guess she is in the office. But we don't know she is in the office till we actually find her there. 

You might wonder why I started with such an unpuzzling puzzle. But in solving it, you already used logic.

There is no solution. We simply don't know.  

You reasoned correctly from the premises ‘Maria is either at home or in the office’ and ‘She’s not at home’ to the conclusion ‘Maria is in the office.’

No. We knew that Maria could be on her way between the home and the office. We didn't know she was in the office. It was merely a place she might be.  

That might not seem like a big deal, but someone who couldn’t make that move would be in trouble.

A person who did 'make that move' would be in deeper trouble. He would be literally mad. It is obvious that a person who is normally in either one place or the other must also occasionally be on the way between the two places.  

We need logic to put together different pieces of information, sometimes from different sources, and to extract their consequences.

No. We need a Structural Causal Model which can make predictions and help us achieve our goals.  
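To make the contrast concrete, here is a toy Python sketch of the sort of model meant- a very stripped-down stand-in for a Structural Causal Model, with all names, weights and the seeded generator invented for illustration. It predicts where Maria probably is, given the hour, instead of deducing where she 'must' be:

```python
import random

# Toy predictive model (illustrative only; variable names and weights invented).
# Maria's location depends on the hour and unobserved noise, so "home or office"
# is a prediction with residual uncertainty, never a logical certainty.
def location(hour, rng):
    if 9 <= hour < 17:
        # During work hours she is usually at the office,
        # but she may be in transit or somewhere else entirely.
        return rng.choices(["office", "commuting", "elsewhere"],
                           weights=[0.8, 0.15, 0.05])[0]
    return rng.choices(["home", "commuting", "elsewhere"],
                       weights=[0.85, 0.1, 0.05])[0]

rng = random.Random(0)
samples = [location(11, rng) for _ in range(10_000)]
p_office = samples.count("office") / len(samples)
print(f"P(office | 11am) is roughly {p_office:.2f}")
```

The model assigns the office a probability of about 0.8 at 11am and leaves honest room for her being on the train- which is precisely what the disjunctive syllogism refuses to do.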

By linking together many small steps of logical reasoning, we can solve much harder problems, as in mathematics.

No. We link together pieces of math to solve math problems. But our solutions may be fatally flawed because we don't fully understand how those pieces fit together or what further assumptions we are making to get them to fit together. Ultimately, it is verification which matters. Logic by itself can't get you to any substantive solution. Either the thing works and is useful- i.e. we can verify that the thing 'pays for itself'- or else you have made an error somewhere along the line. 


Another angle on logic is that it’s about inconsistency. Imagine someone making all three statements ‘Maria is either at home or in the office’, ‘She’s not at home’, and ‘She’s not in the office’ (about the same person at the same time). Those statements are jointly inconsistent; they can’t all be true together.

Truth does not matter. Informativity does. 'Maria is where Maria is' isn't informative. Who cares if it is true or not?  

Any two of them can be true, but they exclude the third. When we spot an inconsistency in what someone is saying, we tend to stop believing them.

No. We stop believing them when they appear deceptive or untrustworthy or up to no good. Their statements can still be informative. But, lies too can be informative.  

Logic is crucial for our ability to detect inconsistency,

No. Everything is always inconsistent or else it is not informative. To test for 'compossibility', not consistency, asking unexpected questions is the way to go. In the case of a piece of mathematics or philosophy or some other type of reasoned argument, adopt a different perspective and see if things still stack up. However, a 'gedanken' can only get you so far. It is verification that matters. But for that you need a superior Structural Causal Model. 

even when we can’t explain exactly what has gone wrong. Often, it is much more deeply hidden than in that example. Spotting inconsistencies in what is said can enable us to work out that a relative is confused, or that a public figure is lying.

This may be said after the fact. If a public figure's lips are moving, it is likely he is lying. If a relative is running around naked with a radish up her bum, we say she is confused though this may be the usual way granny prepares herself for an orgy.  

Logic is one basic check on what politicians say.

Sincerity is important. Logic isn't.  

To put your pattern of reasoning in the simplest form, you went from premises ‘A or B’ and ‘Not A’ to the conclusion ‘B’. The deductive action was all in the two short words ‘or’ and ‘not’. How you fill in ‘A’ and ‘B’ doesn’t matter logically, as long as you don’t introduce ambiguities. If ‘A or B’ and ‘Not A’ are both true, so is ‘B’. In other words, that form of argument is logically valid. The technical term for it is disjunctive syllogism. You have been applying disjunctive syllogism most of your life, whether you knew it or not.

We have been rejecting that type of specious reasoning all our life. The fact is, there are many types of logic. Some have a 'law of the excluded middle'. Some don't. Also, there can be 'dialetheia'- i.e. statements that are both true and false. 'Natural deduction' featuring conditional tautologies (Gentzen calculi) and intuitionistic logic may come closer to the way we actually reason. 
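For what the classical case is worth, the validity of disjunctive syllogism in two-valued logic can be checked mechanically by brute force over truth assignments- a self-contained Python sketch:

```python
from itertools import product

# Brute-force check, in classical two-valued logic only, that a form
# never has all premises true while the conclusion is false.
def valid(premises, conclusion):
    return all(conclusion(a, b)
               for a, b in product([False, True], repeat=2)
               if all(p(a, b) for p in premises))

# 'A or B; not A; therefore B'
disjunctive_syllogism = valid(
    premises=[lambda a, b: a or b, lambda a, b: not a],
    conclusion=lambda a, b: b,
)
print(disjunctive_syllogism)  # True: classically valid

# By contrast, 'A or B; A; therefore not B' (affirming the disjunct) fails:
affirming_disjunct = valid(
    premises=[lambda a, b: a or b, lambda a, b: a],
    conclusion=lambda a, b: not b,
)
print(affirming_disjunct)  # False
```

The check says nothing about relevance or paraconsistent systems, where the verdict on disjunctive syllogism can differ.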

Except for a few special cases, logic can’t tell you whether the premises or conclusion of an argument are true. It can’t tell you whether Maria is at home, or whether she’s in the office, or whether she’s in neither of those places. What it tells you about is the connection between them; in a valid argument, logic rules out the combination where the premises are all true while the conclusion is false.

Some logics do. Some don't. Jain logic might say 'Maria can be both in the home and the office and neither in the home nor the office'. Maybe this is because Maria is a customized version of Siri. Alternatively, Maria may have a home office which is actually a kennel because Maria is a dog with aspirations of a type increasingly common among canines.

Even if your premises are false, you can still reason from them in logically valid ways – perhaps my initial statement about Maria was quite wrong, and she is actually on a train.

So, logic is useless. It may have seemed a cool subject when people were looking for a proof that God exists and will really fuck over all the infidels and blasphemers and my cousin the Bishop who doesn't get that the Holy Trinity includes Batman.  


The logical validity of forms of argument depends on logical words: as well as ‘or’ and ‘not’, they include ‘and’, ‘if’, ‘some’, ‘all’, and ‘is’. For instance, reasoning from ‘All toadstools are poisonous’ and ‘This is a toadstool’ to ‘This is poisonous’ illustrates a valid form of argument, one that we use when we apply our general knowledge or belief to particular cases.

 There could be a toadstool that has been specially treated so as not to be poisonous.  The problem with logic is that it jumps to unwarranted conclusions. 

A mathematical instance of another form of argument is the move from ‘x is less than 3’ and ‘y isn’t less than 3’ to ‘x is not y’, which involves the logical principle that things are identical only if they have the same properties.

Some logics have that 'principle'; others don't. If x is 'intensional' and does not have a well defined 'extension' then Leibniz's principle of identity fails. Thus, suppose x is the number of children I think I have currently and y is the number of children I will know I have after opening the envelope containing the results of a paternity test; then, after I open the envelope, x will equal y. I will know that in addition to the twins I had with my beautiful wife, Nigel Farage, I am also the proud father of Boris Johnson. Some mathematical objects are 'intensional' or 'constructive' in a certain sense. Indeed, William Lawvere uses category theory to show how they might be 'dialectical'. It is foolish, at this late hour, to pretend there is only one logic and that reality conforms to it. We don't know whether Reality is logical. We don't even know if any logic is consistent without some sort of 'divine axiom' (from the point of view of 'reverse mathematics'). Another way of saying this is to admit that there is something arbitrary about any type of logical reasoning. We aim for 'naturality' but the thing may be beyond our grasp even at 'the end of mathematical time'. 

In everyday life and even in much of science, we pay little or no conscious attention to the role of logical words in our reasoning because they don’t express what we are interested in reasoning about. We care about where Maria is, not about disjunction, the logical operation expressed by ‘or’. But without those logical words, our reasoning would fall apart; swapping ‘some’ and ‘all’ turns many valid arguments into invalid ones. Logicians’ interests are the other way round; they care about how disjunction works, not where Maria is.

But logicians have turned out to be as stupid as shit. Look at Williamson- or his student Jason Stanley. It would be nice if one could sit in one's armchair and make great discoveries about the world. But all we can do is find the arbitrary shite we stipulated for. Logic says nothing can be validly deduced from anything save what is wholly non-informative. 

Logic was already studied in the ancient world, in Greece, India and China. To recognise valid or invalid forms of argument in ordinary reasoning is hard.

It is easy in any buck stopped protocol bound context- e.g. the law courts or the Accountancy profession. But this does not mean that reasoning of this sort gets at the truth. As Lord Coke said about the Common Law, it is 'artificial reason'. We hoped for 'naturality' but only found arbitrariness. 

We must stand back, and abstract from the very things we usually find of most interest. But it can be done. That way, we can uncover the logical microstructure of complex arguments.

This fool can't uncover shit.  

For example, here are two arguments:

‘All politicians are criminals, and some criminals are liars, so some politicians are liars.’

‘Some politicians are criminals, and all criminals are liars, so some politicians are liars.’

The conclusion follows logically from the premises in one of these arguments but not the other. Can you work out which is which?

Nothing follows from either. These are imperative not alethic statements. Jorgensen's dilemma arises. Normative sentences don't have truth value though they may be informative. Even otherwise, where 'intensional' terms which don't have well defined extensions are used, it is always possible to find some 'model' to make these statements look either foolish or sensible. 
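Read classically and extensionally- precisely the reading disputed above- the puzzle can at least be settled mechanically, by searching small finite models for countermodels. A Python sketch:

```python
from itertools import product, chain, combinations

# Exhaustive search over small models (classical, extensional reading only)
# to settle which of the two syllogisms is valid.
def subsets(domain):
    return chain.from_iterable(combinations(domain, r)
                               for r in range(len(domain) + 1))

def counterexample(premises, conclusion, size=3):
    domain = range(size)
    for P, C, L in product(subsets(domain), repeat=3):
        P, C, L = set(P), set(C), set(L)
        if premises(P, C, L) and not conclusion(P, C, L):
            return (P, C, L)
    return None

some = lambda X, Y: bool(X & Y)   # 'some X are Y'
every = lambda X, Y: X <= Y       # 'all X are Y'

# 'All politicians are criminals, and some criminals are liars, ...'
c1 = counterexample(lambda P, C, L: every(P, C) and some(C, L),
                    lambda P, C, L: some(P, L))
# 'Some politicians are criminals, and all criminals are liars, ...'
c2 = counterexample(lambda P, C, L: some(P, C) and every(C, L),
                    lambda P, C, L: some(P, L))

print(c1)  # a countermodel exists -> the first form is invalid
           # (the first one found has no politicians at all:
           #  classical 'all' is vacuously true of the empty set)
print(c2)  # None -> no countermodel: the second form is valid
```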

When one just looks at such ordinary cases, one can get the impression that logic has only a limited number of argument forms to deal with, so that once they have all been correctly classified as valid or as invalid, logic has completed its task, except for teaching its results to the next generation. Philosophers have sometimes fallen into that trap, thinking that logic had nothing left to discover. But it is now known that logic can never complete its task. Whatever problems logicians solve, there will always be new problems for them to tackle, which cannot be reduced to the problems already solved. To understand how logic emerged as this open-ended field for research, we need to look back at how its history has been intertwined with that of mathematics.

Math, when done by very clever people, is very very useful. Logic isn't at all. Analytical Philosophy turns the brains of those who teach it into shit. 

The most sustained and successful tradition of logical reasoning in human history is mathematics.

Mathematicians like Brouwer or Per Martin-Löf can create new types of mathematical logic because they are creating new types of mathematics. Thus the arrow of causation is from math to logic. Bertrand Russell soon realized he had wasted his time. Logic told him that space must have constant curvature. Einstein asked a disruptive question and saw that the existing theory was inconsistent. Eddington led an astronomical expedition which showed Einstein was right. Godel, who was hella bright, did move from Physics to Math to Logic but became a somewhat marginal figure- indeed, he gave a mathematical proof of God! He believed 'Logic was in the world, like Zoology', but his system requires an 'Absolute Proof'. Like 'natural proofs' (which require a way to distinguish random from pseudo-random) these don't seem to exist. We are stuck with arbitrariness or some category theoretical 'dialectical' approach. This does mean that great advances can be made but the pay-off is better tech, not better armchair Pundits able to pronounce on questions of ethics or political philosophy. 

Its results are applied in the natural and social sciences too, so those sciences also ultimately depend on logic.

Logic is a by-product of mathematics though it has an independent genealogy in law and the rhetoric used by statesmen to sway the Assembly. Sadly, logic is badly taught or not actually understood by those who teach it and thus it has been productive of wholly erroneous availability cascades in a number of academic fields. I've discussed the 'intensional fallacy' in Social Choice Theory in a recent book. But there are countless such examples.  

The idea that a mathematical statement needs to be proved from first principles goes back at least to Euclid’s geometry.

But those mathematical statements were already empirically known. Deductive reasoning was being used by architects and accountants and navigators.  

Although mathematicians typically care more about the mathematical pay-offs of their reasoning than its abstract structure, to reach those pay-offs they had to develop logical reasoning to unprecedented power.

No. They had to find more abstract structures underlying different branches of mathematics- e.g. algebra and geometry- this is what Grothendieck called 'Yoga'. It isn't the case that the budding mathematical genius first learns logic and then goes on to make great discoveries. The reverse is the case. They develop intuitions and then may invent a logic to systematize it. It turns out that 'naturality' or non-arbitrariness on the one hand, and pure randomness on the other are grails difficult, perhaps impossible, to achieve. Yet progress can always be made on any specific problem- indeed, that progress appears exponential. With Voevodsky, we had computer proof checking. Who knows what new vistas quantum computing will open up?

An example is the principle of reductio ad absurdum. This is what one uses in proving a result by supposing that it does not hold, and deriving a contradiction.

The Nicaraguan horcrux of my neighbour's cat must be controlling my brain-waves. Suppose that weren't the case. Then that horcrux would be Guatemalan, which is patently absurd! 

For instance, to prove that there are infinitely many prime numbers, one starts by supposing the opposite, that there is a largest prime, and then derives contradictory consequences from that supposition.

Which is fine because numbers don't really exist. So this is a proof that if there is an infinite sequence of a certain type then there may be an infinity of objects with a particular property in that sequence.  But this is a conditional tautology of an arbitrary type. 
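For the record, Euclid's argument can also be run constructively rather than by reductio: from any finite list of primes one can compute a prime not on the list. A minimal Python sketch- note that the product-plus-one number need not itself be prime; its smallest prime factor does the work:

```python
from math import prod

def smallest_prime_factor(n):
    # Trial division; the smallest nontrivial divisor of n > 1 is prime.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def new_prime(primes):
    # prod(primes) + 1 leaves remainder 1 on division by every listed prime,
    # so its smallest prime factor is a prime outside the list.
    return smallest_prime_factor(prod(primes) + 1)

print(new_prime([2, 3, 5]))             # 31  (31 is itself prime)
print(new_prime([2, 3, 5, 7, 11, 13]))  # 59  (30031 = 59 * 509, not prime)
```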

In a complex proof, one may have to make suppositions within suppositions within suppositions; keeping track of that elaborate dialectical structure requires a secure logical grasp of what is going on.

No. It needs sound intuitions of what is going on. That's the reason we don't expect computers to come up with really interesting conjectures- at least, not yet. On the other hand, a computer solved the 4 colour problem and found the error in Godel's proof of God.  

As mathematics grew ever more abstract and general in the 19th century, logic developed accordingly. George Boole developed what is now called ‘Boolean algebra’, which is basically the logic of ‘and’, ‘or’ and ‘not’, but equally of the operations of intersection, union, and complementation on classes. It also turns out to model the building blocks for electronic circuits, AND gates, OR gates and NOT gates, and has played a fundamental role in the history of digital computing.

This is an Anglo-Saxon perspective. What was happening on the Continent was more interesting. Fundamental intuitions of Space and Time and Identity were changing so as to permit mathematical progress.  
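The gate claim, at least, is easy to verify mechanically. A small Python sketch checking De Morgan's laws both for gates over {0, 1} and for the corresponding set operations (intersection, union, complementation):

```python
from itertools import product

# Boolean algebra as gates over {0, 1}.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

# De Morgan's laws hold at every input: the algebraic identities Boole
# studied are exactly the identities circuit designers rely on.
for a, b in product([0, 1], repeat=2):
    assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
    assert NOT(OR(a, b)) == AND(NOT(a), NOT(b))

# The same structure as set operations on subsets of a universe U.
U = {1, 2, 3, 4}
A, B = {1, 2}, {2, 3}
assert U - (A & B) == (U - A) | (U - B)   # De Morgan for sets
print("De Morgan verified for gates and for sets")
```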

Boolean logic has its limits. In particular, it doesn’t cover the logic of ‘some’ and ‘all’.

It is extensional.  

Yet complex combinations of such words played an increasing role in rigorous mathematical definitions, for instance of what it means for a mathematical function to be ‘continuous’, and of what it means to be a ‘function’ anyway, issues that had led to confusion and inconsistency in early 19th-century mathematics.

Sadly, these confusions remain current in much of academia. Logic is no help. There is no algorithmic way of distinguishing a well defined set or function from something which is impredicative or intensional. This may not matter if we have good enough approximations for a specific purpose but it can lead to mischievous academic availability cascades which look 'mathsy' but are actually utter nonsense. Analtickle philosophy proved no help at all in this context. Indeed, it jumped on every crazy bandwagon it could find.  

The later 19th century witnessed an increasing trend to rigorise mathematics by reducing it to logical constructions out of arithmetic, the theory of the natural numbers – those reached from 0 by repeatedly adding 1 – under operations like addition and multiplication. Then the mathematician Richard Dedekind showed how arithmetic itself could be reduced to the general theory of all sequences generated from a given starting point by repeatedly applying a given operation (0, 1, 2, 3, …). That theory is very close to logic.

No. It influenced the development of mathematical logic which, however, quickly split into different camps- e.g. 'constructivists', 'Platonists' etc.  

He imposed two constraints on the operation: first, it never outputs the same result for different inputs; second, it never outputs the original starting point. Given those constraints, the resulting sequence cannot loop back on itself, and so must be infinite.

If infinitely repeated- sure.  

The trickiest part of Dedekind’s project was showing that there is even one such infinite sequence. He did not want to take the natural numbers for granted, since arithmetic was what he was trying to explain. Instead, he proposed the sequence whose starting point (in place of 0) was his own self and whose generating operation (in place of adding 1) constructed from any thinkable input the thought that he could think about that input. The reference in his proof to his own self and to thoughts about thinkability was unexpected, to say the least. It does not feel like regular mathematics.

This was a period when 'Psychologism' and 'Logicism' were somewhat intertwined. Dedekind's 'cut' helped establish the arithmetical continuum.  

But could anyone else do better, to make arithmetic fully rigorous?

One could move on from 'finitism' to more general notions of a constructive type. Rigour may not matter in itself, however the discoveries of the Thirties were useful in shaking people out of a naive Leibnizian faith in a 'mathesis universalis'- an algorithmic method of cranking out all knowledge. This, I suppose, was the original attraction of Logicism. You wouldn't have to think very hard or bother with messy experiments, and sooner or later you'd be an omniscient God.  

A natural idea was to reduce arithmetic, and perhaps the rest of mathematics, to pure logic.

mathematical logic. Sadly, it isn't any use outside mathematics and even then its purpose is to show why certain promising approaches are illicit or how a particular intuition can be misleading.  

Some partial reductions are easy. For example, take the equation 2 + 2 = 4. Applied to the physical world, it corresponds to arguments like this (about a bowl of fruit):

There are exactly two apples.

There are exactly two oranges.

No apple is an orange.

Therefore:

There are exactly four apples and oranges.

This is not a valid deduction. It may be the case that a particular combination of apples and oranges spontaneously generates more fruit. All we can say is 'assuming apples and oranges behave in such and such a way, this result is likely'.  

Phrases like ‘exactly two’ can be translated into purely logical terms: ‘There are exactly two apples’ is equivalent to ‘There is an apple, and another apple, and no further apple.’ Once the whole argument has been translated into such terms, the conclusion can be rigorously deduced from the premises by purely logical reasoning.

But the deduction may be wrong because there is a fact about the world we are unaware of. Mathematical logic is fine for mathematics and heuristic deductions made by actuaries or police detectives with expert knowledge of their own field may hold up very well. But it isn't the case that 'purely logical reason' can get to anything informative purely by itself.  
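Granting, purely for the sake of argument, the classical extensional reading (no spontaneously generating fruit), the entailment itself can be checked by exhaustive search over all small finite models- a Python sketch:

```python
from itertools import product

# Check, over all small finite models, that 'exactly two apples',
# 'exactly two oranges' and 'no apple is an orange' jointly entail
# 'exactly four apples-or-oranges'. Domain elements are just 0..size-1;
# each is independently apple-or-not and orange-or-not.
def entails(size):
    for apple, orange in product(product([False, True], repeat=size),
                                 repeat=2):
        two_apples = sum(apple) == 2
        two_oranges = sum(orange) == 2
        disjoint = not any(a and o for a, o in zip(apple, orange))
        if two_apples and two_oranges and disjoint:
            if sum(a or o for a, o in zip(apple, orange)) != 4:
                return False   # countermodel found
    return True

print(all(entails(n) for n in range(7)))  # True: no countermodel up to size 6
```

The search only confirms that the formal entailment holds in the model; whether the world obliges the model is exactly the point at issue.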

This procedure can be generalised to any arithmetical equation involving particular numerals like ‘2’ and ‘4’, even very large ones. Such simple applications of mathematics are reducible to logic.

They can be reduced to anything you like- e.g. giving kisses, which may be more fun than doing logic. 

However, that easy reduction does not go far enough. Mathematics also involves generalisations, such as ‘If m and n are any natural numbers, then m + n = n + m’. The easy reduction cannot handle such generality. Some much more general method would be needed to reduce arithmetic to pure logic.

General methods are fine provided you have rules of 'restricted comprehension' otherwise you soon end up with nonsense. But kids already know this.  


A key contribution was made by Gottlob Frege, in work slightly earlier than Dedekind’s, though with a much lower profile at the time. Frege invented a radically new symbolic language in which to write logical proofs, and a system of formal deductive rules for it, so the correctness of any alleged proof in the system could be rigorously checked. His artificial language could express much more than any previous logical symbolism. For the first time, the structural complexity of definitions and theorems in advanced mathematics could be articulated in purely formal terms. Within this formal system, Frege showed how to understand natural numbers as abstractions from sets with equally many members. For example, the number 2 is what all sets with exactly two members have in common. Two sets have equally many members just when there is a one-one correspondence between their members. Actually, Frege talked about ‘concepts’ rather than ‘sets’, but the difference is not crucial for our purposes.

Frege had 'unrestricted comprehension'. That's where Russell's paradox and 'Type theory' came in. But everybody already knew that you must be very scrupulous in making deductions on a narrow, not a too general, basis.   


Frege’s language for logic has turned out to be invaluable for philosophers and linguists

useless tossers 

as well as mathematicians. For instance, take the simple argument ‘Every horse is an animal, so every horse’s tail is an animal’s tail.’

I donated a Burro's tail plant to my favourite horse. Thus there is a horse which has a tail which is a plant and not an animal's tail at all.  

You need to define 'horse's tail' as the tail protruding from a horse's own backside and then define horse and backside and so forth. The thing isn't worth the bother. 

It had been recognised as valid long before Frege, but Fregean logic was needed to analyse its underlying structure and properly explain its validity.

But it wasn't valid! Either you have a definition- and then an infinite regress of definitions (unless there are 'atomic propositions')- or you just have a Gentzen type conditional tautology- if horse's tails are the tails of an animal, then a horse's tail is an animal's tail. But this isn't very interesting. 

Today, philosophers routinely use it for analysing much trickier arguments.

which is why they fuck up so badly.  

Linguists use an approach that goes back to Frege to explain how the meaning of a complex sentence is determined by the meanings of its constituent words and how they are put together.

Useless linguists may do so. Useful ones actually know lots of languages and can tell us interesting things about them.  

Frege contributed more than anyone else to the attempted reduction of mathematics to logic.

Which failed.  

By the start of the 20th century, he seemed to have succeeded. Then a short note arrived from Bertrand Russell, pointing out a hidden inconsistency in the logical axioms from which Frege had reconstructed mathematics. The news could hardly have been worse.

No. It was obvious that 'unrestricted comprehension' was stooooopid. Kids learn about this by the time they are five years old. Mummy and Daddy say they love them more than the whole world. Yet, they won't let them stay up to watch a nice movie on TV. It is obvious that parents love you to bits in some contexts but not others.  

The contradiction is most easily explained in terms of sets, but its analogue in Fregean terms is equally fatal. To understand it, we need to take a step back.

No. We just need to remember something we learnt as small children- viz. context is what matters. At one stage being a good boy means going potty in the actual potty. At a later stage it means not quitting your job as a Tax Accountant so as to follow your bliss as a Dolly Parton impersonator.  


In mathematics, once it is clear what we mean by ‘triangle’,

this will never be clear save in very restricted contexts. A love triangle isn't like the ones Pythagoras had a theorem about. What is an anyon triangle? What about an Ising spin triangle? People talking about triangles may mean very different things.  

we can talk about the set of all triangles: its members are just the triangles. Similarly, since it is equally clear what we mean by ‘non-triangle’, we should be able to talk about the set of all non-triangles: its members are just the non-triangles.

Love triangles, or anyon triangles and other such exotic beasties may be non-triangles in some sense. My point is that no 'natural language' term has a well defined extension. Pretending there is such a set is how you end up talking nonsense. 

One difference between these two sets is that the set of all triangles is not a member of itself, since it is not a triangle, whereas the set of all non-triangles is a member of itself, since it is a non-triangle. More generally, whenever it is clear what we mean by ‘X’, there is the set of all Xs. This natural principle about sets is called ‘unrestricted comprehension’. Frege’s logic included an analogous principle.

The problem was much deeper than Russell realized. Mathematical and logical objects are themselves epistemic. Thus their extension is never well-defined. Unless you focus on doing useful stuff, you end up babbling nonsense though your great discovery is made by every 3 year old kid who comes to understand that even the truest of claims- e.g. Mummy Daddy love me more than the world- has to be understood in a very restricted context. It does not mean your parents won't punish you if you are naughty.  


Since it is clear what we mean by ‘set that is not a member of itself’, we can substitute it for ‘X’ in the unrestricted comprehension principle. Thus, there is the set of all sets that are not members of themselves. Call that set ‘R’ (for ‘Russell’). Is R a member of itself? In other words, is R a set that is not a member of itself? Reflection quickly shows that if R is a member of itself, it isn’t, and if it isn’t, it is: an inconsistency!

As Godel pointed out, this is just a semantic paradox. He drew attention to a deeper type of 'intensional paradox'. Godel's reaction to Russell became increasingly Platonic. By the Seventies, it was obvious that correspondence theories of truth opened the door either to Panalethia or to Paranoid Schizophrenia of some type. This didn't mean pragmatic advances couldn't be made. It's just that the associated Philosophy or Logic was 'anything goes'.  
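The non-well-foundedness behind the paradox can be mimicked in Python, with predicates standing in for sets. This shows the regress rather than a formal contradiction: asking whether R holds of R never bottoms out.

```python
# Russell's paradox, mimicked with predicates in place of sets:
# R is 'true' of exactly those predicates that are not true of themselves.
# Evaluating R(R) demands R(R) first, and so on forever.
R = lambda s: not s(s)

try:
    R(R)
    outcome = "assigned a truth value"
except RecursionError:
    outcome = "no consistent truth value: R(R) iff not R(R)"
print(outcome)
```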

That contradiction is Russell’s paradox. It shows that something must be wrong with unrestricted comprehension. Although many sets are not members of themselves, there is no set of all sets that are not members of themselves. That raises the general question: when can we start talking about the set of all Xs? When is there a set of all Xs? The question matters for contemporary mathematics, because set theory is its standard framework. If we can never be sure whether there is a set for us to talk about, how are we to proceed?

Type theory- homotopy type theory and Martin-Löf type theory, a la Voevodsky's univalent foundations. Or just do useful stuff. The philosophy Dept. can't interfere because over the last forty years they have become adversely selective of idiocy. 

Logicians and mathematicians have explored many ways of restricting the comprehension principle enough to avoid contradictions but not so much as to hamper normal mathematical investigations. In their massive work Principia Mathematica (1910-13), Russell and Alfred North Whitehead imposed very tight restrictions to restore consistency, while still preserving enough mathematical power to carry through a variant of Frege’s project, reducing most of mathematics to their consistent logical system. However, it is too cumbersome to work in for normal mathematical purposes. Mathematicians now prefer a simpler and more powerful system, devised around the same time as Russell’s by Ernst Zermelo and later enhanced by Abraham Fraenkel. The underlying conception is called ‘iterative’, because the Zermelo-Fraenkel axioms describe how more and more sets are reached by iterating set-building operations. For example, given any set, there is the set of all its subsets, which is a bigger set.

Zermelo formulated the axiom of choice- a piece of magic by which you can turn one sphere into as many spheres as you like! This is the Banach-Tarski paradox. Brouwer was very worried about this. But his system can have something similar.  
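The power-set step mentioned in the quoted passage is easy to exhibit for finite sets- a Python sketch of the finite case of Cantor's theorem, where the power set has 2^n members, strictly more than n:

```python
from itertools import chain, combinations

# The iterative conception in miniature: build the set of all subsets.
def powerset(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

s = {0, 1, 2}
ps = powerset(s)
print(len(ps))  # 8 == 2 ** 3, bigger than the 3-element set we started from
assert len(ps) == 2 ** len(s) > len(s)
```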

Set theory is classified as a branch of mathematical logic, not just of mathematics. That is apt for several reasons.

Category theory may be better fitted to the task. 

First, the meanings of core logical words like ‘or’, ‘some’ and ‘is’ have a kind of abstract structural generality; in that way, the meanings of ‘set’ and ‘member of’ are similar.

That generality is wholly mathematical. This is Grothendieck's 'Yoga' program of uniting disparate mathematical fields on the basis of greater generality. The question is whether meta-mathematics is mathematical or logical. I suppose the answer is metalogic overlaps with meta-mathematics. My impression is that the maths has overtaken both. 


Second, much of set theory concerns logical questions of consistency and inconsistency. One of its greatest results is the independence of the continuum hypothesis (CH), which reveals a major limitation of current axioms and principles for logic and mathematics. CH is a natural conjecture about the relative sizes of different infinite sets, first proposed in 1878 by Georg Cantor, the founder of set theory. In 1938, Kurt Gödel showed that CH is consistent with standard set theory (assuming the latter is itself consistent). But in 1963 Paul Cohen showed that the negation of CH is also consistent with standard set theory (again, assuming the latter is consistent). Thus, if standard set theory is consistent, it can neither prove nor disprove CH; it is agnostic on the question. Some set theorists have searched for plausible new axioms to add to set theory to settle CH one way or the other, so far with little success. Even if they found one, the strengthened set theory would still be agnostic about some further hypotheses, and so on indefinitely.

Which is why there will always be more logics than logicians and more types of mathematics than mathematicians. Instead of a nice neat reduction of math to logic, we have an explosion of both! 

A working mathematician may use sets without worrying about the risk of inconsistency or checking whether their proofs can be carried out in standard set theory. Fortunately, they normally can. Those mathematicians are like people who live their lives without worrying about the law, but whose habits are in practice law-abiding.

This is the old 'Erlangen'-type view, according to which philosophers and logicians would 'police' science and math. But the latter two burgeoned and attracted smarter and smarter people, while the former two attracted cretins like Jason Stanley.

Although set theory is not the only conceivable framework in which to do mathematics, analogous issues arise for any alternative framework: restrictions will be needed to block analogues of Russell’s paradox, and its rigorous development will involve intricate questions of logic.

Not really. For any given purpose, univalent foundations can be found easily enough.  

By examining the relation between mathematical proof and formal logic, we can start to understand some deeper connections between logic and computer science: another way in which logic matters.

This is where model theory comes in. The intuition here is that 'equivalence' means that you get nothing more out of the one than the other.  This is like the 'matam/vigyan' (doctrine/praxis) distinction in Hindu thought. Two soteriologies may have different doctrines but cash out as the same thing.

Most proofs in mathematics are semi-formal; they are presented in a mix of mathematical and logical notation, diagrams, and English or another natural language. The underlying axioms and first principles are left unmentioned. Nevertheless, if competent mathematicians question a point in the proof, they challenge the author(s) to fill in the missing steps, until it is clear that the reasoning is legitimate. The assumption is that any sound proof can in principle be made fully formal and logically rigorous, although in practice full formalisation is hardly ever required, and might involve a proof thousands of pages long. A proof in a framework of formal logic is still the gold standard, even if you personally never see a bar of gold.

Computers can help here, which is why Voevodsky's work was so valuable. Sadly, computerized proofs can themselves be error-prone. The bigger problem is 'interpretation': the proof might be true, but only in an incompossible world, e.g. one where there is 'modal collapse', i.e. where everything that is, necessarily is, so that the only true statement includes all truth.

The standard of formal proof is closely related to the checking of mathematical proofs by computer.

But the computer's code would also have to be checked, and then that checking would itself have to be checked, and so forth.
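It is worth seeing how little a fully formal proof actually is. The toy checker below, entirely my own invention and no one's real proof assistant, accepts a line only if it is an axiom or follows from two earlier lines by modus ponens:

```python
def check_proof(axioms, steps):
    """Verify that each step is an axiom or follows by modus ponens.
    Formulas are strings or nested tuples; an implication from a to b
    is written ('->', a, b)."""
    proved = []
    for line in steps:
        ok = line in axioms or any(
            imp == ('->', ante, line)
            for imp in proved for ante in proved)
        if not ok:
            return False      # neither an axiom nor modus ponens
        proved.append(line)
    return True

# p, p -> q, therefore q: checks out.  q on its own does not.
axioms = {'p', ('->', 'p', 'q')}
assert check_proof(axioms, ['p', ('->', 'p', 'q'), 'q'])
assert not check_proof(axioms, ['q'])
```

Checking the checker is, as I say, a regress; but each layer is at least small enough to inspect.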

An ordinary semi-formal proof cannot be mechanically checked as it stands, since the computer cannot assess the prose narrative holding the more formal pieces together (current AI would be insufficiently reliable). What is needed instead is an interactive process between the proof-checking program and human mathematicians: the program repeatedly asks the humans to clarify definitions and intermediate steps, until it can find a fully formal proof, or the humans find themselves at a loss. All this can take months. Even the finest mathematicians may use the interactive process to check the validity of a complicated semi-formal proof, because they know cases where a brilliant, utterly convincing proof strategy turned out to depend on a subtle mistake.

Consider the 1998 'proof by exhaustion' solution to the Kepler conjecture. The formal proof was accepted in 2017 by a leading journal after automated proof-checking. The problem with proving a thing by checking every eligible item is that you can't generalize from it. Moreover, is this really 'deduction', or a very painstaking type of investigation? We may gain no additional insight by verifying our intuition. Does that mean our intuition is not very interesting?
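Proof by exhaustion is easy to exhibit in miniature. A Python sketch of my own: verify Goldbach's conjecture for every even number up to a bound, which licenses precisely nothing beyond the bound:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    """Is the even number n a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

# Exhaustive check up to 1000: every case verified ...
assert all(goldbach_holds(n) for n in range(4, 1001, 2))
# ... and nothing whatever learned about 1002, let alone the general case.
```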

Historically, connections between logic and computing go much deeper than that. In 1930, Gödel published a demonstration that there is a sound and complete proof system for a large part of logic, first-order logic. For many purposes, first-order logic is all one needs. The system is sound in the sense that any provable formula is valid (true in all models).

Though it may be sheer nonsense in every real-world context.

The system is also complete in the sense that any valid formula is provable. In principle, the system provides an automatic way of listing all the valid formulas of the language, even though there are infinitely many, since all proofs in the system can be listed in order.

This is like saying there is an algorithmic way of generating any possible work; more picturesquely, it is the infinite troop of monkeys who, sooner or later, type out the complete works of Shakespeare. The thing is amusing but not informative.

Although the process is endless, any given valid formula will show up sooner or later (perhaps not in our lifetimes). That might seem to give us an automatic way of determining in principle whether any given formula is valid: just wait to see whether it turns up on the list. That works fine for valid formulas, but what about invalid ones? You sit there, waiting for the formula. But if it hasn’t shown up yet, how do you know whether it will show up later, or will never show up? The big open question was the Decision Problem: is there a general algorithm that, given any formula of the language, will tell you whether it is valid or not?

There are plenty of partial procedures in practice. Sadly, there is always at least one case on which they will disagree.
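For the propositional fragment, by contrast, validity is decidable by sheer brute force, as a Python sketch of mine shows; it is only once quantifiers enter that Church and Turing's barrier comes down:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def is_tautology(formula, n_atoms):
    """Decide propositional validity by checking the formula
    under all 2**n_atoms truth-value assignments."""
    return all(formula(*vals)
               for vals in product([True, False], repeat=n_atoms))

# Contraposition is valid; affirming the consequent is not.
assert is_tautology(lambda p, q: implies(implies(p, q),
                                         implies(not q, not p)), 2)
assert not is_tautology(lambda p, q: implies(q, p), 2)
```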


Almost simultaneously in 1935-36, Alonzo Church in the US and Alan Turing in the UK showed that such an algorithm is impossible.

Such an algorithm is possible only for a language which says almost nothing interesting or informative.

To do that, they first had to think very hard and creatively about what exactly it is to be an algorithm, a purely mechanical way of solving a problem step by step that leaves no room for discretion or judgment. To make it more concrete, Turing came up with a precise description of an imaginary kind of universal computing machine, which could in principle execute any algorithm. He proved that no such machine could meet the challenge of the Decision Problem. In effect, he had invented the computer (though at the time the word ‘computer’ was used for humans whose job was to do computations; one philosopher liked to point out that he had married a computer). A few years later, Turing built an electronic computer to break German codes in real time during the Second World War, which made a major contribution to defeating German U-boats in the North Atlantic. The programs on your laptop are one practical answer to the question ‘Why does logic matter?’
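Turing's diagonal argument survives translation into any modern language. A sketch in Python, where `halts` is a hypothetical oracle nobody can write, which is the whole point:

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) would halt.
    Turing proved no such function can exist."""
    raise NotImplementedError("no algorithm decides halting")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # the program run on its own source.
    if halts(program, program):
        while True:
            pass              # loop forever
    return "halted"

# diagonal(diagonal) would halt if and only if it doesn't:
# contradiction, so halts cannot be implemented.
```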

Computers matter because they do cool and useful stuff. Anything which helps guys make better computers matters to them. It doesn't matter to us who are only concerned with the practical benefit we receive from the purchase of a computer. Turing was a homosexual. I have no doubt that homosexual activity mattered to him and that, as a result, the British people gained greatly. We were foolish to try to stop him from taking part in such activity. He killed himself. It matters that we stop being nasty to homosexuals because their homosexual activity matters to them and enables them to better serve us through all the good things they do in their lives. But this does not mean we need to ourselves partake in homosexual activities or the nonsense that masquerades as Philosophical Logic. 


Logic and computing have continued to interact since Turing. Programming languages are closely related in structure to logicians’ formal languages.

The former are useful. The latter are not.  

A flourishing branch of logic is computational complexity theory, which studies not just whether there is an algorithm for a given class, but how fast the algorithm can be, in terms of how many steps it involves as a function of the size of the input.

This gives rise to the notion of the impossibility of a 'natural proof', similar to the problem with a Gödelian 'absolute proof'.
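The flavour of the field can be had from a toy example of my own: brute-force subset-sum tries every subset, so its step count is exponential in the input size.

```python
from itertools import combinations

def subset_sum_steps(nums, target):
    """Brute-force subset-sum, counting the subsets tried.
    In the worst case the count is exactly 2**len(nums)."""
    steps = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            steps += 1
            if sum(combo) == target:
                return True, steps
    return False, steps

# Doubling the input size squares the worst-case work.
_, n8 = subset_sum_steps(list(range(1, 9)), -1)    # no solution
_, n16 = subset_sum_steps(list(range(1, 17)), -1)
assert n8 == 2 ** 8 and n16 == n8 ** 2
```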

If you look at a logic journal, you will see that the contributors typically come from a mix of academic disciplines – mathematics, computer science, and philosophy.

The philosophers stick out like a sore thumb by reason of their bone-headedness.  

Since logic is the ultimate go-to discipline for determining whether deductions are valid,

but deductions need 'witnesses', which is just another name for 'verification'.

one might expect basic logical principles to be indubitable or self-evident – so philosophers used to think.

Actually, scepticism about logic is as old as logic. It is thought that Greek 'Pyrrhonism' was influenced by Indian thought. 

But in the past century, every principle of standard logic was rejected by some logician or other. The challenges were made on all sorts of grounds: paradoxes, infinity, vagueness, quantum mechanics, change, the open future, the obliterated past – you name it. Many alternative systems of logic were proposed.

Fuzzy logic and 'reverse mathematics' both suggest that for any given purpose we can work with a smaller 'rule set'. Sadly, this does not rid us of arbitrariness.  

Contrary to prediction, alternative logicians are not crazy to the point of unintelligibility,

though some of the greatest mathematicians were, or are, crazy to precisely that point. This doesn't mean they are wrong.

but far more rational than the average conspiracy theorist;

some very bright people are conspiracy theorists. Indeed, so are some very good and gentle souls.  

one can have rewarding arguments with them about the pros and cons of their alternative systems. There are genuine disagreements in logic, just as there are in every other science.

Not to mention debates as to who just farted. Is it true that 'he who smelled it, dealt it'? Or is it rather the case that he who denied it, supplied it? The epistemology of fart production is a tangled field.

That does not make logic useless, any more than it makes other sciences useless.

Logic, like homosexual activity, may contribute to the production of useful stuff. This does not mean it is useful in itself.

It just makes the picture more complicated, which is what tends to happen when one looks closely at any bit of science.

or analyse the likely source of a fart in a crowded lift.  

In practice, logicians agree about enough for massive progress to be made.

Reinventing the wheel isn't progress.  

Most alternative logicians insist that classical logic works well enough in ordinary cases.

Not bothering with logic works even better.

(In my view, all the objections to classical logic are unsound, but that is for another day.)

But no arguments for it are non-arbitrary or, indeed, informative.

What is characteristic of logic is not a special standard of certainty, but a special level of generality.

Gödel wanted logic to treat of general concepts. The problem is that 'intensional fallacies' arise unless there is an 'absolute proof' or some other such 'Archimedean point'.

Beyond its role in policing deductive arguments, logic discerns patterns in reality of the most abstract, structural kind.

but they are arbitrary. Logic can't get its head around this.  

A trivial example is this: everything is self-identical.

Sadly, because of Time's arrow, nothing is.  
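As it happens, even machine arithmetic declines to make everything self-identical: IEEE 754 decrees, by design, that a NaN compares unequal to everything, itself included.

```python
nan = float('nan')

# IEEE 754: NaN is unequal even to itself.
assert not (nan == nan)
assert nan != nan

# Python's `is` still sees one object; it is the logical relation
# of equality that fails reflexivity here, by design.
assert nan is nan
```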

The various logical discoveries mentioned earlier reflect much deeper patterns.

Logic is Hell. The deeper pattern here is futility.  

Contrary to what some philosophers claim, these patterns are not just linguistic conventions. We cannot make something not self-identical, however hard we try.

Only if differences are indiscernible. Because of space-time, nothing is self-identical unless it doesn't really exist, in which case it still isn't self-identical, because it may be being sodomized by a unicorn in Imaginationland.

We could mean something else by the word ‘identity’, but that would be like trying to defeat gravity by using the word ‘gravity’ to mean something else.

It probably does mean something else, to do with string theory or stuff more arcane yet.

Laws of logic are no more up to us than laws of physics.

Only in the sense that the laws of grammar are, which is to say, in a really stupid sense.
