Monday, 26 April 2021

Moore's 'Being, Univocity & Logical Syntax'- part 1

Adrian Moore's paper 'Being, Univocity & Logical Syntax' is available here.

The obvious question that arises in this connection is how Being could be connected to anything linguistic? Surely language burgeons as it solves purely social coordination or discoordination problems? On the other hand, talk of Being tends to get relegated to pedants. It is appropriate that Paideia nod toward the rattle of its own infancy but why pretend that what was prattled then was other than charming nonsense?

No doubt, a particular protocol bound discourse might be concerned with some scientific, or utile, more or less objective or narrowly defined, aspect of reality. But its 'primitive terms' would be undefined. They would not themselves be considered as having any real, as opposed to instrumental, being or reality. A 'legal fiction' is still a fiction though it may be taken as a fact for a particular purpose.

 No doubt, after Humanity's existence is concluded, some entity might say- those 'primitive terms' were asymptotically approaching such and such definitions. But, even if we knew one such definition now, it would be useless to us if not outright mischievous. What is important is all the different twists and turns of meaning the primitive term took to help get us to the end point.

 Moreover, provided we evolved under Knightian Uncertainty and thus use regret minimizing strategies, then- as I have argued elsewhere- our thinking must include ontologically dysphoric categories so as to solve Newcomb problems or (which may be the same thing) generate 'hedges' and 'income effects'. In other words, there are going to be things 'not at home in the world', or which are incompossible with our being, which drive our dynamics. But this is a purely economic matter- like surviving as individuals or even as a species.
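
(For what 'regret minimizing' means here, a toy sketch- my own, in Python, with numbers I have made up- may help. Under Knightian Uncertainty we can't attach probabilities to states, so the chooser picks the act whose worst-case regret is smallest. Note that the winning 'hedge' is best in no single state of the world- which is the sense in which it is 'not at home' anywhere.)

```python
# Toy illustration (mine, not Moore's): minimax-regret choice under Knightian
# uncertainty. No probabilities over states are assumed; the agent picks the
# act whose worst-case regret is smallest. The chosen "hedge" is optimal in
# no single state - it exists only because the future is radically uncertain.

payoffs = {               # act -> payoff in each of two states of the world
    "specialise_A": {"state_A": 10, "state_B": 0},
    "specialise_B": {"state_A": 0,  "state_B": 10},
    "hedge":        {"state_A": 6,  "state_B": 6},
}

states = ["state_A", "state_B"]
best_in_state = {s: max(p[s] for p in payoffs.values()) for s in states}

def max_regret(act):
    """Worst-case shortfall of this act against the best act in each state."""
    return max(best_in_state[s] - payoffs[act][s] for s in states)

choice = min(payoffs, key=max_regret)
print(choice, {a: max_regret(a) for a in payoffs})
# -> 'hedge' wins (max regret 4, against 10 for either specialist act)
```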

A related point is that if Language is 'strategic' or 'economic'- i.e. has an allocative function- then it will feature some intransitivity and 'money pumps' to drive liquidity. On the one hand, this means armchair philosophy can pay for itself at the margin. But that margin shifts unpredictably. Sad. 
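
(Here is a minimal sketch of the 'money pump'- again my own toy example, nothing in Moore's paper- showing how cyclic, i.e. intransitive, preferences let a trader extract a fee on every round of swaps.)

```python
# Toy money pump: an agent with intransitive (cyclic) preferences A > B > C > A
# will pay a small fee for each "upgrade", so a trader cycling through the
# three goods drains the agent's wealth indefinitely.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # cyclic, hence intransitive

def run_pump(holding="C", wealth=100.0, fee=1.0, rounds=9):
    goods = ["A", "B", "C"]
    for _ in range(rounds):
        offer = next(g for g in goods if (g, holding) in prefers)
        # The agent strictly prefers the offer, so it pays the fee to swap.
        holding, wealth = offer, wealth - fee
    return holding, wealth

print(run_pump())   # after 9 trades the agent holds 'C' again, 9.0 poorer
```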

It could be claimed that philosophy is a protocol bound discourse whose 'primitive terms' are posited to really exist. But it could equally be claimed that such philosophy is empty verbiage. Do we have any strong reason to pick a side on this issue?

 One possible answer is that God is good and God wants us to say God is good in the same way we are good- i.e. not sarcastically- and so 'univocity' is important coz human beings like chatting but they'd better keep mentioning the goodness of God, without any hint of irony or sarcasm, otherwise the Supreme Being will fuck us up something fierce.

 Another answer is that man is the shepherd of Being- i.e. has to talk high falutin' bollocks otherwise the Jews take over unless of course Hitler gets his ass handed to him in which case ...urm... we should gas on about how Being is a defense against planetary technology and the not cool aspects of Hitlerism. 

What about 'Logical Syntax'? This just comes down to a more or less ignorant intimation of category theory. Voevodsky showed how Univalent foundations could be useful- computer checking of mathematical proofs and maybe self-teaching A.Is that won't go paranoid almost immediately, etc- but even more useful is the acknowledgment that there will always be many more Mathematics than there can be Mathematicians. Logic isn't in the world- like Zoology. It is mainly far out of this world like Ewoks in Star Wars.

Academic Philosophy is well siloed. Moore's paper, which came out a little before Voevodsky's tragic death, begins thus-

There are three strikingly different groups of philosophers with whom a title such as this, a title in which mention of being or existence is conjoined with mention of the broadly semantic or the broadly grammatical, is liable to resonate especially loudly: analytic metaphysicians;

who became irrelevant when Science decided it didn't know what exists, it might never know what exists, ignoramus et ignorabimus, but so long as it could create cooler and cooler tech who actually gives a fuck?  

historians of medieval philosophy;

who are historians- they literally live in the past

and students of post-structuralism, or more specifically students of the work of Deleuze.

i.e. the mad.  

Moore ably summarizes the silliness of 'metametaphysics' thus-

Analytic metaphysicians have long been exercised by questions about what exists. Do mathematical entities exist, for example?—and if not, how come there are infinitely many primes?

Mathematicians who were exercised by such questions found new axiom systems to do math in. This could be very useful- e.g. Brouwer's invention of 'choice sequences'. Nothing similar can be said for analytic metaphysics. Even David Lewis ended up talking nonsense. 

Do holes exist?—or is it just that, among the physical things that exist, some are perforated or porous or some such?

Algebraic topology has made many useful discoveries. I suppose one could give a philosophical account of different theorems in this regard. But why bother? What good would it do?  

The more analytic metaphysicians have grappled with these questions, the more self-conscious they have become about what they are up to. There is accordingly a thriving branch of analytic metaphysics, sometimes referred to as metametaphysics, whose aim is to clarify what is at stake in addressing such questions.

But metaphysics has contributed nothing useful. What is at stake in addressing something useless is the opportunity cost of your time. Only if you are utterly useless at everything should you play this game. But when paupers sit down to a poker game they can't even afford a candle or a pack of cards. They remain in the dark holding imaginary cards.  

How far is it a matter of ascertaining the human-independent constitution of reality and how far a matter of settling on a way of speaking, perhaps even settling on an interpretation of the verb ‘exist’?

The answer is get a useful type theory- or don't bother because the thing wouldn't pay for itself. Just give up on this pseudo problem. Obviously, there is a way to regulate expression so it is more useful, but actually doing so is pointless unless you are actually doing something useful in which case protocols will spontaneously appear as a solution to coordination and discoordination problems. 

Moore next turns to Deleuze-

 Deleuze resurrects a medieval debate that arises even given an inventory of all the things that exist. Suppose we have such an inventory, never mind for the time being what might have been required to compile it.

Suppose we have a library containing all books with true knowledge. Does it or does it not contain a book which contains all the truths in the other books? If the answer is yes, then some writer can know all truth. If the answer is no, no writer can know all truth. But, if we have an inventory of the library, then there is a quick way to decide if truth is univocal- it can have a single author. This means there is a method in P of showing P equals NP. Just sitting in an armchair, we have found an important result which high I.Q. Computer Scientists have been searching for in vain! Will someone give us a Fields Medal? No. We will be laughed at. Our supposition 'begged the question'.

If an inventory exists then it merely reflects the ontological assumptions of whoever drew up the inventory. My inventory of the world is- mine, not-mine. It dismisses the univocity of being. Stuff which isn't mine, or aint helping me directly- Andromeda Galaxy, I'm looking at you!- isn't important. The type of being it has is de trop. But there is nothing interesting about my non-univocal theory of being. It merely reflects my stupidity and narcissism. It doesn't tell us anything about the world.

I suppose one might say- 'univocity is the better doctrine. Physicists don't say what isn't in our light cone does not exist or that it has a different type of Being. Indeed, they accept there may be a type of particle which doesn't interact with anything visible to us. Other Sciences may have 'emergents' or things yet more rich and strange like the Gaia hypothesis. Mathematics may have 'Oracles' and 'Witnesses' and mention of 'Yoga'. Even the Law may turn out to recognize- as happens in India- the legal personality and property rights of a particular deity or a river.' No doubt, there is an element of controversy to such assertions. Some may say- 'this is illegitimate! You are treating something which does not really exist as if it is as real as Tom, Dick or Harry!' The problem here is that telling bastards they are bastards may get you punched in the nose. What is illegitimate may be as useful, or as belligerent, as what is not. Ultimately, in making an inventory, strategic considerations prevail. Carving up the world according to its joints gets more complicated if the thing is alive and has fangs or claws or won't show you its boobs unless you put a ring on it.

There is a question, on which the medieval debate turns, about the nature of the things in this inventory. One way to put the question is as follows: are any of these things so different in kind from ordinary things such as apples and snakes that our very talk of the ‘being’ of the former has to be understood differently from our talk of the ‘being’ of the latter?

The assumption here is that 'ordinary things' would feature in a universal inventory in the same way as they do for us in ordinary conversations about 'snakes' and 'apples'. Science has shown that ordinary things we thought were different- e.g. the morning and evening star- are actually one and the same. On the other hand, things we thought were similar- dolphins and sharks- were actually very different.

This is really a debate about transcendence. Another way to put the question is this: do any of these things enjoy an absolute transcendence?

Yes, if you believe there could be a universal inventory. It must be the case that there is some higher point of view such that the 'world is carved up along its joints'. This is not an unreasonable view. It may be that there is a simple way to write down a 'Theory of Everything' in a mathematical language we don't yet know but which a few may already have an intimation of.

Of course, there is no sharp division between Deleuze’s quest and the analytic metaphysicians’ quest.

This is not obvious to me. Still, Moore probably knows a lot more about this and his peers may find this insight of his quite useful.  

It is all very well my urging us to prescind temporarily from what would be required to compile an inventory of all the things that exist. But if this inventory had to include things that were absolutely transcendent,

e.g. a shorter inventory which could generate the inventory, or perhaps even generate all the objects in it.

then there would be an issue about what it would mean even to say that such things ‘exist’,

surely one would say the meta-inventory exists in a more useful and profound manner? It is ontologically higher. It can generate everything else and enfold their true essence within itself.  

and that issue would lie squarely in the analytic metaphysicians’ territory.

No. It would lie squarely in the neighbor's cat's kitty litter. Anything which is absolutely transcendent must be the shit of that cat's Nicaraguan horcrux. Suppose the reverse. Then the absolutely transcendent might be the shit of a non-Nicaraguan horcrux- like maybe a Guatemalan horcrux?- which is just plain absurd.  

I think analytic metaphysics 'attempts to focus philosophical reflection on smaller problems that lead to answers to bigger questions'. Surely the existence of 'the absolutely transcendent'- or even what it would mean to have a 'Theory of Everything'- are very big questions? Indeed, they abolish all small questions or even big antinomies- e.g. Free Will v Predestination. 

Even so, prima facie at least, there are two broadly different sets of concerns here. What animates Deleuze’s concerns? All manner of things. But he has two projects in particular that deserve special mention. One of these is to extend Heidegger’s work on being.

Why not his work opposing Weltjudentum, which, as he confided to his dear diary as Nazi tanks entered Paris, "is ungraspable everywhere and doesn't need to get involved in military action while continuing to unfurl its influence, whereas we are left to sacrifice the best blood of the best of our people"? Sadly, Heidi's blood was not sacrificed by the Nazis- which showed they were the worst of his people.

Heidegger drew a distinction between being and the entities that have it.

Did he? What type of being did that distinction have? I honestly don't know. I also don't honestly know if Heidi did either. But then, one may well ask, does this 'honest knowing' have being? Perhaps only dishonest non-knowing-for-sure exists. 

Perhaps it was a Heidi who could not be a being whom we refer to as speaking of Being. 

The entities that have it are all the things that we can be given or all the things of which we can make sense; their being is their very giveability or intelligibility.

But all beings stand in a relationship to my neighbor's cat whose Nicaraguan horcrux shits out absolute transcendence. Anyway, Heidi is a fine one to talk about intelligibility.

 By drawing this distinction Heidegger enabled being itself to become a distinct focus of attention, a privilege which he argued traditional metaphysics had prevented it from enjoying.

Meanwhile Heidi's folk were according Weltjudentum privileges of a very modern kind based on the chemistry of Zyklon B.

Not only that; he also contributed a great deal to our understanding of being.

But what did that understanding contribute? Anything useful? Anything beautiful? Anything true?

So what if a few cunts got worthless PhDs and tenure teaching that stupid shit? There was plenty of other deadwood in other University Departments.  

Deleuze wants to enlarge that understanding. He thinks that Heidegger has helped us towards a unified account of being.

We don't have a unified account of being. Once we get it we can say 'this helped us get the account we wanted'. Till then this unified account has no being. Heidi's help, be it what it may, is not known to have advanced or retarded anything in this respect.

In particular, he thinks that Heidegger has helped us towards an account of being which, though it certainly acknowledges the many profound differences between entities, and indeed the many profound differences between their ways of being, does not cast any of them in the rôle of the absolutely transcendent and does not involve any Aristotelian polysemy.

Where is that account? What does it say about the Higgs particle? Nothing? Then it isn't really an account of being at all is it?  

(Heidegger himself emphasized the magnitude of this task. At one point he put the pivotal question as follows: ‘Can there... be found any single unifying concept of being in general that would justify calling these different ways of being ways of being?’ (1982, p. 176, emphasis in original).)

At that time Matter was being linked to Energy. That sounds like a 'unifying concept'. Did Heidi understand it? No. 

Metaphysics begins where Physics ends. This isn't philosophy. It is sticking your head in the sand while all around you a great Fortress is rising up.  

Nevertheless Deleuze thinks that Heidegger’s investigations are importantly incomplete. It is not enough, in Deleuze’s view, for an account of being to accommodate all these differences; it is not even enough for it to expose and articulate them all.

Did Heidi expose and articulate the difference between fermions and bosons? Did he anticipate anyons? If not, he wasn't exposing or articulating shit.

If we are properly to understand what it is for everything to be immanent,

we would need to know all Physics so as to know all Neurology and all Cognitive Science and so forth. Of course if minds don't actually supervene on anything material then...fuck Deleuze, we need lessons in telepathy and telekinesis and cool stuff of that sort.  

then we must also understand how these differences themselves contribute to the fundamental character of being.

The problem here is that we don't know what differences are fundamental. Maybe none are. We don't know. 

Time and Space seemed very different as did Matter and Energy or more recently bosons and fermions and so forth. But then there was a time when the Jew seemed very different from the 'Aryan'- at least to Heidi. Gay sex was considered fundamentally different to reproductive, hetero, sex.  Deleuze lived through a period of great cultural and intellectual ferment. But his 'plane of immanence' was obsolete.  

We must acknowledge a kind of ontological priority that these differences enjoy over the entities between which they obtain,

Because Time really is very different from Space as is Gay sex from Straight Sex and Jews from Aryans. I mean to say, Matter can't be changed into Energy anymore than you turn a money grubbing Jew into a genuine creative Scientist or Mathematician or Musician. All they are capable of is Negroid vulgarity- but even there their only aim is to make money for themselves. That is why, though you find Yids all over the world, nowhere do they have a country of their own. Parasites, you see, need a host. Apparently some swindling Jews are raising money to establish a colony in Palestine! Why not on the Moon! I tell you as a fact of ontology that the Jews can never have their own State for the same reason that no Man could ever set foot on the Moon! The sublunary sphere is ontologically different just as the Jews are biologically different. Failure to acknowledge the ontological priority of difference over the entities between which they obtain leads to ridiculous conjectures- e.g. man can set foot on the moon or the Jews could ever thrive in a State of their own. 

the entities whose being is under investigation, the things of which we can make sense. For this, Deleuze thinks, we need to look beyond Heidegger.

To some guy yet crazier and more ignorant. 

The second project is somewhat more nebulous but no less significant. Appeals to absolute transcendence can be used to evade all sorts of problems, both theoretical and practical.

But these guys never solve any problems. They are too stupid. Who cares what they 'evade'?  

Thus many people turn to their belief in God to help them acquiesce in suffering, and many of these in turn appeal to God’s absolute transcendence to help them address the question of how suffering can be part of God’s plan.

The answer is simple. Take a few decades of mortal misery and as a reward you get E-fucking-ternity of Living Large in a Heavenly Mansion while all the infidels get tortured in Hell.  

Their answer is that we cannot really understand how, since all talk of God, including talk of God’s plan, is at most an analogical extension of talk that we can really understand.

I don't really understand Actuarial Science or how the Stock Market works. But I have faith that my Pension provider aint just pissing my money up the wall.  

In Deleuze’s view, such appeals to absolute transcendence come too easily. Alluding to the famous Dostoevskian adage, he writes, ‘One must not say, “If God does not exist, everything is permitted.” It is just the opposite... It is with God that everything is permitted,’ 

The problem here is that everybody knows God wants us to be good and do useful stuff and shut the fuck up about our boring philosophy.  

Learning how to live without recourse to absolute transcendence is in large part an ethical exercise for Deleuze.

It was an unethical exercise in wasting everybody's time by writing stupid shit. There are genuine lunatics whom this sort of thing should be left to.  

It means learning how to confront all that we are given in such a way as to make sense of it in its own terms, resisting the seductions of, as it may be, an inscrutable theodicy or an abstract teleology in whose transcendent terms all our afflictions are ultimately justified.

Some people say the 'Humanities' can save us from Nazism or Donald Trump or whatever. Clearly it didn't do this in the past and doesn't do this now. 

The sad truth is, we evolved on an uncertain fitness landscape. Our 'afflictions' are a 'discovery' process. As Science improves more and more of this suffering can be turned into knowledge. Already, our instinct to shun the afflicted is being damped down. We understand that calling the Doctor may put the fellow back on his increasingly productive feet. We cherish human lives much more because all lives are more productive and thus beneficial to us. Moreover, even a large disabled or dependent population is a good thing because it permits more Medical research as well as causes 'regret minimizing' excess capacity for essentials to be maintained. The paradox of the 'Cities of the Plain' is that if their people had been rational as well as selfish, rather than stupid and greedy, then they'd have had a fully functioning Welfare State. Of course, if there was a resource crunch, the poor would have gone to the wall. But that's what happens anyway in Welfare States during a fiscal crunch. There is an entitlements failure. 

It is a non-escapist life-affirming exercise.

No. It was an escapist exercise in obfuscation, stupidity and bullshit.  

For Aristotle, it is pretty much axiomatic that certain differences between things are great enough, in themselves, to preclude a single sense of being in application to those things.

So you have a bunch of categories. But, it turned out, actually specifying the categories wasn't useful. Just calling people on 'meta-metaphoricity'- i.e. treating a figure of speech as a fact about the world and constructing another figure of speech, also to be taken as a fact, on that basis- and telling them not to be so stupid is all that is required.  

Spinoza, by contrast—at least on Deleuze’s reading of him—is prepared to understand all differences between things, however great, as themselves constituting the character of being. (This is what I earlier said that Heidegger fails properly to do—again, on Deleuze’s reading of him.) The difference between Aristotle and Spinoza is thus a fundamental difference of approach.

Surely there was a fundamental difference in Faith? Spinoza came from a monotheistic community. Aristotle didn't. 

On the Spinozist approach, it is not just that any mention of the being of a thing is to be understood as a reference to one particular entity; any mention of the multiplicity and diversity of things is to be understood as a reference to that entity, whose essence is expressed in the very differences between them.

This is actually like a 'slingshot' argument. Everything is connected by the web of predication so to know everything about one thing is to know everything about everything. Obviously, this can be refined; indeed, something like this inheres in the Yoneda Lemma. Sadly, that Mathsy stuff is difficult to follow if you are posting on your blog in between sips of Bacardi with half an eye cocked on whatever is binge-worthy on Netflix. What? It's called multi-tasking. Everybody is doing it. You can't tell me all these shite books academics keep publishing aren't written in the same way.

This is what Deleuze calls an ‘affirmation’ of difference. For x to be affirmed of y, in Deleuze’s sense, is—very roughly—for the essence of y, or for the sense of y, to be expressed through x; and for x to be affirmed tout court is for x to be affirmed of being.

But this never happens. It would be great if we could catch hold of a y which 'affirms' an x. It would be like an Einstein Rosen bridge. You'd have non-locality and be able to do all sorts of weird shit. 

It would be wonderful if we could sit in an armchair and get magical powers just by thinking about 'difference' or 'immanence' or 'transcendence'. But such being as us humans possess permits nothing of the sort. Sad. 

This affirmation of difference allows Spinoza to see being in the differences between things, rather as one sees a single image in the differences of hue, the differences of brightness, and the differences of location between the myriad different pixels that compose it. It allows him to make sense of things as part of one and the same immanent reality by making integrated sense of what differentiates them. But Deleuze thinks we can go further. For Spinoza still believes in one privileged unified entity that is prior to all multiplicity, prior to all diversity, prior to all difference. Deleuze thinks we can acknowledge the univocity of being without appeal to any such entity. (To this extent he is in line with Heidegger. Heidegger denied that being is itself an entity) If we deny that there is any such entity, we can understand any mention of the multiplicity and diversity of things entirely in its own terms and still make sense of things as part of one and the same purely immanent reality.

The problem here is that you are making sense of nonsense. We know that our picture of reality is defective. We just don't know which bits aren't wholly so. Sure, you can shuffle things in your mind till you say 'Eureka! I now get why immanence or transcendence or Yogic Levitation aint nonsense!' but just faking Enlightenment saves time.

This is not just an affirmation of difference.

Indeed! 

It is an affirmation of the affirmation of difference.

& an affirmation of that affirmation & so on 

For it allows difference itself to be its own affirmation.

& the affirmation of that affirmation & so on 

Such affirmation is no longer conceived as a reference to some other entity, some single substance in which everything is anchored. There is no such entity. Difference is itself the ultimate reality.

But only because there is this local anchoring. So you have chains and antichains and a poset instead of a well ordered set. Big whoop.  

Indeed it is itself ever different, like an ever changing image in which the only constancy is the change. Being can still be seen in the differences between things; but it no longer has any stable identity or overall unity of its own.

Why? A partially ordered set isn't so very different from a totally ordered set. You can always find some search or sorting algorithm to identify 'maximals' or 'get to the Pareto front'. Engineers and Actuaries and Accountants and so forth do this all the time.  
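
(A minimal sketch of that claim, with made-up cost/quality numbers: picking out the maximal elements- the Pareto front- of a partial order is a perfectly ordinary computation, even though some pairs of options are incomparable.)

```python
# Finding the maximal elements (the "Pareto front") of a partial order.
# Items are compared coordinate-wise, so some pairs are incomparable - yet
# picking out the undominated ones needs no total order at all.

options = {            # hypothetical (cost, quality) pairs; lower cost and
    "a": (3, 7),       # higher quality are both better
    "b": (5, 9),
    "c": (4, 6),       # dominated by "a" (costs more, lower quality)
    "d": (2, 5),
}

def dominates(x, y):
    """x is at least as good as y on both axes and strictly better on one."""
    (cx, qx), (cy, qy) = options[x], options[y]
    return cx <= cy and qx >= qy and (cx < cy or qx > qy)

pareto_front = [x for x in options
                if not any(dominates(y, x) for y in options if y != x)]
print(pareto_front)    # -> ['a', 'b', 'd']: mutually incomparable maximals
```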

It no longer has any independent status as an entity in its own right.

Sez who? Amazon doesn't know everything about its customer base. But it keeps analyzing its expanding Data Set in smarter and smarter ways. It may be that its decision processes are already a 'black box' which nobody fully understands. But this does not mean 'customer base' is not an independent entity. I suppose you could argue that an A.I will soon know everything about us. Then it will be able to control our actions without our conscious knowledge. We will be slaves. Humanity will not be an independent entity in its own right. It would merely be an extension of a Super-Computer. It is certainly possible that our genes and our culture have made us vulnerable to this sort of predator or parasite. But then Humanity as an entity probably has only a very limited existence in the timeline of this Universe. Sad.  

These differences, which are differences of power, are what Deleuze characterizes as intensive differences:

The problem here is that if we say power or force exist independently of interactions- they are the unwinding of an entelechy embedded in a monad- then we have entered a paranoid world where what is visible is delusive. Marxism, at one time, was about actual interactions. Its 'linguistic turn' turned it into paranoid garbage of a Gnostic type. It railed against occult forces in eschatological language. But, if there is an eschaton, there must also be a katechon. Free Market Economics concentrated on improving the katechon- i.e. the economy- so as to stave off the 'final crisis' or Day of Wrath. Mimetic effects- imitating more successfully interacting agents- swamped Growth Theory. The Chinese Marxists, in the Eighties, realized that what Marx actually said was 'To each according to his contribution'- till Scarcity at last disappears and everybody can be given everything for free.  

  There is a famous sentence on the final page of Deleuze’s Difference and Repetition where he summarizes his thinking by saying, ‘All that Spinozism needed to do for the univocal to become an object of pure affirmation [i.e. affirmation that is affirmation of itself] was to make substance turn around the modes,’ and then refers to Nietzsche’s doctrine of eternal return which he interprets as providing the wherewithal to do precisely that.

In other words, by making a theory circular- an ouroboros or phoenix born of its own ashes- you make it unfalsifiable. But then it has no instrumental value. It is not ethical. It is not action guiding. Let the clockwork run on. It will reset itself. There is nothing you can do except affirm that because nothing could be different therefore nothing should be different.

Marrying Nietzsche to Spinoza is like marrying your cat to a dog. You aren't going to get any cute kittypuppies.

It is the Nietzschean approach that Deleuze favours. But on what grounds? On what grounds, for that matter, does he favour Spinoza’s original preparedness to understand all differences as constituting the character of being in defiance of Aristotle’s unpreparedness to do so?

The answer is that Spinoza was prepared to get rid of 'entelechy' and teleology as a sort of inner clockwork. This was perfectly sensible. Once one goes down the other road one may end up as some sort of Hermetic Magician- i.e. a swindler. 

 It may be that his Jewish heritage made Spinoza suspicious of Descartes's dualism just as Solomon Maimon became suspicious of Kant's noumena/ phenomena distinction. Rather than have two entities, why not see the former as the asymptotic limit of the latter?

 Some have suggested that Spinoza's thinking was similar to Isaac Luria's disciple Abraham Cohen Herrera who died in Amsterdam when Spinoza was a child. The difference was that Messianism had taken a dangerous turn- more particularly as the dreaded year of 1666 approached. Spinoza, in his own manner, was affirming rationalism and monism as the key to an ethical life in a well governed polity free of the sort of excesses committed by the English Puritans.

Spinoza, of course, was under the spell of the 'angel of (Euclidean) geometry' and thus innocent of the 'devil of algebra' that Newton and Leibniz found in calculus. However, the notion of 'integration' allows Leibniz to embrace full blown Occasionalism as the price of a Transcendent God who keeps busy ensuring this is the best of possible worlds. This is obvious to economists who, though ignorant of Voltaire, speak happily of the 'Economics of Dr. Pangloss'- i.e. the use of a mathematical theory to paper over an obvious asymmetry.

It may be that had the Jews enjoyed a happier History, they too would have embraced Transcendence instead of settling for a jealously immanent 'El Kanna'. Amsterdam in Spinoza's time was rife with Messianism. Henry Oldenburg, a distinguished German savant who became the first secretary of the Royal Society, wrote to Baruch Spinoza about the self-proclaimed Messiah, Sabbatai Zevi, and the enthusiasm he had generated- "All the world here is talking of a rumour of the return of the Israelites ... to their own country. ... Should the news be confirmed, it may bring about a revolution in all things." Spinoza, however, knew that Revolutions aren't always well behaved creatures.

 Calvinists in Holland believed that the re-establishment of a Jewish State, as foretold by Scripture, was Divinely ordained. Indeed some Jews and Calvinists opposed granting Dutch citizenship to Jews even at the end of the eighteenth century on the grounds that the Jews must be ready to relocate once the Messiah arrived. In this context, Spinoza's work could be seen as a contribution to Liberal Zionism. The Jews might set up a State in Brazil or Madagascar or even in Palestine under the Ottoman Sultan (provided, Spinoza says, they give up the 'effeminate' practice of circumcision!). But they should understand that Moses was merely a law-giver. They shouldn't get too hung up on stuff in Deuteronomy or adhere to a Messiah who might turn out to be a crackpot.

Nothing that I have said so far in this essay appears to forestall a simple stand-off between Spinoza and Aristotle; nor, perhaps, between Nietzsche and Spinoza. It is natural to look for arguments here. And relevant arguments are to be found. Nevertheless, they are not my primary concern in this essay. My primary concern is not with how the univocity of being can be established. It is with something more basic. It is with how the univocity of being can even be properly thought.

The problem here is that how things 'can be properly thought' is not obvious at all. Those who are 'properly thinking' about stuff that is important to us are employing a vocabulary and utilizing analytical instruments which lie far beyond our ken. If we try to get up to speed on the subject by reading a couple of popular books we soon discover we are babbling nonsense. A little knowledge is a dangerous thing. Philosophy is psilosophy- a slender and anorexic wisdom doomed to perish of inanition.

I hope that some of what I have said so far has helped in this respect. In the second section of this essay I want to recapitulate this material, but in a different form, a form that relates it back to some of the concerns of analytic metaphysicians—and indeed of analytic philosophers more generally.

Surely analytic philosophy exists only to get away from a path of inquiry which led nowhere?  

I begin with a matter that might initially appear quite unrelated. Wittgenstein, in his Tractatus Logico-Philosophicus, draws a distinction between what he calls ‘signs’ and what he calls ‘symbols’. Signs are the written marks or noises that we use to communicate.

Not necessarily. What we communicate with is stuff that solves a coordination or discoordination problem. But there may be a surplus or deficiency of signs. What matters is the 'Schelling focal solution' to the coordination game. In other words, the context determines what is or isn't a sign. Clearly a guy who is bleeding needs medical assistance. We may say 'his gushing blood and strangled moans were signs he wanted help'.  
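
(A toy sketch of the point, with invented landmarks and salience scores: in a pure coordination game the payoff matrix alone cannot pick the meeting point; a shared sense of salience- the Schelling focal point- does the picking, which is the sense in which context rather than the sign itself settles what counts as the signal.)

```python
# A pure coordination game: both players only care about matching, so every
# matching pair is an equilibrium and payoffs alone select nothing. A shared,
# hypothetical notion of salience (how famous a landmark is) breaks the tie.

meeting_points = ["station_clock", "north_gate", "coffee_stand"]

def payoff(a, b):
    return 1 if a == b else 0          # match or miss

# Hypothetical common-knowledge salience scores
salience = {"station_clock": 0.9, "north_gate": 0.3, "coffee_stand": 0.4}

equilibria = [(a, b) for a in meeting_points for b in meeting_points
              if payoff(a, b) == 1]    # every matching pair is an equilibrium
focal = max(meeting_points, key=salience.get)
print(equilibria)   # three equally good equilibria
print(focal)        # 'station_clock' - picked by context, not by payoffs
```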

Symbols are signs together with their logico-syntactic use.

No. Symbols may be signs or they may not be. We don't always know.  

Logical syntax is akin to ordinary grammar, but deeper.

There may be no 'i-language', we don't know and can never be sure. No doubt, there are protocols and paradigms and so forth but those protocols and paradigms might serve no purpose, innit?  

Thus ordinary grammar associates the use of the word ‘are’ in ‘Human beings are animals’ with the use of the word ‘eat’ in ‘Human beings eat animals.’

But we don't think 'is' or 'are' is similar to 'eat' or 'beat'. The former refers to a mental classification, the latter to a type of interaction.  

Logical syntax recognizes differences between these, reflected in the fact that it makes sense to add to the latter sentence, but not to the former, ‘including themselves’.

'Human beings are animals, including you!' means something. I can imagine a guy starting to say this but then changing his mind, so as not to seem rude, to somewhat lamely end up saying something redundant- viz. human beings should themselves recognize they are animals- not just tell other people they are no better than beasts.  

Wittgenstein captures the relation between a sign and a symbol as follows: ‘A sign is what can be perceived of a symbol,’ (1961, 3.32).

But everything could be perceived as a symbol while there are some signs which can be perceived as being utterly meaningless.  

And he points out that ‘one and the same sign... can be common to two different symbols,’ (1961, 3.321).

But some symbols may have no sign- at least as yet. Consider the symbol for Socioproctology. I may speak of it. I may ask a graphic designer to design it. I may say 'aha! This and nothing else is the symbol I was looking for!' But, later on, I may change my mind when I discover it was last used by the Nazis.

Thus the word ‘round’ is sometimes used as a noun to denote a slice of bread, sometimes as an adjective to indicate circularity: one sign, two symbols. Now I am going to assume the following: difference of logico-syntactic use is a difference of degree; in other words, one symbol can share more or less of its logico-syntactic use with another. I do not claim that this doctrine is in the Tractatus itself. Whether it is, or whether for that matter its negation is, and in either case how explicitly or implicitly, are large exegetical issues that I shall put to one side. My aim is to make use of Wittgenstein’s ideas, not to rehearse them. Adopting this doctrine seems to me the only plausible and interesting way of extending those ideas, or at least the only plausible and interesting way of extending them that subserves our current purposes. Here is an illustration of the doctrine. The word ‘round’, as well as having the two meanings already indicated, is also sometimes used as a noun to denote a complete series of holes in golf. This is yet another symbol, different from either of the other two. But it is less different from the other noun than it is from the adjective. This can be seen in the following way. There are many meaningful sentences involving the word ‘round’, such as ‘I had a round yesterday,’ in which the meaning of the rest of the sentence prevents the word ‘round’ from functioning as an adjective but still allows it to function as either of the two nouns. In other meaningful sentences, including sentences that build on this one, such as ‘I had a round yesterday and I had it toasted,’ the meaning of the rest of the sentence prevents the word ‘round’ from functioning as one of those two nouns but not as the other. However, there is no equivalent transverse ordering. There is no meaningful sentence in which the meaning of the rest of the sentence prevents the word ‘round’ from functioning as one of those two nouns but still allows it to function either as the other noun or as the adjective.

No. 'I had a round yesterday and I had it toasted by a Japanese caddie who was distinctly of the rectangular persuasion' may, in some context, be very meaningful indeed. Certainly, if my boss said this, I'd laugh my head off. I'd relate this story to my colleagues- 'rectangular persuasion! That hits the nail on the head! I've often thought something similar whenever I go up for a round. Trust the boss to come up with the mot juste! I laughed so hard I wet myself!' Then H.R would send me a memo about my racist attitude to our Japanese friends. That would take the wind out of my sails. Still 'rectangular persuasion' is priceless! 

Transverse orderings are always possible where the underlying collection of sets is not well-ordered or features incomparables. You can find a new maximal 'anti-chain' and partition the thing differently.  
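
(A minimal sketch, on a toy divisibility poset, of how easily such a poset gets partitioned into antichains by 'height'- Mirsky-style levels. Nothing here needs a well-order; the elements and the relation are my own toy choices.)

```python
# Partition a toy poset (divisibility on a few integers) into antichains by
# "height": the length of the longest chain ending at each element. Elements
# on the same level are mutually incomparable, i.e. each level is an antichain.
from functools import lru_cache

elements = [2, 3, 4, 5, 6, 8, 12]

def below(x, y):
    """x strictly divides y - the order relation of this toy poset."""
    return x != y and y % x == 0

@lru_cache(maxsize=None)
def height(x):
    """1 + length of the longest chain strictly below x."""
    return 1 + max((height(y) for y in elements if below(y, x)), default=0)

levels = {}
for x in elements:
    levels.setdefault(height(x), []).append(x)
print(levels)   # {1: [2, 3, 5], 2: [4, 6], 3: [8, 12]} - each level an antichain
```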

Now, are there any ambiguities that do not involve any difference of logico-syntactic use, ambiguities that are, so to speak, simple differences of meaning? (In current terms such an ambiguity would involve one symbol, not just one sign.) One’s first thought is that there surely are. But on reflection the matter seems less clear. Exact sameness of logico-syntactic use cuts very finely indeed.

Only if V- the universal set- is well ordered or has a certain type of reflection principle. We don't know if this is the case for Math. How could we know it for natural language?

Very crudely speaking, if there were an ambiguity that involved no difference of logico-syntactic use, there would have to be two things such that whatever could be meaningfully said of one could be meaningfully said of the other.

But this is precisely the problem we face in Quantum Mechanics! There is a tension between identity of indiscernibles and sufficient reason. We live in a world where Science may affirm multi-verses of a type more unimaginable than any dreamed up by Comic book writers.  

In the footnotes Moore asks-

 What if someone were to insist that the word ‘round’, in this example, can function as either noun; it is just that, in one case, the sentence can only be used to say something false? I would deny that. I think an utterance of ‘I had a complete series of holes yesterday and I had it toasted,’ would be meaningless, not false.

Would it? Clearly you did something of a commemorative type. Maybe 'toasted' means you had your achievement recorded and, in accordance with the traditions of the Club, the membership committee assembled and toasted your success while the caddies stuck radishes up their bums. The fact that you said 'I had it toasted' suggests that the committee was less than enthusiastic. The Chairman said- 'Must we toast you? You know my caddie wasn't able to get the radish out of my bum last time we toasted Donald Trump. You know how vain he is. Always insists on getting his ounce of flesh. But you are a decent chap. Why not let us off the toasting? Tell you what, we'll put up a plaque in your honor.' Though not unsympathetic to their plight, you insisted on the toasting. But you paid for K-Y jelly out of your own pocket and taught the caddies how to tie a piece of string to the radishes so that retrieving them would be easier. Shame those strings broke. Personally, I blame the string theorists. Those tossers are utterly useless. They should get real jobs- preferably as caddies... unless they are of the rectangular persuasion!

But I agree that this is contentious. The contentiousness will be significant later.  If it is true that difference of logico-syntactic use is a difference of degree, then that seems to me to have a bearing on a number of other philosophical issues.

But only if 'logico-syntactic use' is working with a universal set which is well ordered or which has a certain type of reflection principle. Otherwise the thing is useless. Aristotle might say this is the fault of 'akrebia'- bringing more precision to bear on a topic than the context warrants.  

In particular, I think it has a bearing on Frege’s problem about the semantics of predicates: see Frege (1997).

One can solve that problem with 'rigid designators' which are like Directed Graphs. The problem is that in Physics we will still have to deal with indiscernibly identical particle interaction. But this same problem arises in every type of decision theory. Even if we know we are dealing with differences, we can't say what those differences are. We have to grope our way towards a Structural Causal Model. But this involves 'economia'- a discretionary type of management or accommodation. We are far from the world of 'Logico-syntactic' precision. To speak of differences in degree without being able to identify the actual differences is to have said nothing useful.  
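
(For readers who haven't met a Structural Causal Model, a toy sketch- variable names and probabilities entirely invented- of what one is: each variable is a function of its parents, and an intervention overwrites an equation rather than conditioning on it. That discretionary choice of which equations to keep is the 'economia' being gestured at.)

```python
# A toy structural causal model: each variable is computed from its parents
# plus noise, and an intervention ("do") replaces an equation outright.
import random

def sample(do=None):
    do = do or {}
    rain = do.get("rain", random.random() < 0.3)
    sprinkler = do.get("sprinkler", (not rain) and random.random() < 0.5)
    wet = do.get("wet_grass", rain or sprinkler)
    return {"rain": rain, "sprinkler": sprinkler, "wet_grass": wet}

# Observational world vs. the world where the sprinkler is forced on:
obs = [sample() for _ in range(10_000)]
forced = [sample(do={"sprinkler": True}) for _ in range(10_000)]
print(sum(s["wet_grass"] for s in obs) / len(obs))        # roughly 0.65
print(sum(s["wet_grass"] for s in forced) / len(forced))  # 1.0 under the intervention
```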

 ...here is a more tractable issue. Is it possible to expose an ambiguity without exposing any difference of logico-syntactic use?

Yes. Context is all. It drives pragmatics. 

This time there seems little room for doubt. Clearly it is. For it is possible to show that a word is ambiguous by producing a sentence involving the word, the rest of whose meaning is presumed given, and then pointing out that a single utterance of the sentence can be interpreted as true or as false depending on how this particular word is construed. (For instance, imagine my saying, ‘I had a round yesterday,’ on the morrow of a day on which I had a slice of toast but did not so much as set foot on a golf course.) The word is thereby shown to be ambiguous even though the question of whether any difference of logico-syntactic use is involved is left open.

Either the word already meant different things- in which case it was ambiguous- or it didn't yet and was ambiguous because it could become so. 

Let us now reconsider the univocity of being. And let us think of this issue as an issue about whether the word ‘being’ and its various cognates are relevantly ambiguous.

Even if they aren't so to our knowledge, we can only say they aren't ambiguous yet. But every new occurrence of the word may be so. Only if you have a 'buck stopped' protocol bound system can you argue otherwise. But even then, if the 'buck stopping' isn't 'stare decisis'- i.e. itself bound by its past rulings- then ambiguity might yet appear. Of course, if you are using 'Tarskian primitives'- then ambiguity is built into the 'base case'. The dream of a purely intensional system- a Leibnizian Mathesis Universalis- remains a dream in Mathematics. The best we can hope for is univalent foundations for some useful purpose- but not for all purposes.

To be sure, this may not be as innocent as it appears. It is not entirely obvious that the issue can be cast in this linguistic form without some loss. But even if it cannot, the new casting of the issue will at the very least be of (highly pertinent) interest. Anyone committed to the non-univocity of being would in these terms be committed to an ambiguity in the word ‘being’.

No. I'm committed to a dualism between me and everything else (which is actually the emanation of the Nicaraguan horcrux of my neighbor's cat). Just by saying 'fuck off back to Nicaragua' to any attempt to convict me of holding an ambiguous view of being, I have made my ontology bullet proof. It is a different matter that everybody avoids me and thinks I'm a lunatic.

But the commitment would, I claim, be unsustainable unless the ambiguity could be exposed in a way other than that just described.

The commitment is unsustainable because- sorry to break it to you, folks- we all fucking die. Exposing 'ambiguity' is not stuff guys who bother with affirming non univocity of being care greatly about. They just want to cash their pay checks till death supervenes.  

The ambiguity would have to be, and would have to be seen to be, an ambiguity involving a difference of logico-syntactic use. Why do I claim this? For two principal reasons. Or perhaps rather, for one principal reason that can be broached in two ways. First, to think that the word ‘being’ is ambiguous is to think that some things are so different in kind from others that talk about the ‘being’ of the former cannot be understood in the same way as talk about the ‘being’ of the latter.

It is enough for uncorrelated asymmetries to exist for 'being' to contain differences such that strategy choice- including any type of speech, because language is strategic- is asymmetric in an arbitrary manner even if information is complete. Two philosophers arguing are still two different philosophers. Both may want to win. But there may be an arbitrary asymmetry. One guy has tenure. The other is hoping to get it, with the other's help. This may lead to a 'bourgeois strategy'- i.e. suck up to the boss by throwing the contest after having buttered him up good and proper.

Of course, even if philosophers were above any such petty considerations, Dijkstra showed they would starve before they could agree on a rule re. the sharing of cutlery- i.e. concurrency problems would still arise.
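
(Dijkstra's dining philosophers, in a minimal sketch: if every philosopher grabs the nearer fork first they can all deadlock; acquiring forks in a fixed global order breaks the cycle. Counts and timings are arbitrary.)

```python
# Dining philosophers with deadlock avoided by resource ordering: every
# philosopher acquires the lower-numbered fork first, so no cycle of waiting
# can form and everyone eventually eats.
import threading
import time

N = 5
forks = [threading.Lock() for _ in range(N)]

def philosopher(i, meals=3):
    left, right = i, (i + 1) % N
    first, second = sorted((left, right))        # fixed global ordering
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                time.sleep(0.001)                # eat
        time.sleep(0.001)                        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("everyone ate; no deadlock")
```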

But unless the ‘cannot’ here means ‘cannot, as far as the meaningfulness of the talk is concerned’ as opposed to ‘cannot, as far as the truth of the talk is concerned’, then it is just not clear why anyone would think such a thing.

In other words if you can get rid of strategic behavior- i.e. truth and meaning are univocal- then...urm...what exactly? Do you get a Benthamite or Gandhian Utopia? Nope. So long as there are arbitrary asymmetries, the correlated equilibrium won't look univocal at all. There will be 'costly signals' not a 'cheap talk' pooling equilibrium.

People think what they think because thinking evolved on an uncertain fitness landscape such that it conferred survival value.  

Unless there are differences in kind between things that are so great that  the very business of characterizing some of these things requires a different logical syntax from the business of characterizing others,

why not keep the same logic but confine those things to a separate category? Alternatively, use a different axiom system for them. Indeed, for any class of objects, it may be useful to find the 'reverse mathematical'- i.e. most parsimonious- axiom system for them.

what is to prevent the devising of vocabulary that can be truly applied to all of them?

The answer has to do with time complexity and computability. It is an open question whether non constructible entities should be used.  

And if nothing is to prevent this, then what is to prevent the devising of a term that can be truly applied to whatever the word ‘being’ can be truly applied to, under each of its supposed interpretations?

It is entirely possible that there is some Latin or Tibetan formula which, if uttered, would grant you omniscience. What prevents this, to our mind, is the strangeness of the Structural Causal Model which doing so would confirm as true. 

Dare I utter the secret name of God? Omniscience would mean knowing how all my favorite TV 'stories' will end. How the fuck will I entertain myself?  

But given such a term, and given the work that it has to do (notably, enabling us to refer to the character of whatever exists), what is the rationale for thinking that the word ‘being’ is ambiguous in the first place? Why not accept that this new term is just a synonym for ‘being’, with (as we now see) its single generic meaning?

In other words, once everything is known to everybody- what ambiguity will remain? We will be as Gods. Lonely Gods with nothing to watch on Netflix. No fucking thanks! 

This question leads naturally to the second way of broaching the matter. Unless the ambiguity in the word ‘being’ involved a difference of logico-syntactic use, what rationale would there be for acknowledging different senses of the word ‘being’ as opposed to acknowledging differences among the entities to which the unambiguous word ‘being’ can be truly applied? This is an old idea, famously and marvellously captured by Quine in his Word and Object. Quine is there concerned with a somewhat different issue: whether the terms ‘true’ and ‘exist’ are ambiguous. But his response to the claim that they are is essentially the same as the response that I am now recommending to the claim that ‘being’ is non-logico-syntactically ambiguous. Quine writes, specifically in connection with ‘true’: There are philosophers who stoutly maintain that ‘true’ said of logical or mathematical laws and ‘true’ said of weather predictions or suspects’ confessions are two usages of an ambiguous word ‘true’... What mainly baffles me is the stoutness of their maintenance. What can they possibly count as evidence?

Structural Causal Models. The one for Math is very different from the one for Meteorology.  As for 'suspects' confessions', in India they are inadmissible unless given to a Magistrate rather than a police constable who is shoving things up your bum. 

Why not view ‘true’ as unambiguous but very general, and recognize the difference between true logical laws and true confessions as a difference between logical laws and confessions?

This does not make sense. True, in this context, means algorithmically verifiable. A confession which turns out to be corroborated by CCTV evidence can be verified by a sufficiently advanced computer program which makes no assumptions about 'motives' or any other intentional state.  

The problem is that what is verifiable isn't always true. It turns out the guy has an identical twin. On the other hand a computer did find a flaw in Gödel's proof of God. But we don't know if such a proof could be made 'bullet proof' by an even more advanced computer.

Similarly, I submit, in the case of ‘being’. And note that acceding to a single sense of being in this way would not preclude, in fact would encourage, acknowledging different kinds of being corresponding to the various fundamental differences between entities.

But would it be useful in any way? 

This is why Heidegger, who is certainly keen to acknowledge different kinds of being—for example, those kinds of being that are peculiarly enjoyed by ‘whos’ and those kinds of being that are peculiarly enjoyed by ‘whats’—can nevertheless be considered a champion of the univocity of being.

He could also be considered a champion of Hitler. Surely, his championing stuff you like aint something to boast about? 

In linguistic terms, then, the doctrine of the non-univocity of being had better be construed as the doctrine that the word ‘being’ has more than one logico-syntactic use; that there is one sign here that is common to more than one symbol.

Why? I may deny that language means anything at all unless it is used in a manner beneficial to me. Indeed, language is all about not listening to trees the way you listen to Mummy. It is founded on non-univocity. If trees start talking to you, see a shrink.  

A possible analogy would be the use of the existential quantifier ‘Ǝ’ in formal languages to represent both first-order quantification and second-order quantification.

Analogies from formal languages are likely to mislead unless they are 'buck stopped' in a manner we find familiar in natural language. But 'buck stopping' is about breaking concurrency deadlock or decision making under uncertainty or some other non-ideal scenario. It picks out arbitrary asymmetries. We don't feel it speaks to Truth or Justice so much as Utility and a pragmatic spirit of getting on with our lives.  

There too, arguably, there is one sign that is common to more than one symbol. Nor would the analogy stop there. Anyone who took this view of both ‘being’ and ‘Ǝ’ would no doubt insist that, in the case of ‘being’, just as in the case of ‘Ǝ’, the similarities between how the different symbols are used are sufficiently striking and sufficiently important for the commonality of the sign to be both natural and warranted.

Why not admit both are 'Tarskian primitives'? What is undefined may shade into every and any other thing which is undefined. One may as easily say Truth is Beauty as opine that Beauty is Justice. Or we may say nothing is as truly monstrous as Justice at its most sublimely beautiful. 

This was essentially Aristotle’s position. For Aristotle, the differences in reality to which different senses of being corresponded were categorial differences, deep ‘grammatical’ differences of just the sort envisaged here.

Do 'grammatical differences' arise in the absence of composition or structure? If so there must be some innate grammar which existed before speech. Maybe Chomsky is right. Some magical gene just attached itself to all members of our species at the same time. No doubt it was brought to earth by Lizard People from Planet X. Fuck you Lizard People! Fuck you very much! But for you, peeps wud think I cud rite gud and knew from Syntax. (Apparently, it isn't the money you have to pay your neighbor if you had a crafty wank. I wish somebody had told me. Incidentally, that neighbor of mine had a degree in Philosophy. Now you understand why I hate them bastids.)

But he also insisted that the use of a single word to embrace these different senses of being was both natural and warranted: this was what the comparison with healthiness was intended to show.

In other words, he insisted that what he was saying wasn't useless pedantry.  

To repeat: the advocate of the non-univocity of being had better think that different uses of the word ‘being’ differ in their logico-syntactic use.

Or else just deny that anything anybody else says, unless it is pleasing or useful to me, has any meaning whatsoever.  

  But this presents a challenge of its own: how to show that they do.

This would involve actually reading their shite.  

The sheer fact that the word is used in linguistic contexts which themselves differ in their logico-syntactic use is not decisive.

Is this a meaningful sentence? 

Consider these two contexts: ‘That person is...’ and ‘That tree is...’ These differ in their logico-syntactic use.

Not necessarily. It may be that some trees are pals of this guy. He is married to an oak. However, he can't bear elms. You should hear the way he hisses out the word 'tree' when referring to them. I understand his wife once had a fling with an elm. Her acorns turned into owls. Aristotle explained all this to Alexander. That's why the guy was in such a hurry to go conquer places where pedants were less silly.

But it does not follow that the phrase ‘exactly two metres in height’, which can be meaningfully inserted into both, is logico-syntactically ambiguous—not granted the assumption that difference of logico-syntactic use is a difference of degree. Similarly, the fact that we can talk about the ‘being’ of that person and the ‘being’ of that tree does not show that ‘being’ is logico-syntactically ambiguous. It may be a necessary condition of the non-univocity of being that ‘being’ should have application to things that are so different that there is no single logico-syntactic way of making reference to all of them; but it is not a sufficient condition.

Or it may be completely irrelevant. Some Physicists believe there are things we can't interact with. Can we say they have univocal being- i.e. exist as we do? To be on the safe side, yes. Maybe a crucial experiment will change our minds. They belong in 'Imagination-land' along with Santa & Superman. But we can't be sure yet.

The problem with armchair Philosophy is that it must be wholly useless if we really did evolve on an Uncertain fitness landscape. Whatever 'categories' we have, or whatever way of thinking seems natural to us, must be the product of coevolutionary processes. But, in that case, future states of the world are not given to us as a probability distribution. If they were, philosophy would be an experimental science- a branch of Psychology. Frege would have lived in vain.

The difficulty is exacerbated by the following fact. The simple way of exposing an ambiguity which we considered earlier—namely, producing a sentence involving the ambiguous word and pointing out that a single utterance of it can be interpreted as true or as false, leaving open whether the word is logico-syntactically ambiguous—has no counterpart when it comes to showing that a word is logico-syntactically ambiguous. It is of no avail to produce a sentence involving the logico-syntactically ambiguous word and then to point out that a single utterance of it can be interpreted as meaningful or as meaningless. Provided that interpreting an utterance as meaningless is not a contradiction in terms, then this is something that one can do to any utterance whatsoever. (One can always construe some word in the utterance as occurring without either its standard meaning or any other meaning.) It cuts no ice at all where ambiguity is concerned. Given an utterance of the sentence, ‘Her brooch is round,’ for example, we can construe ‘round’ as occurring without either its standard adjectival meaning or any other meaning. But that is quite irrelevant to the use of ‘round’ as a noun. Nothing about this sentence is relevant to the use of ‘round’ as a noun, given the meaning of the rest of the sentence. The meaning of the rest of the sentence precisely precludes the use of ‘round’ as a noun here.

Does it? I say 'Anne is such a square- have you seen her brooch?' You reply 'You're jelly! Anne is so not square- as for her brooch, it is positively round!'

I am not suggesting that there is no way of exposing a logico-syntactic ambiguity. In the case of the word ‘round’, it is perfectly acceptable simply to point out that the word has both a nominal use and an adjectival use.

Knightian Uncertainty means not being able to specify all possible uses in advance. A 'buck stopped' protocol bound discourse may have a doctrine of 'harmonious construction' or 'univalent foundations' or whatever so as to get on with some more or less narrow, but useful job. It may be that analytical philosophy appeared useful at one time. But that time has long passed. Hark at me! The time has long passed since you stopped reading this blog post. You were hoping for porn. You weren't wholly wrong. The hard core stuff will come in my next post on this subject.  

