Tuesday 22 August 2023

Chalmers' hard problem- does Godel lead to God?

The EU-funded 'Human Brain Project', which set out to build a computer model of the brain, is wrapping up. Nature reports- 

'During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions.

“When the project started, hardly anyone believed in the potential of big data and the possibility of using it, or supercomputers, to simulate the complicated functioning of the brain,” says Thomas Skordas, deputy director-general of the European Commission in Brussels.

Almost since it began, however, the HBP has drawn criticism. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”, says HBP member Yves Frégnac, a cognitive scientist and director of research at the French national research agency CNRS in Paris. For him, the project has fallen short of providing a comprehensive or original understanding of the brain. “I don’t see the brain; I see bits of the brain,” says Frégnac.

Perhaps a brain can only see those bits of the brain it maps. But the map is not the territory unless, perhaps, it is its own lossless compression. But what is 'significant information'? How can we be sure it isn't an arbitrary notion? It seems we are relying on a 'Reflection Principle' here. All paths, it seems, lead to Godel, but does Godel lead to God? 

Godel distinguished 'semantic paradoxes', which are trivial to solve using a type theory, from what he called 'intensional paradoxes', which deal with concepts, where the lack of an 'Absolute Proof' (such that the 'intension' would have its own completely independent logic) creates what may be an impasse till 'the end of mathematical Time', at which point 'naturality' will be revealed in its true, unique, wholly autonomous and self-subsistent form. At that point there will be no illusion of 'emergence' or 'supervenience'; there will just be a 'pullback' or 'naturality square'. 

In a sense, the Razborov-Rudich result regarding 'natural proofs' in connection with P=NP is like the problem with Absolute Proof. In the former case, we can't have a natural proof without having a way to distinguish random from pseudo-random; in the latter, there would be something like modal collapse or else a 'Divine Axiom'. Put another way, either cryptography is a mug's game or else we are machines designed to lie who, cruelly, have been tasked with creating a truth detector. Come to think of it, if evolution gave us consciousness because it has survival value (rather than its being an unwanted side-effect which will be pruned out by and by), then it was for the purpose of lying to ourselves. Which is why, as the Sufis say, the declaration 'there is no God' is the proof that there is no God but God. Of course, we could avoid this fate by claiming to have Socrates's 'synoida'- i.e. the knowledge that we know nothing- in which case we are either dead or 'learning'. Sadly, you can learn the truth without having it. The status of being a student means that whatever your learning has grasped must disappear the moment you stop learning. 
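To make the Razborov-Rudich point a little more concrete, here is a toy sketch in Python. It is not the actual circuit-complexity construction, and every name in it is my own invention. A 'natural property' would, among other things, yield an efficient test that truly random strings pass and pseudo-random ones fail; the sketch merely shows a cheap statistical test failing to separate the output of a hash-based toy generator from the operating system's randomness, which is exactly the sort of gap such a distinguisher would have to close.

```python
# Toy illustration only: a cheap statistical 'property' applied to genuinely
# random bytes and to bytes from a hash-based pseudo-random generator.
# If strong PRGs exist, no efficiently computable, sufficiently common
# ('natural') property can separate the two - the obstacle Razborov-Rudich
# identified for 'natural proofs' of circuit lower bounds.
import hashlib
import os


def toy_prg(seed: bytes, n_bytes: int) -> bytes:
    """Expand a short seed into a long pseudo-random string (SHA-256 in counter mode)."""
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]


def one_bit_fraction(data: bytes) -> float:
    """A crude 'property': the fraction of 1-bits in the string."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (8 * len(data))


truly_random = os.urandom(1 << 16)
pseudo_random = toy_prg(os.urandom(16), 1 << 16)

print("truly random :", one_bit_fraction(truly_random))
print("pseudo-random:", one_bit_fraction(pseudo_random))
# Both values hover around 0.5: the cheap test cannot tell them apart.
```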

How does all this faux-mathematical or mystical musing relate to what, almost 30 years ago, David Chalmers described as the 'hard problem' of consciousness?- 

even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?

If the explanation of behaviour involves not 'hand waving' but the specification of a Structural Causal Model, then that behaviour can be duplicated under laboratory conditions. It may be that some performative behaviour is accompanied by 'experiences'- e.g. I am aware of a type of celestial music when I do my taxes whereas my Accountant, who has to redo my taxes because I'm shit at arithmetic, experiences nothing save some mild discomfort caused by my flatulent presence in his office. If we have a good Structural Causal Model of the behaviour associated with 'doing my taxes', there would be some way of tinkering with me so that I'm not shit at arithmetic and hear nothing when doing my taxes, whereas my Accountant gets to hear celestial music any time he is obliged to perform similar computations. We could then speculate on why there is this variation in phenotype. Does it have some potential evolutionary benefit or is the thing a 'spandrel' or a random type of 'noise'? It may be that some people would like to hear celestial music as they do their jobs as Accountants. Maybe there is a marketing opportunity here. Also, does the quality of music change as the type of mathematics changes? Could a 'synaesthetic' person have a musical intuition of what is good math and what is a fatally flawed theorem? These are interesting questions if we do indeed have a good Structural Causal Model which enables us not just to explain or predict behaviour but to change it in a desirable manner. However, if our 'explanation' is mere hand waving or a 'just so' story, we may indeed distinguish between 'hard' problems (relating to which we can postulate no 'mechanism') and those involving mechanisms invoked by 'hand waving' but not otherwise specified. 
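For what it is worth, here is a minimal sketch of the kind of Structural Causal Model one could 'tinker' with. The variable names (arithmetic_skill, does_taxes, hears_music) and the structural equations are pure invention for illustration- no claim is made that this is Chalmers' framework or anyone's neuroscience. It only shows how an intervention (Pearl's do-operation) overrides one equation while leaving the rest of the mechanism intact.

```python
# A toy Structural Causal Model: each variable is a function of its parents
# plus exogenous noise, and 'tinkering' means overriding one equation (do()).
# The mechanism below is made up purely for illustration.
import random


def sample(do=None):
    """Draw one unit from the toy SCM, optionally under an intervention."""
    do = do or {}
    u = random.random()  # exogenous noise
    v = {}
    v["arithmetic_skill"] = do.get("arithmetic_skill", u > 0.5)
    v["does_taxes"] = do.get("does_taxes", True)
    # Invented structural equation: celestial music accompanies tax-doing
    # only when arithmetic skill is poor.
    v["hears_music"] = do.get(
        "hears_music", v["does_taxes"] and not v["arithmetic_skill"]
    )
    return v


# Observation: how often is doing one's taxes accompanied by music?
observed = [sample() for _ in range(10_000)]
print(sum(v["hears_music"] for v in observed) / len(observed))      # ~0.5

# Intervention: force good arithmetic skill; the accompaniment vanishes.
intervened = [sample(do={"arithmetic_skill": True}) for _ in range(10_000)]
print(sum(v["hears_music"] for v in intervened) / len(intervened))  # 0.0
```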

The Wikipedia article on 'the hard problem' clarifies the matter further-

Easy problems

The easy problems are amenable to reductive inquiry. They are a logical consequence of lower level facts about the world, similar to how a clock's ability to tell time is a logical consequence of its clockwork and structure, or a hurricane is a logical consequence of the structures and functions of certain weather patterns. A clock, a hurricane, and the easy problems, are all the sum of their parts (as are most things).

There is no 'logical consequence' here. There is merely a mechanism such that tinkering with it can alter outcomes. We don't say the clock can tell the time because of its mechanism. We say the clock is a mechanism to tell the time. Tinkering with it may make it more accurate. But there may be an even better mechanism which needs less tinkering.  


The easy problems relevant to consciousness concern mechanistic analysis of the neural processes that accompany behaviour. Examples of these include how sensory systems work, how sensory data is processed in the brain, how that data influences behaviour or verbal reports, the neural basis of thought and emotion, and so on. They are problems that can be analyzed through "structures and functions".

But such talk is mere hand-waving. If there is a Structural Causal Model embodied in an actual mechanism, or something that could be part of such a mechanism, and if it can alter outcomes, then and only then do we have information of high enough quality to become the basis of deduction. But even that deduction must be for a narrow purpose.  

Chalmers' use of the word easy is "tongue-in-cheek". As Steven Pinker puts it, they are about as easy as going to Mars or curing cancer. "That is, scientists more or less know what to look for, and with enough brainpower and funding, they would probably crack it in this century."

In other words, 'easy problems' are ones where we can hand-wave on the basis that there is a mechanism which is just a bigger and more complicated version of something we already have or can imagine.

But 'hand waving' isn't actually 'explaining'. It is merely a gesture to the effect that the true explanation won't be particularly startling. Yet this is seldom the case. Doing novel stuff, rather than talking about doing novel stuff, opens new vistas of wonder for smart people. 

Hard problem

The hard problem, in contrast, is the problem of why and how those processes are accompanied by experience.

Surely this is an 'easy' problem which lies within the remit of evolutionary biology?  

 It may further include the question of why these processes are accompanied by this or that particular experience, rather than some other kind of experience. In other words, the hard problem is the problem of explaining why certain mechanisms are accompanied by conscious experience.[24] For example, why should neural processing in the brain lead to the felt sensations of, say, feelings of hunger? And why should those neural firings lead to feelings of hunger rather than some other feeling (such as, for example, feelings of thirst)?

These things don't inevitably happen. We may invoke evolutionary theory or the theory of machine learning to explain why certain things enter the conscious minds of certain people at certain times but not of others. I once had a girlfriend who was puzzled as to why she sometimes felt sexually attracted to me. Foolishly, I revealed the secret to her. The fact is, she was under the impression that I was smart and could help her with her dissertation. Yet every time I started to talk she experienced such a strong sense of boredom (a defence against having to listen to gibberish) that the only way to avoid catatonia was to flood the brain with explicit erotic imagery, which led her to wrongly conclude that I might provide her with a type of gratification for which, sadly, I lack the stamina. 

Chalmers argues that it is conceivable that the relevant behaviours associated with hunger, or any other feeling, could occur even in the absence of that feeling. This suggests that experience is irreducible to physical systems such as the brain. 

It suggests nothing at all. What we call 'experience' has survival value unless it doesn't. What matters is the fitness landscape. It may be that having sex with a very boring and stupid man will lead you to have very boring and stupid progeny who, however, may survive under the dictatorship of the boring and stupid. If you had sex with a smart and interesting man, your progeny might become dissidents and be killed under such a regime. The moment I revealed the truth to my girlfriend, she dumped me because it was obvious that smart and interesting kids do well in our shared milieu. A boring and stupid man like me would end up on life's scrapheap. 

Chalmers believes that the hard problem is irreducible to the easy problems:

There is only hand-waving. There are no actual problems unless people are tinkering with mechanisms to improve relevant outcomes. But those people may have zero interest in, or knowledge of,  'philosophy'.  

solving the easy problems will not lead to a solution to the hard problems.

Solving easy problems can involve mathematical innovation which in turn can improve logical reasoning thus giving new insights into 'hard' problems.  

This is because the easy problems pertain to the causal structure of the world while the hard problem pertains to consciousness, and facts about consciousness include facts that go beyond mere causal or structural description.

We don't know the degree of interdependence between what appear to be different causal systems. It may be that there is just one big fact which lies at the root of every chain of causation. Equally, there may be lots of different facts, but the Structural Causal Model which finds the basis of 'naturality' by which the Yoneda lemma can work its magic lies beyond the Eschaton of Mathematical Time. I don't know what this means. Perhaps there is an upper bound to the growth of mathematical functions and this can be represented in terms of complexity. Or perhaps, boring fool that I am, what I have uttered is gibberish. Yet it is a fact that things like 'naturality' or 'randomness' may be, like the Chinese unicorn, things we can't encounter if we know them and can only encounter if we can't spot them for what they are. At one time there was a transcendental God of whom we had knowledge only 'as in a glass, darkly', but now we have a Godless Universe where we know we can't distinguish random from pseudo-random or (I believe) 'law-like' from 'law-less' choice sequences. It was one thing to wonder whether God gave us free will and another to acknowledge that whatever we give ourselves can never be known to involve even such paltry assertions of freedom as the decision to let the toss of a coin determine our course of action. 

I am suggesting that the 'hard' problems are essentially theological. Godel's proof of God may fail, but the best result a parsimonious 'reverse mathematics' could get us to is that a 'Divine Axiom' would give us the comfort that our reasoning is consistent.

For example, suppose someone were to stub their foot and yelp. In this scenario, the easy problems are mechanistic explanations that involve the activity of the nervous system and brain and its relation to the environment (such as the propagation of nerve signals from the toe to the brain, the processing of that information and how it leads to yelping, and so on). The hard problem is the question of why these mechanisms are accompanied by the feeling of pain, or why these feelings of pain feel the particular way that they do. Chalmers argues that facts about the neural mechanisms of pain, and pain behaviours, do not lead to facts about conscious experience.

Because some other mechanism is interposed between them. If I stub my toe while fleeing a mass murderer, I don't feel pain. Later, I discover I fractured my toe. At the time, I was too focused on running away to register pain. 

Facts about conscious experience are, instead, further facts, not derivable from facts about the brain.

Just as the fact that my conscious experience of my computer being totes shit was not derivable from facts about the computer. I'm just very very stupid and sometimes try to type without plugging in my keyboard. Of course, one could say 'this is a fact about the computer software. It should flash a warning when it detects a failure to connect the keyboard'. My reply is 'what is software? I don't want it. I want hardware only because I'm trying to solve the hard problem of consciousness'. At this point, it is usual to reply 'for just a couple of thousand dollars I can upgrade your computer to be 'well-hard'. That's the type of computer really brainy people use. I'm sorry, Bill Gates has forbidden us from taking personal checks. Also we can't give you a receipt. Just push an envelope full of money under my door, Uncle, and your computer will be automatically updated.' 

The hard problem is often illustrated by appealing to the logical possibility of inverted visible spectra. If there is no logical contradiction in supposing that one's colour vision could be inverted, it follows that mechanistic explanations of visual processing do not determine facts about what it is like to see colours.

If there is a mechanism which changes how people see colours, then there is an expanding data set of a factual type. These facts are generated, and therefore determined, by what people think it is like to see colours. It may be that 'machine learning' programs used to train AIs to pick out certain objects will produce 'facts' useful to Doctors seeking to improve colour vision for some class of people.  


An explanation for all of the relevant physical facts about neural processing would leave unexplained facts about what it is like to feel pain.

But explanations of what it is like to feel pain may be useful in pain-management mechanisms. Consider the 'mirror box' used to relieve pain experienced in phantom limbs. The patient may feel that what it is like to feel pain is associated with helplessness. They may be sceptical about some new therapy which involves what looks like a child's toy. But they may give it a try because of the notion that if you feel you have control over a thing, what it feels like to have pain diminishes. The notion is that you have put the phantom limb into an uncomfortable position, and so correcting that position in the mirror box relieves the symptoms, because what it is like to feel pain is different from what it is to have a limb with functioning pain receptors.  

This is in part because functions and physical structures of any sort could conceivably exist in the absence of experience. Alternatively, they could exist alongside a different set of experiences. For example, it is logically possible for a perfect replica of Chalmers to have no experience at all, or for it to have a different set of experiences (such as an inverted visible spectrum, so that the blue-yellow red-green axes of its visual field are flipped).

But we don't know that anything is logically possible as opposed to compossible. We don't know if perfect replicas are 'compossible'. Consider the Banach-Tarski paradox, which permits perfect replicas- but it relies on the axiom of choice or something like it. It is one thing to say x exists- e.g. non-measurable sets exist- and another to admit we know of no such beasties. We may as well speak of flying unicorns. 

No doubt, Zorn's lemma or the Yoneda lemma is very useful in mathematics. But we don't know the relationship between what is real and what is 'natural' or 'consistent' from the mathematical point of view. Since mathematics has been 'unreasonably effective', we don't mind what arcane or incompossible assumptions mathematicians make. But surely our everyday logic concerns compossibility, not mere logical possibility? Is it not more useful to speak of horses and what horses could be genetically engineered to accomplish, and lay aside the question of when and where flying unicorns will be observable?  


The same cannot be said about clocks, hurricanes, or other physical things. In those cases, a structural or functional description is a complete description.

No. Descriptions are given by conscious beings for specific purposes. There is no complete or 'natural' or 'non-arbitrary' or 'absolute' description.  

A perfect replica of a clock is a clock,

No perfect replicas exist. It is a different matter that, for some specific purpose, two items may be regarded as interchangeable. I received my great-grandfather's clock as a coming-of-age present. I discover that my distant cousin has an identical clock. He says my branch of the family were ostracized and disinherited because of some moral lapse. My ancestor purchased a clock identical to the one his father had bequeathed his heir and pretended it was the original clock which his father had given him in token of forgiveness. 

It may be that I don't want to believe my cousin. I engage an expert in the hope that he can prove my clock is the genuine heirloom. At that point, I discover that clocks made in the same factory in the same year are not actually identical. The expert can discover that my clock was purchased in Madras- which is why it has suffered more corrosion from the sea air- while my distant cousin has a clock which must have remained far inland because it exhibits no corrosion. The conclusion is that my ancestor purchased his clock in Madras while my cousin inherited the genuine item.  

a perfect replica of a hurricane is a hurricane, and so on.

No. A hurricane is distinguished from other hurricanes by the time and place of its occurrence. It can have no replica because a second hurricane happening at the same time and place would be much more lethal. It might be termed a super-hurricane.  

The difference is that physical things are nothing more than their physical constituents.

Sadly, physical things are distinguishable by location in Space-Time. But what is the nature of Space-Time? Is it holographic? Is there a multi-verse? 

For example, water is nothing more than H2O molecules, and understanding everything about H2O molecules is to understand everything there is to know about water.

No. It doesn't explain why, at a certain time and place, we'd be willing to pay a lot to acquire it or to dispose of it. 

But consciousness is not like this. Knowing everything there is to know about the brain, or any physical system, is not to know everything there is to know about consciousness. So consciousness, then, must not be purely physical.

If we knew everything about an atom in one place and time, we would also know everything about everything because the atom is linked by the web of predication to everything else. This is the 'kevalya' of the Jains. Whether that ontology is 'physicalist' is a matter of debate. One may say that it features mechanistic evolution such that inert matter eventually gains sentience and even omniscience. This is a 'dynamic' conception of what constitutes matter or substance. But this may be a feature of a 'Theory of Everything'. 

To conclude, semantic paradoxes arising out of a 'false binary'- e.g. 'physical' & 'mental'- are indeed trivial. There is no qualitative difference between useful statements regarding either sphere, nor do our current scientific theories permit us to sharply differentiate between the webs of causation or predication involved in each realm. 

Are there 'hard problems'? 'Intensional paradoxes', or questions about 'Naturality', 'Randomness', and 'Absolute' Representations or Proofs, appear to involve open problems such that assumptions which we know are unwarranted, or which must be inconsistent, are nevertheless very useful to the smartest people right now. But Technology- i.e. the application of such arcane reasoning- is evolving too fast for the professional philosopher to keep up. Indeed, in some fields, 'hard problems' are actually driving applied mathematics beyond the realms of what is 'pure'. The other point is that we have AIs built up on the basis of 'easy' rules which are crossing 'qualitative' boundaries at astonishing speed. However, this may be a misleading impression. Yet philosophy has become merely an impressionistic, hand-waving sort of discourse. Perhaps it is our fault for taking an interest in 'popular' philosophy.
