Saturday, 9 July 2022

Dorje Brody & Fake news

Is information informative? The answer depends on eligibility or admissibility. Some new information may cause the collapse of what we thought was our information processing module. I remember being blown away by a Satyajit Ray film that was being screened on TV. Then I realized I wasn't looking at the TV. I was looking out of the window. Also, I was in Calcutta. Not an interesting part of Calcutta. I was in a suburb where everybody had dedicated themselves to being as boring as possible. 

I mention this because, further to my post titled 'Chomsky & Informativity', I want to look, today, at Prof. Dorje C. Brody's recent paper- 'Noise, Fake News & Tenacious Bayesians'. 

Briefly, my position is that it is foolish to be Bayesian unless that is what you are paid to be. A Principal would never be Bayesian. Because of 'uncorrelated asymmetries', their 'theory and praxis' (matam & vigyan) would be robust to changes in the information set. This is because 'doctrine' is not informative. Some aspects of 'science' or 'praxis' may be informative, in the sense that parameters may change for a specific purpose, but there is no 'Belief Revision' properly speaking. There is a change in how the prevailing Structural Causal Mechanism is used, but this is a pragmatic and temporary accommodation. On the other hand, if there is extra utility, then there are mechanisms by which, at the margin, uncorrelated asymmetries change and so the nature of the population changes. But that's not for a mathematical reason- though there may be a Polya process- it is for an economic or Tardean mimetic reason.

Dorje writes in the Conversation- 

Understanding the human mind and behaviour lies at the core of the discipline of psychology.

No. Getting paid to talk bollocks to stupid students lies at the core of 'academic' disciplines.  

But to characterise how people’s behaviour changes over time, I believe psychology alone is insufficient – and that additional mathematical ideas need to be brought forward.

Economic ideas. These would only have a mathematical expression at the Pareto frontier, where 'optimality' can yield adjoint functors or other such mathematical beasties. Sadly, there is an economic reason (regret minimization) why we stay away from the Pareto frontier and have ontologically dysphoric goods and services. I suppose one could also invoke the 'free energy principle' and say people want to minimize 'surprise' by inhabiting a pawky, fatalistic Lebenswelt.


My new model, published in Frontiers in Psychology, is inspired by the work of the 19th-century American mathematician, Norbert Wiener.

He was twentieth century. Wiener and his son helped Indian and Chinese mathematicians like Kosambi and, the genius, Chern.  

At its heart is how we change our perceptions over time when tasked with making a choice from a set of alternatives.

Perceptions don't matter. Mimetics does matter. Uncorrelated asymmetries do matter. Most importantly, the correct econ theory says choose such that the 'conatus' of choice making is conserved or expanded in scope. In other words, your perception of the outcome of the choice is less important than your continuing to have the power to choose. If Biden chooses to suck off Putin, he won't get the chance to do so. He will be sedated by his Doctors and the Veep will take over. 

Such changes are often generated by limited information,

Perception is a gestalt with non-informative inputs. It is relatively independent of information geometry save where 'reinforced' for a specific purpose.

which we analyse before making decisions that determine our behavioural patterns.

Unless no great loss results by omitting to do so.  But, by Witsenhausen's counter-example, there is a mathematical reason for resisting Dorje's argument because decision theory is often concerned with decentralized stochastic control problems.
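
For readers who don't know it, Witsenhausen's counterexample is easy enough to state (this is the standard 1968 formulation, nothing to do with Dorje's model):

```latex
% Witsenhausen's counterexample: x_0 ~ N(0, sigma^2), v ~ N(0, 1), independent.
\begin{aligned}
  x_1 &= x_0 + u_1, \qquad u_1 = \gamma_1(x_0) && \text{(controller 1 sees } x_0\text{)}\\
  u_2 &= \gamma_2(y), \qquad y = x_1 + v && \text{(controller 2 sees only the noisy } y\text{)}\\
  J(\gamma_1, \gamma_2) &= \mathbb{E}\!\left[k^2 u_1^2 + (x_1 - u_2)^2\right] \;\longrightarrow\; \min
\end{aligned}
```

The problem is linear, quadratic and Gaussian, yet the optimal controllers are not affine- nonlinear 'signalling' strategies beat every linear one, and no closed-form optimum is known to this day. Even toy decentralized problems refuse to yield the tidy formulas the Bayesian story relies on.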

To understand these patterns, we need the mathematics of information processing.

But information is not processed only for representational purposes. There are strategic and other economic or mimetic considerations. The fact is, for many purposes, what is processable as 'information' is merely a proxy for informativity.  

Here, the state of a person’s mind is represented by the likelihood it assigns to different alternatives

Fuck off! The thing is too cognitively costly. Also, there is a strong economic or evolutionary reason not to do anything so fucking stupid.  

– which product to buy, which school to send your child to, which candidate to vote for in an election and so on.

But these will be swamped by mimetic effects or, if Uncertainty increases, regret minimizing considerations. 

Understanding rational mind

As we gather partial information, we become less uncertain

No. Knightian uncertainty increases. On the other hand 'information gathering' could be said to affect 'felicity' associated with a meta-choice function- i.e. gathering partial information can increase felicity from 'choosing to choose'.  

– for example, by reading customer reviews we become more certain about which product to buy.

No. We gain more felicity from having to plump for one option rather than another. Certainty hasn't increased but regret has fallen, because we can say 'I don't regret pulling the trigger on this coz, after all, I did all that was humanly possible to get the decision right.' That's all you need for ataraxy- i.e. serenity. You can live with your decisions coz you did the due diligence. The rest is up to God.  

This mental updating is expressed in a mathematical formula worked out by the 18th-century English scholar, Thomas Bayes.

Who wanted to disprove Hume's 'Indian Prince' argument. But 'inferential statistics', or 'inverse probability'- i.e. notions of likelihood- can have no such power because the thing is multiply realizable. Only a better Structural Causal Model, which allows immediate tinkering with parameters to alter outcomes, will suffice. 
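
For reference, the 'mental updating' formula in question is just, in modern notation-

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

which is why Hume survives it: if the prior P(H) assigned to the miracle is zero, or the likelihoods are multiply realizable, the testimony E settles nothing.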

It essentially captures how a rational mind makes decisions by assessing various, uncertain alternatives.

Fuck off. Nothing essentially captures shite which can't exist.  Minds exist because of uncorrelated asymmetries. They are 'bourgeois strategies' of an evolutionarily stable type. 


The equation shows the flow of information over time, t.

It is information of a representational type but there are other sorts of information which are more important for decision theory.  

X is a random variable representing different probabilities corresponding to different alternatives.

This is the problem with the Bayesian approach. Where do the probabilities come from? The alternatives don't yet exist. What 'frequentism' could capture them? The thing would have to be a prediction of a Structural Causal Mechanism. In Quantum physics we are dealing with identical particles. But things in the social sphere have uncorrelated asymmetries, only some of which are apparent.   

If we assume that the information is revealed at a constant rate σ,

We are confining ourselves to the representational, not the qualitative. By the time the picture is completed it will be too late to take action. There is a Kavka's toxin type problem here. It is better to 'manage the news'. This does mean that fake news prevails, with some adjustment at the margin. Our victorious army is never defeated. It mounts successful reconnaissance missions followed by combined operations of a logistical type designed to shorten lines of communication and supply.  

and that the noise that obscures the information is B (described by a theory called Brownian motion, which is random), then the equation can give us the information flow. 

If the thing is representational, sure. But a lot of information flows are about 'observables' which change the 'unobservables' which are what we are really interested in. 'Noise', then, is an observable which is not 'decisive' for some reason or another. But such noise may not necessarily 'cancel out'. Instead what is noise today is signal with respect to a future state of the world.  
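
To give the devil his due, here is what that 'information flow' presumably looks like- a signal leaking out at rate σ, buried in Brownian noise. A minimal sketch in Python, assuming the process takes the form ξ_t = σXt + B_t (my notation and parameter values, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.4                   # assumed constant rate at which the truth leaks out
T, steps = 1.0, 500
dt = T / steps

X = rng.choice([0, 1])        # the 'true' alternative, unknown to the observer
t = np.linspace(dt, T, steps)
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), steps))   # Brownian noise path

xi = sigma * X * t + B        # the observed 'information flow': signal buried in noise

print("true X =", int(X), "| final observation =", round(float(xi[-1]), 3))
```

The observer only ever sees xi; everything else- including whether the drift is there at all- has to be imputed.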


When combining this concept with the mathematics of information (specifically signal processing), dating back to the 1940s,

stuff about how to improve radio reception- which is representational. But most information is no such thing.  

it can help us understand the behaviour of people, or society, guided by how information is processed over time.

There is a story of a British Army officer who found his way out of the thick Burmese jungle using a map of the London Underground. Dorje is a very smart guy. He may indeed gain an understanding from the mathematical equivalent of something wholly independent or irrelevant.  

It is only recently that my colleagues and I realised how useful this approach can be.

Brainy peeps are useful. Dorje is a first class mathematician. He is bound to be doing useful things in purely mathematical terms.  

So far, we have successfully applied it to model the behaviour of financial markets (market participants respond to new information, which leads to changes in stock prices),

But what markets do is generate information. You may say that they aggregate preferences of various types. You may say that the preferences of arbitrageurs are information sensitive. But arbitrageurs aren't the only type of agent. Indeed, they only exist because any coordination game can have an associated discoordination game by reason of 'hedging' and 'income effects'. 

and the behaviour of green plants (a flower processes information about the location of the sun and turns its head towards it).

So, hedge fund managers have the same IQ as a sunflower. Good to know.  

I have also shown it can be used to model the dynamics of opinion poll statistics associated with an election or a referendum, and derive a formula that gives the actual probability of a given candidate winning a future election, based on today's poll statistics and how information will be released in the future.

The favorite to win is caught on camera sucking off his rival in exchange for a Big Mac and fries. How that information is released may indeed affect the outcome of the election.  I'm not saying anything like that happened to me when I ran for the post of President of the Socioproctologists Association. It's the sort of thing which could happen to anyone who runs for high office. 

In this new “information-based” approach, the behaviour of a person – or group of people – over time is deduced by modelling the flow of information.

In other words, making assumptions. But it is the assumptions which are doing the heavy lifting. The maths is just window dressing.  

So, for example, it is possible to ask what will happen to an election result (the likelihood of a percentage swing) if there is “fake news” of a given magnitude and frequency in circulation.

What sort of 'fake news'? Kennedy's 'missile gap'? No such gap existed but the intuition was correct that nuclear doctrine had to change. The Cuban crisis made this clear.  There was 'fake news' about Trump being a Nazi or Biden being totes senile- but there was a grain of truth as well. Real news would be 'the various candidates are boring shitheads who will enrich the same bunch of plutocrats while lying their heads off'. Fake news is that 'our man has a bit of backbone. Their guy sucks off homeless dudes.' But one candidate may actually behave better in a crisis. 


But perhaps most unexpected are the deep insights we can glean into the human decision-making process. We now understand, for instance, that one of the key traits of the Bayes updating is that every alternative, whether it is the right one or not, can strongly influence the way we behave.

This is the Monty Hall problem. Menus are informative. That's why Arrow-Debreu is a fantasy. If we had markets for all possible futures, we would also know which Scientific Research Programs will succeed and which countries and companies will thrive and which will collapse. 
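
Anyone who doubts that menus are informative can check the Monty Hall arithmetic numerically- the host's restricted menu of doors is doing all the work. An illustrative sketch only:

```python
import random

def monty_hall(trials=100_000):
    stay_wins = switch_wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # the host's 'menu' is constrained: he opens a door that is neither the pick nor the car
        opened = random.choice([d for d in doors if d != pick and d != car])
        switched = next(d for d in doors if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

print(monty_hall())   # roughly (0.33, 0.67): the shrunken menu carries information
```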


If we do not have a preconceived idea, we are attracted to all of these alternatives irrespective of their merits and will not choose one for a long time without further information.

Not if uncorrelated asymmetries exist. I know I'm old and stupid. I follow a bourgeois strategy in voting for a Party which wants to help the old and stupid because there are a lot of us.  

This is where the uncertainty is greatest, and a rational mind will wish to reduce the uncertainty so that a choice can be made.

My rational mind wishes to reduce the price of cake. Sadly, it has no such power.  Scratch that. Happily, it has no such power coz otherwise I'd be dead of diabetes. 

But if someone has a very strong conviction on one of the alternatives, then whatever the information says, their position will hardly change for a long time – it is a pleasant state of high certainty.

No. It is a pleasant state of self-perceived moral integrity or sticking to your guns.  Sadly, one may later find one was merely obstinate and ignorant and incapable of proper reasoning.


Such behaviour is linked to the notion of “confirmation bias”

or a bigoted rejection of dis-confirmation. The guys who stormed Capitol Hill didn't want Biden confirmed as President, they didn't want Congress to certify the election result, because they insisted that Trump had actually won. Why? Because POTUS had said so. Can Presidents lie? Sure- if they belong to the wrong party.  

– interpreting information as confirming your views even when it actually contradicts them. This is seen in psychology as contrary to the Bayes logic, representing irrational behaviour. But we show it is, in fact, a perfectly rational feature compatible with the Bayes logic – a rational mind simply wants high certainty.

We may want an angel to descend and to state in a voice of thunder that it wasn't us wot farted. Who smelled it, dealt it.  

Kavka's toxin is a case where managing our own beliefs alters outcomes. This means it is rational to 'manage the news'. But the news is not about certainty. It points to possible worlds of a previously unimagined kind. Maybe the Capitol Hill rioters could have intimidated Congress. Suppose Biden decided not to risk a Civil War. What if another election had been held? Maybe Trump would have won. Suddenly we would view those lunatics as sensible people- perhaps even as patriotic people. If Biden and Pence and Pelosi and so forth had turned out to be poltroons, only Trump could be trusted in the White House. But Biden was and is no coward. People of the 'silent generation' seldom are. 

The approach can even describe the behaviour of a pathological liar. Can mathematics distinguish lying from a genuine misunderstanding?

Apparently schizophrenic people are better at spotting liars. They ignore what is said and concentrate on other non-verbal cues. Another giveaway is length of response. Liars say less- or less that is relevant. Truth tellers can and do expand on their statement.  

It appears that the answer is “yes”, at least with a high level of confidence.

This is reasonable. Computer programs are mathematical and are in fact used to spot faked results in various fields. But the heuristics on which they are based aren't really mathematical. They are economic. If you are lying or faking results, you will keep relevant information short and introduce more randomness than is warranted. 
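
One illustration of such an economic-cum-statistical heuristic- nothing to do with Dorje's paper- is the first-digit check auditors run against Benford's law: fabricated figures tend to be too uniform. A rough sketch:

```python
import math
from collections import Counter

def benford_deviation(values):
    """Chi-square style distance between the observed first-digit frequencies and the
    Benford distribution; a large value is a (fallible) red flag for fabricated figures."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    if not digits:
        return 0.0
    counts = Counter(digits)
    dev = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        observed = counts.get(d, 0) / len(digits)
        dev += (observed - expected) ** 2 / expected
    return dev

# e.g. compare benford_deviation(genuine_invoice_totals) with numbers somebody typed in 'at random'
```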


If a person genuinely thinks an alternative that is obviously true is highly unlikely – meaning they are misunderstanding – then in an environment in which partial information about the truth is gradually revealed, their perception will slowly shift towards the truth, albeit fluctuating over time.

Bayes was a clergyman. He had a vested interest in getting people to believe in miracles. The odd thing is that, as I've gotten older and sadder, I am much more prepared to believe that miracles do exist- but only for good people who are naturally constituted to be, and who deserve to be, happy.  

Even if they have a strong belief in a false alternative, their view will very slowly converge from this false alternative to the true one.

That convergence will be quick because of mimetic effects and 'tipping points'. Cellular automata theory is the correct way to simulate this. 
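
A hedged sketch of what I mean- a one-dimensional mimetic automaton, not anything in Dorje's paper. The adoption share creeps and then cascades, rather than drifting at the Bayesian snail's pace:

```python
import numpy as np

rng = np.random.default_rng(1)

N, steps = 200, 60
state = (rng.random(N) < 0.15).astype(int)   # 1 = holds the true belief, 0 = the false one

history = []
for _ in range(steps):
    left, right = np.roll(state, 1), np.roll(state, -1)
    # mimetic rule: adopt the true belief if both neighbours already hold it;
    # a small chance of bumping into the evidence directly stands in for 'gradual revelation'
    state = np.where(left + right == 2, 1,
                     np.where(rng.random(N) < 0.02, 1, state))
    history.append(float(state.mean()))

print([round(h, 2) for h in history[::10]])   # adoption share every 10 steps: slow, then accelerating
```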

However, if a person knows the truth but refuses to accept it – is a liar –

No. The person is 'ontologically dysphoric'- i.e. not at home in this world. I may know I am biologically male and human but I feel I am in the wrong body. I should be a female penguin.  Also, I didn't fart just now. I suffer from a rare condition, brought on by stress I suffered when I was with the SAS, which makes it medically impossible for me to fart. Prince Andrew can't sweat. I can't fart. Why will no one believe me? Is it because I am living in the wrong universe? 

then according to the model, their behaviour is radically different: they will rapidly choose one of the false alternatives and confidently assert this to be the truth.

This is a heuristic. If you are arguing against option X, you clutch at any Y which militates against X. But that is 'Social Choice'. It is economics, not mathematics. 

Any heuristic can be given a mathematical expression but, why bother? Our brains are hardwired for this sort of thing. The time complexity of the mathematical solution may be exponential. We have an intuition of the direction of the focal solution. 

How are actual social choice problems solved? The answer is 'transferable utility'- you pay off some, threaten others and bore the rest into submission.  

(In fact, they may almost believe in this false alternative that has been chosen randomly.)

It is a fact that invisible penguins have been discovered in Argentina. The CIA covered it up. I didn't fart. An invisible penguin did the dirty deed. Tell you what, I'll give you this valuable bottle of single malt whiskey if you admit that the CIA does indeed get up to all sorts of shenanigans. No. This is genuine single malt whiskey. I didn't just fill up the bottle with my own pee but you are right in your guess that I had asparagus for lunch. However did you come to know?  

Then, as the truth is gradually revealed and this position becomes untenable, very quickly and assertively they will pick another false alternative.

Okay, okay. Invisible penguins were wiped out during the Falklands Campaign. But this is an old house. There can be spontaneous methane emission from the floor boards. Furthermore, such emissions can indeed cause fart like noises. Look, how about I go to the toilet and retrieve some nice chocolate cake I stored there which you are welcome to gobble up all by yourself?  

Hence a rational (in the sense of someone following the Bayes logic) liar will behave in a rather erratic manner, which can ultimately help us spot them.

The problem here is that the statements of a guy strongly committed to a cause may be very erratic indeed. Churchill hated Stalin. But, once he had to ally with him, Churchill took many steps to ensure that Stalin had confidence in him. Yet, once out of office, Churchill played an important role in creating a united front against Stalinist aggression. This was erratic certainly. But it was courageous and on the side of Truth, Justice and the Anglo-American Way (Churchill's mum was American).  

But they will have such a strong conviction that they can be convincing to those who have limited knowledge of the truth.

Conviction that your cause will prevail is not the same thing as being an inveterate liar.  

For those who have known a consistent liar, this behaviour might seem familiar.

Consistent liars alter their behavior depending on the truth testing protocols they are subject to. If these protocols are strict, the way to go is to appear deeply stupid but fundamentally on the right side. If they are lax, appear brilliant but impatient of lesser minds. This means you can talk out the clock in a manner which is deliberately irrational to signal your contempt of your inquisitors. Say things like 'but, according to your own stupid ideology, invisible penguins have to exist! Indeed, it is a corollary of the abc theorem which Mochizuki has claimed to prove! I mean, if you are too stupid to even understand Mochizuki how the fuck- according to the categorical theoretical representation of your shitty ideology- can you even begin to smell farts let alone point the finger at who tooted? Incidentally, my pal- the late Steven Hawkins- was working on this problem when he sadly passed away.'  

Of course, without the access to someone’s mind, one can never be 100% sure. But mathematical models show that for such behaviour to arise from a genuine misunderstanding is statistically very unlikely.

So maths can show your hunch was right- provided you do the maths. But fart smelling has the same property. I was able to prove the existence of invisible penguins by just such means. Will I get a Nobel Prize for this? Of course not! I iz bleck and didn't go to no posh Skool or Collidge. 

This information-based approach is highly effective in predicting the statistics of people’s future behaviour in response to the unravelling of information – or disinformation, for that matter. It can provide us with a tool to analyse and counter, in particular, the negative ramifications of disinformation.

But a more effective method is to say 'you eat dog turds. That's why your brains have turned to shit'. Also you can fart vigorously when approached by any virtue signaling cunt wot has 'statistical information'.  

Turning to Dorje's paper, we find this- 

Decision making arises when one is not 100% certain about the “right” choice, due to insufficient information.

This is false. Decision making arises when you are empowered and incentivized to make a decision. It may be protocol bound. If so, there may be a 'guillotine' such that the decision has to be made on available, admissible, evidence. But it would be regret minimizing to 'hedge' under such circumstances. You may decide to apply a smaller penalty or give a smaller reward till more facts are available.  

The current knowledge relevant to decision making then reflects the prior uncertainty.

Not if decision making is protocol bound. Even if it isn't, it will have a 'discovery' component as well as an 'optimizing' component.  

If additional partial information about the quantity of interest arrives, then this prior is updated to a posterior uncertainty. To see how this transformation works it suffices to consider a simple example of a binary decision—a decision between two alternatives labeled by 0 and 1—under uncertainty. Suppose that we let X be the random variable representing a binary decision so that X takes the value 0 with probability p and X equals 1 with probability 1 − p, where the probabilities reflect the degree of uncertainty.

But the degree of uncertainty is itself uncertain unless we know there is a population of a certain sort. One may say there are subjective probabilities and that is what Bayesian analysis is concerned with. But why should there be subjective probabilities? There may be betting markets where that sort of thing has already been aggregated. In other words, 'priors' are impredicative. To assume they are sets or proper classes is to beg the question. Ex post, we may impute a mathematical function, but for sufficiently important decisions this could only be done at 'the end of time'. In other words, this is a non-informative theory. It is mere 'doctrine' or hand waving or window dressing. 
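
To be fair, once you grant the set up, the posterior does have a tidy closed form. A sketch of the binary case, assuming the ξ_t = σXt + B_t information process from earlier (my notation and numbers, not necessarily the paper's):

```python
import numpy as np

def posterior_prob_X1(xi_t, t, sigma, p1):
    """P(X = 1 | xi_t) when xi_t = sigma * X * t + B_t and the prior on X = 1 is p1.
    Bayes' rule with Gaussian likelihoods N(sigma * x * t, t); the common factor cancels."""
    w1 = p1 * np.exp(sigma * xi_t - 0.5 * sigma**2 * t)
    return w1 / ((1.0 - p1) + w1)

# a voter with prior 0.6 on candidate B (X = 1) who has seen a mildly positive signal path
print(posterior_prob_X1(xi_t=0.8, t=1.0, sigma=0.4, p1=0.6))
```

Feed it the simulated ξ path from the earlier sketch and the posterior meanders towards the truth- the slow convergence the article describes. But the formula is only as good as the stipulated prior p1 and rate σ, which is my point.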

In the context of an electoral competition, one can think of a two-candidate scenario whereby X = 0 corresponds to candidate A and X = 1 corresponds to candidate B. Then the probabilities (p, 1 − p) reflect the a priori view of a given decision maker—a voter for example.

But that prior is impredicative. It has no mathematical description. One may stipulate or impute a description, but that is an arbitrary procedure. True, if we make all sorts of crazy or lazy assumptions, we can get to something canonical or 'natural'. But we are now talking about a world in which language would not exist. Coordination would happen by magic. This is an occasionalist universe and suffers 'modal collapse'- i.e. what is, is all anything can be.  

In particular, if p > 0.5, then candidate A is currently preferred over candidate B.

We can't say that. Why? We know that the probability of a horrible candidate getting elected will be exaggerated by our side so as to motivate us to go out and vote. Probabilities are themselves strategic and impredicative- just like preferences. Remember this article was published in a Psychology journal. I'm not saying Dorje- a brilliant mathematician- can't inspire very useful 'apps' but that could be done without any 'doctrinal' window dressing. Maybe growing different algorithms and letting them compete and combine and winnow out costly procedures will get us a better solution even if that solution is a black box and we don't know why it is successful. But stuff like that is already happening anyway because of the profit motive.  

With this setup, the decision maker receives additional noisy information about the “correct” value of X. For example, one might read an article

because of one's preferences 

that conveys the information that voting for candidate A is likely to be the correct decision.

But candidate B reads the same article and steals the clothes of candidate A by appearing to support the correct policy even more strongly.  

The idea then is to translate this information into a numerical value so as to be able to understand and characterize how the view of the decision maker, represented by the probabilities (p, 1 − p), is affected by acquiring further information.

But both 'numerical values' and 'informativity' are impredicative and strategic. They are multiply realizable. What you are setting yourself to do is nothing but an exercise in bigotry of an arbitrary kind. The bad thing about bigotry is it can make you prefer a horrible to a bad alternative even if you started off firmly opposed to the horrible alternative. Prussian Junkers ended up hailing a 'bohemian' Corporal. 

To be fair Dorje's paper is very clearly presented. Students will definitely benefit by reading his work and looking at the relevant literature. 

