Thursday, 12 May 2022

Turing's lawless algorithms.

Thanks to Bernays, Turing became aware that his work, relying as it did on the principle of the excluded middle, could not necessarily describe any machine whose existence it proved. In other words, his result was non-informative. There is a converse point too: within Turing's framework, something verifiable as corresponding to a description might nonetheless not exist. In other words, to say 'there can be a thinking machine' would be like saying 'there can be a flying unicorn'. The thing is not informative. It is imaginative. Equally, to say 'thinking can be represented by something machine-readable' would be non-informative. It is a pious hope, or the motivation for a research program, but is itself outside the scope of any logical discourse.
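For readers wanting a concrete sense of 'non-informative', here is a stock textbook illustration (not Turing's own example) of an excluded-middle existence proof that exhibits nothing it proves to exist:

\[
\sqrt{2}^{\sqrt{2}} \in \mathbb{Q} \;\Rightarrow\; \text{take } a=b=\sqrt{2}; \qquad
\sqrt{2}^{\sqrt{2}} \notin \mathbb{Q} \;\Rightarrow\; \text{take } a=\sqrt{2}^{\sqrt{2}},\ b=\sqrt{2},\ \text{so } a^{b}=2.
\]

By the excluded middle one of the two cases must hold, so irrational $a$ and $b$ with $a^{b}$ rational certainly exist; but the proof never says which pair it is. Existence is established without any description being supplied.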

Turing, at Bernays's suggestion, used Brouwer's notion of 'overlapping choice sequences' to get round this difficulty. The problem was that computable numbers no longer had a unique representation, but this seemed worthwhile because the class of computable numbers became 'absolute' with respect to its representation in this or that formal system. The caveat is that the 'formal system' must itself have a representation as a 'lawlike' sequence. Yet we know formal systems can evolve. This means that the evolutionary state imposes uncorrelated asymmetries, which is why uniqueness does not obtain. But it could, to the extent that the 'lawless' is constructible. Another way of saying this is that if there is a co-evolved process which represents co-evolution non-arbitrarily, then proof of existence implies constructive description. This would make mathematics 'informative'. Perhaps it is so 'at the end of Time'. Otherwise evolution will be superseded. It is not that machines will be able to think; it is rather that humans will seek to become machines. Thought and Affect will be spandrels to be eliminated to improve efficiency. Ontological dysphoria will be banished. Humans would have become at home in the world at the cost of being turned into domestic appliances.
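To make the representation point concrete, here is a minimal sketch in my own hypothetical framing, not Turing's 1937 notation: a computable real is given as a program returning rational approximations, two different programs can name the same number, and finite checking can refute but never certify their equality.

```python
# Hypothetical sketch (my framing, not Turing's): a computable real is a function
# that, for any n, returns a rational within 2**-n of the number it names.
from fractions import Fraction

def half_a(n):
    """One representation of 1/2: constant approximations."""
    return Fraction(1, 2)

def half_b(n):
    """A different program for the same real: approximations from below."""
    return Fraction(1, 2) - Fraction(1, 2 ** (n + 2))

# Both programs represent the same number, so representation is not unique ...
for n in range(5):
    assert abs(half_a(n) - half_b(n)) <= Fraction(1, 2 ** n)

# ... and no general procedure decides whether two such programs name the same
# real: finitely many approximations can refute equality but never confirm it.
def provably_different(x, y, depth):
    """Semi-decision: True if x != y is detected within `depth` approximation levels."""
    for n in range(depth):
        if abs(x(n) - y(n)) > Fraction(2, 2 ** n):   # gap too wide to be the same real
            return True
    return False

print(provably_different(half_a, half_b, 20))   # False: equality is never certified
```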

My point is that when we consider how Turing came to his 'eidetic' or 'absolute' result, we see that, when we use the term 'thinking machine', the mind represents 'thinking' in a non-unique manner. There is an element of choice or arbitrariness. In other words, there is a lot of scope for philosophers to write nonsense.

Consider the following by Sebastian Sunday Greve, writing in Aeon:

‘It might be argued that there is a fundamental contradiction in the idea of a machine with intelligence,’ is how (Turing) began his final reflections in the lecture, which culminated in his ‘plea for “fair play for the machines”’. He illustrated what he had in mind with a little thought experiment, which may be regarded as an early precursor to the Turing test:
Let us suppose we have set up a machine with certain initial instruction tables [ie, programs], so constructed that these tables might on occasion, if good reason arose, modify those tables. One can imagine that after the machine had been operating for some time, the instructions would have altered out of all recognition, but nevertheless still be such that one would have to admit that the machine was still doing very worthwhile calculations. Possibly it might still be getting results of the type desired when the machine was first set up, but in a much more efficient manner.

The set of procedures would have been culled, sure, but the instructions would be the same. The knowledge we gain would be about which procedures are useful. However, there is a danger that some safeguard has been lost and thus 'robustness' has decreased.
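As a minimal sketch of the sort of self-modification Turing gestures at (the rule names and the caching trick below are my own illustrative assumptions, not his proposal), consider a machine whose instruction table rewrites one of its own entries yet keeps delivering results of the type it was originally set up to produce:

```python
# Hypothetical sketch, not Turing's design: an instruction table whose entries
# can, "if good reason arose", rewrite the table they live in.

def slow_square(n, table, stats):
    """Original instruction: square by repeated addition."""
    stats["calls"] = stats.get("calls", 0) + 1
    if stats["calls"] > 2:                     # good reason: the rule is heavily used
        cache = {}
        def cached_square(m, table, stats):
            """Rewritten instruction: consult a cache of previously computed answers."""
            if m not in cache:
                cache[m] = m * m
            return cache[m]
        table["square"] = cached_square        # the machine modifies its own table
    total = 0
    for _ in range(n):
        total += n
    return total

table = {"square": slow_square}                # the initial instruction table
stats = {}
for n in [3, 5, 3, 5, 3]:
    print(n, "->", table["square"](n, table, stats))

# By the end, the "square" entry has altered out of recognition, yet the machine
# still returns results of the type desired when it was first set up, only faster.
```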


Commenting on this case, he then added:
In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence.

So a virus shows intelligence. But a virus may evolve into a Spiegelman monster. It may lose robustness and go extinct because of a small change in the fitness landscape. Suppose the Ukraine war gets out of hand and we end up blowing up the planet. As a species, we'd be considered bonkers, not brainy, by Galactic savants.

Turing knew that whatever he, or others, might feel about such a case was less important than whether machine intelligence is really possible. But he also knew – as well as anyone – that conceptual clarity at the fundamental level, such as might be achieved through philosophical reflection, was going to be crucial to any major scientific advancement in the right direction. Arguably, all of his philosophical work had only this instrumental aim of conceptual clarity. It was certainly always characteristic of his work that the two – philosophy and science (or, more generally, fundamental science and applied science) – went hand in hand in this way. In the lecture, this is shown by his immediate continuation from the passage above: ‘As soon as one can provide a reasonably large memory capacity it should be possible to begin to experiment on these lines.’

But that memory might be externally accessible or encoded in the fitness landscape by some co-evolutionary process. Philosophy, on learning of Turing's use of Choice Sequences in 1937, would have had to say that there was no 'open' question here. The thing was already closed. The word 'thinking' could not be used in the same way for viruses and human beings and machines. All this type of discourse could produce was non-informative figurative language of an imaginative, not philosophical, type.

Turing’s strong interest in practical experimentation was one reason why he consistently advocated for the development of high-speed, large-memory machines with a minimally complex hardware architecture (child machines, as he would later call them), so as to give the greatest possible freedom to programming, including a machine’s reprogramming itself (ie, machine learning). Thus, he explained:
I have spent a considerable time in this lecture on this question of memory, because I believe that the provision of proper storage is the key to the problem of the digital computer, and certainly if they are to be persuaded to show any sort of genuine intelligence.

Or to reduce the human labor involved, as well as the scope for human error. Ultimately, this was the economic reason which drove R&D. Philosophy can say little when the bead counters move in.


In a letter from the same period, he wrote: ‘I am more interested in the possibility of producing models of the action of the brain than in the practical applications to computing.’ Due to his true scientific interests in the development of computing technology, Turing had quickly become frustrated by the ongoing engineering work at the National Physical Laboratory, which was not only slow due to poor organisation but also vastly less ambitious in terms of speed and storage capacity than he wanted it to be. In mid-1947, he requested a 12-month leave of absence. The laboratory’s director, Charles Darwin (grandson of the Charles Darwin), supported this, and the request was granted. In a letter from July that year, Darwin described Turing’s reasons as follows:
He wants to extend his work on the machine still further towards the biological side. I can best describe it by saying that hitherto the machine has been planned for work equivalent to that of the lower parts of the brain, and he wants to see how much a machine can do for the higher ones; for example, could a machine be made that could learn by experience?

Humans using a memoryless computer could learn by experience, and that method of learning could be algorithmic to a certain extent.
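A hedged sketch of what that might look like (the running-average rule and all names here are my own assumptions, not anything in Turing's or Darwin's letters): the machine below keeps no memory at all between calls, the 'experience' is carried outside it, and yet the learning step is entirely algorithmic.

```python
# Hypothetical illustration: a stateless "machine" whose learning-from-experience
# is algorithmic, while all memory is held outside it (by the human operator).

def stateless_update(record, new_observation):
    """Pure function: given the externally kept record and a fresh observation,
    return an updated record and a prediction. The machine itself stores nothing."""
    count, total = record
    count += 1
    total += new_observation
    prediction = total / count          # running average as the "learned" estimate
    return (count, total), prediction

# The "human" keeps the record on paper between runs of the memoryless machine.
record = (0, 0.0)
for obs in [2.0, 4.0, 6.0]:
    record, prediction = stateless_update(record, obs)
    print("current estimate:", prediction)
```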

Turing participated in ... a discussion with Newman and Jefferson, chaired by the Cambridge philosopher R B Braithwaite, on the question ‘Can Automatic Calculating Machines Be Said to Think?’ At the outset, the participants agree that there would be no point in trying to give a general definition of thinking.

Why? One could say, 'Thinking is categorical. The result of thinking is a functor, a mapping between categories.' That might be useful, more particularly to philosophers who would have been motivated to keep up with the math rather than go down cul-de-sacs of stupidity.
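For the record, the textbook definition being gestured at is this (standard category theory, not a claim made in the Aeon piece): a functor $F : \mathcal{C} \to \mathcal{D}$ assigns to each object $X$ of $\mathcal{C}$ an object $F(X)$ of $\mathcal{D}$, and to each morphism $f : X \to Y$ a morphism $F(f) : F(X) \to F(Y)$, such that

\[
F(\mathrm{id}_X) = \mathrm{id}_{F(X)}, \qquad F(g \circ f) = F(g) \circ F(f).
\]

On that reading, 'thinking' would be a structure-preserving mapping between structured domains, which is at least a definite claim a philosopher could engage with.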

Turing then introduces a variation on the ‘imitation game’, or the Turing test. In his 1950 paper, he says that he is introducing the imitation game in order to replace the question he is considering – ‘Can machines think?’ – with one ‘which is closely related to it and is expressed in relatively unambiguous words’. The paper’s version of the game, which is slightly more sophisticated, consists of a human judge trying to determine which of two contestants is a human and which a machine, on the sole basis of remote communication using typewritten text messages, with the other human trying to help the judge while the machine pretends to be a human. Turing says that:
[T]he question, ‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers which would do well in the imitation game?’
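A minimal sketch of the game's structure (the canned answers, function names, and the coin-flipping judge below are my own hypothetical fillers; Turing's paper specifies only the protocol of typewritten exchanges between a judge, a human, and a machine):

```python
# Hypothetical sketch of the imitation game's structure, not Turing's specification.
import random

def human_contestant(question):
    return "I am the human; please believe me."

def machine_contestant(question):
    # The machine's whole job in the game is to answer as a human would.
    return "I am the human; please believe me."

def judge(transcripts):
    """Guess, from the typed exchanges alone, which contestant is the machine."""
    # With indistinguishable answers the judge can do no better than chance.
    return random.choice(list(transcripts))

contestants = {"A": human_contestant, "B": machine_contestant}
questions = ["Please write me a sonnet on the subject of the Forth Bridge."]
transcripts = {label: [(q, answer(q)) for q in questions]
               for label, answer in contestants.items()}
print("Judge says the machine is contestant", judge(transcripts))
```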

Turing's substitution is foolish. A game is not a game if the player is dead or imaginary. A guy can fool another guy that a mannequin really lurves him. 'Weekend at Bernie's'-type stuff happened all the time in the Indian Planning Commission, or so I fondly believe. There is only one test of what is or isn't 'thinking' or 'emoting', viz. whether it has survival value. But that's up to the fitness landscape which, no doubt, includes such snakes and ladders as are constituted by our own stupidity or serendipity as beings who like to bloviate. Turing was smart, but a lot of smart guys talk ultracrepidarian crap. Socioproctology isn't smart. But it does point an accusing finger at assholes who aren't doing anything useful at all while cluttering up Academia.


