Monday 27 October 2014

Hamming: maths is unreasonably effective, and some partial explanations of why it must be so.

Not too long ago I was trying to put myself in Galileo's shoes, as it were, so that I might feel how he came to discover the law of falling bodies. I try to do this kind of thing so that I can learn to think like the masters did-I deliberately try to think as they might have done.

Well, Galileo was a well-educated man and a master of scholastic arguments. He well knew how to argue the number of angels on the head of a pin, how to argue both sides of any question. He was trained in these arts far better than any of us these days. I picture him sitting one day with a light and a heavy ball, one in each hand, and tossing them gently. He says, hefting them, "It is obvious to anyone that heavy objects fall faster than light ones-and, anyway, Aristotle says so." "But suppose," he says to himself, having that kind of a mind, "that in falling the body broke into two pieces. Of course the two pieces would immediately slow down to their appropriate speeds. But suppose further that one piece happened to touch the other one. Would they now be one piece and both speed up? Suppose I tied the two pieces together. How tightly must I do it to make them one piece? A light string? A rope? Glue? When are two pieces one?"

The more he thought about it-and the more you think about it-the more unreasonable becomes the question of when two bodies are one. There is simply no reasonable answer to the question of how a body knows how heavy it is-if it is one piece, or two, or many. Since falling bodies do something, the only possible thing is that they all fall at the same speed-unless interfered with by other forces. There's nothing else they can do. He may have later made some experiments, but I strongly suspect that something like what I imagined actually happened. I later found a similar story in a book by Polya [7. G. Polya, Mathematical Methods in Science, MAA, 1963, pp. 83-85.]. Galileo found his law not by experimenting but by simple, plain thinking, by scholastic reasoning.
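The classical reductio usually credited to Galileo can be written out in a few lines; this is a sketch in my own notation, not Hamming's, formalizing the puzzle of when two pieces are one. Suppose the speed of fall depended on weight through some increasing function f:

\[ v = f(w), \qquad f \ \text{strictly increasing (the assumption to be refuted).} \]

Tie a light body of weight \(w_1\) to a heavy one of weight \(w_2 > w_1\). Read as two pieces, the light one should retard the heavy one, so \( f(w_1) < v < f(w_2) \); read as one piece of weight \(w_1 + w_2\), it should fall faster than either, so \( v = f(w_1 + w_2) > f(w_2) \). The two readings contradict each other, so f cannot depend on the weight at all: every body falls alike unless other forces interfere.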

I know that the textbooks often present the falling body law as an experimental observation; I am claiming that it is a logical law, a consequence of how we tend to think.

Newton, as you read in books, deduced the inverse square law from Kepler's laws, though they often present it the other way; from the inverse square law the textbooks deduce Kepler's laws. But if you believe in anything like the conservation of energy and think that we live in a three-dimensional Euclidean space, then how else could a symmetric central-force field fall off? Measurements of the exponent by doing experiments are to a great extent attempts to find out if we live in a Euclidean space, and not a test of the inverse square law at all.
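A compact version of the geometric argument being gestured at here (a sketch; the flux wording is a gloss, not Hamming's): if the total influence of a point source is conserved and spreads symmetrically through concentric spheres in three-dimensional Euclidean space, then

\[ F(r)\cdot 4\pi r^{2} = \text{const} \quad\Longrightarrow\quad F(r) \propto \frac{1}{r^{2}} . \]

In an n-dimensional Euclidean space the same bookkeeping gives \( F(r) \propto r^{-(n-1)} \), which is why measuring the exponent is really a measurement of the dimensionality, and the Euclidean character, of the space we live in.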

But if you do not like these two examples, let me turn to the most highly touted law of recent times, the uncertainty principle. It happens that recently I became involved in writing a book on Digital Filters [8. R. W. Hamming, Digital Filters, Prentice-Hall, Englewood Cliffs, N.J., 1977.] when I knew very little about the topic. As a result I early asked the question, "Why should I do all the analysis in terms of Fourier integrals? Why are they the natural tools for the problem?" I soon found out, as many of you already know, that the eigenfunctions of translation are the complex exponentials. If you want time invariance, and certainly physicists and engineers do (so that an experiment done today or tomorrow will give the same results), then you are led to these functions. Similarly, if you believe in linearity then they are again the eigenfunctions. In quantum mechanics the quantum states are absolutely additive; they are not just a convenient linear approximation. Thus the trigonometric functions are the eigenfunctions one needs in both digital filter theory and quantum mechanics, to name but two places.
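The calculation behind "the eigenfunctions of translation are the complex exponentials" is short enough to display; this is a compressed sketch that glosses over the regularity assumptions. Let \(T_a\) be the shift operator, \((T_a f)(t) = f(t+a)\). If f is a common eigenfunction of every shift,

\[ f(t+a) = \lambda(a)\, f(t) \quad \text{for all } a, \]

then setting \(t = 0\) gives \(\lambda(a) = f(a)/f(0)\), so \(f(t+a)\,f(0) = f(t)\,f(a)\); for any reasonably well-behaved f the only solutions are \(f(t) = f(0)\,e^{st}\), and the bounded ones are the complex exponentials \(e^{i\omega t}\). A linear, time-invariant system therefore acts on each \(e^{i\omega t}\) by simple multiplication, which is exactly why Fourier analysis is the natural tool for such systems.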

Now when you use these eigenfunctions you are naturally led to representing various functions, first as a countable number and then as a non-countable number of them-namely, the Fourier series and the Fourier integral. Well, it is a theorem in the theory of Fourier integrals that the variability of the function multiplied by the variability of its transform exceeds a fixed constant, in one notation 1/2π. This says to me that in any linear, time-invariant system you must find an uncertainty principle. The size of Planck's constant is a matter of the detailed identification of the variables with integrals, but the inequality must occur.
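Stated in one common normalization (the exact constant depends on how "variability" and the transform are normalized, which is precisely the point about Planck's constant), the theorem reads

\[ \sigma_t\,\sigma_\nu \;\ge\; \frac{1}{4\pi}, \qquad\text{equivalently}\qquad \sigma_t\,\sigma_\omega \;\ge\; \frac{1}{2}, \]

where \(\sigma_t\) and \(\sigma_\nu\) are the root-mean-square widths of \(|f(t)|^2\) and of \(|\hat f(\nu)|^2\), and \(\omega = 2\pi\nu\). Identify t with time and \(\hbar\omega\) with energy (or a coordinate x with position and \(\hbar k\) with momentum) and the inequality becomes the Heisenberg uncertainty relation, with Planck's constant entering only through that identification of variables.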

As another example of what has often been thought to be a physical discovery but which turns out to have been put in there by ourselves, I turn to the well-known fact that the distribution of physical constants is not uniform; rather the probability of a random physical constant having a leading digit of 1, 2, or 3 is approximately 60%, and of course the leading digits of 4, 5, 6, 7, 8, and 9 occur in total only about 40% of the time. This distribution applies to many types of numbers, including the distribution of the coefficients of a power series having only one singularity on the circle of convergence. A close examination of this phenomenon shows that it is mainly an artifact of the way we use numbers.
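What Hamming is describing is usually called Benford's law: the probability that a number's leading digit (base 10) is d is \(\log_{10}(1 + 1/d)\). A few lines of Python (my illustration, not from the essay) reproduce the roughly 60/40 split; and since multiplying every constant by a fixed factor leaves this distribution unchanged, it is one way to see that the phenomenon is an artifact of our positional notation rather than of nature.

```python
import math

# Illustration only; these names are mine, not from the essay.
# Benford's law: probability that the leading base-10 digit is d.
def benford(d: int) -> float:
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(d, round(benford(d), 3))

print("digits 1-3:", round(sum(benford(d) for d in range(1, 4)), 3))   # ~0.602
print("digits 4-9:", round(sum(benford(d) for d in range(4, 10)), 3))  # ~0.398
```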

Having given four widely different examples of nontrivial situations where it turns out that the original phenomenon arises from the mathematical tools we use and not from the real world, I am ready to strongly suggest that a lot of what we see comes from the glasses we put on. Of course this goes against much of what you have been taught, but consider the arguments carefully. You can say that it was the experiment that forced the model on us, but I suggest that the more you think about the four examples the more uncomfortable you are apt to become. They are not arbitrary theories that I have selected, but ones which are central to physics.

In recent years it was Einstein who most loudly proclaimed the simplicity of the laws of physics, who used mathematics so exclusively as to be popularly known as a mathematician. When examining his special theory of relativity paper [9. G. Holton, Thematic Origins of Scientific Thought, Kepler to Einstein, Harvard University Press, 1973.] one has the feeling that one is dealing with a scholastic philosopher's approach. He knew in advance what the theory should look like, and he explored the theories with mathematical tools, not actual experiments. He was so confident of the rightness of the relativity theories that, when experiments were done to check them, he was not much interested in the outcomes, saying that they had to come out that way or else the experiments were wrong. And many people believe that the two relativity theories rest more on philosophical grounds than on actual experiments.

Thus my first answer to the implied question about the unreasonable effectiveness of mathematics is that we approach the situations with an intellectual apparatus so that we can only find what we do in many cases. It is both that simple, and that awful. What we were taught about the basis of science being experiments in the real world is only partially true. Eddington went further than this; he claimed that a sufficiently wise mind could deduce all of physics. I am only suggesting that a surprising amount can be so deduced. Eddington gave a lovely parable to illustrate this point. He said, "Some men went fishing in the sea with a net, and upon examining what they caught they concluded that there was a minimum size to the fish in the sea."

2. We select the kind of mathematics to use. Mathematics does not always work. When we found that scalars did not work for forces, we invented a new mathematics, vectors. And going further we have invented tensors. In a book I have recently written [10. R. W. Hamming, Coding and Information Theory, Prentice-Hall, Englewood Cliffs, N.J., 1980.] conventional integers are used for labels, and real numbers are used for probabilities; but otherwise all the arithmetic and algebra that occurs in the book, and there is a lot of both, has the rule that

1+1=0.
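The rule 1 + 1 = 0 is arithmetic modulo 2, that is, arithmetic in the two-element field GF(2), where addition of bits is exclusive-or. A minimal sketch (my example, not one taken from the book) showing the rule and a single even-parity check bit built from it:

```python
# Illustration only; these names are mine, not from the book.
# Arithmetic in GF(2): addition is mod-2 addition, so 1 + 1 = 0.
def gf2_add(a: int, b: int) -> int:
    return (a + b) % 2  # equivalently a ^ b for single bits

# An even-parity check bit is just the GF(2) sum of the message bits.
def parity_bit(bits):
    p = 0
    for b in bits:
        p = gf2_add(p, b)
    return p

print(gf2_add(1, 1))            # 0
print(parity_bit([1, 0, 1, 1])) # 1, so the codeword 1 0 1 1 1 has an even number of 1s
```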

Thus my second explanation is that we select the mathematics to fit the situation, and it is simply not true that the same mathematics works every place.

3. Science in fact answers comparatively few problems. We have the illusion that science has answers to most of our questions, but this is not so. From the earliest of times man must have pondered over what Truth, Beauty, and Justice are. But so far as I can see science has contributed nothing to the answers, nor does it seem to me that science will do much in the near future. So long as we use a mathematics in which the whole is the sum of the parts we are not likely to have mathematics as a major tool in examining these famous three questions.

Indeed, to generalize, almost all of our experiences in this world do not fall under the domain of science or mathematics. Furthermore, we know (at least we think we do) from Godel's theorem that there are definite limits to what pure logical manipulation of symbols can do: there are limits to the domain of mathematics. It has been an act of faith on the part of scientists that the world can be explained in the simple terms that mathematics handles. When you consider how much science has not answered then you see that our successes are not so impressive as they might otherwise appear.

4. The evolution of man provided the model. I have already touched on the matter of the evolution of man. I remarked that in the earliest forms of life there must have been the seeds of our current ability to create and follow long chains of close reasoning. Some people [11. H. Mohr, Structure and Significance of Science, Springer-Verlag, 1977.] have further claimed that Darwinian evolution would naturally select for survival those competing forms of life which had the best models of reality in their minds-"best" meaning best for surviving and propagating. There is no doubt that there is some truth in this. We find, for example, that we can cope with thinking about the world when it is of comparable size to ourselves and our raw unaided senses, but that when we go to the very small or the very large then our thinking has great trouble. We seem not to be able to think appropriately about the extremes beyond normal size.

Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark, "Perhaps there are thoughts we cannot think," surprise you? Evolution, so far, may possibly have blocked us from being able to think in some directions; there could be unthinkable thoughts.

If you recall that modern science is only about 400 years old, and that there have been from 3 to 5 generations per century, then there have been at most 20 generations since Newton and Galileo. If you pick 4,000 years for the age of science, generally, then you get an upper bound of 200 generations. Considering the effects of evolution we are looking for via selection of small chance variations, it does not seem to me that evolution can explain more than a small part of the unreasonable effectiveness of mathematics.

Conclusion. From all of this I am forced to conclude both that mathematics is unreasonably effective and that all of the explanations I have given when added together simply are not enough to explain what I set out to account for. I think that we-meaning you, mainly-must continue to try to explain why the logical side of science-meaning mathematics, mainly-is the proper tool for exploring the universe as we perceive it at present. I suspect that my explanations are hardly as good as those of the early Greeks, who said for the material side of the question that the nature of the universe is earth, fire, water, and air. The logical side of the nature of the universe requires further exploration.
