
Wednesday, November 17, 2010

Through a crystal darkly

In previous remarks on randomness and computation, I mentioned the work of Gregory Chaitin, a mathematician and theorist who has written and spoken (he is a brilliant speaker) extensively for both specialist and general audiences. Chaitin's technical work is highly regarded, but his interpretations and extrapolations are sometimes a little idiosyncratic and he is inclined to sound a bit New Agey at times. (He is rumored to receive help in his thinking from a giant crystal!)

Paul Davies (a physicist and writer) is, by contrast, sober and restrained - even a little pedestrian - but he is a reliable guide within his areas of expertise. I recently came across a foreword by Davies to a book of Chaitin's essays* in which Davies gives his perspective on the significance of Chaitin's work and its implications for physics and for our view of the world generally.

Chaitin (who had been obsessed from his childhood years with Kurt Gödel's incompleteness theorem) "greatly extended the scope of Gödel's basic insight," writes Davies, "and recast the notion of incompleteness in a way that brings it much closer to the real world of computers and physical processes. A key step in his work is the recognition of a basic link between mathematical undecidability and randomness. Something is random if it has no pattern, no abbreviated description, in which case there is no algorithm shorter than the thing itself which captures its content. And a random fact is true for no reason at all; it is true 'by accident' so to speak ... Chaitin was able to demonstrate that mathematics is shot-through with randomness ... Mathematics, supposedly the epitome of logical orderliness is exposed as harboring irreducible arbitrariness." (p. vi)
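Chaitin's actual measure of randomness - the length of the shortest program that generates a given object - is uncomputable in general, but the intuition behind it can be illustrated crudely. The sketch below is my own, and an off-the-shelf compressor is only a weak stand-in for true program-length complexity:

```python
import os
import zlib

# A patterned string has a description far shorter than itself,
# so a general-purpose compressor shrinks it dramatically.
patterned = b"01" * 5000           # 10,000 bytes with an obvious rule

# A string from a good entropy source (almost surely) has no short
# description, so compression gains essentially nothing.
random_ish = os.urandom(10000)     # 10,000 bytes of OS randomness

for label, s in [("patterned", patterned), ("random", random_ish)]:
    print(label, len(s), "->", len(zlib.compress(s, 9)), "bytes")

# Typical result: the patterned string collapses to a few dozen
# bytes; the random one stays close to its original length.
```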

"[M]athematics contains randomness - or accidental, reasonless truths," Davies explains, "because a ... universal Turing machine [an idealized computer], may or may not halt in executing its program, and there is no systematic way to know in advance if a function is computable (i.e. the Turing machine will halt) or not." (p. viii)

But this limitation on what we can know or predict (known as Turing uncomputability) applies not just to mathematics and computers but also to scientific theories. On Chaitin's view, a scientific theory is like a computer program that predicts our observations (the experimental data).

Indeed, in the words of Paul Davies, " ... we may regard nature as an information processing system, and a law of physics as an algorithm that maps the input data (initial conditions) into output data (final state). Thus in some sense the universe is a gigantic computer, with the laws playing the role of universal software." (p. viii)
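To make the analogy concrete, here is a toy example of my own (not Davies'): a familiar dynamical law written as an algorithm that maps input data (initial conditions) to output data (final state):

```python
def free_fall(y0, v0, t, dt=0.001, g=9.81):
    """A 'law of physics' as an algorithm: map the input data
    (initial height y0, initial velocity v0) to the output data
    (height and velocity after time t) by repeated update."""
    y, v = y0, v0
    for _ in range(int(t / dt)):
        v -= g * dt          # the law applied at each tick
        y += v * dt
    return y, v

# Drop an object from 100 m with zero initial velocity:
print(free_fall(100.0, 0.0, 3.0))   # roughly (55.8, -29.4)
```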

And if the laws of physics are computer algorithms, there will be randomness in the laws of physics stemming from Turing uncomputability. But, according to Davies, the randomness will, in reality, be "even more pronounced than that which flows from Turing uncomputability." (p. viii)

He points out that the real universe differs in a crucial respect from the concept of a Turing machine. "The latter is supposed to have infinite time at its disposal: there is no upper bound on the number of steps it may perform to execute its program. The only relevant issue is whether the program eventually halts or not, however long it takes. The machine is also permitted unlimited memory ... If these limitless resources are replaced by finite resources, however, an additional, fundamental, source of unknowability emerges. So if, following Chaitin, we treat the laws of physics as software running on the resource-limited hardware known as the observable universe, then these laws will embed a form of randomness, or uncertainty, or ambiguity, or fuzziness - call it what you will - arising from the finite informational processing capacity of the cosmos." (pp. viii-ix)

There are, it seems, different forms or levels of randomness: the 'mild' form which - as chaos theory shows - is implicit even in classical, deterministic physics; the pseudo-randomness which can be generated by simple computer algorithms; the well-known randomness inherent in quantum mechanics; and perhaps the deepest levels of all, stemming from proven features of idealized computers (Turing machines) and from seeing the universe itself as a giant computer - one with specific limitations on its processing capacities.
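The 'mild' chaotic form is the easiest to exhibit. In the sketch below (my own example) the logistic map - a fully deterministic one-line rule - is iterated from two starting points differing by one part in ten billion; the trajectories soon bear no resemblance to each other, which is why deterministic systems can still be unpredictable in practice:

```python
def logistic(x, r=4.0):
    """One step of the logistic map: deterministic, yet chaotic
    at r = 4 (sensitive dependence on initial conditions)."""
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-10      # two almost identical starting points
for _ in range(60):
    a, b = logistic(a), logistic(b)

# After 60 iterations the two trajectories are completely
# decorrelated, although every step was computed by the same
# exact one-line rule.  Prediction fails in practice, not in law.
print(a, b, abs(a - b))
```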

These are difficult (and to some extent speculative) ideas. But I think they are worth pursuing and may even have profound implications for how we see ourselves and our world.

It is, of course, impossible to draw definitive political or metaphysical conclusions from them, but, if the ideas are sound, there will be such conclusions to draw.

Let me just mention two thoughts which come immediately to mind: Chaitin's and Davies' notions are utterly incompatible with any political ideology which attempts to predict, plan and control human affairs; and they also appear to undermine perspectives which incorporate notions of a providential force operating behind the scenes and impinging on natural processes, historical events and/or individual destinies. 


* Thinking about Gödel and Turing: essays on complexity, 1970-2007 (World Scientific, 2007).

20 comments:

  1. What an exciting nugget! Awesome!

    Although it may seem tangential, this is in a similar vein and makes a good connection to the above. Recently I followed a trail inspired by R. P. Wolff, who had praised Nobel Prize–winning economist Amartya Sen. The wiki on Sen contains the following:

    Sen's papers in the late 1960s and early 1970s helped develop the theory of social choice, which first came to prominence in the work by the American economist Kenneth Arrow, who, while working at the RAND Corporation, famously proved that all voting rules, be they majority rule or two thirds-majority or status quo, must inevitably conflict with some basic democratic norm. Sen's contribution to the literature was to show under what conditions Arrow's impossibility theorem would indeed come to pass as well as to extend and enrich the theory of social choice ...

    So I looked up Mr. Arrow. Here is part of a discussion on his "Impossibility Theorem" as it analyzes the outcomes of voting in a democracy:

    The framework for Arrow's theorem assumes that we need to extract a preference order on a given set of options (outcomes). Each individual in the society (or equivalently, each decision criterion) gives a particular order of preferences on the set of outcomes. We are searching for a preferential voting system, called a social welfare function (preference aggregation rule), which transforms the set of preferences (profile of preferences) into a single global societal preference order...

    [here a discussion of optimal decision parameters] ...

    Arrow's theorem says that if the decision-making body has at least two members and at least three options to decide among, then it is impossible to design a social welfare function that satisfies all these conditions at once...

    [It becomes possible]... to prove that any social choice [ie, voting] system respecting unrestricted domain, unanimity, and independence of irrelevant alternatives is a dictatorship.

    Arrow's theorem is a mathematical result, but it is often expressed in a non-mathematical way with a statement such as "No voting method is fair", "Every ranked voting method is flawed", or "The only voting method that isn't flawed is a dictatorship". These statements are simplifications of Arrow's result which are not universally considered to be true. What Arrow's theorem does state is that a voting mechanism, which is defined for all possible preference orders, cannot comply with all of the conditions given above simultaneously...

    Amartya Sen ... demonstrated another interesting impossibility result, known as the "impossibility of the Paretian Liberal". (See liberal paradox for details). Sen went on to argue that this demonstrates the futility of demanding Pareto optimality in relation to voting mechanisms.
    Source: http://en.wikipedia.org/wiki/Arrow's_impossibility_theorem
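    The cleanest way I know to see why aggregation goes wrong is the classic Condorcet cycle (my own illustration, not from the wiki): with three voters and three options, pairwise majority voting can produce a cyclic - and therefore unusable - group preference:

    ```python
    # Three voters, each with a perfectly rational (transitive)
    # ranking over options A, B and C.
    ballots = [
        ["A", "B", "C"],   # voter 1: A > B > C
        ["B", "C", "A"],   # voter 2: B > C > A
        ["C", "A", "B"],   # voter 3: C > A > B
    ]

    def majority_prefers(x, y):
        """True if a majority of voters rank x above y."""
        wins = sum(b.index(x) < b.index(y) for b in ballots)
        return wins > len(ballots) / 2

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(f"majority prefers {x} to {y}: {majority_prefers(x, y)}")

    # All three print True: A beats B, B beats C, and C beats A.
    # Every individual ranking is transitive, but the majority's
    # 'preference' is a cycle, so it is no ranking at all.
    ```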
    [had to split up this comment for length -- contd]


  2. Which leads to the liberal paradox:

    a logical paradox advanced by Amartya Sen, building on the work of Kenneth Arrow and his impossibility theorem, which showed that within a system of menu-independent social choice, it is impossible to have both a commitment to "Minimal Liberty", which was defined as the ability to order tuples of choices, and Pareto optimality....

    The most contentious aspect is, on one hand, to contradict the libertarian notion that the market mechanism is sufficient to produce a Pareto-optimal society; and on the other hand, argue that degrees of choice and freedom, rather than welfare economics, should be the defining trait of that market mechanism....

    The example shows that liberalism and Pareto-efficiency cannot always be attained at the same time. Hence, if liberalism exists in just a rather constrained way, then Pareto-inefficiency could arise....

    What can society do, if the paradox applies and no corresponding social decision function can handle the trade off between Pareto-optimality and liberalism? One sees that mutual acceptance and self-constraints or even contracts to trade away actions or rights are needed.

    http://en.wikipedia.org/wiki/Liberal_paradox

    Part of the paradox, it seems to me, is that democracy does not assure that our voting will result in optimal decisions, and we cannot guarantee both freedom of the market and optimal operation of the market -- which contradicts "classic liberal" economics (of the American Tea Party, for instance).

    Ie -- back to randomness -- the market is not inherently stable and in a free society it may be impossible to control outcomes for the benefit of all.

    Hmmm?

The obverse of the last observation would be that, as society increasingly places itself under the rule of scientific management -- technocracy, in other words -- optimizing for manageability logically must entail a reduction in individual human freedom.

  4. I'm glad you appreciated the post, GC. I was looking forward to not dealing with difficult ideas for a while - and now you have presented a whole new set of intellectual puzzles quite out of my comfort zone! I was vaguely familiar with the idea that no voting method is fair, but give me a day or two to take in what you've said here.

  5. GTC, what a tour de force! I agree with Mark that it'll take a bit to assimilate all this. At the outset, I'm skeptical of attempts to chase a will-o'-the-wisp like Pareto optimization precisely because (I believe) no technocracy can accomplish it without unintended consequences that lead to...Pareto inefficient results. From what I see of Arrow's theorem, it seems to me to support my view--central planning doesn't work. The conclusion that only a dictatorship is fair seems the perfect example of ivory tower idiocy. Assuming the math is right, the words simply don't follow the numbers.

  6. Mark, after commenting to GTC that "central planning doesn't work," which seems in line with the conclusions of your original post, I want to back off a bit and urge that the problems with randomness seem, by experience, to be overstated. In other words, whatever theoretical difficulties there may be in predicting human affairs, history tells us a different lesson. We see people in similar situations behaving similarly. Maybe there is a kind of Avogadro's number at work. Although the actions of individual human beings are beyond our ability to predict, we can make fair deductions about aggregate behavior. Or, maybe it's simply a matter of understanding motivation and incentives. Machiavelli remains an excellent predictor of the actions of people (and dictators) under certain circumstances. In any case, I don't see pure randomness in history.
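    A toy version of that Avogadro's-number intuition (my own illustration): no single simulated coin flip below is predictable, yet the aggregate of a million of them is almost boringly regular:

    ```python
    import random

    random.seed(42)  # fixed seed so the run is reproducible

    # One individual "decision" is unpredictable in isolation...
    print("one flip:", random.random() < 0.5)

    # ...but the average of a million such decisions is not.
    n = 1_000_000
    heads = sum(random.random() < 0.5 for _ in range(n))
    print("fraction heads:", heads / n)   # reliably very close to 0.5
    ```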

  7. GTC, your final comment that "optimizing for manageability logically must entail a reduction in individual human freedom" is, from my perspective, absolutely irrefutable. Not only are you right, you are profoundly right!

  8. That doesn't happen very often. Thank you. LOL.

  9. CONSVLTVS, in fact I quite agree with you. Aggregate behavior can sometimes be predicted, and a knowledge of human psychology can enable one to predict even individual behavior to some extent. And I agree there are patterns in history, but they tend to be rather complicated patterns such that projections into the future are virtually impossible (unless the projections are very general - like the inevitable failure of utopian schemes, or the negative consequences of irresponsible fiscal policies, etc.).

    I am also acutely aware that part of the human condition is that we are subject to the unexpected - the unpredictable - and this unpredictability applies of course not just to individuals but to groupings of individuals which are subject both to external threats and also to unexpected internal changes ('phase transitions', etc.).

    But my interest in randomness and associated ideas is inspired primarily by a natural curiosity about fundamental questions concerning reality, and I take heed of your implicit warning not to draw simplistic political conclusions from them.

  10. My mind is still boggling at the though of someone "who had been obsessed from his childhood years with Kurt Gödel's incompleteness theorem". That's a whole topic in itself, I guess.

  11. By which I meant "thought" of someone etc.

  12. Yes Alan, mind-boggling, and there is more. This is from a Scientific American article by Chaitin (2006): "In 1956 Scientific American published an article by Ernest Nagel and James R. Newman entitled "Goedel's Proof." Two years later the writers published a book with the same title - a wonderful work that is still in print. I was a child, not even a teenager, and I was obsessed by this little book. I remember the thrill of discovering it in the New York Public Library. I used to carry it around with me and try to explain it to the other children."

    And while we're on deep and difficult questions, how do you do the umlaut in this comments section? You will note my clumsy 'Goedel'. (In the posts you can switch to html.)

  13. I believe that umlaut happened entirely by chance.

    Where exactly did Mr Chaitin go to school? What did his teachers think about him, as they taught the seven times table? How did he get into the New York Public Library at such an age?

    Mysteries abound.

  14. Actually, I cut and pasted the phrase from your text, and the umlaut came along for the ride.

  15. Yes, Alan, mysteries abound indeed. I am preparing a little piece incorporating some of Chaitin's more eccentric (non-mathematical) ideas. You may (or may not) lose interest in biographical details when you have read it.

    When I read Davies in your extracts above, I suspect there is a shift taking place from physical randomness to epistemic uncertainty. When it is said that "randomness" is to be found in mathematics, doesn't this just mean that some problems are unsolvable by us? How does it follow that there is randomness in nature? Even if we have a proof that the foundations of arithmetic are incomplete, how can we infer anything about the physical universe from that?

    When he says: "these laws will embed a form of randomness, or uncertainty, or ambiguity, or fuzziness - call it what you will - arising from the finite informational processing capacity of the cosmos", I don't see what the cosmos has to do with it. Isn't information processing something that only we do (with the help of our man-made machines)?

    Alan, the idea is that the cosmos is a computer. It's an idea that's been around for decades, but I first began to take it seriously when I read Seth Lloyd's book Programming the Universe, in which he makes (I think) a strong case that the universe is a quantum computer. Since then I have read other material which makes the case that information is fundamental to physics ("it from bit"). These ideas may be speculative, but they are definitely worth exploring in my opinion.

  18. By that analogy, then, we need to nominate a programmer and a user. Don't we?

  19. Putting aside the question of whether the cosmos is a computer, the other question is whether Davies invalidly argues from an epistemic premise to an ontological conclusion.

    The invalid move is what might be called Berkeley's fallacy -- much as I love the great GB.

  20. Alan, one would have to look at Davies' full piece (not just my summary) to see if his argument is invalid in the way you suggest, but my sense is that he is arguing hypothetically and validly. If the cosmos processes information (just as a computer does, or maybe a quantum computer) then any limitations on an idealized computer (such as Chaitin talks about) will be compounded by other (processing capacity) limitations in the case of the 'cosmic computer'. As I understand it, information is fundamental to an understanding of physics, and there are strong parallels between thermodynamics and information theory, with the latter being arguably the more fundamental theory.
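    To give just a hint of that parallel (my own gloss, not Davies'): Shannon's information entropy has exactly the same mathematical form as Gibbs' thermodynamic entropy, up to a constant factor and the base of the logarithm. A minimal sketch of the information-theoretic side:

    ```python
    import math

    def shannon_entropy(probs):
        """H = -sum(p * log2(p)), in bits.  Gibbs' thermodynamic
        entropy, S = -k_B * sum(p * ln(p)), is the same expression
        up to Boltzmann's constant and the logarithm base."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: maximal uncertainty
    print(shannon_entropy([0.99, 0.01]))  # ~0.08 bits: nearly certain
    ```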

    On the question of the programmer, one must, I suppose, be agnostic. The laws of physics are, let us say, the program, and there are suggestions that they have evolved with the universe, and were less defined in its early stages.

    I don't claim to be anything like an expert on all this, of course. I am just an interested observer who is trying to keep track of - and make sense of - what I see as some fascinating ideas.
