The computer science department of my school regularly organizes a seminar where researchers from elsewhere are invited to present their work. Anyone can come and listen. Unfortunately, since it usually takes place in the afternoon, students like me often have a class at the same time.
Last year, some seminars took place during our free time, so my friend Antoine and I decided to attend a few of them. One was particularly interesting: on October 20th, 2009, Cristian S. Calude from the University of Auckland, New Zealand, gave a talk entitled "Algorithmic Randomness: A Primer". Here is the abstract of the talk:
Measuring and calibrating incompressibility is used to define algorithmically random objects, finite (e.g. bit strings) or infinite (e.g. reals). The talk presents theoretical and experimental recent results that shed new lights on Monte Carlo simulations, mathematical provability and quantum randomness.
I didn't understand everything during the talk, but it was interesting enough to get me thinking about it for a little while. The next time I went to my father's place, I discussed the subject with him, since he's also in the computer science research field (and by an irrelevant coincidence, his name is Antoine, just like the friend I mentioned earlier). Our conversation went from defining randomness more or less formally to questions like "are some random sequences more random than others?" and "is it really possible to have one good definition of 'random'?". It so happens (and this time the coincidence is relevant) that he had recently read a book which, among other things, talked about randomness. This was the first time I heard about Gregory Chaitin and his book, The Limits of Mathematics.
Some time later, I came across a blog post (which I can't find anymore :-/) that also talked about this book and claimed it was a mind-blowing read, or something along those lines. So I put the book on my Amazon wish list. Then a post on the Lisp subreddit linked to the book again, and this time I ordered it.
I read the book this summer and indeed, it's a mind-blowing read: it changes the way you think about mathematics.
First, aside from the content of the book, Chaitin must be an excellent professor. The subjects covered by the book are rather complex, but it doesn't feel that way at all when you're reading it. Even if you're a bit tired, and even if English is not your native language (as is my case; I suppose native speakers can guess it from my English...), the book is easily readable and understandable. The author repeats himself a lot, in a lot of different ways: sometimes from a different point of view, sometimes simply with different words. The reader doesn't need to remember a notation defined twenty pages earlier. Each time something is needed to understand what follows, Chaitin recalls it for us. In fact, the articles in the book are adapted transcripts of some of Chaitin's lectures. That's probably one of the reasons he repeats himself so much (he's speaking, not writing), and that's why I said he must be an excellent professor. This is truly pleasant.
It's pleasant, but it's also irritating at times. I did find myself thinking "Okay, I get that, please go on now" while reading the book. But this is a minor inconvenience compared to how much easier it makes reading and understanding.
The only other professor I know who does that is Laurent Régnier, who was my discrete mathematics professor during my first and second years at university. The "I get that, go on" symptom was also there during some of the courses he gave, but I still don't hesitate to say that he is the best teacher I've ever had.
Back to our business. The book consists of three of Chaitin's lecture transcripts plus a lot of LISP* code, and some Mathematica code. Chaitin used Mathematica to implement his own LISP, and fortunately he also wrote a C version of his interpreter, so you're not forced to use non-free software if you want to run the code from the book. The Mathematica code is still interesting, though, because it's easily readable even if you don't know Mathematica. I'll come back to the use of LISP code in the book later.
The first lecture is entitled Randomness in arithmetic and the decline
and fall of reductionism in pure mathematics. I don't know about you, but
this title got me into it the first time I read it! "Randomness in
arithmetic" is the mathematical topic of the lecture, and "the
decline and fall of reductionism in pure mathematics" is the philosophical
interpretation of the mathematical results. Chaitin discusses
incompleteness, from Gödel to Turing and then his own findings. He then uses
these results to construct a purely random object he calls Ω, the halting
probability, and he shows that there are things in mathematics that are
true for no reason at all, and mathematical results that can't be obtained by
reasoning! Chaitin then advocates experimental mathematics: he justifies it
by recalling that incompleteness results imply that Hilbert's dream of
axiomatising mathematics will remain a dream forever, and he notes that
experimental physics works pretty well if one is not ashamed of saying
"I goofed there, let's forget it" from time to time.
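To give a feel for what Ω is, here is a small sketch of my own (in Python, not in Chaitin's LISP): Ω is the sum of 2^(-|p|) over all halting programs p of a universal machine, where the programs form a prefix-free set. The real Ω is uncomputable because of the halting problem, so this toy replaces the universal machine with an arbitrary decidable stand-in; everything about the "machine" below is invented for illustration, only the shape of the sum is Chaitin's.

```python
# Toy illustration of the halting probability:
#   Omega = sum over halting programs p of 2^(-|p|)
# where the programs form a prefix-free set (so the sum stays <= 1).
# The real Omega uses a universal machine and is uncomputable; this
# toy substitutes a decidable "halting" rule so the sum can actually
# be approximated from below -- the only direction possible for the
# real Omega as well.

def toy_halts(program: str) -> bool:
    """Arbitrary decidable stand-in for the halting predicate:
    a toy program 'halts' iff it has even length."""
    return len(program) % 2 == 0

def omega_lower_bound(max_len: int) -> float:
    """Approximate the toy Omega from below by enumerating all
    programs up to max_len bits and adding 2^-|p| for each halting
    one. Toy programs are the strings 0...01 (a unary length code),
    which form a prefix-free set: none is a prefix of another."""
    total = 0.0
    for n in range(1, max_len + 1):
        p = "0" * (n - 1) + "1"   # the unique toy program of length n
        if toy_halts(p):
            total += 2.0 ** (-n)
    return total
```

Here the halting programs have lengths 2, 4, 6, ..., so the lower bounds converge to 2^-2 + 2^-4 + ... = 1/3. For the real Ω the lower bounds also converge, but no algorithm can tell you how close you are at any point.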
The second lecture is about Elegant LISP programs. Here Chaitin defines what all mathematicians agree to call "elegant" when talking about proofs: shortness. He then observes that the same notion can apply to code: a program is elegant if no shorter program produces the same result. The author then explains why he loves LISP (there are only good reasons to do so, anyway ;-)) and why he'll use S-expressions and his LISP instead of lambda-calculus or a classical Turing machine. Next, Chaitin shows the not-so-surprising incompleteness result that you can't prove that a LISP S-expression is elegant (given the above definition of "elegant"); what is cool here is that he proves it using LISP itself. The author then uses this result to introduce his Algorithmic Information Theory, talking about the halting probability again along the way. Finally, he draws the same conclusion about the limits of reasoning in pure mathematics, and the philosophical implications.
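The definition of elegance can be made concrete with a toy of my own devising (again Python, not the book's LISP): in a tiny decidable language, arithmetic expressions over the digits 1-9 and the operators + and *, we can brute-force the shortest program with a given output. The point Chaitin makes is precisely that this search breaks down for a real programming language, because deciding what each shorter program outputs runs into the halting problem.

```python
# Toy illustration of "elegant" programs: a program is elegant if no
# shorter program produces the same output. In this decidable toy
# language (arithmetic expressions over 1-9, '+' and '*'), every
# program terminates, so elegant programs CAN be found by exhaustive
# search. For a real language the halting problem blocks this, and a
# formal system can prove elegance for at most finitely many programs.

from itertools import product

ALPHABET = "123456789+*"

def run(program: str):
    """Evaluate a toy program; return None if it is not well-formed."""
    try:
        return eval(program, {"__builtins__": {}})
    except SyntaxError:
        return None

def shortest_program_for(value: int, max_len: int = 6):
    """Return the first shortest program whose output is `value`,
    i.e. an elegant program for that value (None if none exists
    within max_len characters)."""
    for length in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=length):
            p = "".join(chars)
            if run(p) == value:
                return p
    return None
```

For instance, `shortest_program_for(12)` finds the two-character program `"12"`, certifying that no one-character program outputs 12. It is this "I checked every shorter program" step that becomes impossible in general once programs may loop forever.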
The third lecture, An invitation to Algorithmic Information Theory,
goes pretty straight to the subject of Chaitin's Algorithmic Information
Theory, as the title suggests. This allows Chaitin to insist on what his job
has been and what its added value is, compared to Gödel's and Turing's
results on incompleteness. This especially includes the notion of
program-size complexity and the fact that he actually runs his proofs on a
real-world computer, with no need for a formal and unusable lambda-calculus
or a Turing machine with an unrealistic infinite tape. Then Chaitin talks
again about the halting probability, the fact that this number, which is
well-defined using natural language, is algorithmically irreducible, and
that you can't even deduce what its bits are... Then follows a long
discussion about what should be understood from these incompleteness
results at a meta-mathematical level: sometimes, to get more information
out of a set of axioms, you just have to put more in; that's how physics
works, so why not maths? Chaitin has some very good points justifying the
experimental approach to mathematics, but I'll let you discover them by
reading the book :-).
The end of the book consists of well-commented LISP code and the output of its execution. The presence of LISP code in the book is a big part of why it's such a good read. The fact that the author wants to code his proofs for real, and actually does, forces him to stay realistic and understandable. This is a big change from the usual papers written by mathematicians or theoretical computer scientists, and in my opinion, it's a change for the better.
tl;dr: The Limits of Mathematics is a must-read if you're interested in computer science, mathematics, or even in sciences in general. It is quite expensive but totally worth it!
I'm very pleased to announce that this article made it to the front pages of Hacker News and of the Computer Science subreddit! If you want to discuss it, I invite you to do so in the comments on either of those pages. Also, thanks to jacobolus for the numerous related links he posted on Hacker News. I'd also like to see a follow-up to the question raised by otakucode on reddit about Stephen Wolfram and his relation to Chaitin, or rather the relation between their work. I don't have an answer, and it might be interesting.
* I use here the same capitalization as Chaitin does
for "LISP". It is in fact accurate since the kind of Lisp that is
used is a LISP 1 (no need for macros etc) with just an additional