Does Not Compute
One of the most basic ways to think about a computer program is that it is a device which takes in integers as inputs and spits out integers as outputs. The C# compiler, for example, takes in source code strings, and those source code strings are essentially nothing more than enormous binary numbers. The output of the compiler is either diagnostic text, or strings of IL and metadata, which are also just enormous binary numbers. Because the compiler is not perfect, in some rare cases it terminates abnormally with an internal error message. But those fatal error messages are also just big binary numbers. So let's take this as our basic model of a computer program: a computer program is a device that either (1) runs forever without producing output, or (2) computes a function that maps one integer to another.
So here's an interesting question: are there functions which cannot be computed, even in principle on a machine with arbitrarily much storage, by any C# program (*)?
We already know the answer to that question. Last year I pointed out that the Halting Problem is not solvable by any computer program, because the assumption that it is solvable leads to a logical contradiction. But the Halting Problem is just a function on integers. Let's say that the input of our function H is a number which when written out in binary is a Unicode string that might contain a C# program. The output is 1 if the program is an illegal C# program, 2 if it is a legal C# program which halts, and 3 if it is a legal C# program which does not halt. If it were possible to write a program that reliably computes function H and always terminates then it would be possible to use it to solve the Halting Problem, which we've shown is impossible. Therefore H is not a computable function.
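To make the shape of H concrete, here is a sketch in C#. Every name in it is hypothetical: DecodeAsUnicodeString, IsLegalCSharpProgram and HaltsWhenRun do not exist, and the argument above shows that the last one cannot exist. This is the shape of the function, not something you could compile and run:

    // Hypothetical only: none of these helpers can actually be written, and the
    // argument in the text shows that H itself is not computable.
    static int H(System.Numerics.BigInteger input)
    {
        string text = DecodeAsUnicodeString(input); // read the bits as a Unicode string
        if (!IsLegalCSharpProgram(text))
            return 1;                               // not a legal C# program
        else if (HaltsWhenRun(text))                // the impossible part
            return 2;                               // legal, and halts
        else
            return 3;                               // legal, and runs forever
    }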
Let's explore this a bit further. The "Turing Machine" model of computing is that a computer is a machine that has three kinds of storage: first, there's a fixed amount of "internal" storage that describes the current state of the processor, second, there is arbitrarily much "external" storage in the form of paper tape, disk drives, or whatever, that can contain binary data, and third, there is some way of identifying the "current position" being manipulated in the external storage. The Turing Machine also has strict rules that describe how to change the internal state, the external state, and the current position. One of the internal states is the "start" state, and one of the internal states is the "halt" state; once the machine gets to the halting state, it stops. Otherwise, it runs forever.
Without loss of generality, let's suppose that our Turing Machine's external storage is arbitrarily many bits, either zero or one, and that the internal storage is some fixed number of bits, say n. This is pretty restrictive, but we haven't actually lost anything fundamental here. Real computers of course give the appearance of manipulating storage that consists of 32-bit integers or 64-bit doubles or whatever, but at some level inside the processor, it is manipulating individual bits. There is no difference in principle between a machine that manipulates one bit at a time and a machine that manipulates 64 bits at a time; the latter is simply more convenient.
So then how many rules do we need to come up with for our Turing machine? A Turing machine with n bits of internal state has 2^n possible states, and there are two possibilities for the value at the "current position" in the external state. (**) So that means that there are 2^(n+1) state transition rules. Each transition rule will have to encode three things: (1) what are the n bits of the new internal state? (2) what value should the external state be changed to? and (3) how should we update the current position?
Again without loss of generality, we can update the current position by decreasing it by one, increasing it by one, or leaving it the same. In practice that is inconvenient, but in principle that is enough. So those are three possibilities. Thus, each state transition rule is one of 2 x 2^n x 3 possibilities. There are 2^(n+1) state transition rules. Therefore the total number of possible Turing Machines that have n bits of internal storage is (3 x 2^(n+1)) raised to the 2^(n+1) power, which, yes, grows pretty quickly as n gets large, but which is clearly a finite number.
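If you'd like to see just how quickly that count grows, here is a minimal sketch that simply evaluates the formula above with arbitrary-precision integers; the class and method names are my own invention:

    // Counts the possible n-bit Turing Machines: each of the 2^(n+1) rules
    // independently picks a new state (2^n), a new bit (2) and a move (3).
    using System;
    using System.Numerics;

    static class MachineCounter
    {
        static BigInteger CountMachines(int n)
        {
            int rules = 1 << (n + 1);                          // 2^(n+1) transition rules
            BigInteger perRule = 3 * BigInteger.Pow(2, n + 1); // 2 x 2^n x 3 choices per rule
            return BigInteger.Pow(perRule, rules);             // independent choices multiply
        }

        static void Main()
        {
            for (int n = 1; n <= 4; n++)
                Console.WriteLine($"n = {n}: {CountMachines(n)}");
        }
    }

For n = 2 this prints 110075314176, the corrected figure that comes up in the comments below; for n = 4 the count already has more than sixty digits.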
Each one of these n-bit Turing Machines essentially computes a function. You start it up with the external storage in a particular state and the machine either runs forever, or after some finite number of steps it halts. If it halts, then the output of the function is the value left behind in the external storage.
Again without loss of generality, let's consider the value computed by each one of those possible Turing machines when the external storage is initially all zeros. When given that starting configuration, each of those Turing machines either runs for some number of steps and then halts with the result, or it runs forever. Let's ignore the ones that run forever. Of the ones that are left, the ones that terminate, one of them must run the longest (***). That is, one of those machines that halts must have the largest number of steps taken before entering the halting state.
We therefore can come up with a function S that goes from integers to integers. The function S takes in n, the number of bits in the Turing Machine internal state, and gives you back the largest number of steps any of the possible n-bit Turing Machines that halts takes to halt. That is, S takes in the number of bits of internal storage and gives you back the amount of time you have to wait for the slowest of the n-bit machines that actually terminates, when it is started with empty external storage.
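Here is what a naive brute-force attempt to compute S might look like, for n = 1 only; the conventions (start in state zero, highest-numbered state halts) and all the names are my own. Note the arbitrary step cap: the program can only report the longest halting run among the machines it can tell have halted, which is worth keeping in mind for the question that follows.

    // A brute-force lower bound on S(1). The step cap means machines that run
    // longer than Cap steps are treated as if they run forever, which is exactly
    // the judgment call that cannot be automated in general.
    using System;
    using System.Collections.Generic;

    static class BusyBeaverSearch
    {
        const int N = 1;                    // bits of internal state
        const int States = 1 << N;          // 2^n internal states
        const int Rules = States * 2;       // one rule per (state, current bit) pair
        const int Halt = States - 1;        // convention: highest-numbered state halts
        const int Cap = 10000;              // arbitrary cutoff -- the cheat

        // Run one machine from an all-zero tape; return the number of steps it
        // takes to halt, or -1 if it is still running when the cap is reached.
        static int Run(int[] newState, int[] newBit, int[] move)
        {
            var ones = new HashSet<long>(); // tape positions currently holding a 1
            int state = 0;
            long pos = 0;
            for (int steps = 0; steps < Cap; steps++)
            {
                if (state == Halt) return steps;
                int rule = state * 2 + (ones.Contains(pos) ? 1 : 0);
                if (newBit[rule] == 1) ones.Add(pos); else ones.Remove(pos);
                state = newState[rule];
                pos += move[rule];          // -1, 0 or +1
            }
            return -1;                      // did not halt within the cap
        }

        static void Main()
        {
            int perRule = States * 2 * 3;   // new state x new bit x move, per rule
            long total = 1;
            for (int i = 0; i < Rules; i++) total *= perRule; // (3 x 2^(n+1))^(2^(n+1))

            int best = -1;
            var newState = new int[Rules];
            var newBit = new int[Rules];
            var move = new int[Rules];
            for (long code = 0; code < total; code++)
            {
                long c = code;              // decode one complete rule table
                for (int i = 0; i < Rules; i++)
                {
                    int choice = (int)(c % perRule); c /= perRule;
                    newState[i] = choice % States;
                    newBit[i] = (choice / States) % 2;
                    move[i] = choice / (States * 2) - 1;
                }
                best = Math.Max(best, Run(newState, newBit, move));
            }
            Console.WriteLine($"Longest halting run found for n = {N}: {best} steps");
        }
    }

For n = 1 this enumerates all 12^4 = 20736 rule tables in well under a second; the trouble is entirely in that cap.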
Is S a computable function? Can we write a computer program that computes it?
Your intuition should be telling you "no", but do you see why?
.
.
.
.
.
.
.
.
Because if S were computable then H would be computable too! All we'd have to do to compute H is to make a computer program that compiles a given C# program into a Turing Machine simulator that starts with an empty tape. We take the number of bits of state, n, of that Turing Machine, and compute S(n). Then we run the Turing Machine simulator and if it takes more than S(n) steps then we know that it must have been one of the n-bit Turing machines that runs forever. We'd then be able to reliably compute H in finite time. Since we already know that H is not reliably computable in finite time then we know that S must not be computable either.
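As a sketch, the reduction is only a few lines. Everything here is hypothetical: the TuringMachine type, CompileToTuringMachine, InternalBits, HaltsWithin, and above all the delegate for S that gets passed in, since the whole point is that no such delegate can exist:

    // Hypothetical reduction: if anything could compute S, this method would
    // reliably decide halting in finite time, which we know is impossible.
    static bool HaltsOnEmptyTape(string csharpProgram,
                                 Func<int, System.Numerics.BigInteger> S)
    {
        TuringMachine machine = CompileToTuringMachine(csharpProgram); // hypothetical
        System.Numerics.BigInteger bound = S(machine.InternalBits);    // the impossible call
        // By definition of S, every n-bit machine that halts does so within S(n)
        // steps; so running past the bound proves the machine runs forever.
        return machine.HaltsWithin(bound);
    }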
The argument that I'm advancing here is known as the "Busy Beaver" argument because the n-bit Turing Machine that runs the longest is the "busiest beaver". I've tweaked the way that it is usually presented; rather than the n-bit Turing Machine that runs the longest before terminating, the "busiest beaver" is traditionally defined as the k-state Turing Machine that produces the largest output. The two characterizations are essentially equivalent though; neither version of the function is computable.
An interesting fact about the busy beaver function (either way you characterize it) is that the function grows enormously fast. It's easy to think of functions that grow quickly; even simple functions like n! or 2^n grow to astronomical levels for relatively small values of n, like 100. But our busiest beaver function S(n) grows faster than any computable function. That is, think of a function that grows quickly where you could write a program to compute its value in finite time; the busiest beaver function grows faster than your function, no matter how clever you are in coming up with a fast-growing function. Do you see why? You've got all the information you need here to work it out. (****)
(*) Of course, there is nothing special about C#; it is a general-purpose programming language. We'll take as given that if there is a function that cannot be computed in C# then that function cannot be computed by any program in any programming language.
(**) Of course, we don't need to state transitions from the halting state, but, whatever. We'll ignore that unimportant detail.
(***) Of course, there could be a tie for longest, but that doesn't matter.
(****) Of course, even if the busiest beaver function did not grow absurdly quickly, the fact that it clearly grows more than exponentially is evidence that our proposed technique for solving the Halting Problem would be impractical were it not impossible. Compiling a non-trivial C# program to a Turing Machine simulator would undoubtedly produce a machine with more than, say, 100 bits of state. There are an enormous number of possible Turing Machines with 100 bits of internal state, and the one that runs the longest before it halts undoubtedly runs longer than the universe will last.
Comments
Anonymous
October 10, 2012
So you CAN solve the Halting Problem! Just for only really small computers. Correct! The Halting Problem is solvable for Turing Machines that have five internal states or fewer. But as we've seen, that's only a small handful of possible machines, so it is tractable to analyze all of them. -- Eric

Anonymous
October 10, 2012
Computing any infinitely long number, such as the square root of 2, will not halt. What characterizes an "infinitely long number"? The number 1.0000... is "infinitely long" as well, but it can be computed, right? I see no reason why the square root of two is any more special than any other number. -- Eric

Q: Can you cover why a local variable to a property cannot be declared within the scope of a property and before the get? This would be quite helpful for all of the XAML/WPF properties with backing variables and OnPropertyChanged. Auto properties could be greatly enhanced by syntax similar to public bool PropertyA { get; set; when_modified { code snippet } } The same reason why every unimplemented feature is not implemented: features are unimplemented by default. In order to become implemented a feature must be (1) thought of, (2) designed, (3) specified, (4) implemented, (5) tested, (6) documented and (7) shipped. Of those seven things, only the first has happened for your proposed feature. Therefore, no such feature. -- Eric

Anonymous
October 10, 2012
@Gabe: sure. The computers we use are what's technically known as linear bounded automata -- just like a Turing machine, but with finite storage. Those machines have finite state, and therefore, in principle, the halting problem is decidable for programs running on real computers. Just run the program and track the distinct states you've seen (where "state" is the contents of all memory of the machine). If a state ever repeats, you know the program will never terminate (because, barring hardware failure, the computer is deterministic and one state will always lead to the same next state). Otherwise, you will surely eventually run out of unique states to see, and then the program must stop if it hasn't done so already. The problem is, of course, the "in principle", as Eric pointed out. It's absurdly impractical to solve the halting problem this way for any non-trivial program -- the number of possible states isn't just astronomical, it's far beyond even that. However, the basic approach (try to show that we will never visit the exact same state twice) is used in various techniques to prove that programs terminate. Those just don't rely on naive enumeration, but on proving more involved properties over all possible states.
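That procedure is easy to sketch for a toy deterministic machine whose entire state fits in a single int; all the names below are made up for illustration:

    // Sketch of the idea above: a deterministic machine whose entire state is one
    // int either reaches a halting state or revisits a state and loops forever.
    using System;
    using System.Collections.Generic;

    static class FiniteHaltChecker
    {
        // Returns true if the machine halts, false if it provably runs forever.
        static bool Halts(int start, Func<int, int> step, Func<int, bool> isHalted)
        {
            var seen = new HashSet<int>();
            int state = start;
            while (!isHalted(state))
            {
                if (!seen.Add(state))
                    return false;  // repeated state + determinism = infinite loop
                state = step(state);
            }
            return true;
        }

        static void Main()
        {
            // Counts down from 10 and halts at 0:
            Console.WriteLine(Halts(10, s => s - 1, s => s == 0));  // True
            // Bounces between 1 and 2 forever:
            Console.WriteLine(Halts(1, s => 3 - s, s => s == 0));   // False
        }
    }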
Anonymous
October 10, 2012
Pi and the square root of 2 are irrational numbers. Irrational numbers cannot be represented by A / B where A and B are integers, B <> 0, and cannot be represented by a terminating or repeating decimal. That is correct. And yet you have managed to represent both pi and root two in finitely many bits in this very comment. What explains this apparent paradox? And if you can do it, then why can't a Turing Machine do the same? -- Eric

Anonymous
October 10, 2012
This immediately reminded me of many themes of Douglas Hofstadter's book GEB and Gödel's incompleteness theorem.

Anonymous
October 11, 2012
If your Turing machines consist of 2^(n+1) state transition rules, and each rule has 2 * 2^n * 3 possibilities, then the total number of possible machines is not (2 * 2^n * 3) * (2^(n+1)) = 3 * 2^(2n+2). It is (2 * 2^n * 3)^(2^(n+1)). For n=2 (i.e. 4 internal states), that makes 24^8 = 110075314176 machines, rather than 24*8 = 192. For five internal states, the handful would be larger yet. Whoops, you are of course correct. I was typing faster than I was thinking. I'll fix the text. -- Eric

Anonymous
October 12, 2012
So basically, what you're saying is that there is no limit to how inefficiently a program can be written... I thought we knew this already!

Anonymous
October 12, 2012
@Oded That's hardly surprising; both Turing's and Gödel's work grew out of the attempt to prove that arithmetic is consistent (free of any internal contradictions), as posed by Hilbert (the second of his 23 problems posed in 1900). The answer was that a sufficiently interesting arithmetic – our normal arithmetic qualifies – cannot be proven to be consistent and cannot be proven to be inconsistent. Hence "incompleteness". To clarify: Gödel's proof shows that every logical system has at least one of these three properties: either (1) it is possible to prove both a theorem and its opposite, making the system inconsistent, (2) there exist well-formed statements in the system that have no proof, and their negations have no proof either, and therefore the system is incomplete, or (3) the system is not sufficiently powerful to implement integer arithmetic, and therefore the system is weak. There are no complete, consistent, strong logical systems. -- Eric