Talk:Computability theory (computer science)

I believe the Entscheidungsproblem was first proved unsolvable by Church, and months later by Turing. Goedel's theorems don't really talk about algorithms, so they don't directly apply. Of course, Goedel's trick of Goedel numbering and the Barber paradox is really the key to the Entscheidungsproblem and halting problem, everything else is details. --AxelBoldt

OK, I've added that. --LC

The context-free languages need a stack and a non-deterministic finite-state machine; deterministic won't do. --AxelBoldt

Yes, that's what it says. Perhaps you misread the 2nd sentence of the paragraph? --LC
Yup, you're right. --AxelBoldt

The languages that are accepted by a Turing machine are exactly those that are generated by formal grammars.

This is wrong. I doubt it is all formal grammars. I expect that it is context sensitive grammars the author was thinking of. Regular grammars are FSM equivalent, as I recall. -- BenBaker

No, the context sensitive grammars generate exactly the languages accepted by linear bounded nondeterministic Turing machines (Turing machines with only a linear amount of memory). The set of languages accepted by all possible Turing machines is exactly equal to the set of languages generated by all possible formal grammars. The sentence in the article is correct. See Chomsky hierarchy for more details. --LC


I moved the following question from the article. I don't know the answer. (Isn't it 3 parameters?)


Hi Cwitty, I'm not sure what form Turing's definition took. But your change seems unhelpful, because it puts too much weight on the word "arbitrarily".

Chaitin's constant, for example, can be approximated from below with unlimited accuracy, simply by simulating all machines and adding in a component for those which halt. It is not computable, because there is no way of telling how close the approximation is at any point.

I'm pretty sure the "incorrect" definition you changed is actually equivalent to the "correct" one. -- Pde 06:54, 8 Nov 2003 (UTC)
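The approximation-from-below scheme mentioned above (simulate all machines, adding a weight for each one seen to halt) can be sketched in a toy model. Nothing here is the real construction: the three generator "programs" and the weights 2^-(i+1) are invented stand-ins (real Ω ranges over all programs of a universal prefix-free machine), but the dovetailing and the monotone lower bounds are the genuine mechanism.

```python
from fractions import Fraction

# Toy stand-ins for Turing machines: a "program" is a generator that
# yields once per step and halts by returning.
def prog_a():            # halts after 3 steps
    for _ in range(3):
        yield

def prog_b():            # never halts
    while True:
        yield

def prog_c():            # halts after 10 steps
    for _ in range(10):
        yield

PROGRAMS = [prog_a, prog_b, prog_c]

def halts_within(prog, steps):
    """Run prog for at most `steps` steps; report whether it halted."""
    g = prog()
    for _ in range(steps):
        try:
            next(g)
        except StopIteration:
            return True
    return False

def omega_lower_bounds(programs, stages):
    """Dovetail: at stage t, run every program for t steps, and add the
    (assumed) weight 2^-(i+1) for each program newly seen to halt.  The
    bounds climb toward the toy Omega, but nothing ever certifies how
    close the current bound is, which is exactly the point."""
    halted = set()
    bounds = []
    for t in range(1, stages + 1):
        for i, prog in enumerate(programs):
            if i not in halted and halts_within(prog, t):
                halted.add(i)
        bounds.append(sum((Fraction(1, 2 ** (i + 1)) for i in halted),
                          Fraction(0)))
    return bounds

print(omega_lower_bounds(PROGRAMS, 12))
```

Each bound is a certified lower bound, and the sequence is nondecreasing; what no algorithm can supply is an upper bound telling you when to stop.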

I can certainly change the definition to be precise; I didn't do so because I put a precise, correct definition in computable number a couple of months ago.
There is a link to Turing's original paper at the end of computable number. Note that the link includes both the original paper and a correction to it. Turing's original definition is indeed similar to the one I removed; but in the correction, he acknowledged that this definition was flawed, and provided an alternate definition (which I can't read, due to font problems).
Here's an example which shows the problem of relying on "digits" in a definition of this form. Start with 1/2. Add 2^-k for every k which is an odd perfect number. Subtract 2^-k for every k which is an even number >2 that is not the sum of two primes. This number is greater than 0.5 if the first odd perfect number is smaller than the first counterexample to Goldbach's conjecture; if Goldbach's conjecture is true, and there are no odd perfect numbers, then this number is exactly 1/2.
My definition (at computable number) says that this number is computable; another way to put it is that given an epsilon, we can give a number which is within epsilon of this number. However, we cannot give a single (binary or decimal) digit of this number; we do not know whether it begins 0.4 or 0.5. (I think you might be able to produce arbitrary ternary digits of this number, though.) -- Cwitty 23:45, 8 Nov 2003 (UTC)
Gotcha. One could argue (at least momentarily) that it is acceptable to consider your number to be incomputable unless the Goldbach conjecture is resolvable. The fact that the definition is not consistent across all bases makes such an argument a little untenable, though. -- Pde 01:52, 9 Nov 2003 (UTC)
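A sketch of why the number constructed above is computable in the epsilon sense. The helper names are made up for illustration and the primality/divisor tests are deliberately naive; the point is only that each term with k <= N is decidable, so the error after stage N is at most the tail sum over k > N of 2^-k, i.e. 2^-N. For any N feasible to run, the answer is exactly 1/2, yet this still does not determine the number's first binary digit.

```python
from fractions import Fraction

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_perfect(n):
    # n equals the sum of its proper divisors (naive test)
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def goldbach_holds(n):
    # for even n > 2: is n the sum of two primes?
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def approximate(N):
    """Approximate Cwitty's number with error at most 2^-N: decide each
    term with k <= N exactly; the remaining tail is bounded by 2^-N."""
    x = Fraction(1, 2)
    for k in range(1, N + 1):
        if k % 2 == 1 and is_perfect(k):
            x += Fraction(1, 2 ** k)
        elif k % 2 == 0 and k > 2 and not goldbach_holds(k):
            x -= Fraction(1, 2 ** k)
    return x

print(approximate(60))
```

The approximation routine is total and provably within its error bound, which is the definition of computability used at computable number; producing a guaranteed first digit would instead require resolving the two open conjectures.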

I didn't edit the page, but I believe it should read "given ANY statement in [FOL]...", not "given A statement in [FOL]"...

Also, it might be helpful to add that Gödel's general recursive functions are also equivalent to the notion of Turing-computable functions and lambda-definable functions.

Refocus article on Computable functions

I think the article should be rewritten to focus more on computable functions instead of Turing machines. This should be done in most articles in the computability category. Of course Turing machines are easier to grasp than the abstract concept of computable function, but by focusing on the abstract concept we can eliminate the redundant discussion of lambda calculus, recursive functions and the Church-Turing thesis in each computability article. I guess in many cases the proofs and theorems will get shorter too. We can always add a concrete example using Turing machines if the article gets too abstract. MathMartin 23:59, 9 Nov 2004 (UTC)

I have the opposite opinion. TMs give a very intuitive and simple representation of computability. See the proofs for uncomputability of Busy Beaver functions. More of the results may be demonstrated with TM (or other programming language) examples. If you have the Quine program, it is easy to expand it to a self-explorer program, and then use self-opposite behavior to give easy proofs of the halting problem, Gödel's incompleteness theorem and other results. Such proofs are simple and don't use complex and hard-to-understand mathematical notation. Skelet 19:16, 7 Dec 2004 (UTC)
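The "self-opposite behavior" mentioned above can be sketched as a hypothetical construction: given any purported total halting decider (here just a Python function from programs to booleans, a made-up stand-in for a real decider), one builds a program that does the opposite of whatever the decider predicts about it.

```python
def make_spoiler(claimed_halts):
    """Given any purported total halting decider, build a program that
    the decider must misjudge: it consults the decider about itself and
    then does the opposite."""
    def spoiler():
        if claimed_halts(spoiler):
            while True:      # decider says "halts": loop forever
                pass
        # decider says "doesn't halt": fall through and halt immediately
    return spoiler

# One concrete branch of the contradiction: a decider that always
# answers "doesn't halt" is refuted because its spoiler plainly halts.
pessimist = lambda prog: False
s = make_spoiler(pessimist)
s()                          # returns, i.e. halts, contradicting pessimist
```

The other branch (a decider answering "halts") cannot be demonstrated by running it, since its spoiler loops forever; but either way the decider is wrong about its spoiler, which is the diagonal argument in miniature.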
I do not object to using Turing machines as examples, but as with any model, Turing machines only serve to illustrate certain points. Other points can be illustrated better using other models. My point is the main (technical) definition should be based on computable functions. But at the moment I am not able to devote any amount of time on this topic. So I guess for now things will stay as they are. MathMartin 19:30, 9 Dec 2004 (UTC)

What happened to this article?!

As far as I remember (and correct me if I am wrong), this article used to have a lot of content. Now it is just 8 lines. Computability theory deserves more than just a short definition and two links. This is an ENCYCLOPEDIA, not a DICTIONARY. Who destroyed the old content?! ---- David November 13, 2005

Yes, this is indeed very strange! I do not know the previous article, but this topic cannot possibly be that short after 4 years of Wikipedia! Hints on what might have happened here are found at the deleted changes page. There it says that "00:11, 26 October 2005 Timwi deleted "Computability theory" (Deleted to make way for move)" Yet, I do not understand it. What moved? Where did it move to? I will go and ask Timwi. --Markus Krötzsch 21:05, 15 November 2005 (UTC)

Problematic name

It's become trendy in recent years, due to the influence of Robert I. Soare, to use the term "computability theory" to refer to what's traditionally called "recursion theory", and in fact recursion theory is a redirect here. However this article is not much about recursion theory (Turing degrees, effective descriptive set theory, that sort of thing), but more about models of in-principle-physically-realizable computation. Recursion theory is a branch of math logic, not of theory of computation.

So what do we do about this? I see there's a merge template to merge to computation. For most of the content in the current article that's probably fine. But there should also be an article about what math-logicians call computability theory or recursion theory. Perhaps we need a dab page under the name computability theory. --Trovatore 04:26, 26 November 2005 (UTC)

This article should be moved to theory of computation (which I can't do because I'm not an admin). This should be made into a disambig to theory of computation and a to-be-written recursion theory. —R. Koot 04:40, 26 November 2005 (UTC)
So I've taken an awkward first step, moving the article to Computability theory (computation), which I don't propose as the permanent name, but just a good place to put this content while we sort things out. Some of the content does need to be reproduced in the (new stub) recursion theory article. Roughly speaking, recursion theory starts with what is here called "Unreasonable models of computation". Definitely the halting problem needs to be mentioned in the "recursion theory" article. --Trovatore 05:22, 26 November 2005 (UTC)

To tell the truth I'm not sure the articles should be split at all. What's clear is that, whatever article is called recursion theory (or where recursion theory redirects) has to have a very different focus from the current article here; this current article seems to be mainly about things of Turing degree 0. In the end perhaps we should have just one article, but it shouldn't spend so much time talking about Turing degree 0 in fifteen different ways, but should move quickly into the real meat of the topic, which starts with oracles. --Trovatore 06:05, 26 November 2005 (UTC)

Theory of Computation

The current article isn't really suited as the main article for all of Theory of Computation, since when I wrote it I considered only computability theory and not the other major branch, complexity theory. So do we want separate articles on computability and complexity, or do we want to expand this article greatly to include some discussion of complexity theory as well? Clearly any article called "Theory of Computation" should at least mention P =? NP. The preceding unsigned comment was added by Readams (talk • contribs) 17:34, 26 November 2005.

That's another aspect, alright. It seems we have a multidimensional mess on our hands. Interested parties should also look at the categorization question. Yesterday I created Category:Recursion theory and have been going through the articles in Category:Theory of computation to try to figure out what belongs where. Roughly speaking, the test I used was this: Would this question be considered in a math department by people who call themselves mathematical logicians, or would it be considered in a computer science department? Of course there was a very considerable overlap. Also people may disagree with my judgment as to which articles are categorized which way by this test, and may disagree with the test itself. I'd like to see some more thoughts on this. --Trovatore 18:35, 26 November 2005 (UTC)
There is indeed a much larger overlap than you suggested at first (I suspected this, but I know nearly nothing about recursion theory, so didn't mention it). Maybe some other people want to say something about this? The preceding unsigned comment was added by R.Koot (talk • contribs).
Well, I was conservative about removing the "Theory of computation" category from articles, so there may be more overlap in the current categorization than there really should be. --Trovatore 19:03, 26 November 2005 (UTC)
I think keeping it would be best if the article were kept separate. Maybe we should move this page to Computability theory (computer science)? The Theory of computation could be an introductory article which refers to automata theory, formal language theory, computability theory (computer science) and computational complexity theory. —R. Koot 18:53, 26 November 2005 (UTC)

Chaitin

For the moment, I've removed the following:

(the fact that one can follow from the other was first shown by Gregory Chaitin in his development of Chaitin's Constant, a number which can be defined but not computed)

First, it's not clear just what connection between the incompleteness theorems and the halting problem is being claimed, but whatever it is, I'd want to see a ref that Chaitin was the first one to find it. Chaitin, like Wolfram, is a bit of a self-promoter, and is good enough at self-promotion that a lot of people give him credit for things that might not have been quite as original as they think. In particular Chaitin's Omega was by no means the first non-computable number to be defined. --Trovatore 04:06, 2 December 2005 (UTC)

It's not clear to you, because you might not know the subject. Chaitin proved the connection to the halting problem, and anybody who knew something about Kolmogorov complexity would acknowledge that. Fact: Chaitin was one of the co-inventors of Kolmogorov complexity. The halting probability Omega contains all information required to solve the halting problem. So it acts as an oracle for the halting problem. How is that for a connection? --Exa 15:26, 22 January 2006 (UTC)
Omega was of course not the first oracle for the halting problem to be defined, though. The first one was, presumably, simply the set of all Gödel numbers of Turing machines that halt. You still haven't explained (or indeed even mentioned) the connection to Gödel's incompleteness theorem (which is quite distinct from the halting problem), or given a ref that Chaitin was the first one to find said connection. --Trovatore 17:13, 22 January 2006 (UTC)
On reflection I probably should have removed entirely the following two sentences from the version way back then:
Computability theory has its roots in First-order logic, and the halting problem and other non-recursive problems are closely related to Gödel's incompleteness theorem (the fact that one can follow from the other was first shown by Gregory Chaitin in his development of Chaitin's Constant, a number which can be defined but not computed). David Hilbert and Kurt Gödel laid many of the foundations for first-order logic.
It was a red flag to me that someone was dragging Chaitin into a discussion of the incompleteness theorems, but really the entire first sentence is unsupported: what roots in first-order logic are being claimed, exactly? And if the first sentence goes, then the second, while certainly true, is out of context.
Unless someone can explain and support this passage, I'll be snipping these two sentences. --Trovatore 17:32, 22 January 2006 (UTC)

Desktop PCs are not considered to be FSMs

(Also see [1])

The author probably didn't read the enlightening discussion in the new edition of the Cinderella book.

It would take too much effort for me to explain it completely, but I am going to remove those uninformed assumptions that the tape of a Turing machine is an infinitely long resource. In fact, at any moment, the Turing machine has a finite ID (instantaneous description), and thus the infinity talked of is potential infinity (much like the infinity of Z). Furthermore, the TM is a mathematical model. Every TM denotes a particular computation. The author probably doesn't recognize that. The potential infinity there is an idealization. It does not mean that physical machines cannot be TMs!

If you followed that naive logic, just because you can define finite state machines that would not fit into this universe, desktop pcs would not even be finite state machines.

Think about that for a while. What do I mean by "Every TM denotes a particular computation"?

I am hoping it is clear now, and now this bit about desktop pcs not being TMs is goodbye kansas. Why on earth do you think we are studying turing machines then?

The standard reason for the potentially infinite tape of the TM is to model the fact that an ongoing computation may extend the tape (allocate more memory). This is usually thought to be a property of desktop PCs as well. With some effort, it is possible to extend the PC with as much memory as you want. There are no hard limits (except physical limits which apply to EVERYTHING including FSMs). Did you ever connect two PCs with a network? Fine. Then you know what this means.

For the record, all of this can be shown rigorously.

--Exa 15:41, 22 January 2006 (UTC)

"It does not mean that physical machines cannot be TMs!" -- All the real HW pcs. are FSMs and, hence, are TMs.
"Every TM denotes a particular computation" -- The TMs are SW programs executed by particular HW FSM. --Javalenok 18:43, 9 June 2006 (UTC)[reply]
Except inasmuch as they behave nondeterministically, desktop machines are finite state machines. You might argue that it's more useful to treat them using the theory of Turing machines, and I'd tend to agree, but in the strictest mathematical sense they aren't Turing machines.
What you call "potential infinity" is what mathematicians call "countable infinity". Every particular integer is finite, but the set of integers is infinite. Each particular state of a Turing machine is finitely describable, but there are infinitely many such states. A finite state machine can only be in finitely many different states (each of which is, again, finitely describable). As the article (currently) says, a desktop computer cannot recognize the language S ::= e | a S b, because it is a finite state machine, while a Turing machine can recognize this language. You can extend a desktop computer with more memory, but the amount of memory, hence the count of states, is always finite.
Please don't edit the article, at least not without more discussion.
-- BenRG 22:05, 22 January 2006 (UTC)
BenRG is correct. —Ruud 22:32, 22 January 2006 (UTC)
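The language BenRG mentions above, S ::= e | a S b, is just { a^n b^n : n >= 0 }. A minimal sketch of why one unbounded counter suffices, and hence why a machine with a fixed, finite set of states eventually fails: the counter below must be able to exceed any fixed bound.

```python
def accepts_anbn(s):
    """Recognize { a^n b^n : n >= 0 } with a single counter.  The counter
    is unbounded; a finite-state machine, having only finitely many
    states, cannot track it for arbitrarily large n."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:           # an 'a' after a 'b' is out of order
                return False
            count += 1
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:        # more b's than a's so far
                return False
        else:
            return False
    return count == 0            # every 'a' matched by a 'b'
```

On a physical machine `count` lives in bounded memory, so for inputs longer than that memory permits the program no longer implements this recognizer, which is the finite-state-machine point in miniature.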
Yes, I'm a computer scientist; I know what countable infinity is, but the discussion is philosophical. Please read what I say carefully: the potentially infinite tape in the TM models the fact that storage is extensible. This is true also for ordinary desktop PCs, not only in the sense that it is "useful to treat them as such"; it can be rigorously proven that you can extend a PC's memory indefinitely. Only the very physical limits apply, but those apply to everything, and no, I don't think you have really read my argument. I don't know if I can make it clearer. Let's try. You need to contend first that a desktop computer is physically extensible; it is not a static device with a hard limit on the number of its states. It has no difference from a TM. The view in the article seems to me a Platonist, inadequate and unwarranted reading of Turing's original paper, in which his genius makes it clear that he is modelling human computers. This article comes around and says that he failed to do what he tried to do. I don't think so. In my opinion, it is necessary to read Turing's paper in a philosophically consistent and fully scientific way and everything will be resolved. I am perhaps guilty of trying to make a non-trivial philosophical analysis here, but it was just bad philosophy and bad science on this page that has no place. The argument I am giving is a stronger version of the argument in the Cinderella book. It is not simply "useful" to treat the PC as the realization of an r.e. function; it is essential. This is because, and I am saying this very carefully, each TM represents a particular computation, and NOT a class of computations. Since you are a mathematician, I believe you will appreciate the difference between a function and a set of functions. So, each ongoing physical computation in a desktop PC corresponds to a particular TM. That a desktop PC cannot "instantiate" every possible TM is a no-brainer. It is due to physical limits. But the same physical limits also apply to FSMs. There are also infinitely many particular FSMs that a given desktop PC cannot "instantiate". We do not say that a desktop PC is not a particular FSM because it cannot "instantiate" all possible FSMs!
Likewise, we cannot say that a desktop PC is not a particular TM because it cannot "instantiate" all possible TMs. That is to say, you might want to revise the claim here to be precise. In the simplest language, it would read "A desktop PC that is not extensible can be seen as either a FSM or a TM. It is interesting to note that such a machine cannot implement a universal Turing machine which could simulate any possible Turing Machine". etc. But note, such statements would really make the article descend a slippery slope, that is why I suggest to remove all references to this, in my humble opinion redundant, "PCs are finite state machines, TMs don't physically exist" argument that is best kept for redundant discussions on comp.theory . As far as authority goes, I would like to refer to the Cinderella book. However, my refutation is self-contained. I am hoping this somewhat complicated but correct argument is appreciated. It usually isn't understood how much thought has gone into my analysis. I would be glad if you could spend the effort to read it. Best. --Exa 23:06, 22 January 2006 (UTC)[reply]
Could you clarify on the following points in your first post, I might be misunderstanding you.
  • Every TM denotes a particular computation. What about universal Turing machines?
  • ... can define finite state machines that would not fit into this universe, desktop pcs would not even be finite state machines. Not every finite automaton would be a desktop pc, but every desktop pc is a finite automaton?
Desktop PCs are not TMs in the sense that they cannot recognize an arbitrary Turing-recognizable language, because as the input string becomes large, the amount of memory needed to recognize it becomes larger as well. Every language whose strings are of bounded length would be regular. —Ruud 23:48, 22 January 2006 (UTC)
OK, I should have read your second, and much clearer, reply before making the comments above. In essence I do indeed agree with you, although there remains the point that physics will limit you to a "finite tape". This should be much more carefully explained in the article. —Ruud 23:58, 22 January 2006 (UTC)
My problem was that I was writing "Turing machine" but thinking "universal Turing machine". You (Exa) are right, a desktop computer is a particular FSM and is also a particular Turing machine. But I still don't see which part of the current article you disagree with. You wrote "this article comes around and says that [Turing] failed to do what he tried to do", but I don't see which passage you're referring to. And the subject line of your comp.theory post that brought me here was "another clueless wikipedia article". As far as I can tell the article doesn't actually say anything that's incorrect. I'm happy with extending it but a little leery of modifying what's there. -- BenRG 02:24, 23 January 2006 (UTC)
You may wish to read the stuff (about half of which I wrote) at Turing machine#Comparison with real machines. Deco 01:46, 23 January 2006 (UTC)
This discussion bothers me. I don't actually see what part of this article Exa wants to change: perhaps they could publish their exact proposed changes here in the Talk, to make the discussion more concrete.
That said, Exa says some things which I also find hard to accept, and sometimes adopts a rather intemperate tone.
The expression "desktop PC" also seems unduly specific, so I shall use RC ("Real Computer").
  • Part of the problem is the formulation "Is an RC a TM?". We may model an RC with a TM (to analyse it theoretically) or model a TM with an RC (to experiment). Can an RC be modelled by some TM? Certainly! Can an RC model any TM? Certainly not! (As becomes clearer in the Exa's 2nd posting)
  • The same goes for FSAs, except that for any FSA, an RC on current principles is conceivable, in some universe where the laws of physics permit its construction.
  • No conceivable RC can model every TM, unless capable of indefinite automatic extension during computation. Aside from physical limitations, I know of no current machine with an indefinitely extensible addressing model: even unbounded URLs are translated to fixed-length IP addresses.
  • "can be rigorously proven that you can extend a PCs memory indefinitely" Do it or link to it (preferrably in Wikipedia). Arguments against in [2] by Marissa Kay on Thurs, Feb 9 2006 7:48 (which I have just recognised in section 8 below) sound plausible.
  • "potentially infinite tape in the TM models the fact that storage is extensible". Could be taken as such, I doubt if it is so intended.
  • The question whether the tape is infinite or can grow unboundedly is irrelevant to how a TM works. Yet the mathematical model may be more easily defined using a function from Z to the symbol-space.
  • "What do I mean by "Every TM denotes a particular computation"? " What indeed?
  • There is an ambiguity in the usage of "TM": it could mean just the mechanism, in most proofs we mean the mechanism with a certain "programme" (transition table), it could even include the input tape. In the second case I suppose a real computer can be regarded as a TM, but with preset bounds on tape size. They cannot model universal TMs, i.e. those that can model any TM.
  • To say "each TM represents a particular computation, and NOT a class of computations" seems to require you to include the input tape as an immutable part.
  • "Why .. do .. we study TMs then?" Because real computers model the behaviour of TM's on certain ranges of input, and are better described by them than by other automata. One reason is that an FSA modelling an RC has enormously many states and conceals the information about their significance, while a TM acts far more locally.
  • "Platonist, inadequate and unwarranted reading" (of Turing) - please explain with examples!
  • "he is modelling human computers" .. Turing wrote "We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions q1, q2, ..., qR which will be called 'm-configurations'.", but it seems to me only a motivation for the definition of an entirely abstract machine.
  • It would also help if it were possible to provide (a link to) the indicated passage in the Cinderella book, which I do not have, and to Turing's original paper (from List_of_important_publications_in_computer_science#Computational_complexity_theory).
Looking at the history since 2006-01-03, I believe that Exa's proposals have not been carried out, but that the article does not make the assertions to which he objects. Could his postings apply to a different article or to some Talk pages? BenRG's remarks seem right. The Turing machine#Comparison with real machines also seems to me very useful.
-- PJTraill 01:04, 19 February 2006 (UTC)

structure of article, recursion theory

Some time ago I proposed splitting out recursion theory as a separate article and renaming this content to computability theory (computer science). I now think that was a mistake. I felt that non-computer-science topics were being given short shrift in the article. It now seems to me that all research topics are given short shrift.

The exposition of the elementary material is too slow, and there's not enough outline of more advanced material. The tasks of defining what it means to be computable, and establishing that there are things that aren't computable, should be described briefly, with links to computable function and Turing machine for more details. Then there should be a series of sections, one for each identifiable theme within computability theory, with a blurb on each and a link to a more-detailed main article. Themes that come to mind are complexity theory (P vs NP, PSPACE, hierarchies, etc), structure of the Turing degrees, randomness (the sort of stuff Rod Downey and his former students do), and effective descriptive set theory; computer scientists may have others they want to add. --Trovatore 16:09, 8 February 2006 (UTC)

I think this might be a good idea as it does appear to be the same subject, only that computer scientists and mathematical logicians have a different focus (e.g. computer scientists start with the work of Turing and Church and stop at Oracle machines, while mathematical logicians start with the work of Kleene and focus on Oracle machines and beyond). —Ruud 16:54, 8 February 2006 (UTC)
Well, so that was more or less what I was thinking back at the time of the split (in fact, I think I wrote something very much like it) but now I'm not so sure. It seems to me that mathematical logicians are interested in, say, P and NP and related hierarchies (which are all solidly within "computable"), and computer scientists are interested in random sequences, even though they can't be computable. So while there's a difference in focus, there's no clear borderline. I just think the article spends too much time on computable-vs-not-computable, which is pretty cut and dried and kind of boring. We know what's computable and what's not; what's left to say? Of course lots of things are left to say, but the article doesn't really talk about them. --Trovatore 22:28, 8 February 2006 (UTC)

I'd move the first part of this article to automata theory if it wasn't for the fact that this would leave little of the article behind. —Ruud 22:55, 8 February 2006 (UTC)

Ah, that's not bad. Then there'd be room to do the sort of thing I outlined, with lots of sections that point to main articles and give brief summaries of their contents. There could be one of those sections for automata theory, too. Note that I'm not personally promising to take care of the second part of the plan, at least not soon (any large time block that I commit to WP at the moment would have to be at set theory, naive set theory, axiomatic set theory, where I've won a mild consensus for an ambitious reform). --Trovatore 23:00, 8 February 2006 (UTC)

Please do not neglect the layman (obviously "interested", a modicum of intelligence/maturity also presumed): not only should they not go away empty-handed, they should also not come to a dead end. In other words, if one comes to the article knowing nothing of computability, one should learn something about it, and also be told where to find more information if one gets stuck. At the same time I am in favour of articles covering advanced topics and current research, though I understand that Wikipedia is not intended for publishing original research. So how does one accommodate these requirements? Are there useful templates: e.g. from now on secondary school / undergraduate maths / postgraduate mathematical maturity required. Or: this presumes understanding of article X, which might get awkward if that article keeps changing level! -- PJTraill 00:23, 20 February 2006 (UTC)

All physical systems, including PCs, are FSMs

All physical systems are finite state machines (more precisely, finite state Markov processes), the number of states being given by the exponential of the entropy (in units of Boltzmann's constant) of the system (which is, in fact, how entropy is defined); and the state transition function being that generated by the system's Hamiltonian.

The Bekenstein Bound, in turn, places a hard limit on the size of the state space and on the entropy of anything occupying a given region of space (and even the space, itself), no matter what its constituency.

More on that, below.

The finitude of the state space is, in fact, a centrally important feature of statistical and thermal physics, associated with the notions of recurrence times and ergodicity.

No machine, at the present time, uses more than a tiny fraction of the (finite) state space available to it, nor will any in the foreseeable future, possibly not until the full development of nanotechnology enables the employment of the atomic and subatomic structure of the physical system in question.

All systems occupying a finite volume -- including space, itself, and the fields pervading it -- no matter what their composition, will be subject to the Bekenstein Bound.

The Bekenstein Bound, more precisely, provides a hard-limit on the size of the state space associated with anything in any region of space bounded by a surface of area A. It is the exponential of A, itself, with A given in units of 1/4 of the Planck area. The hard-limit on the system's entropy is kA, where k is Boltzmann's constant and A, again, in units of 1/4 the Planck area.
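For a sense of scale, here is an illustrative calculation using the standard Bekenstein-Hawking form S/k = A/(4 l_p^2). The 0.25 m radius is an assumed stand-in for a desktop-sized machine, not a figure from the discussion above, and the Planck length is taken as approximately 1.616e-35 m.

```python
import math

PLANCK_LENGTH = 1.616e-35      # metres (approximate CODATA value)
r = 0.25                       # metres; assumed "desktop-sized" sphere

area = 4 * math.pi * r ** 2                 # bounding surface area, m^2
planck_areas = area / PLANCK_LENGTH ** 2    # area in units of the Planck area
max_entropy_nats = planck_areas / 4         # S/k <= A/4 in Planck areas

# The state count, exp(max_entropy_nats), is far too large to evaluate
# directly, so we report its natural logarithm instead.
print(f"ln(number of states) <= {max_entropy_nats:.3e}")
```

The exponent comes out around 10^68, so the bound is astronomically far above the state count of any real machine, which is the point made above about machines using only a tiny fraction of their available state space.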

It turns out that for a spherical region, the limit is exactly that which is reached the moment enough information and matter is crammed into the region to produce a black hole of the same size as the region, itself. The theoretical maximum entropy of the region is then realized as none other than what has already been asserted since the 1970's to be the entropy of the black hole, itself.

(See article on Black hole thermodynamics).

So, to make a long story short, the foregoing stands as a decisive counterpoint to the idea of endlessly cramming more and more into a machine. If you cram too much storage, fields, light, electricity or matter (or even sound, pressure, tension, stress or energy) into a given machine, you'll eventually reach the point when the whole thing will collapse under its own gravity and become a black hole. At that point, the theoretical maximum will be reached for the region enclosed by the black hole.

Anything more crammed in will just make the black hole larger.

(But you'll never see it get larger, unless you fall in, nor will you see anything that's too close to the event horizon at all, since time's nearly frozen to a standstill on the event horizon and everything close to it is red shifted into invisibility).

Eventually, as per Hawking, the black hole will radiate all of its energy away (and all of its information contents) as thermal radiation and evaporate. (See Hawking Radiation and Black hole information paradox, the latter having led in just the past few years to a much deeper understanding of what an event horizon is, what it is not, and its close relation to the notion of Rindler horizons).

None of the foregoing, it's worth pointing out, has any bearing on whether the universe, itself, is more than an FSM. That does not follow, even with the fact that all the finite volumes within the universe have finite state spaces.

That is, it does not follow, unless the Universe, itself, is finite. It doesn't even follow that the Universe has any well-defined state space at all in the first place, finite or infinite. As Smolin has pointed out, since the underlying state space (as per Quantum Gravity) is a certain space of 3-manifolds, and since the classification problem for 3-manifolds may very well be unsolvable, it's quite possible that there may not be any effectively definable state space for the Universe at all. (The theory of 3-manifolds is closely related to knot theory, since the complement of a knot is itself a 3-manifold.)

But the one thing that can be said is that the state space associated with the sky -- that is, the past light cone of a given spacetime point -- is most likely finite. That's because the past light cone has the structure of a hypersphere. That's true, regardless of whether the Universe is finite with positive curvature, infinite and spatially flat, or infinite with negative curvature. In all cases, the sky is a hypersphere that has the Big Bang at its antipode. The Big Bang is the one and only event that is directly in the line of sight in every direction from every place in the Universe at every time (though it is somewhat obscured by the cosmic microwave background or CMB).

Now, the classical non-trivial example of an infinite state machine (ISM) is that recognizing the Dyck language, given by the grammar

                      S -> 1; S -> u S v S

(1 = empty word); which has as its minimal state diagram

      <-- [0] -- u --> [1] -- u --> [2] -- u --> [3] ...
           <-- v --     <-- v --     <-- v --

with an infinite number of states -- easily recognized as the state diagram of a particular 1-counter machine.
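The 1-counter machine in the diagram is easy to simulate; here is a minimal Python sketch (function name mine), where the counter value is exactly the state index [n] in the diagram:

```python
def accepts_dyck(word: str) -> bool:
    """One-counter recognizer for the Dyck language over {u, v}.
    The counter plays the role of the state index [n] above."""
    counter = 0
    for symbol in word:
        if symbol == 'u':
            counter += 1          # move right in the state diagram
        elif symbol == 'v':
            if counter == 0:      # unmatched 'v': no transition, reject
                return False
            counter -= 1          # move left in the state diagram
        else:
            return False          # symbol outside the alphabet
    return counter == 0           # accept only if back in state [0]
```

Since u^n v^n is accepted for every n, and each prefix u^n must be remembered distinctly, no fixed finite bound on the counter suffices: the reachable state set is genuinely infinite.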

In theory, a quantum system like the electromagnetic field is supposed to provide a similar infinity of states, each frequency mode admitting a set of states containing 0, 1, 2, 3 or more quanta of that frequency (see article on Quantum Electrodynamics). However, with the Bekenstein Bound in place, this will invariably break down at a high enough energy (as does the rest of quantum field theory). The manner of breakdown, though, is still unknown. But it exists.

Other than the theoretical construct of the quantum field (which, as per the above, breaks down), nothing can embody even the deceptively simple automaton depicted by the state diagram above, since it has an infinite number of states. Still less can anything embody the other varieties of infinite state automata, including pushdown automata or Turing Machines, never mind any instance of a Universal Turing Machine.

There are, of course, instances of each of these classes where the number of states that can be reached from the start state is finite. Then, since the minimal state diagram is finite, they are equally representable as finite state machines. But in the general case -- like that for the Dyck language above -- the number of states accessible from the start state is infinite, the minimal state diagram is infinite and it is simply not representable as a finite state machine.

The utility of Turing Machines or any of the other varieties of infinite state machine formalisms (push down automata, counter automata, etc.) is that they provide a more efficient means of representing finite state machines and the physical systems they may embody. Obviously, the more powerful a formalism, the greater utility it will have -- even for those things that could theoretically be represented by lesser means.

Greater utility also means greater relevance, not the other way around!

Oracles provide a still more powerful mechanism, and despite the fact that they are strictly more powerful than TMs, they are no less of an idealization than any other infinite state automaton formalism, and no less physically relevant. But theorists haven't yet made a great deal of use of them. There is, invariably, a notion of resource-bounded oracles for each variety of oracle, and this is bound to be just as relevant for finite state machines and physical systems as is that of the resource-bounded TM.

The primary mode of application of any of the varieties of ISM is through the notion of "resource-boundedness". So, it is seriously misleading to assert that the exclusion of ISMs from the realm of physical reality also means their irrelevance!

The whole foundation of complexity theory, in fact, is based on this notion of resource boundedness. And that lies at the center of computer science. So, of course, an unphysical model is relevant to physical devices! But neither the 1-counter machine, push down automaton, Turing Machine, nor any variety of oracle is fully realizable within the physical world as such.

No offense, but although the question of whether all physically constructable machines are FSMs and the associated physics are interesting, we would need a well-known existing and reputable source making these claims to add anything to the article regarding this. If I had to guess, I'd guess the literature contains statements both ways (and we can cite both). Deco 10:24, 9 February 2006 (UTC)[reply]
Our long-winded anonymous friend seems to be mainly concerned with showing you can't get more than a finite state machine as a physical realization, but doesn't attempt (I think; I didn't really read it all) to show that you can get even a finite state machine. I think that'd be a problem, because FSMs are deterministic, and the physical world is not. So FSMs are idealizations too; a physical realization will always have some probability of doing the thing it's not supposed to. --Trovatore 15:35, 9 February 2006 (UTC)[reply]
The text appears almost identical to the contribution to [3] by Marissa Kay on Thurs, Feb 9 2006 7:48 -- PJTraill 01:11, 19 February 2006 (UTC)[reply]
Both had the same author. The article, itself, clearly stands on its own since the arguments made in it are sufficiently self-contained and cogent to settle the issue, subsuming prior published literature on the matter, particularly in its taking into account the finiteness imposed as a result of the Bekenstein Bound, which has been neglected in the literature and is really the critical point in deciding the issue. The question of whether the universe is deterministic or not is open. The Schroedinger equation is, in fact, deterministic, and state evolution under it likewise so. Whether the apparent non-determinism associated with "collapse" is real or only epiphenomenal and derivable from a more fundamental, and deterministic, substrate (e.g. by decoherence, consistent histories, or some other mechanism) is really the central issue of the so-called measurement problem of quantum theory and is as yet unresolved. But the question is moot anyway: non-deterministic FSMs are not equivalent to deterministic FSMs! This equivalence only applies to deterministic vs. non-deterministic FSAs. The non-equivalence for FSMs is readily seen by the trivial counter-example of an FSM with states {S,F}, input {X}, outputs {Y,Z}, start state S, final state F and two transitions on input X from S to F, one transition having output Y, the other having output Z. A deterministic FSM cannot yield more than one translation for any input. This one yields two translations for the input "X": "Y" and "Z". Therefore it is not reducible to an equivalent deterministic FSM.
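The two-state counterexample in the comment above can be checked mechanically. A minimal Python sketch (the function name and the dictionary encoding are mine, not from the discussion):

```python
def translations(transitions, start, finals, input_string):
    """All output strings a nondeterministic FSM transducer can emit
    for a given input string.
    transitions: dict mapping (state, input_symbol) to a set of
                 (next_state, output_symbol) pairs."""
    # Each configuration is (current state, output produced so far).
    configs = {(start, "")}
    for sym in input_string:
        configs = {(nxt, out + o)
                   for (state, out) in configs
                   for (nxt, o) in transitions.get((state, sym), set())}
    return {out for (state, out) in configs if state in finals}

# The counterexample: states {S, F}, input alphabet {X}, outputs {Y, Z},
# with two transitions S --X/Y--> F and S --X/Z--> F.
delta = {("S", "X"): {("F", "Y"), ("F", "Z")}}
print(translations(delta, "S", {"F"}, "X"))  # two outputs for the single input "X"
```

A deterministic transducer emits at most one output per input, so the fact that this machine yields both "Y" and "Z" for "X" shows no deterministic equivalent exists.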

Universal Turing Machines[edit]

What about Universal Turing Machines? Shouldn't they be included as well? 70.111.251.203 14:42, 7 March 2006 (UTC)[reply]

Cleanup[edit]

Found use of first person. While text is clear and easy to understand, it is still bad style. 210.50.201.22 05:35, 26 May 2006 (UTC)[reply]

If you really want to try to rewrite it without the first person, and make it even remotely close to as easy to understand, be my guest. This is a very common style for proofs. --Readams 15:59, 6 June 2006 (UTC)[reply]

confusion PCs are not FSMs[edit]

How is the following statement confusing? "Actually, these are computer programs written in general-purpose programming languages which are Turing Machines (the GP languages are, therefore, called Turing-complete), while the actual computers executing the programs are FSMs." I included it in the article even before looking into the discussion, to disambiguate the popular confusion: "Poll: Are PCs Turing Machines?" This is the actual and vast confusion. What I see at this page proves my doubts. Please explain what is so confusing in my sentence? Perhaps it is rather wrong or inappropriate? --Javalenok 09:02, 13 June 2006 (UTC)[reply]

I don't know what "confusion" you're referring to. Your statement is accurate, but the English is difficult to understand, and I'm not sure how much it adds in this context. Dcoetzee 22:30, 30 January 2008 (UTC)[reply]

Church-Turing thesis[edit]

This discussion is copied from the user page.

You changed Church-Turing thesis to claim that it had been disproven. This is certainly not the case. Please feel free to discuss your changes on the talk page of the article. CMummert · talk 10:49, 12 April 2007 (UTC)[reply]

The Church-Turing thesis has not been disproved. However, it needs to be stated precisely, which was not done in the previous version of the article. It is very important to distinguish between computable functions and computability (computations).--71.198.216.63 16:11, 12 April 2007 (UTC)[reply]

Deciding a language[edit]

I don't understand this idea. What does it mean to decide a language? Unfree (talk) 16:32, 30 January 2008 (UTC)[reply]

To decide whether a given string is in the language or not. --Trovatore (talk) 20:11, 30 January 2008 (UTC)[reply]

Attempt to merge[edit]

For anyone keeping this article on their watchlist: there is currently an attempt at merging a number of articles on (non-)recursive/(un)decidable/(un)computable sets/languages/decision problems into a single, thorough article. The experiment currently sits at Recursive languages and sets. Comments are very welcome. Pichpich (talk) 00:43, 21 February 2008 (UTC)[reply]

A paradoxical situation[edit]

This is a duplicate post from Talk:Algorithm#A_paradoxical_situation. To keep the discussion in one place, please comment there if you are interested. — Carl (CBM · talk) 20:43, 19 March 2008 (UTC)[reply]

Reorganize the Computability articles[edit]

There is a discussion currently at Talk:Recursion theory#Reorganize the Computability articles about reorganising the computability articles to get some sort of hierarchy. In particular the thinking is that Computability should be an introduction rather than a disambiguation page. Dmcq (talk) 11:26, 14 August 2009 (UTC)[reply]

The reorganization is underway. The centralized discussion is at Talk:Recursion theory#Reorganize the Computability articles. — Carl (CBM · talk) 14:52, 22 August 2009 (UTC)[reply]