# Exploiting limitations of Turing machines in Turing tests? - Theory


1. ## Exploiting limitations of Turing machines in Turing tests?

This may be a well-known question or just a result of my misconceptions,
but so far I haven't got any definite answer, so maybe someone
here could help or clarify things for me.

...

I was recently contemplating Turing tests and Turing machines (TM) and
was wondering if the fundamental limitations of TM can be exploited
to discover whether the conversation partner in a Turing test is a
digital computer AI or a real person.

As far as I have understood the issue, we have the following points

1) Any digital computer+software can in principle be reduced to
some kind of TM, so the computer cannot exceed the computational
capabilities of a TM.

2) There are problems that a universal TM can't decide, e.g.
the halting problem: given a TM b and input c, does the machine stop
at some point?

Now, suppose that we come up with a simple TM with an input on which
it does not stop, e.g. it produces an endless string of aaa...'s. A human
with sufficient knowledge should be able to see that this
machine never stops.

Let's say that we pose this question to our human/AI partner,
i.e. we describe our never-stopping TM and ask: does it
stop?

Now a real human could provide us with a definite answer. However, any
digital computer is subject to the limitations of a TM and
therefore cannot say for sure whether our machine stops or not.

So my question is, can we use this kind of scheme to discover whether we
are speaking with an AI implemented on a digital computer or with
a genuine human?

(Of course, an AI mimicking human behaviour would probably say
something like "get a life, smart ass, don't bore me to death", in
which case we couldn't tell.)

- T.H

[ comp.ai is moderated ... your article may take a while to appear. ]

2. ## Re: Exploiting limitations of Turing machines in Turing tests?

Tero Hakala writes:

> This may be a well known question or a just result of my misconceptions,
> but so far I haven't got any definite answer.. So maybe someone
> here could help or clarify things for me.

> ..

> I was recently contemplating Turing tests and Turing machines (TM) and
> was wondering if the fundamental limitations of TM can be exploited
> to discover whether the conversation partner in a Turing test is a
> digital computer AI or a real person.

> As far as I have understood the issue, we have the following points

> 1) any digital computer+software can in principle be reduced to some
> kind of TM. So the computer cannot exceed the computational
> capabilities of TM.

> 2) There are problems that a universal TM can't decide. Eg.
> the halting problem: given TM b and input c, does the machine stop
> at some point?

So far, so good.

> Now, suppose that we come up with a simple TM with input that does
> not stop. Eg. it produces an endless string of aaa..'s. A human
> with sufficient knowledge should be able to see that this
> machine never stops.

> Let's say that we pose this question to our human/AI partner,
> ie. we describe our never stopping TM and ask: does it
> stop?

> Now a real human could provide us with a definite answer. However, any
> digital computer is subjected to the limitations of TM and
> therefore can not say for sure if our machine stops or not.

Here's where you make an error. There are indeed TM's that can detect
non-terminating TM's, just as people can. Point 2) above doesn't say
that no TM can detect non-termination of *any* TM; it says something
much more specific: that no TM can correctly determine termination of
*every* TM/input pair. Equally, no person can reliably determine this,
since some cases would be more complex than they could possibly
understand.

Indeed, if a person can do it reliably for some case, then so can a
specific TM (but that TM will fail on other cases, as the human would).
And TM's can solve termination problems that humans never could.
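This point can be illustrated with a sketch (hypothetical code, not from the thread; all names are mine): a program that correctly decides halting for the restricted class of TM's that stay inside a bounded stretch of tape, by simulating and watching for a repeated configuration.

```python
def decide_halting_bounded(delta, start, tape, max_cells=64):
    """Partial halting decider: correct whenever the machine stays
    within max_cells tape cells, since then its set of reachable
    configurations is finite and halting is decidable by cycle
    detection.

    delta maps (state, symbol) -> (state, symbol, move), move in
    {-1, 0, +1}; a missing entry means the machine halts.
    Returns True (halts), False (loops forever), or None (gives up).
    """
    tape = dict(enumerate(tape))
    state, head = start, 0
    seen = set()
    while True:
        config = (state, head, tuple(sorted(tape.items())))
        if config in seen:
            return False              # repeated configuration: loops forever
        seen.add(config)
        key = (state, tape.get(head, '_'))
        if key not in delta:
            return True               # no applicable rule: halts
        state, symbol, move = delta[key]
        tape[head] = symbol
        head += move
        if not -max_cells <= head <= max_cells:
            return None               # left the bounded region: undecided

# A machine that rewrites the same cell forever -- detected as looping:
loops_in_place = {('s', '_'): ('s', 'a', 0), ('s', 'a'): ('s', 'a', 0)}

# A machine with no rule for its second state -- detected as halting:
halts_after_one = {('s', '_'): ('t', 'a', 1)}

# The 'endless aaa...' machine marches right forever, so its
# configurations never repeat and this checker gives up on it:
prints_a_forever = {('s', '_'): ('s', 'a', 1), ('s', 'a'): ('s', 'a', 1)}
```

The third machine defeating this particular checker is exactly the expected behaviour: any fixed decider has cases it cannot settle, even though it answers many others correctly.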

> So my question is, can we use this kind of scheme to discover whether we
> are speaking with an AI implemented on a digital computer or with
> a genuine human?

The short answer is no. For a long-winded answer, try comp.ai.philosophy.

> (of course, an AI mimicing human behaviour would probably say
> something like "get a life, smart ass, don't bore me to death", in
> which case we couldn't tell )

> - T.H


3. ## Re: Exploiting limitations of Turing machines in Turing tests?

In article <46fb7ff3\$1@news.unimelb.edu.au>,
David Kinny <dnk@OMIT.csse.unimelb.edu.au> wrote:
>Equally, no person can reliably determine this, since
>some cases would be more complex than they could possibly understand.

For a simple, explicit example, write a program to search for a
counterexample to the Riemann hypothesis or the Goldbach conjecture
or some similar unsolved mathematical problem, halting as soon as it
finds such a counterexample. Being able to determine whether the
machine halts is equivalent to solving the unsolved math problem.
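The search described above is easy to write down; here is a hedged Python sketch for the Goldbach case (the function names are mine, not from the post). Deciding whether search_for_counterexample() halts is exactly deciding the conjecture.

```python
from itertools import count

def is_prime(n):
    """Trial division; fine for a sketch, slow for large n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_goldbach_sum(n):
    """True iff even n >= 4 is the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample():
    """Halts (returning n) iff the Goldbach conjecture is false."""
    for n in count(4, 2):             # 4, 6, 8, ... forever
        if not is_goldbach_sum(n):
            return n
```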
--
Tim Chow tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences


4. ## Re: Exploiting limitations of Turing machines in Turing tests?

Tero Hakala <tero.hakala@hut.fi> wrote on Thu, 27 Sep 2007:
> This may be a well known question or a just result of my misconceptions

> I was recently contemplating Turing tests and Turing machines (TM) and was
> wondering if the fundamental limitations of TM can be exploited to discover
> whether the conversation partner in a Turing test is a digital computer AI
> or a real person.

No, they can't. Because humans are subject to the same limitations.

> 1) any digital computer+software can in principle be reduced to some kind
> of TM. So the computer cannot exceed the computational capabilities of TM.
> 2) There are problems that a universal TM can't decide. Eg. the halting
> problem: given TM b and input c, does the machine stop at some point?

OK.

(But from later on: you clearly missed a "there exists" vs. "for all"
distinction. The "halting problem" is that you can't write a program that
correctly determines FOR ANY TM b and input c, whether it halts. Not just
for a single b & c. The latter question is easy to write a program for, at
least for many b & c's.)

> Now, suppose that we come up with a simple TM with input that does not
> stop. Eg. it produces an endless string of aaa..'s. A human with
> sufficient knowledge should be able to see that this machine never stops.
> Let's say that we pose this question to our human/AI partner, ie. we
> describe our never stopping TM and ask: does it stop? Now a real human
> could provide us with a definite answer.

That's true.

> However, any digital computer is subjected to the limitations of TM and
> therefore can not say for sure if our machine stops or not.

That's not correct. It's trivial to write a program to correctly determine
whether the all-"a"s program halts or not.
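For that single fixed machine, a correct "decider" is just a constant program (a trivial sketch, my naming):

```python
def halts_all_a_machine():
    """A correct halting decider for the ONE fixed machine that prints
    'aaa...' forever.  It never simulates anything: for a single,
    known TM/input pair, a constant answer is a correct (if
    unenlightening) program.  The halting problem is only hard when
    quantified over ALL machine/input pairs at once."""
    return False  # the all-'a's machine does not halt
```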

> So my question is, can we use this kind of scheme to discover whether we
> are speaking with an AI implemented on a digital computer or with a genuine
> human?

No.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ don@geddis.org
Sometimes I think I'd be better off dead. No, wait. Not me, you.
-- Deep Thoughts, by Jack Handey [1999]


5. ## Re: Exploiting limitations of Turing machines in Turing tests?

In comp.ai Don Geddis <don@geddis.org> wrote:

> > I was recently contemplating Turing tests and Turing machines (TM) and was
> > wondering if the fundamental limitations of TM can be exploited to discover
> > whether the conversation partner in a Turing test is a digital computer AI
> > or a real person.

> No, they can't. Because humans are subject to the same limitations.

Can you elaborate on this a little bit? I was not aware of any proof
that humans can't exceed the computational power of TM's. While there
are arguments that the human thought process might be understood/
modelled by a computer analogy, there are a lot of contrasting views also.

I see now that a given halting problem for some TM/input can
indeed be decided by some TM. But is it possible to prove that for
every possible halting problem that a human can imagine/decide,
there exists one single TM that could solve all of them?

Furthermore, I find this comparison a bit unfair, as we now consider
humans limited by their known limitations (speed, attention span, memory
capabilities, life span, etc.), while on the other side we have a
mathematical model of a TM with arbitrary complexity and
unlimited memory. (And we don't have a similar mathematical model for
humans.)

Would it be useful to consider similarly limited TM's? Of course
these limitations would also be rather arbitrary, but since any
digital computer/symbolic-manipulation mechanism would have to be
implemented in the physical world, it would be subject to the laws of
physics, which would inevitably pose at least some kind of restrictions.
(E.g. it would be difficult to build a real-world TM that prints out
all natural numbers, as this would require unlimited memory to store
the digits. So, sooner or later, this machine would halt or do something
else while there are still some numbers left to print.)

-T.H

6. ## Re: Exploiting limitations of Turing machines in Turing tests?

Tero Hakala wrote:
> In comp.ai Don Geddis <don@geddis.org> wrote:
>
>>> I was recently contemplating Turing tests and Turing machines (TM) and was
>>> wondering if the fundamental limitations of TM can be exploited to discover
>>> whether the conversation partner in a Turing test is a digital computer AI
>>> or a real person.

>> No, they can't. Because humans are subject to the same limitations.

>
> Can you elaborate this a little bit? I was not aware of any proof
> that humans can't exceed computational power of TM's. While there
> are arguments that the human thought process might be understood/
> modelled by a computer analogy, there are a lot of contrasting views also.

There is strong evidence for the claim that anything that can be
computed with a physically realistic computer can be computed with a
Turing machine.

For one thing, many seemingly different computational models have been
suggested. Some of them have proven strictly weaker than Turing
machines, and most of them have the same computational power as Turing
machines. None of them is stronger than Turing machines, except those
that assume some uncomputable aid (such as an oracle or an infinite
built-in answer database). On the other hand, very modest machinery
suffices for obtaining Turing-strength. The class of computable
functions is thus very robust and insensitive to how it is defined.
Google Church--Turing thesis.

Also, the physical world can be simulated pretty well numerically, and
what can be computed numerically, can be simulated with a Turing
machine. The well-known chaos phenomena imply that simulation often
requires ridiculous amounts of computational power, but they do not
imply impossibility. Disclaimer: this line of argument is not presented
often, so it is probably less universally accepted than the
Church--Turing thesis.

Thus, if we believe that the brain works under the laws of physics as we
know them today, it is most likely subject to the same restrictions
as Turing machines. On the other hand, the fact that AI lags far
behind the creative capabilities of humans despite vast advances in
computational power, together with the feeling of "personal existence"
and similar arguments, makes many believe that the human brain has
something extra that lifts it above computers.

One more thing: the human brain is apparently finite (or bounded, which
is a weaker requirement), whereas the Turing machine is unbounded
(meaning that although at any moment of time, it stores a finite amount
of information, there is no a priori bound to how big that amount of
information may grow). This gives the Turing machine an advantage.

> While I see now that a given halting problem for some TM/input can
> indeed be decided by some TM. Is it possible to prove that for
> every possible halting problem that a human can imagine/decide,
> there exists one single TM that could solve all of them?

If we believe that the human brain is bounded, then this is trivially
true. However, real-life computers are also bounded: there cannot be
more memory than fits in the universe. So there is no difference between
the brain and real-life computers in this respect. It is thus necessary
to discuss briefly why Turing machines are widely considered an
appropriate model of computation despite their unboundedness.

If everything is finite, then, for every question, there is a big but
simple program that answers it: it simply looks the answer up in a
gigantic built-in table of answers and returns it. Thus
decidability becomes trivial (in theory). To obtain a useful theory,
this "cheating" solution must be ruled out. The unboundedness convention
is a handy way of doing that. One can alternatively try to set bounds on
how big the program may be (google "Kolmogorov complexity", "algorithmic
information theory"). However, even when developing that kind of
theory, the good old theory of Turing machines proves an excellent
framework.
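The table-lookup "cheat" can be shown at toy scale (a hypothetical example; the program names and precomputed answers are invented): over a finite domain, any question, however hard in general, collapses to a dictionary lookup.

```python
# For a bounded world, even an "undecidable" question has a trivial
# program: tabulate all the answers in advance.  Here the finite
# question is "does this tiny toy program halt?", and the table is
# assumed to have been precomputed by exhaustive simulation of the
# (invented) programs named below.
HALTS = {
    "inc_until_overflow": True,    # counter wraps to 0 and stops
    "spin_on_zero":       False,   # tight self-loop, never exits
    "copy_then_stop":     True,    # copies its input, then halts
}

def halts(program_name):
    """O(1) 'decider' for the finite world: pure table lookup.
    This is exactly the cheating solution that the unboundedness
    convention is designed to rule out."""
    return HALTS[program_name]
```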

> Furthermore, I see this comparison a bit unfair. As we now consider
> humans limited by their known limitations (speed, attention span, memory
> capabilities, life span etc) and on the other side we have
> mathematical model of a TM with arbitrary complexity and
> unlimited memory. (and we don't have similar mathematical model for
> humans)

The usual way of making this comparison fair is to assume unbounded
resources (memory, time, and so on) on the human side as well.

Then, given a detailed description of a Turing machine, it is easy for
the human to construct an instance of the halting problem that the
Turing machine cannot solve (cf. Gödel's construction). On the other
hand, if we believe that the human brain is subject to the
Church--Turing thesis, then the opposite also holds: given a detailed
description of the human brain, it is easy for the Turing machine to
construct an instance of the halting problem that the brain cannot
solve.
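The construction alluded to is the classic diagonal argument, which can be sketched in code (hypothetical names, mine): from any claimed total halting decider, build a program on which that decider must be wrong.

```python
def make_contrarian(claimed_halts):
    """Given a claimed total halting decider claimed_halts(f) --
    predicting whether calling f() returns -- build a program that
    does the opposite of whatever is predicted about itself."""
    def contrarian():
        if claimed_halts(contrarian):
            while True:          # predicted to halt, so loop forever
                pass
        # predicted to loop forever, so halt immediately
    return contrarian

# A (necessarily wrong) decider that predicts "halts" for everything:
always_yes = lambda f: True
c = make_contrarian(always_yes)
# always_yes says c halts, so c in fact loops forever (do not call it!):
# the decider is wrong on the diagonal input, whatever its strategy.
```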

> Would it be useful to consider similarly limited TM's? Of course
> these limitations would also be rather arbitrary, but as far as any
> digital computer/symbolic manipulation mechanism would have to be
> implemented in the physical world, they are subject to the laws of
> physics that would inevitably pose at least some kind of restrictions.
> (Eg. it would be difficult to build a real-world TM that would print out
> all natural numbers as this would require unlimited memory to store
> the digits.. So, sooner or later this machine would halt or do something
> else while there are still some numbers left to print..)

Results obtained by appealing to the boundedness of real computers tend
to be artificial from the practical point of view, whereas results
obtained with the Turing machine model match well with our experience
with real computers. This is because results obtained by assuming the
bound tend to require ridiculously long computations to occur in
practice. Computers have so much memory that unbounded memory is a very
good approximation.

One relevant field here is computational complexity. It investigates how
much time, memory (and sometimes other resources) it takes to compute
something.

> -T.H

--- Antti Valmari ---

7. ## Re: Exploiting limitations of Turing machines in Turing tests?

Tero Hakala <tero.hakala@hut.fix.invalid> writes:

> In comp.ai Don Geddis <don@geddis.org> wrote:
>
>> > I was recently contemplating Turing tests and Turing machines
>> > (TM) and was wondering if the fundamental limitations of TM can
>> > be exploited to discover whether the conversation partner in a
>> > Turing test is a digital computer AI or a real person.

>> No, they can't. Because humans are subject to the same limitations.

>
> Can you elaborate this a little bit? I was not aware of any proof
> that humans can't exceed computational power of TM's. While there
> are arguments that the human thought process might be understood/
> modelled by a computer analogy, there are a lot of contrasting views
> also.

I am not the person you asked to expand, but, hey, this is Usenet...
You are right, there is no such proof. From a mathematical point of
view one could ask either side to prove the case: that an idealised
human brain either is == TM or is != TM. But from a scientific point
of view it makes more sense to expect those who imagine there is some
way for a wet lump of neurons to escape the limitations of formal
computation to propose how it does this, and to demonstrate the
mechanisms involved.

There is, of course, not even any proof that the restrictions on
Turing machines capture the limits of all effective computation.
This proposition is called the Church-Turing thesis.
There are models of computation that can exceed these limits, but none
capture our intuitive sense of what it means to compute or to reason
(for example, they usually have a component that "just knows" the
answer to some otherwise undecidable question).

> While I see now that a given halting problem for some TM/input can
> indeed be decided by some TM. Is it possible to prove that for
> every possible halting problem that a human can imagine/decide,
> there exists one single TM that could solve all of them?

Not yet, and probably never. Proving things about humans is not yet
possible!

> Furthermore, I see this comparison a bit unfair. As we now consider
> humans limited by their known limitations (speed, attention span, memory
> capabilities, life span etc) and on the other side we have
> mathematical model of a TM with arbitrary complexity and
> unlimited memory. (and we don't have similar mathematical model for
> humans)

Of course it is unfair -- that is the trick for proving convincing
upper bounds! If TMs were restricted, one could argue that with just a
few more cells of tape, or another tape, or just a few more steps, the
halting problem could be solved. By exhibiting computations that cannot
be programmed even on an idealised, unbounded machine, we get convincing
upper bounds on what is computable.

<snip>

--
Ben.

8. ## Re: Exploiting limitations of Turing machines in Turing tests?

In article <fdqlgq\$itk\$1@news.cc.tut.fi>,
Antti Valmari <ava@.c.s...t.u.t...f.i.invalid> wrote:
[snip]

> There is strong evidence to the claim that anything that can be computed
> with a physically realistic computer, can be computed with a Turing
> machine.
>
> For one thing, many seemingly different computational models have been
> suggested. Some of them have proven strictly weaker than Turing
> machines, and most of them have the same computational power as Turing
> machines. None of them is stronger than Turing machines [...]

Not entirely true. One thing that the physical universe seems to have
that TMs do not is true randomness. But it's doubtful that that's a
significant limitation, since pseudo-randomness can come arbitrarily
close.
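As a toy illustration of pseudo-randomness "coming arbitrarily close" (my example, not from the post): a fully deterministic generator reproduces the statistics expected of a fair coin to well within experimental tolerance.

```python
import random

rng = random.Random(2007)          # fully deterministic given the seed
bits = [rng.randrange(2) for _ in range(100_000)]

# A deterministic, TM-style computation, yet the empirical frequency
# of 1s lands within a fraction of a percent of the 0.5 that a "truly
# random" source would give -- close enough for most computational
# purposes, even though the whole sequence is reproducible at will.
frequency = sum(bits) / len(bits)
```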

--
---------------------------
| BBB b \ Barbara at LivingHistory stop co stop uk
| B B aa rrr b |
| BBB a a r bbb | Quidquid latine dictum sit,
| B B a a r b b | altum viditur.
| BBB aa a r bbb |
-----------------------------

9. ## Re: Exploiting limitations of Turing machines in Turing tests?

Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:

> >> No, they can't. Because humans are subject to the same limitations.

> I am not the person you asked to expand but, hey, this is Usenet...
> You are right, there is no such proof. From a mathematical point of
> view one could ask either side to prove the case: that an idealised
> human brain either is == TM or is != TM, but from a scientific point
> of view it makes more sense to expect those who imagine there is some
> way for a wet lump of neurons to escape the limitations of formal
> computation to propose how it does this, and to demonstrate the
> mechanisms involved.

One thing in which TM's and brains seem to differ is that the
number of possible TM's is infinite but countable (in the sense
that the natural numbers are). In working human brains, on the
other hand, the neurons get their information in an analog manner
from other neurons (i.e. real numbers come into play here), if we
discount quantization effects. So it seems that the brain (or the
dynamics of the brain) is uncountably infinite and therefore exceeds
the number of possible TM's. And if we take quantum mechanics into
account, we seem to get into a bottomless swamp. (And TM's, being
deterministic, would also have some difficulty handling quantum
aspects... as far as QM is considered valid in its present form.)

Can this be considered a valid distinction between brains and TM's?

-T.H

10. ## Re: Exploiting limitations of Turing machines in Turing tests?

In article <fdt045\$k4c\$1@epityr.hut.fi>,
Tero Hakala <tero.hakala@hut.fi> wrote:
>In one thing that TM's and brains seem to be different is that the
>number possible of TM's is infinite, but countable (in the sense
>that natural numbers are). While in working human brains, the
>neurons get their information in analog manner from other neurons
>(ie. real numbers come into play here) if we discount
>quantization effects. So it seems that brains (or dynamics of brain)
>are uncountable infinite and therefore exceed the number
>of possible TM's.

This is true only if the "analog manner" you speak of cannot be simulated
by high-precision rational numbers. It is not at all clear that if one
perturbs the inputs to a neuron by, say, 10^(-1000000), then its behavior
will change.
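The point about high-precision rationals can be made concrete (a toy sketch of my own; the weights and inputs are invented): exact rational arithmetic lets a digital machine track an "analog" weighted sum and resolve perturbations far below any plausible physical noise floor, with no rounding at all.

```python
from fractions import Fraction

# A toy "analog" neuron: a weighted sum of its inputs.
weights = [Fraction(3, 7), Fraction(-1, 13), Fraction(5, 11)]
inputs  = [Fraction(1, 2), Fraction(2, 3),  Fraction(1, 4)]

activation = sum(w * x for w, x in zip(weights, inputs))

# Perturb one input by 10**-30 -- far finer than any measurable
# biological noise -- and the exact rational arithmetic resolves the
# difference precisely: it comes out as exactly weights[0] * eps.
eps = Fraction(1, 10 ** 30)
perturbed = sum(w * x
                for w, x in zip(weights, [inputs[0] + eps] + inputs[1:]))

difference = perturbed - activation
```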

>And if we take the quantum mechanics into
>account, we seem to get into a bottomless swamp. (And TM's, being
>deterministic, would have also some difficulty to handle quantum
>aspects.. as far as QM is considered valid in the form as it is now)

As far as computation is concerned, it is again not clear whether
quantum mechanics buys you anything. Any *finite* sequence of "truly
random" digits can be handled by the TM model of computation. An infinite
sequence cannot, but again it is not clear that such an infinite amount
of information can be "used" in the operation of the brain.

>Can this be considered a valid distinction between brains and TM's?

Possibly, but it's not at all clear. The sticking point is that the
differences you posit rely on infinitely long sequences of numbers, or
an infinite amount of information, and it's not clear how such things
can be relevant physically. It's also not clear how we can test such
a theory in the lab, given that any experiment we can actually carry
out must take a finite amount of time and work with finite precision.

--
Tim Chow tchow-at-alum-dot-mit-dot-edu