
eighties; e.g. Wilks, 1986), but to actions “with” symbols: symbol manipulation is the ability to use symbols in context, such as using and understanding indexicals, demonstratives and definite descriptions, shortened or not, on the basis not only of shared presuppositions, but also of the perceptual ability to recognize individual objects and patterns. By contrast, in
the original setting of the imitation game, the human is con-
ceived as an inference machine, and a program simulating a
human is thought to have a similar mastery of inferences, such
that a dialogue is possible though a computer screen and a
keyboard. This setting of the Imitation Game permits mental
experiments such as Searle’s where a system is supposed to
receive sentences as input and give sentences as output. But if
we want to implement the more basic features of language understanding (such as demonstratives and referential uses of definite descriptions, quantifiers…), we need to rely on a dialogue within a shared environment. We therefore need a form of interaction with real situations, mastering the use of symbols to detect items in the environment. The challenge is to give machines the ability to use perceptual information from the context, and to combine it with background knowledge, in order to handle the most difficult aspects of context-dependent language use presented in the previous paragraph. The formal work done on these new aspects of the boundary between semantics and pragmatics is among the most promising with respect to this possibility: updating the Turing test in real situations.
In fact, though it has gone unnoticed by many, the first example of a possible query the interrogator puts to the unknown interlocutor (man, woman or computer) in Turing’s original paper (1950) is:
“C: Will X please tell me the length of his or her hair?”
The question is typically read as an anaphoric use of “his” or “her” over a previously restricted domain (the supposed interlocutors). On this reading the role of the indexical is anaphoric, that is, it picks out the individual already referred to in the conversation (in this case X). But what happens if the question involves a referential use of “his” or “her”? In that case, we need the ability to detect in the environment something that might match the use of the indexical adjective, probably referring to a human who is salient in the context. We have here the beginning of a different kind of imitation game: we might, for instance, place a group of experts in front of a robot whose behavior could be either autonomous or directed by a human at a distance. The challenge would be to detect whether the robot is autonomous or human-directed. The ability to manipulate symbols here is the ability to interact with symbols in a common environment. If the autonomous robot is able to interact correctly, why not accept that, being sufficiently “like” a human being in manipulating symbols in context, it thinks?
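To make the contrast between the two readings concrete, here is a minimal Python sketch of a pronoun resolver with the two strategies. Everything in it (the Entity and Context classes, the salience scores, the resolve_pronoun function) is a hypothetical illustration constructed for this example, not an existing system: an anaphoric reading searches the conversational record for an antecedent, while a referential reading falls back on whatever entity is perceptually salient in the shared environment.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    name: str               # e.g. "X" in the dialogue, or a perceived person
    kind: str               # "discourse" (mentioned) or "perceived" (seen)
    salience: float = 0.0   # how perceptually prominent the entity is

@dataclass
class Context:
    discourse: list = field(default_factory=list)    # entities mentioned so far
    environment: list = field(default_factory=list)  # entities detected perceptually

def resolve_pronoun(ctx: Context) -> Optional[Entity]:
    """Resolve 'his'/'her': try an anaphoric reading, then a referential one."""
    # Anaphoric reading: pick the most recently mentioned discourse entity.
    if ctx.discourse:
        return ctx.discourse[-1]
    # Referential reading: fall back on the most salient perceived entity.
    return max(ctx.environment, key=lambda e: e.salience, default=None)

# Turing's question, with X already introduced in the dialogue:
# the anaphoric reading applies.
ctx = Context(discourse=[Entity("X", "discourse")],
              environment=[Entity("person_near_window", "perceived", 0.9)])
print(resolve_pronoun(ctx).name)   # -> X

# The same pronoun with no antecedent in the conversation: the resolver
# must look at the shared environment instead, which requires perception.
ctx2 = Context(environment=[Entity("person_near_window", "perceived", 0.9)])
print(resolve_pronoun(ctx2).name)  # -> person_near_window
```

The fallback step is exactly what the screen-and-keyboard setting of the original game never exercises: it can only be taken by a system that perceives the environment it shares with the interrogator.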
In a very subtle analysis of the limitations of the Turing Test and of other possible challenges for artificial intelligence, Cohen (2006) praises other kinds of tests, such as robot soccer competitions, for their feasibility and their capacity to develop in increasingly complex stages: “Turing’s test requires simultaneous achievement of many cognitive functions and doesn’t offer partial credit to subsets of these functions. In contrast, robot soccer presents a graduated series of challenges: it gets harder each year but is never out of reach”. However, a Turing
Test based on basic linguistic abilities in an actual situation might require only very simple symbol manipulation in understanding basic requests like “pass me that ball” or “take the red can you see near him”. Before arriving at a highly complex dialogue on Shakespeare (as happened at the Loebner Prize, where the judges were a bit more ignorant than the machines on the specific subject matter), a challenge on basic linguistic abilities in context might be more easily transformed into an updated Turing Test.
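As a rough illustration of what grounding such a request involves, the sketch below resolves the complex demonstrative in “take the red can you see near him” against a toy scene representation. The scene format, the attribute filters and the nearness threshold are all assumptions made up for this example; a real robot would obtain them from its perception system.

```python
import math

# A toy perceptual scene: each object has a category, a color and a 2D position.
scene = [
    {"category": "can",    "color": "red",   "pos": (1.0, 2.0)},
    {"category": "can",    "color": "green", "pos": (4.0, 0.5)},
    {"category": "ball",   "color": "red",   "pos": (0.2, 3.1)},
    {"category": "person", "color": None,    "pos": (1.5, 2.5)},  # the salient "him"
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ground_request(scene, category, color, near_category, max_dist=2.0):
    """Find the object matching 'the <color> <category> near <near_category>'."""
    anchors = [o for o in scene if o["category"] == near_category]
    if not anchors:
        return None
    anchor = anchors[0]  # assume a single salient anchor for "him"
    candidates = [o for o in scene
                  if o["category"] == category and o["color"] == color
                  and dist(o["pos"], anchor["pos"]) <= max_dist]
    # Prefer the candidate closest to the anchor.
    return min(candidates, key=lambda o: dist(o["pos"], anchor["pos"]), default=None)

target = ground_request(scene, category="can", color="red", near_category="person")
print(target)  # -> the red can at (1.0, 2.0), the only red can near "him"
```

Even this toy resolver has to combine linguistic constraints (category, color) with purely perceptual information (positions and a salient anchor), which is the mixture of context and perception that a text-only test never probes.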
Summarizing: Searle, against Turing, suggested that understanding is not symbol manipulation. However, properly understood, a new Turing test might be grounded in the idea that thinking and language understanding are symbol manipulation in context. Will we be able to invent an imitation game that could constitute a new challenge for the present century? An updated Imitation Game would consist of an interrogator trying to understand whether she is interacting with a robot or with a human in changing, real-world situations. Given that so many humans have stereotypical behaviors, it would be easy to raise the doubt that one was meeting some kind of automatic agent, and the test would be a true challenge for both humans and robots.
REFERENCES
Cappelen, H., & Lepore, E. (2004). Insensitive semantics. A defence of semantic minimalism and speech act pluralism. Oxford: Blackwell.
Cohen, P. (2006). If not Turing’s test, then what? AI Magazine, 26, 61-67.
Cole, D. (2009). The Chinese room argument. The Stanford Encyclopedia of Philosophy (Winter 2009 Edition).
Copeland, B. J. (2001). The Turing test. Minds and Machines, 10, 519-539. doi:10.1023/A:1011285919106
French, R. (2000). The Turing test: The first fifty years. Trends in Cognitive Sciences, 4, 115-121. doi:10.1016/S1364-6613(00)01453-4
Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1, 43-54.
Harnad, S. (2000). Minds, machines and Turing. The indistinguishability of the indistinguishable. Minds and Machines, 9, 425-445.
Hayes, P., & Ford, K. (1995). Turing test considered harmful. Proceedings of the 14th International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann Publishers.
Hayes, P., & Ford, K. (1997). Talking heads—A review of speaking minds: Interviews with twenty eminent cognitive scientists. AI Magazine, 18, 123-125.
Guha, R., & McCarthy, J. (2003). Varieties of contexts. Lecture Notes in Artificial Intelligence, 2116, 290-303.
Korta, K., & Perry, J. (2011). Critical pragmatics. Cambridge: Cambridge University Press.
Luger, G. F. (2005). Artificial intelligence: Structures and strategies for complex problem solving (5th ed.). Boston: Addison-Wesley.
McCarthy, J. (1990). Formalizing common sense. New York: Ablex.
Moor, J. H. (2001). The status and future of the Turing test. Minds and Machines, 11, 73-93. doi:10.1023/A:1011218925467
Moor, J. H. (2003). The Turing test. The elusive standard of artificial intelligence. Dordrecht: Kluwer.
Penco, C. (2010). Essentially incomplete descriptions. European Journal for Analytic Philosophy, 6, 47-66.
Penco, C., & Vignolo, M. (2005). Converging towards what? Pragmatic and semantic competence. In P. Bouquet, & L. Serafini (Eds.), Context representation and reasoning (Vol. 136). CEUR-WS.
http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-136
Perry, J. (1997). Indexicals and demonstratives. In B. Hale, & C. Wright (Eds.), Companion to the philosophy of language (pp. 586-612). Oxford: Blackwell.
Rajaraman, V. (1997). Turing test and after. Resonance, 2, 50-59.
doi:10.1007/BF02835001
Recanati, F. (2007). Perspectival thought. Oxford: Oxford University
Press. doi:10.1093/acprof:oso/9780199230532.001.0001
Recanati, F. (2010). Truth conditional pragmatics. Oxford: Clarendon Press.