Tuesday, August 21, 2012

"We've passed the Turing test, but we don't know it"

This is interesting. Apparently a machine almost passed the Turing test this year, which involves fooling people into thinking a machine is human 30% of the time for 5 minutes. Here's the final paragraph:

Perhaps, however, we’re closer than we think to “true” AI. After the Wright Brothers’ aeroplane lifted off in 1903, sceptics continued to debate whether we were “really” flying – an argument that simply faded away. It may be like that with AI. As [Pat] Hayes argues, “You could argue we’ve already passed the Turing test”. If someone from 1950 could talk to Siri, he says, they’d think they were talking to a human being. “There’s no way they could imagine it was a machine – because no machine could do anything like that in 1950. So I think we’ve passed the Turing test, but we don’t know it.”

I sort of like this idea, but I don't buy it. The Turing test has enough problems without adding the requirement that we try to think as if we were in the 1950s. And if no machine has ever fooled people for as little as 100 seconds (or even a bit less), then we seem quite far from passing the Turing test in fact. Not that we haven't made progress, of course.
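For concreteness, here is a rough sketch in Python of that pass criterion. This is my own illustration, not anything from the article: the function name and the panel-of-judges setup are assumptions, and the actual competitions involve more than a bare threshold.

def passes_turing_test(judged_human, threshold=0.30):
    """Return True if the machine fooled at least `threshold` of the judges.

    `judged_human` holds one verdict per five-minute conversation:
    True if that judge mistook the machine for a person. (Hypothetical
    setup; real competitions differ in the details.)
    """
    if not judged_human:
        return False
    fooled_fraction = sum(judged_human) / len(judged_human)
    return fooled_fraction >= threshold

# Example: 3 of 10 judges fooled gives 0.30, which just meets the bar.
print(passes_turing_test([True, True, True] + [False] * 7))  # True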
It's a good article overall, but yeah, I don't buy that last idea either. (Or if we have passed the Turing test, that doesn't settle anything.)
Yes, if we've passed the test but don't know it, that doesn't really help.
Nowadays self-regulating air-conditioning systems are commonplace, and few people feel like calling them intelligent. But when such systems were first introduced to the market, people were amazed by them, thinking they could now have intelligent homes. Of course, these systems would never pass the Turing test: they never fooled anyone into mistaking them for people. But sixty years ago no-one could imagine a computer as sophisticated as Siri, so if someone from the 1950s were to judge, we have now passed that test.
This argument, and these examples, don’t settle much, I agree, but they do, I think, point at something worth pointing at. As I read it, the gist of the argument isn’t “adding the requirement that we try to think as if we were in the 1950s,” but rather that what we feel like calling artificial intelligence changes with time, and with the world we live in. This goes to show that the idea of “true” AI isn’t as clear as many of its spokesmen seem to believe.
People treat the question of whether AI is possible as if it were a question of what we can make computers do. But I sense deep confusion here. There is philosophical disagreement about whether computers can think or not, but this isn't really disagreement about what technology is possible or what the future may bring; it is a disagreement about what we can say about technology whose actual or possible existence no-one doubts. Can this computer think or can it not? The answer to this question does not depend on what the computer is able to do, but on what it makes sense for us to say about such performances. If I were fooled by a computer, I would no doubt treat it as I would a thinking agent, at least as long as I was under the impression that I was conversing with a human being. Would I continue to say that my partner was intelligent and thinking if I realised that I was in fact talking to a computer? I might. There are uses of the words “intelligent” and “think” that could still have a function. Maybe I took pleasure in trying to trick the computer into revealing answers, but was impressed by the complexity of the software: “Wow, it is really intelligent!” Or I could say: “When I write..., it answers back...”, or: “Look, now I'll make it believe that...”. On the other hand, there are uses of the word “think” that would hardly be comprehensible anymore: “What are you thinking, stupid!”, “Now, don’t rush it. Take your time and think it through before you answer”, or: “She’s a real thinker, this one”.
Looking at different cases, it is in some circumstances clear why we would regard a machine as thinking (and so on); in others it would be utterly pointless. Summarising this discussion, we may be tempted to draw different conclusions. One person may be so impressed by all the ways a computer can be regarded as thinking that only the words “the computer is thinking” do justice to his amazement. Another may be less impressed, and summarise the discussion by saying that “in the end the computer isn’t really thinking”. This is a question of verbal preferences. Now, that might seem like a meagre conclusion. Still, it doesn't make the whole AI discussion futile and senseless. It can help us shed light on questions like what it is to be an individual, or what it is for someone to have something to say, and so on.
This is my take on the philosophy of AI. I am often flabbergasted by the enormous progress made by computer technology, but the philosophically interesting questions lie elsewhere, I think. The big question, to my mind, isn’t what the future may bring in computer software. Some day it may be so good that no-one can tell the difference between the computer and a human being. Computers may be able to carry out all kinds of complicated tasks and conversations. This will certainly be impressive. But the question is why we should care to listen. What on earth could this machine possibly have to tell us? I think Rush Rhees said something like this, somewhere.
I forgot to mention that some of my formulations of the central points in the middle are more or less quotes from an online paper in Swedish by Lars Hertzberg.
Thanks, vh (and Lars Hertzberg too). This all sounds right to me. It suggests something curious, namely that there might not be any such thing as what we call thinking or what we mean by thinking such that we can then measure machines against this and say whether they are doing it or not. The question is not can machines think, because really there is no question about what they can do. Nor, as you say, is the question what might they be able to do in future. We can imagine them doing all kinds of things. The question (or the most interesting question, the one that seems to puzzle people) is what should we say about artificial replicants, machines that pass even the hardest conceivable version of the Turing test. And this is not a question of what it would be true to say, although we could put it that way. It's an ethical question, having to do with what it means to have something to say, to be an individual or person, and so on. And the question is not about what we would rightly say in the future but about what we mean now, what is lurking already in our concepts. If we ask what intelligence is or what thinking is or what we really mean by 'thinking', the question is, so to speak, normative. Although another way to put it would be to say that the question is what do we want to call thinking. And that sounds positive. But the question is an invitation to think and judge. It's like a more intellectual version of "Do you want some cake?" If someone answers "I don't know" then they are struggling to decide. It's not as if their mind is made up but they can't work out what it says. And some kind of decision seems to be involved in figuring out what we count as thinking or intelligence. Not that this is all that's required. There is also reminding people (perhaps oneself too) about what of course we would or would not say about this or that case. You give some good examples along those lines.
But cannot "We've passed the Turing test, but we don't know it" be rephrased as "Had we had this machine in 1950, it would have passed the Turing test, but unfortunately the criteria have become more stringent in the meantime"?
I find the evocation of intertemporal change in acceptability to be very relevant. It's one of those insights I instantly wished I'd come up with myself. As a matter of fact, there seems to be practically no lower limit to the primitiveness of what can be mistaken for a human being momentarily, provided that it is a novelty at the time. When the phonograph first emerged in the 1870s, people were completely awestruck (to the extent of fainting or falling down on their knees) by the fact that a human voice could be disembodied and yet be audible. We find it completely ridiculous that the faint and tinny sound of the phonograph, covered in crackles and hiss, could fool anyone for a second. But people at the time simply hadn't had any objects of comparison – ever in their history. The idea of a disembodied voice was only familiar from a religious context – as the voice of God, or of the Devil, or of conscience. It sounded completely uncanny and creepy. Similarly, people wouldn't have had objects of comparison for contemporary AI machines in 1950, when Turing was writing, although we do in 2012.
People to whom I am the only philosopher of their acquaintance have repeatedly asked me about thought experiments where a machine passes the Turing test. My stock response has been that I cannot predict beforehand at all whether or not I would personally call such a machine conscious. And I mean this response seriously: I really cannot. He who waits shall live to see, or perhaps not.
Right. So something might pass the Turing test momentarily, or even for several years, and then fail to do so, once people get used to that kind of thing and come to see (or decide?) that it isn't really intelligent or thinking after all.
Here's what I'm inclined to say. You are right that there is no knowing what, for instance, I might say about certain machines in the future. Nor whether I might later change my mind about what to say. So there isn't much point in asking those questions. But we might come to understand ourselves better if we think about what would be most consistent with our current use of the relevant concepts and the values or preferences inherent or implicit in those uses.