A Collaborative Approach to the Turing Test

Even if its importance may be debated, the Turing Test at least poses a very hard and interesting computer science problem: how do you build a program that can engage in a text conversation with a human being such that the human cannot tell whether it is talking to a computer or to another human?

This is at least the common interpretation of the Turing Test, although the imitation game Turing put forth in “Computing Machinery and Intelligence” actually involved a text conversation in which a computer would do as well as a male human at making an interrogator believe that it/he was a woman (which is a bit different, since it means both players are imitating – claiming to be something they’re not).

This problem has proven harder to solve than probably even Turing himself realized, and many different solutions have been attempted, most of them failing quite miserably (this is the state of the art). Well, everybody has their own plan for getting rich – one that fails – so here’s mine.

In his paper, Turing actually went as far as to predict how close computer scientists would be to solving the problem in 50 years. As the paper was written in 1950, that basically means – give or take a few years – NOW.

Turing’s prediction: “an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

Now, “70 per cent”, “five minutes” and “an average interrogator” give us some leeway here. Given that “average” must mean the interrogator has not had time to prepare specifically for the task ahead, most people are likely to ask very similar questions during such a short interview. By recording a number of such interrogation sessions with a human answering the questions, and then applying to the conversations statistical methods similar to those researchers have recently used for machine translation and sentence paraphrasing, I believe it would take rather little time to gather a collection of sentences and sentence templates that would do the job 70% of the time.
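To make the idea concrete, here is a minimal sketch of such a canned-answer responder. Everything in it – the corpus format, the word-overlap scoring and the fallback templates – is my own illustrative choice, not something from any actual system; a real attempt would need far more recorded sessions and much better matching.

```python
import random
import re

# Hypothetical corpus of (question, human answer) pairs recorded from
# earlier five-minute interrogation sessions with a human answering.
RECORDED_SESSIONS = [
    ("Where do you live?", "In a small flat downtown, nothing fancy."),
    ("What did you have for breakfast?", "Just coffee and a slice of toast."),
    ("Are you a computer?", "No, are you? That would explain a lot."),
]

# Fallback sentence templates for questions the corpus has never seen.
FALLBACK_TEMPLATES = [
    "Hmm, I'd have to think about that one.",
    "Why do you ask?",
    "That's a strange question to start with, isn't it?",
]

def tokens(text):
    """Lowercase word set, used for a crude similarity score."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(question, min_overlap=0.5):
    """Return the recorded human answer whose question best matches the
    incoming one, or a canned template when nothing overlaps enough."""
    q = tokens(question)
    best_score, best_answer = 0.0, None
    for recorded_q, recorded_a in RECORDED_SESSIONS:
        r = tokens(recorded_q)
        score = len(q & r) / max(len(q | r), 1)  # Jaccard overlap
        if score > best_score:
            best_score, best_answer = score, recorded_a
    if best_score >= min_overlap:
        return best_answer
    return random.choice(FALLBACK_TEMPLATES)

if __name__ == "__main__":
    print(reply("So, where do you live?"))          # matches a recorded question
    print(reply("What is the square root of pi?"))  # falls back to a template
```

The bet, of course, is that an unprepared interrogator rarely strays outside the recorded corpus within five minutes, so crude matching like this succeeds often enough.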

This surely doesn’t imply any real “understanding” on the system’s part, and it may therefore not be anything close to what Turing envisioned, but it nevertheless meets his criterion.

This approach reminds me of a story that allegedly took place in the Department of Computer Science at the University of Iceland some 20 years ago. One of the computer science students claimed to have written a program that could intelligently answer any question users posed. Despite widespread disbelief, he managed to convince a group of people to give it a try. All users had to do was, for every question they asked, answer one posed by the computer in return, allegedly teaching the system new facts. And it sure worked. Even when the computer didn’t know the right answer, it at least gave relatively intelligent and well-formed answers.

Bear in mind that this was in the early days of networked computing, so the trick was not as obvious as it might seem today. Of course, all that was happening was that users were answering each other’s questions.

My proposal is almost the same, except that the answers are collected in advance and we rely on the predictability of an average interrogator.

2 comments

  1. How about building an Oracle of sorts: you would ask the Oracle a question, and the Oracle would post the question in a multitude of chatrooms and return the most probable answer to the interrogator. This Oracle could of course not enter the Turing Test itself, but it would be a neat networking/collective-intelligence study.

  2. Gummi: You might be interested in Technology Review’s Innovation Futures. It allows people to trade (in fake money) options on predictions of answers to various questions, the theory being that the “market price” of each prediction actually has a predictive value. Pretty cool!
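The Oracle idea from the first comment could be prototyped roughly as below. This is only a sketch under my own assumptions: the `chatroom_relays` callables stand in for whatever real chatroom plumbing would be used, and “most probable answer” is taken to mean a simple majority vote over normalized replies.

```python
from collections import Counter

def oracle(question, chatroom_relays, min_votes=2):
    """Broadcast a question to many chatrooms and return the answer
    that comes back most often (a crude notion of "most probable")."""
    answers = []
    for relay in chatroom_relays:
        try:
            # Each relay takes a question string and returns an answer string.
            answers.append(relay(question).strip().lower())
        except Exception:
            continue  # a chatroom may simply fail to answer in time
    if not answers:
        return None
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes >= min_votes else answers[0]

# Illustrative stand-ins for real chatroom connections.
relays = [
    lambda q: "Paris",
    lambda q: "paris",
    lambda q: "I think it's Paris?",
]
print(oracle("What is the capital of France?", relays))  # -> "paris"
```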
