Artificial intelligence

AI Actors in Lord of the Rings

As “The Return of the King” takes the world by storm, I think it is fitting to point to this entertaining and informative article from Popular Science last November (after the release of “The Two Towers”). The article explains the AI system used to control the characters in the massive battle scenes.

Each orc in a battle has a “mind of his own”, making decisions and responding to unexpected events. The same goes for every elf, man, uruk-hai, ghost, troll, ent and what have you. The decision trees used in the AI system are quite complex, and even though “intelligence” is hardly a word one would readily associate with an average orc, the AI orcs may have turned out to be cleverer than intended:

“In another early simulation, Jackson and Regelous watched as several thousand characters fought like hell while, in the background, a small contingent of combatants seemed to think better of it and run away. They weren’t programmed to do this. It just happened.” (page 4)
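
Just to make the idea concrete, here is a tiny decision-tree agent sketched in Python. This is of course a toy of my own, not the actual system from the film; the conditions, thresholds and actions are all made up for illustration:

import random

# Toy decision tree for a single battle agent: each node checks a condition
# on the agent's local situation and descends until it reaches an action.
def decide(agent):
    if agent["health"] < 0.2 and not agent["cornered"]:
        return "flee"                      # badly hurt and has an escape route
    if agent["enemies_nearby"] == 0:
        return "advance"                   # nobody to fight yet
    if agent["enemies_nearby"] > 2 * agent["allies_nearby"]:
        # Heavily outnumbered: a small chance of losing heart and running.
        return "flee" if random.random() < 0.3 else "fight"
    return "fight"

# Every agent evaluates its own tree each tick, so unscripted behaviour
# (like a small contingent deciding to run away) can emerge from the crowd.
army = [{"health": random.random(), "cornered": False,
         "enemies_nearby": random.randint(0, 10),
         "allies_nearby": random.randint(0, 10)} for _ in range(10000)]
actions = [decide(agent) for agent in army]
print({action: actions.count(action) for action in set(actions)})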

A Collaborative Approach to the Turing Test

Although its importance may be debated, the Turing Test at least poses a very hard and interesting computer science problem: how do you build a program that can engage in a text conversation with a human being such that the human cannot tell whether it is talking to a computer or to another human?

This is at least the common interpretation of the Turing Test, although the imitation game Turing put forth in “Computing Machinery and Intelligence” actually involved a text conversation in which a computer had to do as well as a male human at making an interrogator believe that it/he was a woman (which is actually a bit different, as it means that both players are imitating, claiming to be something they are not).
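
To make the setup concrete, here is a toy harness for the common interpretation of the test, sketched in Python. The machine's canned reply and the whole interaction are made up for illustration; a serious attempt obviously needs far more than this:

import random

# One interrogation session: the judge sends each question to two hidden
# players, one human and one program, and must then guess which is which.
def machine_reply(question):
    return "That's an interesting question. What do you think?"  # placeholder

def human_reply(question):
    return input("(human player) " + question + "\n> ")

def run_session(num_questions=3):
    players = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:              # hide which label is the machine
        players = {"A": human_reply, "B": machine_reply}
    for i in range(num_questions):
        question = input("Question %d for both players: " % (i + 1))
        for label, reply in players.items():
            print(label + ": " + reply(question))
    guess = input("Which player is the computer, A or B? ").strip().upper()
    print("Correct!" if players.get(guess) is machine_reply else "Fooled you.")

run_session()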

This problem has proven harder to solve than probably even Turing himself realized, and many different solutions have been attempted, most of them failing quite miserably (this is the state of the art). Well, everybody has their own plan for getting rich that fails, so here's my suggestion.
Continue reading

The Turing Test and Extrasensory Perception

Having been interested in Artificial Intelligence for a long time, I'm almost embarrassed to admit that I hadn't read Alan Turing's famous article, “Computing Machinery and Intelligence”, until today. This is the article in which the Imitation Game, later known as the Turing Test, is put forward.

Although I was familiar with most of Turing’s arguments there, reading it was nevertheless truly inspiring (more on that in a moment). Turing’s writing style is brilliant.

In part 6 of the article, Turing explores several views contrary to the notion that machines can think. Many of these are still hotly debated in AI today. One of them, however, and actually the one that Turing seems to find the strongest, struck me as quite odd: how could a thinking machine ever account for extrasensory perception such as “telepathy, clairvoyance, precognition and psychokinesis”, for which “the statistical evidence, at least for telepathy, is overwhelming”?
Continue reading

When People are Cheaper than Technology

Technologically minded people tend to look for technological solutions to the problems they face. Naturally so, but every technological solution can be improved. There is always another solution, simpler and better than the current one. Most inventors will admit that they know plenty of ways to improve on their solutions: an optimization here, a redesign there, and so on. The solutions in use are compromises between the optimal and the practical. Necessarily so, to keep down cost.

But the improvements the designers know about usually remain within the framework originally proposed by the inventor. He or she is too involved in the work to see the big picture and think of a radically new way of attacking the problem. One often-overlooked solution: use people instead of a complex piece of technology.
Continue reading

Translation Tools

Machine translation has been something of a holy grail in AI and language technology for decades, and for good reason. In a world of ever-increasing international business and cooperation, effective communication is crucial. Fast, reliable, automated translation would therefore be of tremendous value, but despite serious efforts it is still far from realization.

When I was in high school, I started thinking about this problem and decided to give it a try: making a program that would translate sentences from Icelandic to English and vice versa. It couldn't be that hard, could it? So off I went, happily ignorant of the enormity of the task. After spending some two or three months of free-time programming on it (a lot at the time), I began to realize what I was getting myself into. There had definitely been progress, but the goal seemed to have moved dramatically further away.
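
A toy word-for-word lookup along the following lines illustrates the naive starting point. This is just an illustrative sketch, not my original program, and the tiny dictionary is obviously made up:

# Naive word-for-word translation: look each word up in a dictionary and
# glue the results together. It ignores inflection, word order and ambiguity,
# which is roughly where the real trouble begins.
is_to_en = {
    "ég": "I",
    "tala": "speak",
    "ekki": "not",
    "ensku": "English",
}

def translate(sentence, dictionary):
    words = sentence.lower().split()
    # Keep unknown words, bracketed, so the gaps in the dictionary show up.
    return " ".join(dictionary.get(word, "[" + word + "]") for word in words)

print(translate("Ég tala ekki ensku", is_to_en))
# -> "I speak not English" (already wrong; it should be "I do not speak English")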

The last thing I want to do is scare off anybody who wants to give it a try, but I thought it might help to share my experience and some of the things I've learned since from various sources, mostly from people who have given the problem a lot more time and thought than I have. Remember that this is one of those problems where not knowing it can't be done is the only way to succeed.
Continue reading

Human vs. Computer – so we haven’t lost at chess?

It came as a shock to many of us when Deep Blue defeated Kasparov in a six-game match in 1997. A computer had beaten the best human player at a game that to many is a defining symbol of human intellect. Even though it was “only chess”, it had to be a sign of the inevitable: the machines would soon be taking over man's role as Earth's ruling race.

But wait a minute. The extremely high profile of the Deep Blue match may have been misleading. In an article at ChessBase, Jeff Sonas explains that the battle is not lost yet. In fact, top chess players have taken on computers seven times since 1997, with every match ending in a draw. Was it just a stroke of luck for Deep Blue back then?
Continue reading

Gathering common sense

As stated in the glossary, one of the problems in Artificial Intelligence is software's lack of common-sense knowledge about the world. Of course, AI is a wide field, and a lack of common sense does not hurt Deep Blue's chess-playing abilities or an OCR program's ability to recognize characters, both of which belong to particular subfields of AI. When it comes to communicating with humans and making sense of natural language, on the other hand, this lack of common sense is the main reason for computers' lousy performance.
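
To get a feel for what this missing knowledge looks like from a program's point of view, here is a toy assertion store sketched in Python. The facts, relation names and the single inference rule are all invented for illustration:

# Common-sense facts as (concept, relation, concept) triples. The program
# only "knows" what has been entered or what follows from a trivial rule.
facts = {
    ("rain", "causes", "wet ground"),
    ("umbrella", "used_for", "staying dry"),
    ("ice cream", "is_a", "food"),
    ("food", "used_for", "eating"),
}

def knows(subject, relation, obj):
    if (subject, relation, obj) in facts:
        return True
    # One trivial inference rule: "used for" propagates along is_a links.
    if relation == "used_for":
        return any(s == subject and r == "is_a" and knows(o, relation, obj)
                   for (s, r, o) in facts)
    return False

print(knows("ice cream", "used_for", "eating"))  # True, inferred via is_a
print(knows("rain", "causes", "traffic jams"))   # False: nobody told it that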

Several projects are attempting to solve this problem, each using different methods to teach computers common sense. This article discusses many of these projects, their approaches and the problems they face.
Continue reading