
How Human Are You? A New Turing Test Relies on Spatial Relations

By Veronique Greenwood
Sep 15, 2011 12:56 AM



Where is the cup? THERE IS NO CUP.

What's the News: Ever since Alan Turing, the father of modern computing, proposed that a sufficiently advanced computer could pass as human in conversation, the classic Turing test has essentially been an exercise in instant messaging. Computers designed to imitate human conversational patterns are often entered by their designers in competitions where they aim to convince people at a distant monitor that they're human---and they do a pretty good job, although some human mimics, like chatbots, sound like crazed children on their first spin in cyberspace ("I'm not a robot, I'm a unicorn!"). But scientists have noticed that humans describe where objects are in space in a specific way, taking into account which spatial relationships would be most useful to a human listener. Artificial intelligences, even fairly sophisticated ones, talk about space differently, and the difference is large enough to form the basis of a new type of Turing test, British scientists reported at a conference in April. Now, New Scientist has developed an interactive version of the test, which lets you see for yourself which statements about space set off your silicon-lifeform alarms. So what's behind it?

How the Heck:

  • The team created a series of 20 computer-generated scenes or still-lifes, with people and objects like trees, knives, and books. We're not talking high art here, but each scene was designed so that there were a number of different ways to describe the location of a given object.

  • Then, they asked both human subjects and an artificial intelligence where certain things, like a character named John or a yellow book, lay in relation to the objects around them. In five of the scenes, the software and the humans gave similar answers. But in the fifteen others, clear differences arose. Where a human would say that the coffee cup in the image above is on the table or on the mat, the software referred to it as "left of the lamp" or "in front of the chair." Technically, that's not wrong, but it's not the kind of response a human would give.

  • As the researchers wrote in a 2008 paper exploring this kind of spatial logic in artificial intelligences, a human recognition of the usefulness of a description is required to perform this task well. "A cup may be usefully described as ‘on the table’ even if it is actually ‘on’ a saucer which is ‘on’ a mat which is ‘on’ a tablecloth which is ‘on’ the table," they write. "The cup might not usefully be described as ‘on’ the saucer, since the saucer is as mobile as the cup and does not help a listener find the cup."
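The researchers' "usefulness" criterion can be captured in a few lines: when several "on" relations are all technically true, prefer the landmark a listener can actually use to find the object, i.e. the most stable, least mobile supporter in the stack. The sketch below is purely illustrative---the object names, mobility scores, and `describe_on` helper are invented, not taken from the paper's implementation:

```python
# Hypothetical sketch of the cup/saucer/table example: every object in the
# support chain transitively supports the cup, so any of them yields a true
# "on" statement. A useful description picks the least mobile landmark.
# Mobility scores (0 = fixed furniture, 1 = easily moved) are made up.
MOBILITY = {
    "table": 0.0,
    "tablecloth": 0.3,
    "mat": 0.5,
    "saucer": 0.9,
}

# Support chain from the paper's example: the cup sits on a saucer,
# which is on a mat, on a tablecloth, on the table.
SUPPORT_CHAIN = ["saucer", "mat", "tablecloth", "table"]

def describe_on(target: str, supports: list[str]) -> str:
    """Pick the most useful 'on' landmark: the least mobile supporter."""
    landmark = min(supports, key=lambda obj: MOBILITY[obj])
    return f"The {target} is on the {landmark}."

print(describe_on("cup", SUPPORT_CHAIN))  # prefers the table over the saucer
```

Under this heuristic the saucer loses precisely because it is as mobile as the cup itself: a landmark that moves with the target tells the listener nothing about where to look.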

What's the Context:

  • Making such spatial statements is a pretty sophisticated mental process. Humans exclude certain variations---like "in" the saucer or "over" the table---on the basis of an intuitive understanding of physics, how the objects are used, and how pairs or groups of objects usually interact with each other. But these patterns of "in" and "on" are reminiscent of certain linguistic differences.

  • In Korean, for example, when two objects touch, the speaker specifies whether they are linked tightly or loosely, rather than using a word like "in" or "on." If the scientists (who are based in the UK) were to do similar tests with native speakers of languages where the terms are different, would it be easier or harder for a machine intelligence to impersonate a human? In the Korean example, which depends heavily on an understanding of how objects interact in the physical world, an AI's responses might be even more obvious than they are in English.

  • The test's not foolproof, of course---we're not so mysterious that a machine can't spoof us at least some of the time. Explore the interactive version yourself, and see whether you recognize a machine response 100% of the time---I'm betting you can't.

The Future Holds:

  • This team's detailed characterization of human and machine descriptions of relationships could be useful in a variety of ways. For instance, if we can get machines to think more like people in this respect, they could perform certain spatial tasks better, perhaps giving driving directions that rely on the big old oak tree on the left and the red barn on the right rather than "slight lefts" and names of roads that, in the real world, aren't marked.

  • Realistically, it's unlikely that you'll need to whip out these scenes to test whether someone you met in a chatroom is man or machine. But the fact that it is possible to tell an intelligence's true nature from a description of space is indicative of how truly elaborate our brains are, and of how high the hurdles are on the way to computers that can recreate every aspect of them.

[via New Scientist]


Image courtesy of Barclay et al, via New Scientist
