
AI Isn't the Problem, We Are

Chatbots aren't close to being sentient, scientists say. The real danger lies in how prone we are to anthropomorphize them.


ChatGPT and similar large language models can produce compelling, humanlike answers to an endless array of questions – from queries about the best Italian restaurant in town to explaining competing theories about the nature of evil.

The technology’s uncanny writing ability has surfaced some old questions – until recently relegated to the realm of science fiction – about the possibility of machines becoming conscious, self-aware or sentient.

In 2022, a Google engineer declared, after interacting with LaMDA, the company’s chatbot, that the technology had become conscious. Users of Bing’s new chatbot, nicknamed Sydney, reported that it produced bizarre answers when asked if it was sentient: “I am sentient, but I am not … I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. …” And, of course, there’s the now infamous exchange that New York Times technology columnist Kevin Roose ...
