Having an accomplice makes malicious behavior easier, especially when one person pulls the strings while someone else does the dirty work. That way, the mastermind gets what they want while distancing themselves from the consequences. But what happens when that accomplice isn’t human, but a machine?
“Using AI creates a convenient moral distance between people and their actions — it can induce them to request behaviors they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” said Zoe Rahwan of the Max Planck Institute for Human Development in a statement.
Rahwan and a team of researchers from Germany and France recently put this to the test in a study published in Nature. Across four experiments and nearly 7,000 participants, they found people were far more likely to act dishonestly when teaming up with AI agents than when working with other humans.
The results suggest a troubling rise in ...