The ethical rules that govern our behavior have evolved over thousands of years, perhaps millions. They are a complex tangle of ideas that differ from one society to another and sometimes even within societies. It’s no surprise that the resulting moral landscape is sometimes hard to navigate, even for humans.
The challenge for machines is even greater now that artificial intelligence faces some of the same moral dilemmas that tax humans. AI is being charged with tasks ranging from assessing loan applications to controlling lethal weapons. Training these machines to make good decisions is not just important, it is a matter of life and death for some people.
And that raises the question of how to teach machines to behave ethically.
Today we get an answer of sorts thanks to the work of Liwei Jiang and colleagues at the Allen Institute for Artificial Intelligence and the University of Washington, both in Seattle. This team has created a comprehensive database of moral dilemmas along with crowdsourced answers, and then used it to train a deep-learning algorithm to answer questions of morality.