A system that interprets brain signals enables human operators to correct the robot's choice in real time. Credit: Jason Dorfman, MIT CSAIL

Baxter the robot can tell the difference between right and wrong actions without its human handlers ever consciously giving a command or even speaking a word. The robot's learning success relies on a system that interprets the human brain's "oops" signals to let Baxter know when a mistake has been made.

The new twist on training robots comes from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University. Researchers have long known that the human brain generates certain error-related signals when it notices a mistake. They created machine-learning software that can recognize and classify those "oops" signals from individual human volunteers within 10 to 30 milliseconds, creating instant feedback for Baxter as it sorted paint cans and wire spools into two different bins in front of the volunteers.

"Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word," said Daniela Rus, director of CSAIL at MIT, in a press release. "A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven't even invented yet."

https://www.youtube.com/watch?v=Zd9WhJPa2Ok

The human volunteers wore electroencephalography (EEG) caps that detect those "oops" signals when the wearer sees Baxter making a mistake. Each volunteer first underwent a short training session in which the machine-learning software learned to recognize that person's specific "oops" signals. Once that was completed, the system could begin giving Baxter instant feedback on whether each human handler approved or disapproved of the robot's actions.

It's still far from a perfect system, or even a 90-percent-accuracy system, when performing in real time.
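The per-volunteer calibration step described above can be illustrated with a toy classifier. This is a hypothetical sketch, not the team's actual system: real EEG decoding works on multichannel voltage traces, while here calibration "windows" are faked as two-number feature vectors, and the classifier is a simple nearest-centroid rule.

```python
# Hypothetical sketch: training a per-volunteer "oops"-signal classifier.
# Feature vectors and the nearest-centroid rule are invented for illustration;
# label 1 = error-related ("oops") signal, label 0 = no error signal.

def train_centroids(windows, labels):
    """Learn one mean feature vector (centroid) per class from a
    short calibration session with a single volunteer."""
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    counts = {0: 0, 1: 0}
    for w, y in zip(windows, labels):
        sums[y][0] += w[0]
        sums[y][1] += w[1]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def classify(window, centroids):
    """Label a new EEG window by its nearest class centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda y: dist2(window, centroids[y]))

# Calibration windows recorded while the volunteer watched correct
# (label 0) and incorrect (label 1) robot actions.
calib = [[0.1, 0.2], [0.0, 0.1], [0.9, 1.0], [1.1, 0.8]]
labels = [0, 0, 1, 1]
model = train_centroids(calib, labels)
print(classify([1.0, 0.9], model))  # prints 1: resembles an "oops" signal
```

Because the model is fit to one volunteer's calibration data, each new handler would repeat the short training session before driving the robot, matching the per-volunteer setup described in the article.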
But the researchers seem confident based on the early trials. The MIT and Boston University team also discovered that they could improve the system's offline performance by focusing on the stronger "oops" signals that the brain generates when it notices so-called "secondary errors." These errors came up when the system misclassified the human brain signals, either by falsely detecting an "oops" signal when the robot was making the correct choice, or by failing to detect the initial "oops" signal when the robot was making the wrong choice. By incorporating the "oops" signals from secondary errors, the researchers boosted the system's overall performance by almost 20 percent.

The system cannot yet process secondary-error signals in live training sessions with Baxter. But once it can, the researchers expect the overall system accuracy to exceed 90 percent.

The research also stands out because it showed that people who had never worn EEG caps before could still learn to train Baxter without much trouble. That bodes well for the possibility of humans intuitively relying on EEG to train their future robot cars, robot humanoids or similar robotic systems. (The study is detailed in a paper recently accepted by the IEEE International Conference on Robotics and Automation (ICRA), scheduled to take place in Singapore this May.)

Such lab experiments may still seem a far cry from future customers instantaneously correcting their household robots or robot car chauffeurs. But the approach could become practical for real-world robot training as researchers improve the system's accuracy and EEG cap technology becomes more user-friendly outside lab settings. Next up for the researchers: using the "oops" system to train Baxter on making the right choice in multiple-choice situations.
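The secondary-error mechanism described above can be sketched as a simple decision rule. This is a hypothetical illustration, not the paper's actual method: the threshold and signal-strength values are invented, and the real system works on decoded EEG rather than clean booleans and floats. The idea it captures is the one in the article: a strong second "oops" signal right after the system acts on its first classification suggests that classification itself was wrong, so the decision is flipped.

```python
# Hypothetical sketch of incorporating "secondary error" signals.
# The threshold and all signal values below are invented for illustration.

SECONDARY_THRESHOLD = 0.8  # assumed strength above which the secondary
                           # "oops" signal overrides the first guess

def final_decision(initial_error_detected, secondary_signal_strength):
    """Return True if the robot's action is ultimately judged an error,
    after accounting for a possible secondary "oops" signal."""
    if secondary_signal_strength > SECONDARY_THRESHOLD:
        # Strong secondary signal: the first classification was itself
        # a mistake (a false alarm or a miss), so flip it.
        return not initial_error_detected
    return initial_error_detected

# False alarm: an error was flagged, but a strong secondary signal says
# that flag was wrong, so the robot's action is treated as correct.
print(final_decision(True, 0.95))   # prints False
# Miss: no error was flagged, but a strong secondary signal flips the
# decision, so the robot's action is treated as an error after all.
print(final_decision(False, 0.9))   # prints True
```

In the article's terms, the two flipped cases correspond exactly to the two kinds of misclassification the secondary signals help catch: false detections and missed detections of the initial "oops" signal.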