Robotics and its ongoing impact on humanity, particularly the workforce, are frequent topics of discussion for Constellation Research. Now, a team at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), along with Boston University, is developing technology that targets an important sub-topic within the robotics debate: how humans may interact with robots in the future. Here are the key details from an MIT report:
What if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?
A feedback system developed at MIT enables human operators to correct a robot's choice in real time using only brain signals.
Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect if a person notices an error as a robot performs an object-sorting task. The team’s novel machine-learning algorithms enable the system to classify brain waves in the space of 10 to 30 milliseconds.
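The report doesn't describe the team's classifiers in detail, but the basic idea of flagging an ErrP in a short EEG window can be sketched. The following is a toy illustration on synthetic data, not the team's method: a single simulated channel stands in for multi-channel EEG, and a simple amplitude threshold stands in for their machine-learning algorithms. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_window(has_errp, n_samples=64):
    """Simulate a short snippet of one EEG channel.

    An error-related potential is stylized here as a negative
    deflection in the middle of the window (illustrative only).
    """
    signal = rng.normal(0.0, 1.0, n_samples)
    if has_errp:
        signal[20:40] -= 3.0  # stylized error-related deflection
    return signal

def detect_errp(window, threshold=-1.0):
    """Flag an ErrP when the mid-window mean amplitude dips below threshold."""
    return window[20:40].mean() < threshold

# Quick check on synthetic data: how often the rule fires with and
# without a simulated ErrP present.
hits = sum(detect_errp(make_window(True)) for _ in range(100))
false_alarms = sum(detect_errp(make_window(False)) for _ in range(100))
print(hits, false_alarms)
```

In a real system the decision would come from a classifier trained on labeled EEG recordings; the point of the sketch is just that the per-window decision is cheap enough to run in the 10-to-30-millisecond budget the team describes.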
While the system currently handles relatively simple binary-choice activities, the paper’s senior author says that the work suggests that we could one day control robots in much more intuitive ways.
“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” says CSAIL Director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”
EEG has been used in robot control systems before, but those systems required human operators to train themselves to think in a regimented, machine-readable manner. The MIT system takes a different approach:
Rus’ team wanted to make the experience more natural. To do that, they focused on brain signals called “error-related potentials” (ErrPs), which are generated whenever our brains notice a mistake. As the robot indicates which choice it plans to make, the system uses ErrPs to determine if the human agrees with the decision.
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.”
The researchers used a Baxter robot from Rethink Robotics in the project. A human participant watched as the robot sorted items, and when the system detected ErrPs, the robot would correct its movements.
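The closed loop described above is simple to sketch for a binary-choice task: the robot commits to one of two bins, and a detected ErrP flips it to the other. This is a hypothetical illustration, not the team's actual code; `choose_bin`, the bin names, and the stubbed-in `errp_detected` flag (which would come from the EEG decoder) are all assumptions.

```python
BINS = ("left", "right")

def choose_bin(item):
    """Robot's (possibly wrong) initial guess; a trivial rule for illustration."""
    return "left" if item.startswith("paint") else "right"

def sort_with_feedback(item, errp_detected):
    """If the operator's brain signals disagreement, switch to the other bin."""
    choice = choose_bin(item)
    if errp_detected:  # EEG decoder flagged an error-related potential
        choice = BINS[1 - BINS.index(choice)]
    return choice

# The robot misfiles an item; a detected ErrP corrects the choice.
print(sort_with_feedback("wire spool", errp_detected=True))
```

With only two options, "the human disagreed" fully determines the correction; that is why the current system handles binary choices, and why richer tasks would need decoding more than a single agree/disagree signal.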
While the work is in its early days, it certainly shows promise. And as the researchers refine their system, commercial EEG headsets are likely to improve in parallel, making brain-based interfaces more practical.
The researchers' paper, which is worth a read, will be presented at a robotics conference in Singapore this May.