Professor Dorsa Sadigh and her team have integrated algorithms in a novel way that makes controlling assistive robotic arms faster and easier. The team hopes their research will enable people with disabilities to conduct everyday tasks, such as cooking and eating, on their own.
Dorsa's team, which included engineering graduate student Hong Jun Jeon and computer science postdoctoral scholar Dylan P. Losey, developed a controller that blends two artificial intelligence algorithms. The first, developed by Dorsa's group, enables two-dimensional joystick control without the need to switch between modes. It uses contextual cues to determine, for example, whether a user is reaching for a doorknob or a drinking cup. Then, as the robot arm nears its destination, the second algorithm kicks in to allow more precise movements, with control shared between the human and the robot.
In shared autonomy, the robot begins with a set of "beliefs" about what the controller is telling it to do and gains confidence about the goal as additional instructions are given. Since robots aren't actually sentient, these beliefs are really just probabilities. For example, faced with two cups of water, a robot might begin with the belief that there's an even chance it should pick up either one. But as the joystick directs it toward one cup and away from the other, the robot gains confidence about the goal and can begin to take over, sharing autonomy with the user to control the arm more precisely. The amount of control the robot assumes is probabilistic as well: if the robot is 80 percent confident that it's heading for cup A rather than cup B, it takes 80 percent of the control while the human keeps 20 percent, Sadigh explains.
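The mechanics described above can be sketched in a few lines of code. This is a minimal illustration, not the team's actual implementation: it assumes a simple Bayesian update in which joystick directions that point toward a goal raise that goal's probability, and a blending rule in which the robot's share of control equals its confidence in the most likely goal. The `rationality` parameter and the alignment-based likelihood are illustrative assumptions.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def update_beliefs(beliefs, goals, position, user_input, rationality=5.0):
    """Bayesian update: goals the user steers toward gain probability.

    Each goal's likelihood is scored by how well the joystick direction
    aligns with the direction from the arm to that goal (an assumption
    made for this sketch).
    """
    likelihoods = []
    for goal, prior in zip(goals, beliefs):
        to_goal = [g - p for g, p in zip(goal, position)]
        scale = norm(to_goal) * norm(user_input)
        align = dot(to_goal, user_input) / scale if scale > 0 else 0.0
        likelihoods.append(prior * math.exp(rationality * align))
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

def blended_command(beliefs, goals, position, user_input):
    """Share control: the robot's weight equals its top-goal confidence.

    With 80 percent confidence, the robot contributes 80 percent of the
    command and the human's joystick input contributes 20 percent.
    """
    confidence = max(beliefs)
    best_goal = goals[beliefs.index(confidence)]
    to_goal = [g - p for g, p in zip(best_goal, position)]
    robot_input = [x / norm(to_goal) for x in to_goal]  # straight to goal
    return [confidence * r + (1 - confidence) * u
            for r, u in zip(robot_input, user_input)]

# Two cups; belief starts even, then the user pushes toward cup A.
goals = [[1.0, 0.0], [-1.0, 0.0]]   # cup A, cup B
beliefs = [0.5, 0.5]
position = [0.0, 0.0]
user_input = [1.0, 0.1]             # joystick nudged toward cup A

beliefs = update_beliefs(beliefs, goals, position, user_input)
command = blended_command(beliefs, goals, position, user_input)
```

After one update, nearly all the probability mass sits on cup A, so the blended command points almost straight at it; with more ambiguous input, the user's joystick signal would dominate instead.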
Excerpted from HAI (Human-Centered Artificial Intelligence), "Assistive Feeding: AI Improves Control of Robot Arms"
Video, "Shared Autonomy with Learned Latent Actions"