If a user tries to teach a robot to pick up a mug but demonstrates only with a white mug, the robot could learn that all mugs are white. To counter this, when the robot fails, the system uses an algorithm to generate counterfactual explanations that describe what would have needed to change for the robot to succeed. For instance, maybe the robot would have been able to pick up the mug if the mug were a different color. The researchers tested this technique in simulations and found that it could teach a robot more efficiently than other methods. "And this counterfactual step is what allows human reasoning to be translated into robot reasoning in a way that makes sense," she says.
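The article does not spell out how those counterfactuals are used during learning. As a minimal illustrative sketch only, assuming the system perturbs a task-irrelevant attribute such as color and keeps variants that still succeed in simulation, a data-augmentation loop might look like this (all names here, including Demonstration and simulate_pickup, are hypothetical):

```python
# Illustrative sketch, not the researchers' actual algorithm.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Demonstration:
    object_type: str      # e.g. "mug"
    color: str            # attribute the robot might wrongly treat as essential
    grasp_pose: tuple     # simplified stand-in for the demonstrated trajectory

def simulate_pickup(demo: Demonstration) -> bool:
    """Stand-in for a simulator: success here depends only on the grasp pose,
    never on color, so color is a task-irrelevant attribute."""
    return demo.grasp_pose == ("top", "centered")

def counterfactual_augment(demo: Demonstration, colors, trials=10):
    """Create counterfactual variants of one demonstration by changing color,
    keeping only variants that still succeed in simulation. Training on these
    shows the learner that color did not matter for success."""
    variants = []
    for _ in range(trials):
        variant = replace(demo, color=random.choice(colors))
        if simulate_pickup(variant):
            variants.append(variant)
    return variants

if __name__ == "__main__":
    human_demo = Demonstration("mug", "white", ("top", "centered"))
    augmented = counterfactual_augment(human_demo, ["red", "blue", "green"])
    # The augmented set contains successful pickups of non-white mugs,
    # so a learner trained on it cannot conclude that all mugs are white.
    for d in augmented[:3]:
        print(d)
```

In this sketch the counterfactual step simply varies one attribute of the human demonstration and checks the outcome, which is one plausible way the idea described in the article could be realized.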