by Catie Cuan · Jun 30, 2022
This piece was first published on LinkedIn.
At Everyday Robots, we're developing robots that help us with tasks in our everyday lives. As part of this work, we're confronting some of the most challenging problems in robotics, including how to make decisions in the unstructured environments of the real world. A common approach to help robots learn is to train them to perceive, think, and act in discrete increments. However, this can make them seem clumsy because they need to stop moving whenever they start to think!
This may be fine in simulated environments and turn-based games like Go, where reinforcement learning (RL) agents can take all the time they need while the world stands still. Our world, however, is constantly changing. If a cup falls and rolls across a table, for example, a robot has to respond by steering its hand toward a moving target. To help us with tasks in our daily lives, robots will need to be able to think and act concurrently.
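To make that difference concrete, here is a minimal sketch of the two control loops in Python. The Robot and policy interfaces are hypothetical stand-ins I made up for illustration, not Everyday Robots' actual APIs. In the classic loop the robot halts while the policy computes; in the concurrent loop, inference overlaps with motion.

```python
# Hypothetical interfaces for illustration; not Everyday Robots' actual API.
class Robot:
    def observe(self):            # latest camera image / joint state
        ...
    def execute(self, action):    # start a motion command (non-blocking)
        ...
    def stop(self):               # halt all motion
        ...

def blocking_loop(robot, policy):
    """Classic sense-think-act: the robot freezes while the policy thinks."""
    while True:
        robot.stop()              # stand still so the observation stays valid
        obs = robot.observe()
        action = policy(obs)      # the world waits (fine in Go, not in a kitchen)
        robot.execute(action)

def concurrent_loop(robot, policy):
    """Concurrent control: choose the next action while the current one runs."""
    action = policy(robot.observe())
    while True:
        robot.execute(action)     # motion starts immediately and keeps going
        obs = robot.observe()     # snapshot taken while the arm is still moving
        action = policy(obs)      # inference overlaps with execution
```

Because the concurrent loop's observation is captured mid-motion, the policy has to account for where the robot will be by the time its new command takes effect, which is exactly the challenge the work below takes on.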
In a recent collaboration between X, the moonshot factory, and our friends from #GoogleAI, published at #ICLR2020, we showed how common RL methods can be extended so that robots think and move in parallel. As a result, our robots moved more smoothly and grasped up to 50% faster. Just look at the difference:
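The post doesn't spell out the mechanism, but one natural way to let a learned policy reason about in-flight motion, roughly in the spirit of the concurrent-control idea in that ICLR 2020 paper, is to augment each observation with the action that is still executing and how long it has been running. A rough sketch; the function, names, and shapes here are my assumptions, not the paper's code:

```python
import numpy as np

def concurrent_observation(obs, prev_action, time_since_command):
    """Build an augmented state for concurrent control (illustrative only).

    Appending the still-executing command and its elapsed time lets the
    policy or value network predict where the robot will be by the time
    the next action takes effect, instead of assuming the world froze.
    """
    return np.concatenate([obs, prev_action, [time_since_command]])

# Hypothetical usage inside the concurrent loop sketched above:
# obs_aug = concurrent_observation(obs, action, t_elapsed)
# next_action = policy(obs_aug)
```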