Autonomous Mobile Manipulation with Legged Robots

Legged robots have the potential to outperform their wheeled counterparts when operating in unstructured environments or traversing rough terrain. Recent advances from Boston Dynamics, Ghost Robotics, ANYbotics, Unitree Robotics, and other companies show that legged robots are becoming increasingly capable locomotors, and they have recently seen some commercial success. However, legged robots should also be able to interact with their environment in order to perform useful tasks. Because they can use their limbs as manipulators (an affordance not available to other platforms), they should be able to appropriately affect and modify their surroundings when needed.


In this project, we seek to develop algorithms with provable guarantees for accomplishing mobile manipulation tasks with legged robots. Specifically, the robot's behaviors are planned and executed by a three-layer hierarchical architecture consisting of: an offline symbolic task and motion planner; a reactive layer that tracks the reference output of the deliberative layer while avoiding unanticipated obstacles sensed online; and a gait layer that realizes the abstract commands of the reactive module through appropriately coordinated joint-level torque feedback loops.
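The three-layer stack can be sketched as a minimal pipeline. All class names, interfaces, and numerical choices below are illustrative assumptions for exposition, not the project's actual implementation: the planner is stubbed as a straight-line waypoint generator, the reactive layer as a simple potential-field controller, and the gait layer as a pass-through to downstream joint controllers.

```python
import math

class TaskPlanner:
    """Offline deliberative layer: emits a reference plan (here, waypoints)."""
    def plan(self, goal):
        # A real planner would solve a symbolic task-and-motion problem;
        # here we simply return a straight-line sequence of (x, y) waypoints.
        steps = 4
        return [(goal[0] * i / steps, goal[1] * i / steps)
                for i in range(1, steps + 1)]

class ReactiveLayer:
    """Tracks the planner's reference while deflecting around obstacles sensed online."""
    def command(self, pos, waypoint, obstacles):
        # Attractive term toward the current waypoint.
        ax, ay = waypoint[0] - pos[0], waypoint[1] - pos[1]
        # Repulsive term from nearby obstacles (classic potential field).
        for ox, oy in obstacles:
            dx, dy = pos[0] - ox, pos[1] - oy
            d = math.hypot(dx, dy)
            if 1e-9 < d < 1.0:  # assumed 1 m influence radius
                gain = (1.0 / d - 1.0) / d ** 2
                ax += gain * dx
                ay += gain * dy
        n = math.hypot(ax, ay)
        # Return a unit velocity direction (abstract command for the gait layer).
        return (ax / n, ay / n) if n > 1e-9 else (0.0, 0.0)

class GaitLayer:
    """Maps the abstract velocity command to joint-level setpoints (stub)."""
    def realize(self, vel_cmd):
        # A real gait layer runs coordinated joint-torque feedback loops;
        # here we just package the command for downstream joint controllers.
        return {"vx": vel_cmd[0], "vy": vel_cmd[1]}

# Wiring the three layers together:
planner, reactive, gait = TaskPlanner(), ReactiveLayer(), GaitLayer()
waypoints = planner.plan(goal=(2.0, 2.0))
vel = reactive.command(pos=(0.0, 0.0), waypoint=waypoints[0],
                       obstacles=[(0.4, 0.1)])
joint_cmd = gait.realize(vel)
```

The key design point this sketch illustrates is the separation of time scales: the planner runs offline, the reactive layer replans directions at sensing rate, and the gait layer closes the fastest loops at the joint level.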


Moreover, we seek to push the limits of the reactive layer by rendering the robot capable of recognizing unanticipated conditions at execution time and successfully recovering from them. We expect this capability to be particularly important in unstructured environments, where communication with the offline task planner is difficult or impossible, and to enhance the autonomy of agents that must cooperate with and use their environment in order to satisfy their goals.
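One common way to detect such unanticipated conditions, sketched here purely as an assumption about what the reactive layer could include (none of these names come from the project), is an execution monitor that watches whether tracking error keeps making progress and triggers a local recovery behavior when it stalls.

```python
class ExecutionMonitor:
    """Flags an unanticipated condition when tracking error stops shrinking."""
    def __init__(self, patience=3, tol=1e-3):
        self.patience = patience   # consecutive stalled steps before flagging
        self.tol = tol             # minimum improvement that counts as progress
        self.best = float("inf")
        self.stalls = 0

    def update(self, tracking_error):
        # Progress means the error to the current reference keeps decreasing.
        if tracking_error < self.best - self.tol:
            self.best = tracking_error
            self.stalls = 0
        else:
            self.stalls += 1
        return self.stalls >= self.patience  # True -> trigger local recovery

monitor = ExecutionMonitor(patience=3)
errors = [1.0, 0.8, 0.8, 0.8, 0.8]   # progress stalls after the second step
flags = [monitor.update(e) for e in errors]
```

Such a monitor lets the robot react locally, without round-tripping to the offline planner, which matches the communication-constrained setting described above.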