Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics. While manually-designed controllers have been able to emulate many complex behaviors, building such controllers involves a time-consuming and difficult development process, often requiring substantial expertise in the nuances of each skill. Reinforcement learning provides an appealing alternative for automating the manual effort involved in the development of controllers. However, designing learning objectives that elicit the desired behaviors from an agent can also require a great deal of skill-specific expertise. In this work, we present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals. We show that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire of behaviors for legged robots. By incorporating sample-efficient domain adaptation techniques into the training process, our system is able to learn adaptive policies in simulation that can then be quickly adapted for real-world deployment. To demonstrate the effectiveness of our system, we train an 18-DoF quadruped robot to perform a variety of agile behaviors ranging from different locomotion gaits to dynamic hops and turns.
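The core idea of learning from reference motion data can be sketched as a tracking reward: at each timestep the policy is rewarded for matching the pose of the reference animal motion. The sketch below is a minimal illustration of that idea, not the paper's exact formulation; the function name, the error measure, and the scale factor are illustrative assumptions.

```python
import numpy as np

def imitation_reward(robot_pose, ref_pose, scale=5.0):
    """Reward for tracking one frame of a reference motion.

    robot_pose, ref_pose: arrays of joint angles in radians
    (e.g. length 18 for an 18-DoF quadruped).
    Returns a value in (0, 1]; 1.0 means a perfect pose match,
    and the reward decays exponentially as the pose error grows.
    NOTE: the squared-error term and scale=5.0 are assumptions
    made for this sketch.
    """
    err = np.sum((np.asarray(robot_pose) - np.asarray(ref_pose)) ** 2)
    return float(np.exp(-scale * err))

# A perfectly tracked frame yields reward 1.0; a poorly tracked
# frame yields a reward close to 0.
perfect = imitation_reward(np.zeros(18), np.zeros(18))   # 1.0
```

In practice such a per-frame reward is summed over a motion clip and maximized with a standard reinforcement learning algorithm, so a single reward definition can cover many different reference behaviors.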
