MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

Nicklas Hansen,  Yixin Lin,  Hao Su,  Xiaolong Wang,
Vikash Kumar,  Aravind Rajeswaran

Meta AI, UC San Diego

Figure: Success rate (%) in sparse reward tasks. Given only 5 human demonstrations and a limited amount of online interaction, our method solves 21 hard robotics tasks from pixels, including dexterous manipulation, pick-and-place, and locomotion, while baselines fail to solve most tasks with limited data.

Abstract

Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms in real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample-efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 160%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations).

Results

Our model-based method, MoDem, solves challenging visuo-motor control tasks with sparse rewards and high-dimensional action spaces in 100K interaction steps given only 5 demonstrations, outperforming prior state-of-the-art methods by a large margin in this setting.

Below, we visualize trajectories generated by our method for a subset of the 18 sparse reward tasks that we consider.

Method

We find that leveraging just a handful of demonstrations can dramatically improve the sample-efficiency of model-based RL, but simply appending demonstrations to the interaction dataset does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. Concretely, our framework consists of the following phases (see the sketch after this list):

1. Policy pretraining: the policy and visual encoder are pretrained on the demonstrations with behavior cloning.
2. Seeding: the pretrained policy, with added exploration noise, is rolled out in the environment to collect targeted exploration data for initial world model learning.
3. Interactive learning: the agent continues to improve through online interaction, oversampling demonstration data in every update.
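
To make the structure concrete, the sketch below outlines the three phases in Python. It is a minimal illustration: the component interfaces (policy, env, buffer, demos, agent) and hyperparameters such as the demonstration batch fraction are illustrative assumptions, not the released implementation.

# Minimal sketch of the three-phase procedure, assuming hypothetical
# component interfaces; illustration only, not the authors' released code.

def phase1_policy_pretraining(policy, demos, num_updates):
    # Phase 1: behavior-clone the policy (and visual encoder) on the demonstrations.
    for _ in range(num_updates):
        obs, action = demos.sample()           # image observation and expert action
        policy.update_bc(obs, action)          # supervised regression toward the expert action

def phase2_seeding(env, policy, buffer, num_episodes, noise_std):
    # Phase 2: targeted exploration -- roll out the pretrained policy with
    # exploration noise to seed the replay buffer with informative trajectories.
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            action = policy.act(obs, noise_std=noise_std)
            next_obs, reward, done = env.step(action)
            buffer.add(obs, action, reward, next_obs, done)
            obs = next_obs

def phase3_interactive_learning(env, agent, buffer, demos, num_steps,
                                batch_size=256, demo_fraction=0.25):
    # Phase 3: online model-based RL; every gradient update oversamples the
    # demonstrations by drawing a fixed fraction of each batch from them.
    obs = env.reset()
    for _ in range(num_steps):
        action = agent.plan(obs)               # act using the learned world model
        next_obs, reward, done = env.step(action)
        buffer.add(obs, action, reward, next_obs, done)
        obs = env.reset() if done else next_obs

        n_demo = int(demo_fraction * batch_size)
        batch = buffer.sample(batch_size - n_demo) + demos.sample_batch(n_demo)
        agent.update(batch)                    # joint world model and policy update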

Citation

If you use our method or code in your research, please consider citing the paper as follows:

@article{hansen2022modem,
  title={MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations},
  author={Nicklas Hansen and Yixin Lin and Hao Su and Xiaolong Wang and Vikash Kumar and Aravind Rajeswaran},
  journal={arXiv preprint},
  year={2022}
}
Correspondence to Nicklas Hansen. Website based on TD-MPC and Nerfies.