Self-Supervised Policy Adaptation
during Deployment

Nicklas Hansen

Yu Sun

Pieter Abbeel

Alexei A. Efros

Lerrel Pinto

Xiaolong Wang


Abstract

In most real-world scenarios, a policy trained by reinforcement learning in one environment needs to be deployed in another, potentially quite different environment. However, generalization across different environments is known to be hard. A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal. Our work explores the use of self-supervision to allow the policy to continue training after deployment without using any rewards. While previous methods explicitly anticipate changes in the new environment, we assume no prior knowledge of those changes yet still obtain significant improvements. Empirical evaluations are performed on diverse environments from the DeepMind Control suite and ViZDoom. Our method improves generalization in 25 out of 30 environments across various tasks, and outperforms domain randomization on a majority of environments.

Non-stationary environments

We evaluate on a collection of natural video backgrounds and show that Policy Adaptation during Deployment (PAD) continuously adapts to changes in the environment. Here, we compare our method to a non-adaptive SAC trained with an inverse dynamics model (denoted SAC+IDM), as well as to CURL (Srinivas et al.), a recently proposed contrastive method.

[Figure: comparison of SAC+IDM, CURL (Srinivas et al.), and SAC+IDM (PAD) in non-stationary environments.]

Stationary environments

We evaluate on randomized environments and show that Policy Adaptation during Deployment (PAD) outperforms both CURL (Srinivas et al.) and a non-adaptive SAC trained with an inverse dynamics model (denoted SAC+IDM) on a majority of tasks, while only minimally impacting performance in the original (training) environment.

[Figure: comparison of SAC+IDM, CURL (Srinivas et al.), and SAC+IDM (PAD) in stationary (randomized) environments.]

Additionally, we find that the relative improvement from PAD increases over time, measured across a wide range of environments and tasks.

For experimental details and results on both DeepMind Control and ViZDoom, please refer to our paper.

Method

During real-world deployment of learned policies, reward signals are typically inaccessible. We propose a framework for Policy Adaptation during Deployment using self-supervision, which does not require a reward signal and allows for fast adaptation to diverse and continuously changing environments. Our method requires no prior knowledge about the nature of environment changes and, crucially, we also show that it generally does not degrade the performance of a policy in its original training environment.
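As a rough illustration of this idea, the sketch below shows a test-time adaptation loop: the agent acts, observes a transition, and takes one gradient step on a self-supervised inverse dynamics objective, updating only the shared encoder. It is a minimal sketch under simplifying assumptions (vector observations, randomly generated stand-in transitions, an MSE inverse dynamics loss, untrained placeholder networks) and is not the released implementation; all names in it are illustrative.

# Minimal, self-contained sketch of a test-time adaptation loop with an
# inverse dynamics objective. Placeholder networks and random tensors stand
# in for a pretrained agent and a real environment.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, feat_dim = 32, 4, 64

encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())  # shared encoder (pretrained in practice)
policy = nn.Linear(feat_dim, act_dim)                             # policy head (kept frozen at deployment)
idm = nn.Linear(2 * feat_dim, act_dim)                            # inverse dynamics head: (phi(o_t), phi(o_t+1)) -> a_t
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)       # only the encoder is adapted

obs = torch.randn(1, obs_dim)                                     # stand-in for the first observation
for step in range(100):
    with torch.no_grad():
        action = torch.tanh(policy(encoder(obs)))                 # act with the current (adapted) encoder

    next_obs = torch.randn(1, obs_dim)                            # stand-in for stepping the environment; no reward is used

    # Self-supervised update: one gradient step on the inverse dynamics
    # objective, backpropagated only into the shared encoder.
    pred_action = idm(torch.cat([encoder(obs), encoder(next_obs)], dim=-1))
    loss = F.mse_loss(pred_action, action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    obs = next_obs

In practice, the encoder, policy head, and self-supervised head would be trained jointly before deployment, and the transitions would come from the deployment environment rather than random tensors.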

Paper

View on arXiv

BibTeX

@article{hansen2020deployment,
  title={Self-Supervised Policy Adaptation during Deployment},
  author={Nicklas Hansen and Yu Sun and Pieter Abbeel and Alexei A. Efros and Lerrel Pinto and Xiaolong Wang},
  year={2020},
  eprint={2007.04309},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Correspondence to Nicklas Hansen