Self-Supervised Policy Adaptation
during Deployment

Nicklas Hansen

Rishabh Jangir

Yu Sun

Guillem Alenyà

Pieter Abbeel

Alexei A. Efros

Lerrel Pinto

Xiaolong Wang



In most real-world scenarios, a policy trained by reinforcement learning in one environment needs to be deployed in another, potentially quite different environment. However, generalization across different environments is known to be hard. A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal. Our work explores the use of self-supervision to allow the policy to continue training after deployment without using any rewards. While previous methods explicitly anticipate changes in the new environment, we assume no prior knowledge of those changes and still obtain significant improvements. Empirical evaluations are performed on diverse simulation environments from the DeepMind Control Suite and ViZDoom, as well as real robotic manipulation tasks in continuously changing environments, taking observations from an uncalibrated camera. Our method improves generalization in 31 out of 36 environments across various tasks and outperforms domain randomization on a majority of environments.

Non-stationary environments

We evaluate on a collection of natural video backgrounds and show that Policy Adaptation during Deployment (PAD) continuously adapts to changes in the environment. Here, we compare our method to the non-adaptive SAC trained with an inverse dynamics model (denoted SAC+IDM), as well as CURL (Srinivas et al.), a recently proposed contrastive method.


Stationary environments

We evaluate on randomized environments and show that Policy Adaptation during Deployment (PAD) outperforms both CURL (Srinivas et al.) and the non-adaptive SAC trained with an inverse dynamics model (denoted SAC+IDM) on a majority of tasks, with minimal impact on performance in the original (training) environment.


Robotic manipulation

We train policies in simulation and deploy on a real robot, operating solely from an uncalibrated camera. Policy Adaptation during Deployment (PAD) transfers successfully and can adapt to a variety of real-world environments, including environmental changes such as table cloths and disco lights.



For experimental details and results on DeepMind Control, ViZDoom, and robotic manipulation, please refer to our paper.


During real-world deployment of learned policies, reward signals are typically inaccessible. We propose a framework for Policy Adaptation during Deployment using self-supervision, which does not require a reward signal and allows for fast adaptation to diverse and continuously changing environments. Our method requires no prior knowledge about the nature of environment changes and, crucially, we also show that it generally does not degrade the performance of a policy in its original training environment.
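To make the idea concrete, below is a minimal NumPy sketch of reward-free test-time adaptation with an inverse-dynamics self-supervised objective, in the spirit of the SAC+IDM setup described above. All names, dimensions, and the learning rate are illustrative assumptions, not the paper's implementation: a shared linear encoder is updated online so that an inverse-dynamics head better predicts the action actually executed between two consecutive observations, with no reward involved.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, feat_dim, act_dim = 8, 4, 2

# Hypothetical parameters: a shared encoder and an inverse-dynamics head.
W_e = rng.normal(scale=0.3, size=(feat_dim, obs_dim))      # shared encoder
W_d = rng.normal(scale=0.3, size=(act_dim, 2 * feat_dim))  # inverse model

def idm_loss_and_grads(o_t, o_t1, a):
    """Inverse-dynamics loss: predict the executed action from features of
    consecutive observations; gradients are derived by hand for this
    linear toy model."""
    z_t, z_t1 = W_e @ o_t, W_e @ o_t1
    h = np.concatenate([z_t, z_t1])
    err = W_d @ h - a                      # action-prediction error
    loss = float(err @ err)
    g_Wd = 2.0 * np.outer(err, h)
    g_h = 2.0 * W_d.T @ err
    g_We = (np.outer(g_h[:feat_dim], o_t)
            + np.outer(g_h[feat_dim:], o_t1))
    return loss, g_We, g_Wd

# One simulated deployment transition: no reward is ever observed.
o_t, o_t1 = rng.normal(size=obs_dim), rng.normal(size=obs_dim)
a = rng.normal(size=act_dim)               # action the policy executed

lr = 1e-2
losses = []
for _ in range(20):                        # online adaptation steps
    loss, g_We, g_Wd = idm_loss_and_grads(o_t, o_t1, a)
    losses.append(loss)
    W_e -= lr * g_We                       # adapt the shared encoder
    W_d -= lr * g_Wd
```

The key design point this sketch illustrates is that the supervisory signal (the executed action) is free at deployment time, so the encoder can keep adapting to a shifted observation distribution while the policy head remains untouched.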


View on arXiv


@article{hansen2020deployment,
  title={Self-Supervised Policy Adaptation during Deployment},
  author={Nicklas Hansen and Rishabh Jangir and Yu Sun and Guillem Alenyà and Pieter Abbeel and Alexei A. Efros and Lerrel Pinto and Xiaolong Wang},
  year={2020},
  eprint={2007.04309},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Correspondence to Nicklas Hansen