Adversarial Reinforcement Learning
Check out this paper: "Robust Adversarial Reinforcement Learning"
In robust adversarial reinforcement learning (RARL), an agent (the protagonist) is trained to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The adversary is trained jointly with the protagonist: it is reinforced to learn an optimal destabilization policy, so the protagonist must learn behavior that holds up against the worst disturbances the adversary can find.
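Below is a minimal sketch of that alternating, zero-sum training loop. It assumes a toy 1-D environment where the protagonist tries to keep the state near zero while the adversary injects a bounded disturbance force, and it uses simple random-search hill-climbing on scalar policy weights instead of the paper's policy-gradient updates; all names (ToyEnv, hill_climb, train_rarl) are illustrative, not from the paper's implementation.

```python
import numpy as np


class ToyEnv:
    """Protagonist pushes the state toward 0; adversary injects a bounded disturbance force."""

    def __init__(self, horizon=50):
        self.horizon = horizon

    def rollout(self, w_pro, w_adv, rng):
        state, ret = rng.normal(), 0.0
        for _ in range(self.horizon):
            a_pro = np.clip(w_pro * state, -1.0, 1.0)   # protagonist control action
            a_adv = np.clip(w_adv * state, -0.5, 0.5)   # adversarial disturbance force
            state = state + 0.1 * (-a_pro + a_adv) + 0.01 * rng.normal()
            ret += -state ** 2                          # protagonist reward; adversary gets the negative
        return ret


def hill_climb(objective, w, rng, n_samples=32, sigma=0.3):
    """Return the best random perturbation of a scalar policy weight (stand-in for an RL update)."""
    candidates = np.concatenate([[w], w + sigma * rng.normal(size=n_samples)])
    scores = np.array([objective(c) for c in candidates])
    return candidates[np.argmax(scores)]


def train_rarl(iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    env = ToyEnv()
    w_pro, w_adv = 0.0, 0.0
    for _ in range(iterations):
        # Step 1: improve the protagonist against the current, fixed adversary.
        w_pro = hill_climb(lambda w: env.rollout(w, w_adv, rng), w_pro, rng)
        # Step 2: improve the adversary to minimize the protagonist's return (zero-sum game).
        w_adv = hill_climb(lambda w: -env.rollout(w_pro, w, rng), w_adv, rng)
    return w_pro, w_adv


if __name__ == "__main__":
    w_pro, w_adv = train_rarl()
    print(f"protagonist weight: {w_pro:.3f}, adversary weight: {w_adv:.3f}")
```

The key design point is the alternation: each player is updated while the other is held fixed, so the protagonist keeps adapting to progressively harder disturbances rather than to a single fixed noise model.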