RILe - Reinforced Imitation Learning

¹ETH Zürich    ²Max Planck Institute for Intelligent Systems

(a) Reinforcement Learning (RL): learning a policy that maximizes a hand-defined reward function. (b) Inverse RL (IRL): learning a reward function from data. IRL alternates between two stages: 1. training a policy with a frozen reward function, and 2. updating the reward function by comparing the converged policy with data. These stages are repeated several times. (c) Generative Adversarial Imitation Learning (GAIL) and Adversarial IRL (AIRL): using a discriminator as a reward function. GAIL trains the policy and the discriminator at the same time. AIRL imposes additional structure on the discriminator, separating the reward from the environment dynamics by splitting the discriminator into two networks (see additional terms in green). (d) RILe: similar to IRL, learning a reward function from data. RILe learns the reward function at the same time as the policy, using a discriminator as a guide for learning the reward.
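
For reference, the discriminator-as-reward idea in (c) can be written down compactly. The LaTeX sketch below states one common GAIL reward convention and the two-network AIRL discriminator; the symbols (D, g_θ, h_φ, π) follow the original GAIL and AIRL papers rather than this figure, so treat them as assumed notation.

    % One common GAIL convention: the discriminator D(s, a) estimates how
    % expert-like a state-action pair is, and its output serves as the
    % policy's surrogate reward (sign conventions vary across implementations).
    r_{\text{GAIL}}(s, a) = -\log\bigl(1 - D(s, a)\bigr)

    % AIRL restructures the discriminator into two networks: a reward
    % approximator g_\theta and a shaping term h_\phi, which disentangles the
    % learned reward from the environment dynamics.
    f_{\theta,\phi}(s, a, s') = g_\theta(s, a) + \gamma\, h_\phi(s') - h_\phi(s)

    D_{\theta,\phi}(s, a, s') =
        \frac{\exp f_{\theta,\phi}(s, a, s')}{\exp f_{\theta,\phi}(s, a, s') + \pi(a \mid s)}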

Abstract

Acquiring complex behaviors is essential for artificially intelligent agents, yet learning these behaviors in high-dimensional settings poses a significant challenge due to the vast search space. Traditional reinforcement learning (RL) requires extensive manual effort for reward function engineering. Inverse reinforcement learning (IRL) uncovers reward functions from expert demonstrations but relies on an iterative process that is often computationally expensive. Imitation learning (IL) provides a more efficient alternative by directly comparing an agent’s actions to expert demonstrations; however, in high-dimensional environments, such direct comparisons often offer insufficient feedback for effective learning. We introduce RILe (Reinforced Imitation Learning), a framework that combines the strengths of imitation learning and inverse reinforcement learning to learn a dense reward function efficiently and achieve strong performance in high-dimensional tasks. RILe employs a novel trainer–student framework: the trainer learns an adaptive reward function, and the student uses this reward signal to imitate expert behaviors. By dynamically adjusting its guidance as the student evolves, the trainer provides nuanced feedback across different phases of learning. Our framework produces high-performing policies in high-dimensional tasks where direct imitation fails to replicate complex behaviors. We validate RILe in challenging robotic locomotion tasks, demonstrating that it significantly outperforms existing methods and achieves near-expert performance across multiple settings.

Reinforced Imitation Learning (RILe). The framework consists of three key components: a student agent, a trainer agent, and a discriminator. The student agent learns a policy by interacting with an environment, and the trainer agent learns a reward function as a policy. (1) The student receives the environment state. (2) The student takes an action and forwards it to the environment, which is updated based on the action. (3) The student forwards its state-action pair to the trainer, whose state is the student's state-action pair. (4) The trainer evaluates the student's state-action pair and chooses a scalar action, which becomes the student's reward. (5) The trainer forwards the student's state-action pair to the discriminator. (6) The discriminator compares the student's state-action pair with the expert demonstrations. (7) The discriminator rewards the trainer based on the similarity between the student's and the expert's behavior.
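
The numbered steps above map onto a single training loop. The sketch below is a minimal pseudocode rendering of that loop, assuming generic RL agents for the student and the trainer and a binary discriminator; all class names, method names, and the exact trainer-reward formula are hypothetical placeholders, not the released implementation.

    # Minimal sketch of one RILe rollout (hypothetical interfaces throughout).
    # student: RL agent acting in the environment.
    # trainer: RL agent whose scalar "action" is the reward handed to the student.
    # discriminator: binary classifier over (state, action) pairs, expert vs. student.
    import numpy as np

    def rile_rollout(env, student, trainer, discriminator, expert_batch):
        state = env.reset()                                  # (1) student receives the environment state
        done = False
        while not done:
            action = student.act(state)                      # (2) student acts; environment updates
            next_state, done = env.step(action)              # environment reward is not used

            sa_pair = np.concatenate([state, action])        # (3) trainer's state = student's (s, a) pair
            student_reward = trainer.act(sa_pair)            # (4) trainer's scalar action = student's reward
            student.store(state, action, student_reward, next_state, done)

            # (5)-(7): the discriminator scores the student's (s, a) pair against
            # expert demonstrations, and that similarity score rewards the trainer.
            expert_likeness = discriminator.predict(sa_pair)
            trainer_reward = np.log(expert_likeness + 1e-8)  # placeholder similarity-based reward
            trainer.store(sa_pair, student_reward, trainer_reward)

            state = next_state

        # All three components are updated from the same rollout, so the learned
        # reward adapts as the student evolves.
        discriminator.update(expert_batch, student.recent_pairs())
        student.update()
        trainer.update()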

BibTeX

@misc{albaba2025rilereinforcedimitationlearning,
      title={RILe: Reinforced Imitation Learning}, 
      author={Mert Albaba and Sammy Christen and Thomas Langarek and Christoph Gebhardt and Otmar Hilliges and Michael J. Black},
      year={2025},
      eprint={2406.08472},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2406.08472}, 
}