Improving planning and MBRL with temporally-extended actions

Indiana University, Bloomington

Abstract

Continuous-time systems are often modeled with discrete-time dynamics, but this requires a small simulation step to maintain accuracy. In turn, a small step requires a large planning horizon, which leads to computationally demanding planning problems and reduced performance.

Previous work in model-free reinforcement learning has partially addressed this issue with action repeats, where a policy is learned to determine a discrete action duration. Instead, we propose to control the continuous decision timescale directly by using temporally-extended actions and letting the planner treat the duration of the action as an additional optimization variable along with the standard action variables.

This additional structure has multiple advantages. It speeds up the simulation of trajectories and, importantly, it allows for a deep horizon search in terms of primitive actions while using a shallow search depth in the planner. In addition, in the model-based reinforcement learning (MBRL) setting, it reduces compounding errors from model learning and improves model training time.

We show that this idea is effective and that the range of action durations can be automatically selected using a multi-armed bandit formulation and integrated into the MBRL framework. An extensive experimental evaluation, both in planning and in MBRL, shows that our approach yields faster planning and better solutions, and that it solves problems that cannot be solved in the standard formulation.

Key Ideas

  • How can we plan if we have access to a temporally-extended dynamics function?
    Let the planner treat the duration of the action as an additional optimization variable along with the standard action variables. The planner then chooses both the action and its duration at each step (a sketch of this appears after this list).
  • How do we obtain a temporally-extended dynamics function?
    Option 1: Simply iterate the primitive dynamics function. However, evaluating trajectories with such a dynamics function can be expensive, since each extended step costs many primitive steps.
    Option 2: Learn an approximate temporally-extended dynamics function using MBRL. While collecting data, iterate the primitive simulator as required. Train an ensemble of neural networks on the collected data to predict the next state resulting from a temporally-extended action (see the sketch under Algorithm below).
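
A minimal sketch of the planning idea, assuming a simple random-shooting planner. The interfaces step_extended(state, action, duration) and reward_fn(state, action, duration) are hypothetical stand-ins for a temporally-extended simulator or learned model, not the paper's API; bounds and sample counts are illustrative, and durations are counted here in primitive simulator steps for simplicity even though the paper targets a continuous decision timescale.

import numpy as np

def plan_extended(state, step_extended, reward_fn, action_dim,
                  horizon=5, n_samples=256,
                  action_bounds=(-1.0, 1.0), duration_bounds=(1, 20),
                  seed=0):
    """Random-shooting planner over (action, duration) pairs.

    step_extended(state, action, duration) -> next_state
    reward_fn(state, action, duration)     -> scalar reward
    Both signatures are assumptions made for this sketch.
    """
    rng = np.random.default_rng(seed)
    best_return, best_first = -np.inf, None
    for _ in range(n_samples):
        # Sample a short sequence of temporally-extended actions:
        # each step carries its own continuous action and a duration.
        actions = rng.uniform(action_bounds[0], action_bounds[1],
                              size=(horizon, action_dim))
        durations = rng.integers(duration_bounds[0], duration_bounds[1] + 1,
                                 size=horizon)
        s, total = state, 0.0
        for a, d in zip(actions, durations):
            total += reward_fn(s, a, d)
            s = step_extended(s, a, d)
        if total > best_return:
            best_return, best_first = total, (actions[0], durations[0])
    # Execute only the first (action, duration) and replan (MPC style).
    return best_first

The same idea carries over to more sophisticated trajectory optimizers (e.g., CEM or MPPI): the duration simply becomes one more coordinate of the search space, so a shallow planning depth covers a deep horizon in primitive steps.
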
Algorithm
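
The algorithm is not spelled out in the text above, so the following is only a hedged sketch of the learning step (Option 2 from the key ideas): collect temporally-extended transitions by iterating a primitive simulator, then fit an ensemble of networks that maps (state, action, duration) to the next state. The function names, network sizes, and training loop are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

def rollout_primitive(step, state, action, duration):
    # Iterate the primitive simulator `duration` times to obtain one
    # temporally-extended transition (state, action, duration) -> next state.
    for _ in range(duration):
        state = step(state, action)
    return state

class DynamicsNet(nn.Module):
    # One ensemble member: predicts the next state from (state, action, duration).
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, state, action, duration):
        # duration is a float tensor of shape (batch,).
        x = torch.cat([state, action, duration.unsqueeze(-1)], dim=-1)
        return self.net(x)

def train_ensemble(states, actions, durations, next_states,
                   n_models=5, epochs=50, lr=1e-3):
    state_dim, action_dim = states.shape[-1], actions.shape[-1]
    models = [DynamicsNet(state_dim, action_dim) for _ in range(n_models)]
    for model in models:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            # Bootstrap resampling so ensemble members see different data.
            idx = torch.randint(0, len(states), (len(states),))
            pred = model(states[idx], actions[idx], durations[idx])
            loss = nn.functional.mse_loss(pred, next_states[idx])
            opt.zero_grad(); loss.backward(); opt.step()
    return models

At planning time, the learned ensemble (e.g., its mean prediction) stands in for step_extended in the planner sketch above; each learned extended step replaces many primitive steps, which is what reduces both simulation cost and the number of model compositions, and hence compounding errors.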

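The abstract also states that the range of action durations can be selected automatically with a multi-armed bandit and integrated into the MBRL loop. The exact formulation is not given on this page, so the sketch below is only an illustrative UCB-style version in which each arm is a candidate maximum duration and the observed reward is the return of the episode run with that setting; run_episode is an assumed callback.

import numpy as np

def select_duration_range(candidate_max_durations, run_episode,
                          n_rounds=50, c=2.0):
    # UCB over candidate duration ranges (illustrative formulation).
    # run_episode(max_duration) -> episode return, e.g. one MBRL iteration
    # that plans and collects data with durations in [1, max_duration].
    n_arms = len(candidate_max_durations)
    counts = np.zeros(n_arms)
    values = np.zeros(n_arms)          # running mean return per arm
    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1                # play each arm once to initialize
        else:
            ucb = values + c * np.sqrt(np.log(t) / counts)
            arm = int(np.argmax(ucb))
        ret = run_episode(candidate_max_durations[arm])
        counts[arm] += 1
        values[arm] += (ret - values[arm]) / counts[arm]
    return candidate_max_durations[int(np.argmax(values))]
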
BibTeX

@article{chatterjee2025improving,
  title={Improving planning and MBRL with temporally-extended actions},
  author={Chatterjee, Palash and Khardon, Roni},
  journal={Advances in Neural Information Processing Systems},
  year={2025}
}