
VILP: Imitation Learning with Latent Video Planning

Accepted by IEEE RA-L

arXiv | Summary Video

[Teaser figure]

Installation

For installation, please run

$ cd VILP
$ mamba env create -f conda_environment.yaml && bash install_custom_packages.sh

Please note that in the install_custom_packages.sh script, the following command is executed

$ source ~/miniforge3/etc/profile.d/conda.sh

This works for a default Miniforge installation. If your Conda installation is not located under ~/miniforge3, adjust the path in the script to match your setup.
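If you are unsure where your Conda installation lives, conda info --base prints its base directory. As a minimal sketch, if it prints /home/user/anaconda3 (an example path, not necessarily yours), the sourced line in install_custom_packages.sh would become

$ source /home/user/anaconda3/etc/profile.d/conda.sh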

Example

Try the simulated Push-T task with VILP!

First step: image compression training

Activate conda environment

$ conda activate vilpenv

Then launch the training by running

$ python train.py --config-dir=./VILP/config --config-name=train_vq_pushT.yaml

The pretrained models will be saved in /vq_models

Second step: video planning training

All training logs will be uploaded to wandb. Log in to wandb (if you haven't already)

$ wandb login

Then launch the training by running

$ python train.py --config-dir=./VILP/config --config-name=train_vilp_pushT_state_planning.yaml hydra.run.dir=data/outputs/your_folder_name

Please note that you need to specify the path to the pretrained VQVAE in the YAML config file.
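The configs are Hydra YAML files, so you can either edit the config file directly or pass the path as a command-line override. The key name below (policy.vqvae_model_path) and the checkpoint filename are placeholders for illustration only; use the actual key from your config and the checkpoint produced in the first step, for example

$ python train.py --config-dir=./VILP/config --config-name=train_vilp_pushT_state_planning.yaml policy.vqvae_model_path=vq_models/your_vqvae_checkpoint.ckpt hydra.run.dir=data/outputs/your_folder_name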

After the model is fully trained (this usually takes at least several hours, depending on your GPU), run the following command to export the model from the checkpoint

$ python train.py --config-dir=./VILP/config --config-name=save_vilp_pushT_state_planning.yaml hydra.run.dir=data/outputs/the_checkpoint_folder

If you train the planning model without low-dimensional observations (use train_vilp_pushT_planning.yaml), you should see generated videos directly on wandb during training!

Third step: policy training and rollout

Launch the job by running

$ python train.py --config-dir=./VILP/config --config-name=train_vilp_pushT_state_policy.yaml hydra.run.dir=data/outputs/your_folder_name

All results will be uploaded to wandb!

BibTex

If you find this codebase useful, consider citing:

@misc{xu2025vilp,
  title={VILP: Imitation Learning with Latent Video Planning},
  author={Zhengtong Xu and Qiang Qiu and Yu She},
  year={2025},
  eprint={2502.01784},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2502.01784},
}
