Official PyTorch implementation of PCGS: Progressive Compression of 3D Gaussian Splatting.
It enables reuse of existing bitstreams for enhanced fidelity under dynamic bandwidth or diverse storage conditions.
Yihang Chen*, Mengyao Li*, Qianyi Wu, Weiyao Lin, Mehrtash Harandi, Jianfei Cai
You are welcome to check out the series of works from our group on 3D radiance field representation compression listed below:
- CNC [CVPR'24]: efficient NeRF compression! [Paper] [Arxiv] [Project]
- HAC [ECCV'24]: efficient 3DGS compression! [Paper] [Arxiv] [Project]
- HAC++ [ARXIV'25]: an enhanced compression method over HAC! [Arxiv] [Project]
- FCGS [ICLR'25]: fast optimization-free 3DGS compression! [Paper] [Arxiv] [Project]
- PCGS [ARXIV'25]: progressive 3DGS compression! [Arxiv] [Project]
We propose PCGS (Progressive Compression of 3D Gaussian Splatting), which adaptively controls both the quantity and quality of Gaussians (or anchors) to enable effective progressivity for on-demand applications. Specifically, for quantity, we introduce a progressive masking strategy that incrementally incorporates new anchors while refining existing ones to enhance fidelity. For quality, we propose a progressive quantization approach that gradually reduces quantization step sizes to achieve finer modeling of Gaussian attributes. Furthermore, to compact the incremental bitstreams, we leverage existing quantization results to refine probability prediction, improving entropy coding efficiency across progressive levels.
The installation process follows that of HAC++.
We tested our code on a server with Ubuntu 20.04.1, CUDA 11.8, and GCC 9.4.0.
- Unzip files

```bash
cd submodules
unzip diff-gaussian-rasterization.zip
unzip gridencoder.zip
unzip simple-knn.zip
unzip arithmetic.zip
cd ..
```
- Install the environment

```bash
conda env create --file environment.yml
conda activate HAC_env
```
- Install `tmc3` (for GPCC)
  - Please refer to the tmc3 GitHub repository for installation instructions.
  - Don't forget to add `tmc3` to your `PATH` environment variable; otherwise you must manually specify its location in our code (see the sketch after this list).
  - Tip: `tmc3` is commonly located at `/PATH/TO/mpeg-pcc-tmc13/build/tmc3`.
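One way to expose `tmc3`, as a sketch assuming a standard CMake build of mpeg-pcc-tmc13 (adjust the placeholder path to wherever your `tmc3` binary actually lives):

```bash
# Add the directory that contains the tmc3 binary to PATH (the path below is a placeholder).
export PATH=/PATH/TO/mpeg-pcc-tmc13/build/tmc3:$PATH
which tmc3   # should now print the binary's location
```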
First, create a `data/` folder inside the project path:

```bash
mkdir data
```
The data structure will be organised as follows:

```
data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
│   ├── ...
```
- For instance: `./data/blending/drjohnson/`
- For instance: `./data/bungeenerf/amsterdam/`
- For instance: `./data/mipnerf360/bicycle/`
- For instance: `./data/nerf_synthetic/chair/`
- For instance: `./data/tandt/train/`
Public Data (we follow the suggestions from Scaffold-GS):
- The BungeeNeRF dataset is available on Google Drive / Baidu Netdisk [extraction code: 4whv].
- The MipNeRF360 scenes are provided by the paper authors here. We test on all 9 of its scenes: `bicycle, bonsai, counter, garden, kitchen, room, stump, flowers, treehill`.
- The SfM datasets for Tanks&Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here. Download and uncompress them into the `data/` folder (see the sketch below).
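For example, assuming the Tanks&Temples / Deep Blending archive from the 3D-GS page was downloaded into the project root (the archive name below is an assumption; use the file you actually downloaded, and rename the extracted folders if they differ from the example paths above):

```bash
# Hypothetical archive name; substitute the file provided on the 3D-GS download page.
unzip tandt_db.zip -d data/
```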
For custom data, you should process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results into the `data/` folder, as sketched below.
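A minimal COLMAP sketch, assuming COLMAP is installed and on your `PATH` and that the scene follows the layout above (the `convert.py` script shipped with 3D-GS is an alternative that also handles image undistortion):

```bash
# Placeholder scene path; adjust to your dataset.
SCENE=data/dataset_name/scene1

colmap feature_extractor  --database_path $SCENE/database.db --image_path $SCENE/images
colmap exhaustive_matcher --database_path $SCENE/database.db
mkdir -p $SCENE/sparse
colmap mapper             --database_path $SCENE/database.db --image_path $SCENE/images --output_path $SCENE/sparse
# The reconstruction is written to $SCENE/sparse/0/, matching the directory structure shown above.
```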
To train scenes, we provide the following training scripts:
- Tanks&Temples: `run_shell_tnt.py`
- MipNeRF360: `run_shell_mip360.py`
- BungeeNeRF: `run_shell_bungee.py`
- Deep Blending: `run_shell_db.py`
- Nerf Synthetic: `run_shell_blender.py`
Run them with

```bash
python run_shell_xxx.py
```

The code will automatically run the entire pipeline: training, encoding, decoding, and testing.
- Multiple rate points will be run in one training process.
- The training log will be recorded in `output.log` in the output directory; detailed fidelity, size, and timing results across the different rate points will all be recorded there.
- Encoded bitstreams will be stored in `./bitstreams` of the output directory.
- After encoding, the script will automatically decode the bitstreams into multiple models across the progressive levels and store them in `./decoded_model`.
- Evaluated output images will be saved in `./test_ss{lambda_idx}` of the output directory (the sketch below shows where to find these outputs).
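For reference, a quick way to inspect these outputs after a run finishes, as a sketch (the exact output directory depends on how the `run_shell_xxx.py` script configures it):

```bash
# Placeholder path; set to the scene's actual output directory.
OUT=path/to/output_directory

cat $OUT/output.log       # per-rate fidelity, size, and timing
ls  $OUT/bitstreams       # encoded progressive bitstreams
ls  $OUT/decoded_model    # decoded models, one per progressive level
ls  $OUT/test_ss0         # rendered test images; pattern is test_ss{lambda_idx}, e.g. lambda_idx = 0
```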
- Yihang Chen: yhchen.ee@sjtu.edu.cn
If you find our work helpful, please consider citing:
```
@article{pcgs2025,
  title={PCGS: Progressive Compression of 3D Gaussian Splatting},
  author={Chen, Yihang and Li, Mengyao and Wu, Qianyi and Lin, Weiyao and Harandi, Mehrtash and Cai, Jianfei},
  journal={arXiv preprint arXiv:2503.08511},
  year={2025}
}
```
Please follow the LICENSE of 3D-GS.
- We thank all authors from 3D-GS for presenting such an excellent work.
- We thank all authors from Scaffold-GS for presenting such an excellent work.
- We thank Xiangrui for helping with the GPCC codec.