Commit 24b2bbe
add animation codes, update dependency
1 parent 2e04f0d commit 24b2bbe

8 files changed, +2110 -2 lines changed

README.md (+61 -1)
@@ -12,13 +12,19 @@ Realistic 3D human generation from text prompts is a desirable yet challenging t
<img src='./content/teaser-1.png' width=800>
<img src='./content/teaser-2.png' width=800>

## News

* [2023-12-05] Update the real-time animation demo and code!
* [2023-11-28] Upload the paper. Release all the training code and pretrained models!

## Installation
```
# clone the github repo
git clone https://github.com/alvinliu0/HumanGaussian.git
cd HumanGaussian

-pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu117
+pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# a modified gaussian splatting (+ depth, alpha rendering)
@@ -52,6 +58,60 @@ python launch.py --config configs/test.yaml --train --gpu 0 system.prompt_proces

Though the **HumanGaussian framework** is trained on a single body pose, it can be animated with unseen pose sequences in a zero-shot manner, *i.e.*, a sequence of SMPL-X pose parameters can be used to animate the pre-trained avatars without further finetuning.

<img src='./content/animation-realtime.gif' width=800>

### Animation-Related Installation
We rely on some extra dependencies:

```bash
# for GUI
pip install dearpygui

# cubvh
pip install git+https://github.com/ashawkey/cubvh

# a modified gaussian splatting (+ depth, alpha rendering)
git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
pip install ./diff-gaussian-rasterization

# nvdiffrast
pip install git+https://github.com/NVlabs/nvdiffrast/

# kiuikit
pip install -U git+https://github.com/ashawkey/kiuikit

# smplx
pip install smplx[all]
# please also download SMPL-X files to ./smplx_models/smplx/*.pkl
```

### Animation-Related Usage
Gaussian files are generated by HumanGaussian and saved as `.ply` files.
Motions should follow the SMPL-X body pose format (21 body joints), which can be read by:

```python
import numpy as np

motion = np.load('content/amass_test_17.npz')['poses'][:, 1:22, :3]
# motion = np.load('content/Aeroplane_FW_part9.npz')['poses'][:, 1:22, :3]
```
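The slice above yields per-frame axis-angle rotations of shape `(num_frames, 21, 3)`. As a sketch of that layout (using a synthetic array rather than the sample `.npz` files, and a Rodrigues helper that is illustrative only, not part of `animation.py`):

```python
import numpy as np

def axis_angle_to_matrix(aa):
    """Rodrigues formula: axis-angle vector (3,) -> 3x3 rotation matrix."""
    theta = np.linalg.norm(aa)
    if theta < 1e-8:
        return np.eye(3)
    k = aa / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# synthetic stand-in for a loaded motion: 120 frames x 21 body joints x axis-angle
motion = np.zeros((120, 21, 3), dtype=np.float32)
motion[:, 0, 2] = np.pi / 2  # rotate joint 0 by 90 degrees about z in every frame

R = axis_angle_to_matrix(motion[0, 0])
```

Each 3-vector encodes a rotation axis scaled by the rotation angle, which is the convention SMPL-X uses for joint rotations.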

Then, perform the zero-shot animation by:
```bash
# visualize in gui
python animation.py --ply <path/to/ply> --motion <path/to/motion> --gui

# play motion and save to videos/xxx.mp4
python animation.py --ply <path/to/ply> --motion <path/to/motion> --play

# also self-rotate during playing
python animation.py --ply <path/to/ply> --motion <path/to/motion> --play --rotate
```
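The three invocations differ only in their trailing flags. If you need to script many renders, a small helper along these lines can assemble the command; the helper itself is hypothetical (only the flag names come from the commands above):

```python
import subprocess

def make_animation_cmd(ply_path, motion_path, *flags):
    """Build an animation.py command line; flags e.g. '--gui' or '--play', '--rotate'."""
    return ["python", "animation.py",
            "--ply", ply_path,
            "--motion", motion_path,
            *flags]

cmd = make_animation_cmd("content/sample.ply", "content/amass_test_17.npz",
                         "--play", "--rotate")
# subprocess.run(cmd, check=True)  # uncomment inside the repo root to actually render
```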

For example, you can animate the "A boy with a beanie wearing a hoodie and joggers" avatar with a sample motion:
```bash
# play motion and save to .mp4 for the "A boy with a beanie wearing a hoodie and joggers" case
python animation.py --ply "content/sample.ply" --motion "content/Aeroplane_FW_part9.npz" --play
```
## Acknowledgement
This work is built on many amazing research works and open-source projects, including [Threestudio](https://github.com/threestudio-project/threestudio), [3DGS](https://github.com/graphdeco-inria/gaussian-splatting), [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization), [GaussianDreamer](https://github.com/hustvl/GaussianDreamer). Thanks a lot to all the authors for sharing!