
Commit a6e7202

Authored on Mar 11, 2025
Merge pull request #334 from MrNeRF/update-html
Update generated HTML
2 parents 366ce49 + 4d0e432 commit a6e7202

File tree

1 file changed: +49 −0 lines

index.html

@@ -1346,6 +1346,31 @@ <h2 class="paper-title">Lifting by Gaussians: A Simple, Fast and Flexible Method
 </div>
 </div>
 </div>
+<div class="paper-row" data-id="lin2025omniphysgs" data-title="OmniPhysGS: 3D Constitutive Gaussians for General Physics-Based Dynamics Generation" data-authors="Yuchen Lin, Chenguo Lin, Jianjin Xu, Yadong Mu" data-year="2025" data-tags='["Code", "Dynamic", "Physics", "Project", "Video"]'>
+<div class="paper-card">
+<input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'lin2025omniphysgs', this)">
+<div class="paper-number"></div>
+<div class="paper-thumbnail">
+<img data-src="assets/thumbnails/lin2025omniphysgs.jpg" data-fallback="None" alt="Paper thumbnail for OmniPhysGS: 3D Constitutive Gaussians for General Physics-Based Dynamics Generation" class="lazy" loading="lazy"/>
+</div>
+<div class="paper-content">
+<h2 class="paper-title">OmniPhysGS: 3D Constitutive Gaussians for General Physics-Based Dynamics Generation <span class="paper-year">(2025)</span></h2>
+<p class="paper-authors">Yuchen Lin, Chenguo Lin, Jianjin Xu, Yadong Mu</p>
+<div class="paper-tags"><span class="paper-tag">Code</span>
+<span class="paper-tag">Dynamic</span>
+<span class="paper-tag">Physics</span>
+<span class="paper-tag">Project</span>
+<span class="paper-tag">Video</span></div>
+<div class="paper-links"><a href="https://arxiv.org/pdf/2501.18982.pdf" class="paper-link" target="_blank" rel="noopener">📄 Paper</a>
+<a href="https://wgsxm.github.io/projects/omniphysgs/" class="paper-link" target="_blank" rel="noopener">🌐 Project</a>
+<a href="https://github.com/wgsxm/omniphysgs" class="paper-link" target="_blank" rel="noopener">💻 Code</a>
+<a href="https://wgsxm.github.io/videos/omniphysgs.mp4" class="paper-link" target="_blank" rel="noopener">🎥 Video</a>
+<button class="abstract-toggle" onclick="toggleAbstract(this)">📖 Show Abstract</button>
+<div class="paper-abstract">Recently, significant advancements have been made in the reconstruction and generation of 3D assets, including static cases and those with physical interactions. To recover the physical properties of 3D assets, existing methods typically assume that all materials belong to a specific predefined category (e.g., elasticity). However, such assumptions ignore the complex composition of multiple heterogeneous objects in real scenarios and tend to render less physically plausible animation given a wider range of objects. We propose OmniPhysGS for synthesizing a physics-based 3D dynamic scene composed of more general objects. A key design of OmniPhysGS is treating each 3D asset as a collection of constitutive 3D Gaussians. For each Gaussian, its physical material is represented by an ensemble of 12 physical domain-expert sub-models (rubber, metal, honey, water, etc.), which greatly enhances the flexibility of the proposed model. In the implementation, we define a scene by user-specified prompts and supervise the estimation of material weighting factors via a pretrained video diffusion model. Comprehensive experiments demonstrate that OmniPhysGS achieves more general and realistic physical dynamics across a broader spectrum of materials, including elastic, viscoelastic, plastic, and fluid substances, as well as interactions between different materials. Our method surpasses existing methods by approximately 3% to 16% in metrics of visual quality and text alignment.
+</div></div>
+</div>
+</div>
+</div>
 <div class="paper-row" data-id="lin2025diffsplat" data-title="DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian Splat Generation" data-authors="Chenguo Lin, Panwang Pan, Bangbang Yang, Zeming Li, Yadong Mu" data-year="2025" data-tags='["Diffusion", "Project"]'>
 <div class="paper-card">
 <input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'lin2025diffsplat', this)">
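The OmniPhysGS abstract in the hunk above describes representing each Gaussian's material as an ensemble of 12 domain-expert constitutive sub-models, mixed by learned weighting factors. A minimal NumPy sketch of that weighting idea (the function name, the softmax combination, and the dense tensor interface are illustrative assumptions, not code from the paper's repository):

```python
import numpy as np

def blend_constitutive_models(expert_stresses, logits):
    """Combine per-expert stress predictions for one Gaussian.

    expert_stresses: (K, 3, 3) array -- a stress tensor predicted by each
        of K domain-expert constitutive sub-models (hypothetical interface).
    logits: (K,) array -- learned material weighting factors.
    Returns the softmax-weighted stress tensor, mirroring the abstract's
    ensemble-of-experts material representation.
    """
    w = np.exp(logits - logits.max())
    w /= w.sum()                                     # softmax over K experts
    return np.tensordot(w, expert_stresses, axes=1)  # (3, 3) blended stress

# Toy usage: 12 experts, as in the abstract, on random stress tensors.
rng = np.random.default_rng(0)
stresses = rng.normal(size=(12, 3, 3))
logits = rng.normal(size=12)
sigma = blend_constitutive_models(stresses, logits)
```

With equal logits the softmax is uniform, so the blend reduces to the mean of the expert predictions; the learned logits shift each Gaussian toward the experts (rubber, metal, water, etc.) that best explain its motion.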
@@ -3817,6 +3842,30 @@ <h2 class="paper-title">Generating 3D-Consistent Videos from Unposed Internet Ph
 </div>
 </div>
 </div>
+<div class="paper-row" data-id="joseph2024gradientweighted" data-title="Gradient-Weighted Feature Back-Projection: A Fast Alternative to Feature Distillation in 3D Gaussian Splatting" data-authors="Joji Joseph, Bharadwaj Amrutur, Shalabh Bhatnagar" data-year="2024" data-tags='["Code", "Editing", "Language Embedding", "Project", "Segmentation"]'>
+<div class="paper-card">
+<input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'joseph2024gradientweighted', this)">
+<div class="paper-number"></div>
+<div class="paper-thumbnail">
+<img data-src="assets/thumbnails/joseph2024gradientweighted.jpg" data-fallback="None" alt="Paper thumbnail for Gradient-Weighted Feature Back-Projection: A Fast Alternative to Feature Distillation in 3D Gaussian Splatting" class="lazy" loading="lazy"/>
+</div>
+<div class="paper-content">
+<h2 class="paper-title">Gradient-Weighted Feature Back-Projection: A Fast Alternative to Feature Distillation in 3D Gaussian Splatting <span class="paper-year">(2024)</span></h2>
+<p class="paper-authors">Joji Joseph, Bharadwaj Amrutur, Shalabh Bhatnagar</p>
+<div class="paper-tags"><span class="paper-tag">Code</span>
+<span class="paper-tag">Editing</span>
+<span class="paper-tag">Language Embedding</span>
+<span class="paper-tag">Project</span>
+<span class="paper-tag">Segmentation</span></div>
+<div class="paper-links"><a href="https://arxiv.org/pdf/2411.15193.pdf" class="paper-link" target="_blank" rel="noopener">📄 Paper</a>
+<a href="https://jojijoseph.github.io/3dgs-backprojection/" class="paper-link" target="_blank" rel="noopener">🌐 Project</a>
+<a href="https://github.com/JojiJoseph/3dgs-gradient-backprojection" class="paper-link" target="_blank" rel="noopener">💻 Code</a>
+<button class="abstract-toggle" onclick="toggleAbstract(this)">📖 Show Abstract</button>
+<div class="paper-abstract">We introduce a training-free method for feature field rendering in Gaussian splatting. Our approach back-projects 2D features into pre-trained 3D Gaussians, using a weighted sum based on each Gaussian's influence in the final rendering. While most training-based feature field rendering methods excel at 2D segmentation but perform poorly at 3D segmentation without post-processing, our method achieves high-quality results in both 2D and 3D segmentation. Experimental results demonstrate that our approach is fast, scalable, and offers performance comparable to training-based methods.
+</div></div>
+</div>
+</div>
+</div>
 <div class="paper-row" data-id="fang2024minisplatting2" data-title="Mini-Splatting2: Building 360 Scenes within Minutes via Aggressive Gaussian Densification" data-authors="Guangchi Fang, Bing Wang" data-year="2024" data-tags='["Acceleration", "Code", "Densification"]'>
 <div class="paper-card">
 <input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'fang2024minisplatting2', this)">
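The back-projection abstract in the hunk above boils down to one linear operation: each Gaussian's feature is a weighted sum of the 2D features of the pixels it influenced, normalized by its total influence. A minimal NumPy sketch of that operation (the function name and the dense pixel-by-Gaussian weight matrix are illustrative assumptions; a real splatting pipeline would keep these blending weights sparse):

```python
import numpy as np

def backproject_features(pixel_features, weights):
    """Training-free feature lifting in the spirit of the abstract above.

    pixel_features: (P, D) array of 2D feature vectors, one per pixel.
    weights: (P, G) array giving each of G Gaussians' blending weight at
        each pixel in the final rendering (dense layout for illustration).
    Returns a (G, D) array: each Gaussian's feature is the weighted sum of
    the 2D features it influenced, normalized by its total weight mass.
    """
    accum = weights.T @ pixel_features    # (G, D) weighted feature sums
    total = weights.sum(axis=0)[:, None]  # (G, 1) total influence per Gaussian
    return accum / np.maximum(total, 1e-8)  # normalize, guard divide-by-zero

# Toy usage: two pixels, two Gaussians with disjoint influence, so each
# Gaussian should recover exactly the feature of its own pixel.
feats = np.array([[1.0, 0.0], [0.0, 2.0]])
w = np.array([[1.0, 0.0], [0.0, 0.5]])
gaussian_feats = backproject_features(feats, w)
```

Because no optimization is involved, the cost is a single weighted scatter over the pre-trained Gaussians, which is what makes the method a fast alternative to distillation-based feature fields.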
