<img data-src="assets/thumbnails/lin2025omniphysgs.jpg" data-fallback="None" alt="Paper thumbnail for OmniPhysGS: 3D Constitutive Gaussians for General Physics-Based Dynamics Generation" class="lazy" loading="lazy"/>
</div>
<div class="paper-content">
<h2 class="paper-title">OmniPhysGS: 3D Constitutive Gaussians for General Physics-Based Dynamics Generation <span class="paper-year">(2025)</span></h2>
<button class="abstract-toggle" onclick="toggleAbstract(this)">📖 Show Abstract</button>
<div class="paper-abstract">Recently, significant advancements have been made in the reconstruction and generation of 3D assets, including static cases and those with physical interactions. To recover the physical properties of 3D assets, existing methods typically assume that all materials belong to a specific predefined category (e.g., elasticity). However, such assumptions ignore the complex composition of multiple heterogeneous objects in real scenarios and tend to render less physically plausible animation given a wider range of objects. We propose OmniPhysGS for synthesizing a physics-based 3D dynamic scene composed of more general objects. A key design of OmniPhysGS is treating each 3D asset as a collection of constitutive 3D Gaussians. For each Gaussian, its physical material is represented by an ensemble of 12 physical domain-expert sub-models (rubber, metal, honey, water, etc.), which greatly enhances the flexibility of the proposed model. In the implementation, we define a scene by user-specified prompts and supervise the estimation of material weighting factors via a pretrained video diffusion model. Comprehensive experiments demonstrate that OmniPhysGS achieves more general and realistic physical dynamics across a broader spectrum of materials, including elastic, viscoelastic, plastic, and fluid substances, as well as interactions between different materials. Our method surpasses existing methods by approximately 3% to 16% in metrics of visual quality and text alignment.
@@ -3817,6 +3842,30 @@ <h2 class="paper-title">Generating 3D-Consistent Videos from Unposed Internet Ph
</div>
</div>
</div>
<div class="paper-row" data-id="joseph2024gradientweighted" data-title="Gradient-Weighted Feature Back-Projection: A Fast Alternative to Feature Distillation in 3D Gaussian Splatting" data-authors="Joji Joseph, Bharadwaj Amrutur, Shalabh Bhatnagar" data-year="2024" data-tags='["Code", "Editing", "Language Embedding", "Project", "Segmentation"]'>
<img data-src="assets/thumbnails/joseph2024gradientweighted.jpg" data-fallback="None" alt="Paper thumbnail for Gradient-Weighted Feature Back-Projection: A Fast Alternative to Feature Distillation in 3D Gaussian Splatting" class="lazy" loading="lazy"/>
</div>
<div class="paper-content">
<h2 class="paper-title">Gradient-Weighted Feature Back-Projection: A Fast Alternative to Feature Distillation in 3D Gaussian Splatting <span class="paper-year">(2024)</span></h2>
<button class="abstract-toggle" onclick="toggleAbstract(this)">📖 Show Abstract</button>
<div class="paper-abstract">We introduce a training-free method for feature field rendering in Gaussian splatting. Our approach back-projects 2D features into pre-trained 3D Gaussians, using a weighted sum based on each Gaussian's influence in the final rendering. While most training-based feature field rendering methods excel at 2D segmentation but perform poorly at 3D segmentation without post-processing, our method achieves high-quality results in both 2D and 3D segmentation. Experimental results demonstrate that our approach is fast, scalable, and offers performance comparable to training-based methods.
</div></div>
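The back-projection this abstract describes is essentially an influence-weighted average of 2D pixel features over the 3D Gaussians that produced them, with no training loop. A minimal sketch under that reading, assuming the rasterizer can report per-sample Gaussian indices and influence weights (the `gaussian_ids` and `influence` inputs are hypothetical stand-ins; the paper derives the weights from gradients of the rendering):

```python
# Sketch of training-free feature back-projection: accumulate each pixel's
# 2D feature onto the Gaussians that rendered it, weighted by influence.
import torch

def back_project(features_2d: torch.Tensor,   # (P, C) feature per pixel sample
                 gaussian_ids: torch.Tensor,  # (P,)   LongTensor of Gaussian indices
                 influence: torch.Tensor,     # (P,)   weight of that Gaussian in the pixel
                 num_gaussians: int) -> torch.Tensor:
    C = features_2d.shape[1]
    feat_sum = torch.zeros(num_gaussians, C)
    w_sum = torch.zeros(num_gaussians)
    # Scatter-add weighted features and weights onto their Gaussians.
    feat_sum.index_add_(0, gaussian_ids, influence[:, None] * features_2d)
    w_sum.index_add_(0, gaussian_ids, influence)
    # Influence-weighted mean; clamp avoids division by zero for unhit Gaussians.
    return feat_sum / w_sum.clamp_min(1e-8)[:, None]
```

Because the result lives directly on the 3D Gaussians, it supports 3D segmentation without the post-processing that distillation-based feature fields typically need, which matches the abstract's claim.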
</div>
</div>
</div>
<div class="paper-row" data-id="fang2024minisplatting2" data-title="Mini-Splatting2: Building 360 Scenes within Minutes via Aggressive Gaussian Densification" data-authors="Guangchi Fang, Bing Wang" data-year="2024" data-tags='["Acceleration", "Code", "Densification"]'>