
Commit 724d042

Update generated HTML

Committed Jan 13, 2025 · 1 parent 93ce897 · commit 724d042

File tree

1 file changed: +40 -0 lines

‎index.html

@@ -2162,6 +2162,28 @@ <h2 class="paper-title">Gaussian Billboards: Expressive 2D Gaussian Splatting wi
 </div>
 </div>
 </div>
+<div class="paper-row" data-id="wu20243dgut" data-title="3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting" data-authors="Qi Wu, Janick Martinez Esturo, Ashkan Mirzaei, Nicolas Moenne-Loccoz, Zan Gojcic" data-year="2024" data-tags='["Perspective-correct", "Project", "Video"]'>
+<div class="paper-card">
+<input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'wu20243dgut', this)">
+<div class="paper-number"></div>
+<div class="paper-thumbnail">
+<img data-src="assets/thumbnails/wu20243dgut.jpg" data-fallback="None" alt="Paper thumbnail for 3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting" class="lazy" loading="lazy"/>
+</div>
+<div class="paper-content">
+<h2 class="paper-title">3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting <span class="paper-year">(2024)</span></h2>
+<p class="paper-authors">Qi Wu, Janick Martinez Esturo, Ashkan Mirzaei, Nicolas Moenne-Loccoz, Zan Gojcic</p>
+<div class="paper-tags"><span class="paper-tag">Perspective-correct</span>
+<span class="paper-tag">Project</span>
+<span class="paper-tag">Video</span></div>
+<div class="paper-links"><a href="https://arxiv.org/pdf/2412.12507.pdf" class="paper-link" target="_blank" rel="noopener">📄 Paper</a>
+<a href="https://research.nvidia.com/labs/toronto-ai/3DGUT/" class="paper-link" target="_blank" rel="noopener">🌐 Project</a>
+<a href="https://research.nvidia.com/labs/toronto-ai/3DGUT/res/3DGUT_ready_compressed.mp4" class="paper-link" target="_blank" rel="noopener">🎥 Video</a>
+<button class="abstract-toggle" onclick="toggleAbstract(this)">📖 Show Abstract</button>
+<div class="paper-abstract">3D Gaussian Splatting (3DGS) has shown great potential for efficient reconstruction and high-fidelity real-time rendering of complex scenes on consumer hardware. However, due to its rasterization-based formulation, 3DGS is constrained to ideal pinhole cameras and lacks support for secondary lighting effects. Recent methods address these limitations by tracing volumetric particles instead, however, this comes at the cost of significantly slower rendering speeds. In this work, we propose 3D Gaussian Unscented Transform (3DGUT), replacing the EWA splatting formulation in 3DGS with the Unscented Transform that approximates the particles through sigma points, which can be projected exactly under any nonlinear projection function. This modification enables trivial support of distorted cameras with time dependent effects such as rolling shutter, while retaining the efficiency of rasterization. Additionally, we align our rendering formulation with that of tracing-based methods, enabling secondary ray tracing required to represent phenomena such as reflections and refraction within the same 3D representation.
+</div></div>
+</div>
+</div>
+</div>
 <div class="paper-row" data-id="murai2024mast3rslam" data-title="MASt3R-SLAM: Real-Time Dense SLAM with 3D Reconstruction Priors" data-authors="Riku Murai, Eric Dexheimer, Andrew J. Davison" data-year="2024" data-tags='["3ster-based", "SLAM", "Video"]'>
 <div class="paper-card">
 <input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'murai2024mast3rslam', this)">
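
The 3DGUT entry added in the hunk above describes replacing the EWA splatting projection with an unscented transform: the 3D Gaussian is summarized by a small set of sigma points, each point is pushed through the (possibly distorted or rolling-shutter) camera model, and the projected points are recombined into a 2D mean and covariance, so no per-camera Jacobian linearization is needed. The sketch below is a generic, textbook unscented-transform projection in NumPy, not the paper's implementation; the parameter defaults (alpha, beta, kappa) and the pinhole camera are illustrative assumptions.

import numpy as np

def unscented_project(mu, Sigma, project, alpha=1.0, beta=2.0, kappa=0.0):
    # Propagate a 3D Gaussian (mean mu, covariance Sigma) through an arbitrary
    # nonlinear projection by transforming 2n+1 sigma points and re-estimating
    # the 2D mean and covariance from the projected points.
    n = mu.shape[0]                                 # n = 3 for a 3D Gaussian
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * Sigma)       # matrix square root of (n+lam)*Sigma

    # Sigma points: the mean plus symmetric offsets along the columns of L.
    pts = np.vstack([mu, mu + L.T, mu - L.T])       # shape (2n+1, 3)

    # Standard unscented-transform weights for the mean and the covariance.
    w_m = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Push every sigma point through the camera model; the model may include
    # lens distortion or a per-point rolling-shutter pose.
    img = np.array([project(p) for p in pts])       # shape (2n+1, 2)

    mu_2d = w_m @ img
    d = img - mu_2d
    Sigma_2d = d.T @ (w_c[:, None] * d)
    return mu_2d, Sigma_2d

# Illustrative camera model: an ideal pinhole. Any nonlinear mapping with the
# same signature (e.g. a fisheye model) works, which is the point of the method.
def pinhole(p, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    return np.array([fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy])

For a linear map the recombined covariance is exact (it reduces to A Sigma A^T); for a distorted camera the sigma points simply follow the nonlinearity, which is the property the abstract highlights.
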
@@ -2505,6 +2527,24 @@ <h2 class="paper-title">ReCap: Better Gaussian Relighting with Cross-Environment
 </div>
 </div>
 </div>
+<div class="paper-row" data-id="lyu2024resgs" data-title="ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery" data-authors="Yanzhe Lyu, Kai Cheng, Xin Kang, Xuejin Chen" data-year="2024" data-tags='["Densification"]'>
+<div class="paper-card">
+<input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'lyu2024resgs', this)">
+<div class="paper-number"></div>
+<div class="paper-thumbnail">
+<img data-src="assets/thumbnails/lyu2024resgs.jpg" data-fallback="None" alt="Paper thumbnail for ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery" class="lazy" loading="lazy"/>
+</div>
+<div class="paper-content">
+<h2 class="paper-title">ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery <span class="paper-year">(2024)</span></h2>
+<p class="paper-authors">Yanzhe Lyu, Kai Cheng, Xin Kang, Xuejin Chen</p>
+<div class="paper-tags"><span class="paper-tag">Densification</span></div>
+<div class="paper-links"><a href="https://arxiv.org/pdf/2412.07494.pdf" class="paper-link" target="_blank" rel="noopener">📄 Paper</a>
+<button class="abstract-toggle" onclick="toggleAbstract(this)">📖 Show Abstract</button>
+<div class="paper-abstract">Recently, 3D Gaussian Splatting (3D-GS) has prevailed in novel view synthesis, achieving high fidelity and efficiency. However, it often struggles to capture rich details and complete geometry. Our analysis highlights a key limitation of 3D-GS caused by the fixed threshold in densification, which balances geometry coverage against detail recovery as the threshold varies. To address this, we introduce a novel densification method, residual split, which adds a downscaled Gaussian as a residual. Our approach is capable of adaptively retrieving details and complementing missing geometry while enabling progressive refinement. To further support this method, we propose a pipeline named ResGS. Specifically, we integrate a Gaussian image pyramid for progressive supervision and implement a selection scheme that prioritizes the densification of coarse Gaussians over time. Extensive experiments demonstrate that our method achieves SOTA rendering quality. Consistent performance improvements can be achieved by applying our residual split on various 3D-GS variants, underscoring its versatility and potential for broader application in 3D-GS-based applications.
+</div></div>
+</div>
+</div>
+</div>
 <div class="paper-row" data-id="tang2024mvdust3r" data-title="MV-DUSt3R+: Single-Stage Scene Reconstruction from Sparse Views In 2 Seconds" data-authors="Zhenggang Tang, Yuchen Fan, Dilin Wang, Hongyu Xu, Rakesh Ranjan, Alexander Schwing, Zhicheng Yan" data-year="2024" data-tags='["3ster-based", "Code", "Project", "Sparse", "Video"]'>
 <div class="paper-card">
 <input type="checkbox" class="selection-checkbox" onclick="handleCheckboxClick(event, 'tang2024mvdust3r', this)">
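
The ResGS entry added in the second hunk turns on a single densification primitive, the residual split: a Gaussian with a large densification signal is kept as-is and a downscaled copy is appended as a residual that can recover finer detail, rather than being replaced by smaller samples as in the vanilla 3D-GS split. The fragment below is only a hedged illustration of that idea; the parameter layout (dicts with a "scale" field), the gradient criterion, and the 2e-4 threshold are assumptions carried over from common 3D-GS practice, and the paper's Gaussian image pyramid and coarse-Gaussian selection scheme are not shown.

import numpy as np

def residual_split(gaussians, grads, grad_threshold=2e-4, scale_factor=0.5):
    # Hedged sketch: every Gaussian is kept; those whose accumulated
    # view-space gradient exceeds the threshold also get a downscaled
    # copy appended as a "residual" Gaussian at the same location.
    densified = []
    for g, grad in zip(gaussians, grads):
        densified.append(g)
        if grad > grad_threshold:
            residual = dict(g)                           # copy position, rotation, opacity, ...
            residual["scale"] = np.asarray(g["scale"]) * scale_factor  # the downscaled residual
            densified.append(residual)
    return densified

# Toy example: only the high-gradient Gaussian gains a residual copy.
gs = [{"xyz": np.zeros(3), "scale": np.array([0.1, 0.1, 0.1]), "opacity": 0.8},
      {"xyz": np.ones(3),  "scale": np.array([0.3, 0.3, 0.3]), "opacity": 0.6}]
print(len(residual_split(gs, grads=[1e-5, 5e-4])))       # -> 3
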
