Support for NeRFs and Gaussian splatting #4529
Comments
Here's a fun web-based real-time renderer for Gaussian splats in case you haven't seen it: https://github.com/antimatter15/splat
I have a branch
This looks like a pretty good Gaussian splat impl, better than the ones floating around on Slack and Discord so far.
I was looking into visualizing 3D Gaussian Splatting reconstructions with rerun.io and discovered this issue. My suggestion would be to use a GPU radix sort if WGPU is available, and otherwise fall back to a bitonic sorter on the CPU (like poly.cam or this implementation, GaussianSplats3D).
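As a rough illustration of the CPU-fallback half of that suggestion, here is a minimal back-to-front depth sort in Rust. It uses a plain comparison sort rather than a bitonic sorter, and the `Splat` struct and camera position are assumptions made purely for the example:

```rust
/// Hypothetical splat record: only the center position matters for sorting.
struct Splat {
    center: [f32; 3],
    // ... covariance, color, opacity, etc.
}

/// Sort splat indices back-to-front by squared distance to the camera,
/// so that alpha blending composites far splats first.
fn sort_back_to_front(splats: &[Splat], camera_pos: [f32; 3]) -> Vec<u32> {
    let mut order: Vec<u32> = (0..splats.len() as u32).collect();
    order.sort_by(|&a, &b| {
        let dist_sq = |i: u32| {
            let c = splats[i as usize].center;
            let dx = c[0] - camera_pos[0];
            let dy = c[1] - camera_pos[1];
            let dz = c[2] - camera_pos[2];
            dx * dx + dy * dy + dz * dz
        };
        // Far-to-near: larger distance sorts first.
        dist_sq(b)
            .partial_cmp(&dist_sq(a))
            .unwrap_or(std::cmp::Ordering::Equal)
    });
    order
}
```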
Thank you @KeKsBoTer, I'll get back to you on that :)
I created a separate crate for our radix sort implementation: https://crates.io/crates/wgpu_sort.
nice!! love it. So good to have this as a separate, well documented and even benchmarked library!
@KeKsBoTer Getting a bit off-topic of the original ticket here, but the bit about subgroup handling in the sorting algorithm gives me pause. I was scrolling a little through the shader code to understand the exact subgroup-size dependency. As I understand it, https://github.com/KeKsBoTer/wgpu_sort/blob/master/src/radix_sort.wgsl#L267 assumes that any atomic write by a subgroup member is immediately visible to any other subgroup member upon atomic load?
@Wumpf Yes, you are correct. I mention this in the Limitations section of the package README. As long as wgpu has no subgroup size control (which it will hopefully have soon), it can potentially break. To work around this in the meantime, you can simply set the subgroup size to 1 when compiling the shader.
The sorting is roughly 3× slower, but still more than fast enough to sort Gaussian Splatting scenes, which typically have around 1 to 5 million points. I hope this answers your questions / concerns.
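For illustration, a minimal sketch of that workaround, assuming the subgroup size is exposed as a WGSL constant that can be patched before compilation; the constant name `SUBGROUP_SIZE` and its default value are placeholders for the example, not the actual declaration in wgpu_sort:

```rust
/// Force a conservative subgroup size of 1 by patching the shader source
/// before creating the module. The constant name below is a placeholder;
/// the real shader may declare it differently.
fn create_sort_shader(device: &wgpu::Device, wgsl_src: &str) -> wgpu::ShaderModule {
    let patched = wgsl_src.replace(
        "const SUBGROUP_SIZE: u32 = 32u;",
        "const SUBGROUP_SIZE: u32 = 1u;",
    );
    device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("radix_sort (subgroup size forced to 1)"),
        source: wgpu::ShaderSource::Wgsl(patched.into()),
    })
}
```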
thanks for clearing this up! also nice benchmarks there again, super cool that you can test it that quickly as well :)
Neural radiance fields (NeRFs) and Gaussian splatting have recently received a lot of attention. These are 3D representations that can be optimized from posed image collections via differentiable rendering, yielding near-photorealistic results.
There is a large number of follow-up works that adopt the main idea of the original papers but modify the network architecture, sampling procedure, or exact rendering equation, relax the assumption of posed images, etc.
This makes it more difficult to support these directions out-of-the-box.
Describe a solution you'd like
In my opinion, the best (and maybe only) way to add support is via plugins that allow custom datatypes (e.g., logging the 3D Gaussians, network weights, or whatever the underlying representation is) and custom rendering (e.g., given the camera parameters for the 3D view and the logged data, let the plugin render an RGB image + Z buffer, which are then combined with supported primitives on the Rerun side).
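To make the RGB + Z-buffer contract concrete, here is one possible shape such a plugin interface could take; every name here (`RenderPlugin`, `CameraParams`, `PluginFrame`) is hypothetical and only meant to illustrate the idea, not an existing Rerun API:

```rust
/// Hypothetical camera description handed to a plugin for each 3D view frame.
pub struct CameraParams {
    pub view_from_world: [[f32; 4]; 4],
    pub projection: [[f32; 4]; 4],
    pub resolution: [u32; 2],
}

/// What a plugin hands back: an RGBA image plus a depth buffer, so the
/// viewer can composite it against its own primitives.
pub struct PluginFrame {
    pub rgba: Vec<u8>,   // resolution[0] * resolution[1] * 4 bytes
    pub depth: Vec<f32>, // resolution[0] * resolution[1] depth values
}

/// Hypothetical trait a NeRF / Gaussian-splatting plugin would implement.
pub trait RenderPlugin {
    /// Ingest whatever custom data was logged (gaussians, network weights, ...).
    fn on_data(&mut self, bytes: &[u8]);

    /// Render the logged representation for the given camera.
    fn render(&mut self, camera: &CameraParams) -> PluginFrame;
}
```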
NeRFs typically cannot be rendered in real time, so an adaptive rendering scheme should be implementable (e.g., the resolution could easily be increased when the camera does not move; I believe nerfstudio does this).
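A tiny sketch of that adaptive-resolution idea: drop the render scale while the camera is moving and refine it over a few frames once the view is static (all thresholds here are made up for illustration):

```rust
/// Pick a render-scale factor based on whether the camera moved since the
/// last frame: low resolution while interacting, then progressively refine
/// once the view is static. The numbers are illustrative only.
fn adaptive_render_scale(camera_moved: bool, frames_since_move: u32) -> f32 {
    if camera_moved {
        0.25 // coarse preview while the user is navigating
    } else {
        match frames_since_move {
            0..=5 => 0.5,
            6..=15 => 0.75,
            _ => 1.0, // full resolution once the view has been static a while
        }
    }
}
```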
Describe alternatives you've considered
An option might be to support whatever comes closest to a reference implementation for Gaussian splatting (e.g., this one). But I'm not convinced this is a viable solution at this point, while there is still a lot of research going on at the renderer level.