Improvements from TheForge #99257

Conversation


@darksylinc darksylinc commented Nov 14, 2024

The work was performed as a collaboration between TheForge and Google. I am merely splitting it up into smaller PRs and cleaning it up.

This is the most "risky" PR so far because the previous ones have been miscellaneous work aimed at either improving debugging (#90993, e.g. device lost), improving the Android experience (#96439: Swappy for better frame pacing + pre-transformed swapchains for slightly better performance), or harmless ASTC improvements (#96045: better performance by simply toggling a feature when available).

However, this PR contains larger modifications aimed at improving performance or reducing memory fragmentation. With greater modifications come greater risks of bugs or breakage.

Changes introduced by this PR:

Transient memory

TBDR GPUs (e.g. most Android GPUs + iOS + Apple M1) support rendering to Render Targets that are not backed by actual GPU memory (everything stays in cache). This works as long as the load action isn't LOAD and the store action is DONT_CARE. This saves VRAM (it also makes it painfully obvious when a mistake introduces a performance regression). It is particularly useful when doing MSAA and keeping the raw MSAA content is not necessary.
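As a rough illustration of what "not backed by actual GPU memory" means at the API level, here is a minimal plain-Vulkan sketch (not Godot's actual wrapper code; the resolution and format are placeholders):

```cpp
// Sketch only: a transient MSAA color attachment on a TBDR GPU.
// On such GPUs the driver may never allocate real VRAM for this image.
VkImageCreateInfo image_info = {};
image_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
image_info.imageType = VK_IMAGE_TYPE_2D;
image_info.format = VK_FORMAT_R8G8B8A8_UNORM;
image_info.extent = { 1920, 1080, 1 };
image_info.mipLevels = 1;
image_info.arrayLayers = 1;
image_info.samples = VK_SAMPLE_COUNT_4_BIT;
image_info.tiling = VK_IMAGE_TILING_OPTIMAL;
// TRANSIENT_ATTACHMENT tells the driver the contents can live entirely in tile memory.
image_info.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT;

// The render pass must then never LOAD and never store:
VkAttachmentDescription attachment = {};
attachment.format = image_info.format;
attachment.samples = image_info.samples;
attachment.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;       // anything but LOAD
attachment.storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE; // raw MSAA data is discarded
// Back the image with VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT memory when available.
```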

Immutable samplers

Some GPUs get faster when the sampler settings are hard-coded into the GLSL shaders (instead of being dynamically bound at runtime). This required changes to the GLSL shaders, PSO creation routines, Descriptor creation routines, and Descriptor binding routines.
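For reference, a minimal plain-Vulkan sketch of the concept (not the PR's actual code); `sampler` here is assumed to be created once at startup and never modified:

```cpp
// Sketch only: baking a sampler into the descriptor set layout.
// The sampler is fixed at layout/PSO creation instead of being bound at runtime.
VkSampler sampler; // assumed: created once, never changed afterwards
VkDescriptorSetLayoutBinding binding = {};
binding.binding = 0;
binding.descriptorType = VK_DESCRIPTOR_TYPE_SAMPLER;
binding.descriptorCount = 1;
binding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
binding.pImmutableSamplers = &sampler; // this is what makes it "immutable"

VkDescriptorSetLayoutCreateInfo layout_info = {};
layout_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layout_info.bindingCount = 1;
layout_info.pBindings = &binding;
```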

Toggle

  • `bool immutable_samplers_enabled = true`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions.

Immutable samplers require that the samplers stay... immutable, hence this boolean is useful if that promise gets broken. We might want to turn this into a `GLOBAL_DEF` setting.

Linear Descriptor Pools

Instead of creating dozens/hundreds/thousands of `VkDescriptorSet` every frame that need to be freed individually when they are no longer needed, they all get freed at once by resetting the whole pool. Once the whole pool is no longer in use by the GPU, it gets reset and its memory recycled. Descriptor sets that are meant to be kept around for longer or forever (i.e. not created and freed within the same frame) must not use linear pools. There may be more than one pool per frame. How many pools per frame Godot ends up with depends on each pool's capacity, which is controlled by `rendering/rendering_device/vulkan/max_descriptors_per_pool`.
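At the Vulkan level, the linear-pool idea boils down to the following sketch (names like `frame_pool` are placeholders, not Godot's actual members):

```cpp
// Sketch only: one descriptor pool per frame in flight.
// Once the GPU has finished the frame that used this pool, recycle everything at once.
VkDescriptorPool pool = frame_pool[frame_index];
vkResetDescriptorPool(device, pool, 0); // frees every set allocated from it in one call

// During the frame, allocate freely; there are no matching vkFreeDescriptorSets calls.
VkDescriptorSetAllocateInfo alloc_info = {};
alloc_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
alloc_info.descriptorPool = pool;
alloc_info.descriptorSetCount = 1;
alloc_info.pSetLayouts = &set_layout;
VkDescriptorSet set = VK_NULL_HANDLE;
vkAllocateDescriptorSets(device, &alloc_info, &set);
```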

  • Possible improvement for later: It should be possible for Godot to adapt how many descriptors per pool are needed on a per-key basis (i.e. grow their capacity like `std::vector` does) after rendering a few frames, which would be better than the current solution of having a single global value for all pools (`max_descriptors_per_pool`) that the user needs to tweak (see the sketch below).
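A minimal sketch of what such an adaptive heuristic could look like (entirely hypothetical; nothing like this exists in the PR):

```cpp
// Hypothetical std::vector-style growth of a pool key's capacity,
// driven by the peak number of descriptors actually used in recent frames.
uint32_t adapt_pool_capacity(uint32_t current_capacity, uint32_t peak_used) {
	if (peak_used >= current_capacity) {
		return current_capacity * 2; // grow geometrically when we run out
	}
	return current_capacity; // otherwise keep the learned size
}
```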

Toggle

  • `bool linear_descriptor_pools_enabled = true`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions. Setting it to false is also required when working around driver bugs (e.g. Adreno 730).

Reset Command Pools

A ridiculous optimization. Ridiculous because the original code should've done this in the first place. Previously, Godot was doing the following:

  1. Create a command buffer pool. One per frame.
  2. Create multiple command buffers from the pool in point 1.
  3. Call `vkBeginCommandBuffer` on the cmd buffer from point 2. This resets the cmd buffer because Godot requests the `VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT` flag.
  4. Add commands to the cmd buffers from point 2.
  5. Submit those commands.
  6. On frame N + 2, recycle the buffer pool and cmd buffers from points 1 & 2, and repeat from step 3.

The problem here is that step 3 resets each command buffer individually. Initially, Godot used to have one cmd buffer per pool, so the impact was very low.

But that's no longer the case (especially with Adreno workarounds that force splitting compute dispatches into a new cmd buffer; more on this later). Even so, Godot keeps a fairly low number of command buffers per frame.

The recommended method is to reset the whole pool, which resets all cmd buffers at once. Hence the new steps become:

  1. Create a command buffer pool. One per frame.
  2. Create multiple command buffers from the pool in point 1.
  3. Call `vkBeginCommandBuffer` on the cmd buffer from point 2, which is already reset/empty (see step 6).
  4. Add commands to the cmd buffers from point 2.
  5. Submit those commands.
  6. On frame N + 2, recycle the buffer pool and cmd buffers from points 1 & 2, call `vkResetCommandPool`, and repeat from step 3.
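A minimal plain-Vulkan sketch of the new steps 6 and 3 (the `frame` struct is a placeholder): the pool is created without the per-buffer reset flag and recycled with a single call:

```cpp
// Sketch only: the pool is created WITHOUT VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT,
// so buffers cannot (and need not) be reset individually.
vkResetCommandPool(device, frame.command_pool, 0); // resets every cmd buffer at once

VkCommandBufferBeginInfo begin_info = {};
begin_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
begin_info.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
for (VkCommandBuffer cmd : frame.command_buffers) {
	vkBeginCommandBuffer(cmd, &begin_info); // the buffer is already in the initial state
	// ... record commands ...
	vkEndCommandBuffer(cmd);
}
```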

Possible issues: @DarioSamo added transfer_worker which creates a command buffer pool:

```cpp
transfer_worker->command_pool = driver->command_pool_create(transfer_queue_family, RDD::COMMAND_BUFFER_TYPE_PRIMARY);
```

As expected, validation complained that command buffers were being reused without being reset (that's good; we now know the Validation Layers will warn us of wrong use). I fixed it by adding:

```cpp
void RenderingDevice::_wait_for_transfer_worker(TransferWorker *p_transfer_worker) {
	driver->fence_wait(p_transfer_worker->command_fence);
	driver->command_pool_reset(p_transfer_worker->command_pool); // ! New line !
```

Secondary cmd buffers are subject to the same issue, but I didn't alter them. I discussed this with Dario and he is aware of it. Secondary cmd buffers are currently disabled due to other issues (they're disabled on master).

Toggle

  • `bool RenderingDeviceCommons::command_pool_reset_enabled`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions.

There's no other reason for this boolean. Possibly once it becomes well tested, the boolean could be removed entirely.

Descriptor set batched binding

Adds `command_bind_render_uniform_sets` and `add_draw_list_bind_uniform_sets` (+ compute variants).

These perform the same work as `add_draw_list_bind_uniform_set` (note singular vs. plural), but on multiple consecutive uniform sets, thus reducing graph and draw-call overhead.
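As a hedged illustration of the difference (the exact Godot signatures may differ from this sketch):

```cpp
// Sketch only; signatures are approximations, not the PR's exact API.
// Before: one graph command per uniform set.
for (uint32_t i = 0; i < set_count; i++) {
	draw_graph.add_draw_list_bind_uniform_set(shader_driver_id, set_driver_ids[i], first_set + i);
}
// After: consecutive sets are bound with a single graph command.
draw_graph.add_draw_list_bind_uniform_sets(shader_driver_id, set_driver_ids, first_set, set_count);
```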

Toggle

  • `bool descriptor_set_batching = true;`

Setting it to false enforces the old behavior. Useful for debugging bugs and regressions.

There's no other reason for this boolean. Possibly once it becomes well tested, the boolean could be removed entirely.

Do not wait so long for swapchain

Godot currently does the following:

  1. Fill the entire cmd buffer with commands.
  2. `submit()`
    • Wait with a semaphore for the swapchain.
    • Trigger a semaphore to indicate when we're done (so the swapchain can submit).
  3. `present()`

The optimization opportunity here is that 95% of Godot's rendering is done offscreen. Then a fullscreen pass copies everything to the swapchain; Godot practically never renders directly to the swapchain.

The problem with this is that the GPU has to wait for the swapchain to be released to start anything, when we could start much earlier. Only the final blit pass must wait for the swapchain.

TheForge changed it to the following (more complicated, I'm simplifying the idea):

  1. Fill the entire cmd buffer with commands.
  2. In `screen_prepare_for_drawing` do `submit()`
    • There are no semaphore waits for the swapchain.
    • Trigger a semaphore to indicate when we're done.
  3. Fill a new cmd buffer that only does the final blit to the swapchain.
  4. `submit()`
    • Wait with a semaphore for the `submit()` from step 2.
    • Wait with a semaphore for the swapchain (so the swapchain can submit).
    • Trigger a semaphore to indicate when we're done (so the swapchain can submit).
  5. `present()`
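In plain Vulkan terms, the two submits look roughly like this sketch (heavily simplified; the semaphore and command buffer names are placeholders):

```cpp
// Sketch only. Submit 1: all offscreen rendering; it does NOT wait on the swapchain.
VkSubmitInfo offscreen = {};
offscreen.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
offscreen.commandBufferCount = 1;
offscreen.pCommandBuffers = &offscreen_cmd;
offscreen.signalSemaphoreCount = 1;
offscreen.pSignalSemaphores = &offscreen_done; // step 2's "we're done" semaphore
vkQueueSubmit(queue, 1, &offscreen, VK_NULL_HANDLE);

// Submit 2: only the final blit. It waits on BOTH the offscreen work and the
// acquired swapchain image, then signals the semaphore that present consumes.
VkSemaphore waits[] = { offscreen_done, image_acquired };
VkPipelineStageFlags stages[] = { VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
		VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT };
VkSubmitInfo blit = {};
blit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
blit.waitSemaphoreCount = 2;
blit.pWaitSemaphores = waits;
blit.pWaitDstStageMask = stages;
blit.commandBufferCount = 1;
blit.pCommandBuffers = &blit_cmd;
blit.signalSemaphoreCount = 1;
blit.pSignalSemaphores = &render_finished; // waited on by vkQueuePresentKHR
vkQueueSubmit(queue, 1, &blit, frame_fence);
```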

Dario discovered this problem independently while working on a different platform.

However, TheForge's solution had to be rewritten from scratch: its complexity was high and quite difficult to maintain with the way Godot works now (after the Übershaders PR). On the other hand, re-implementing it became much simpler because Dario already had to do something similar: to fix an Adreno 730 driver bug, he had to implement splitting command buffers. This is exactly what we need! Thus it was rewritten using this existing functionality for a new purpose.

To achieve this, I added a new argument, `bool p_split_cmd_buffer`, to `RenderingDeviceGraph::add_draw_list_begin`, which is only set to true by `RenderingDevice::draw_list_begin_for_screen`.

The graph will split the draw list into its own command buffer.

Toggle

  • `bool split_swapchain_into_its_own_cmd_buffer = true;`

Setting it to false enforces the old behavior. This might be necessary for consoles, which follow an alternate solution to the same problem. If not, we should consider removing it.

Free Shader memory

PR #90993 added `shader_destroy_modules()`, but it was not actually in use.

This PR adds several places where `shader_destroy_modules()` is called after initialization to free up the memory of SPIR-V structures that are no longer needed.
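For context, freeing shader modules once pipelines are built is a standard Vulkan pattern; a minimal sketch of the underlying idea (not Godot's actual `shader_destroy_modules()` implementation):

```cpp
// Sketch only: once all pipelines using these modules have been created,
// the VkShaderModules (and the SPIR-V blobs behind them) can be freed.
VkPipeline pipeline = VK_NULL_HANDLE;
vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &pipeline_info, nullptr, &pipeline);

// The pipeline holds its own compiled code; release the modules to save memory.
// (Valid only if no further pipelines will be created from them.)
vkDestroyShaderModule(device, vertex_module, nullptr);
vkDestroyShaderModule(device, fragment_module, nullptr);
```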

What's missing?

The following improvements from TheForge were left out from this PR:

  1. Render Pass Optimizations.
    • They were mostly good, but I need more time to analyze what they do and to verify that they actually help.
  2. Replace Push Constants by UBOs.
    • They were not included as they were a bit controversial, since point 3 was left out too (continue reading).
  3. UMA buffers. They were the most anticipated feature, but unfortunately they are full of race conditions. This is the feature that needs the most work. Without it, it's unclear whether replacing push constants with UBOs is a net win or a net loss.

CI failures

  • Deal with `DISABLE_DEPRECATED` removing `INITIAL_ACTION_CLEAR_REGION` & co.
  • Fix D3D12.
  • Fix Metal.

@darksylinc darksylinc force-pushed the matias-TheForge-pr04-excluded-ubo+render_opt branch 5 times, most recently from e6736cd to 646adc0 Compare November 15, 2024 22:00
@darksylinc darksylinc requested a review from a team as a code owner November 15, 2024 22:00
@darksylinc darksylinc force-pushed the matias-TheForge-pr04-excluded-ubo+render_opt branch from 646adc0 to 5752dae Compare November 15, 2024 22:07
@tetrapod00 tetrapod00 left a comment
Assuming correctness, the docs seem fine.

@@ -2861,6 +2861,9 @@
[b]Note:[/b] Some platforms may restrict the actual value.
</member>
<member name="rendering/rendering_device/vulkan/max_descriptors_per_pool" type="int" setter="" getter="" default="64">
The number of descriptors per pool. Godot's Vulkan backend uses linear pools for descriptors that will be created and destroyed within a single frame. Instead of destroying every single descriptor every frame, they all can be destroyed at once by resetting the pool they belong to.
Suggested change
The number of descriptors per pool. Godot's Vulkan backend uses linear pools for descriptors that will be created and destroyed within a single frame. Instead of destroying every single descriptor every frame, they all can be destroyed at once by resetting the pool they belong to.
The number of descriptors per pool. The Vulkan rendering driver uses linear pools for descriptors that will be created and destroyed within a single frame. Instead of destroying every single descriptor every frame, they all can be destroyed at once by resetting the pool they belong to.

Try to avoid ambiguous "backend", prefer explicit "renderer/rendering method" or "rendering driver", see #98744.

@darksylinc darksylinc force-pushed the matias-TheForge-pr04-excluded-ubo+render_opt branch 2 times, most recently from 526702f to d4524f0 Compare November 20, 2024 21:34
@darksylinc darksylinc marked this pull request as draft November 26, 2024 19:35
@darksylinc darksylinc force-pushed the matias-TheForge-pr04-excluded-ubo+render_opt branch from 278e569 to 78364b7 Compare November 26, 2024 21:39
@darksylinc darksylinc marked this pull request as ready for review November 26, 2024 21:40
@darksylinc
Contributor Author

OK, the bug was fixed.

As I suspected, I simply failed to apply the Reverse Depth correction to the immutable samplers. It's ready for review/merge again.

@KeyboardDanni
Contributor

Am I correct in assuming that this would fix the additional frame of lag caused by waiting on the swapchain before submitting commands? Even with a frame queue size of 1, it looks like the existing behavior essentially ends up waiting on two V-Syncs before the results end up on-screen:

[PIX on Windows timing capture]

By both creating and executing the commands before the upcoming V-blank, we'd end up saving a frame.

@KeyboardDanni
Contributor

So I tested this PR. The new swapchain wait behavior almost works. It looks like it's still waiting on the next V-Blank before it submits the first command list (though the second one goes off without a hitch):

[Two PIX on Windows timing captures]

@darksylinc
Contributor Author

darksylinc commented Nov 28, 2024

Am I correct in assuming that this would fix the additional frame of lag caused by waiting on the swapchain before submitting commands?

No. This PR eliminates pipeline bubbles between drawing and presenting, which improves performance.

Those measurements you're posting look like something to send to NVIDIA. The capture shows the GPU literally doing nothing and waiting on nothing, only to present two VBlanks later.

@KeyboardDanni
Contributor

No. This PR would eliminate pipeline bubbles between drawing and presenting, which results in improved performance thanks to the bubble removal.

Wouldn't this just move the bubble? The drawing and presenting were previously done in one command list. Now they're done in two, creating a bubble there, while removing the one between creating and submitting the command list. From your description:

Do not wait so long for swapchain

Godot currently does the following:

  1. Fill the entire cmd buffer with commands.

  2. submit()

    • Wait with a semaphore for the swapchain.
    • Trigger a semaphore to indicate when we're done (so the swapchain can submit).
  3. present()

The optimization opportunity here is that 95% of Godot's rendering is done offscreen. Then a fullscreen pass copies everything to the swapchain. Godot doesn't practically render directly to the swapchain.

The problem with this is that the GPU has to wait for the swapchain to be released to start anything, when we could start much earlier. Only the final blit pass must wait for the swapchain.

Which mirrors what I'm seeing in these PIX traces. It's waiting until the swapchain is available, which doesn't happen until V-Sync. Only after the V-Sync marker do you see the command list being executed. The proposed fix changes it to the following:

  1. Fill the entire cmd buffer with commands.

  2. In screen_prepare_for_drawing do submit()

    • There are no semaphore waits for the swapchain.
    • Trigger a semaphore to indicate when we're done.
  3. Fill a new cmd buffer that only does the final blit to the swapchain.

  4. submit()

    • Wait with a semaphore for the submit() from step 2.
    • Wait with a semaphore for the swapchain (so the swapchain can submit).
    • Trigger a semaphore to indicate when we're done (so the swapchain can submit).
  5. present()

In particular, the lack of wait on the swapchain in step 2.

Those measurements you're posting looks like something to send to NVIDIA. The capture you're posting shows the GPU literally doing nothing and waiting on nothing only to present two VBlanks later.

But it's waiting on the swapchain, no? Aren't we controlling the synchronization primitives? Isn't that the purpose of APIs like Vulkan and D3D12?

@KeyboardDanni
Contributor

@darksylinc Just to confirm, are we drawing to the same framebuffer every frame, and using that same framebuffer as the source when we copy it to the swapchain image? If so, it's likely that the framebuffer is locked waiting on the swapchain, meaning we still have to wait for an available swapchain image before we can run any commands. We might be able to fix this by having one framebuffer for each swapchain image, though if consistency of framebuffer contents between frames is important we'd have to add an extra copy, which might add some overhead.

@darksylinc
Contributor Author

Just to confirm, are we drawing to the same framebuffer every frame, and using that same framebuffer as the source when we copy it to the swapchain image?

Yes.

If so, it's likely that the framebuffer is locked waiting on the swapchain, meaning we still have to wait for an available swapchain image before we can run any commands.

No, because the swapchain was not acquired by the present engine during that time.
Rendering is roughly the following:

  1. Draw into RTT0.
  2. Wait until the Swapchain Semaphore is released by the present engine.
  3. Copy RTT0 into the Swapchain.
  4. The present engine must wait for the copy to release the execution semaphore so it can reacquire the Swapchain.
  5. We render the next frame to RTT0 (i.e. back to step 1 again). We only need to wait on the semaphore released in step 3, not the one acquired in step 4.

The copy from step 3 can take roughly 0.5 ms (give or take, depending on how fast the GPU is and the screen resolution).
The presentation itself can take anywhere from 0 to 33.33 ms to be released.

While the present engine is still holding on to the Swapchain, we can start rendering to RTT0. Hopefully, by the time we're done rendering to RTT0, the Swapchain will have already been released, causing no unnecessary bubbles.

If the swapchain hasn't been released yet, there will be a bubble. But it will be smaller than if we had to wait just to even start rendering to RTT0.

@KeyboardDanni
Contributor

No, because the swapchain was not acquired by the present engine during that time.

But if all the images in the swapchain are currently waiting on the next V-Sync interval, how do we hand off the framebuffer? We can't reuse it until we perform the copy, and we need something to copy to.

The copy from step 3 can roughly take 0.5ms (give or take depending how fast the GPU is and screen resolution). The presentation itself can take anywhere from 0 to 33.33ms to be released.

This is my main concern. In cases where the swapchain is full (which is going to be all the time if the GPU load is light), we cannot perform that copy until we acquire a new image. Because of this wait, we can't execute the command list for nearly a full V-Sync period. And because the command execution is delayed, we have to wait another V-Sync to get the final results on-screen instead of doing everything within the same V-Sync period.

If the swapchain hasn't yet been released, there will be a bubble. But it will be smaller than if we would have to wait just to even start rendering to RTT0.

I'd think the bubble would be between executing the command list and presenting. The fact that it isn't, and that command execution is still waiting on V-Sync (at least on my system), seems to imply that it's not quite working as expected.

@KeyboardDanni
Contributor

Thinking about this some more, it's quite possible that this may be solved with a waitable swapchain, since we don't even bother to check whether a swapchain image is available before we poll input and run game logic for the next frame. But we might get more throughput if we can ensure there's always an available framebuffer.

@darksylinc darksylinc force-pushed the matias-TheForge-pr04-excluded-ubo+render_opt branch from 78364b7 to c77cbf0 Compare December 9, 2024 14:50
@@ -4549,6 +4629,22 @@ void RenderingDevice::draw_list_draw_indirect(DrawListID p_list, bool p_use_indi
_check_transfer_worker_buffer(buffer);
}

void RenderingDevice::draw_list_set_viewport(DrawListID p_list, const Rect2 &p_rect) {
Member

This is nice to have, but it seems like it is unused (I think it was originally part of the render pass optimizations, right?). Let's leave it in at any rate, as I want to use it for an optimization I have in mind for PointLight2Ds.

Contributor Author

I looked into this, and you're right. This function was added by the Render Pass Optimizations, which were not included in this PR.

Anyway, another PR with it will be incoming soon.

@clayjohn clayjohn left a comment

Looks great! Let's get this merged ASAP so we get lots of testing in and don't risk another big conflict.

@arkology arkology left a comment

Stealing some of @AThousandShips job again 😄

Comment on lines +1386 to +1389
// Workaround a driver bug on Adreno 730 GPUs that keeps leaking memory on each call to vkResetDescriptorPool.
// Which eventually run out of memory. in such case we should not be using linear allocated pools
// Bug introduced in driver 512.597.0 and fixed in 512.671.0
// Confirmed by Qualcomm
Suggested change
// Workaround a driver bug on Adreno 730 GPUs that keeps leaking memory on each call to vkResetDescriptorPool.
// Which eventually run out of memory. in such case we should not be using linear allocated pools
// Bug introduced in driver 512.597.0 and fixed in 512.671.0
// Confirmed by Qualcomm
// Workaround a driver bug on Adreno 730 GPUs that keeps leaking memory on each call to vkResetDescriptorPool.
// Which eventually run out of memory. In such case we should not be using linear allocated pools.
// Bug introduced in driver 512.597.0 and fixed in 512.671.0.
// Confirmed by Qualcomm.

// VUID-VkImageCreateInfo-usage-00963 :
// If usage includes VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT,
// then bits other than VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT, VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT,
// and VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT must not be set
Suggested change
// and VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT must not be set
// and VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT must not be set.

@@ -3782,6 +3829,7 @@ Error RenderingDevice::screen_create(DisplayServer::WindowID p_screen) {
Error RenderingDevice::screen_prepare_for_drawing(DisplayServer::WindowID p_screen) {
_THREAD_SAFE_METHOD_

// After submitting work, acquire the swapchain image(s)
Suggested change
// After submitting work, acquire the swapchain image(s)
// After submitting work, acquire the swapchain image(s).

UniformSet *uniform_set = uniform_set_owner.get_or_null(dl->state.sets[i].uniform_set);
_uniform_set_update_shared(uniform_set);
if (!dl->state.sets[i].bound) {
// Batch contiguous descriptor sets in a single call
Suggested change
// Batch contiguous descriptor sets in a single call
// Batch contiguous descriptor sets in a single call.

if (descriptor_set_batching) {
// All good, see if this requires re-binding.
if (i - last_set_index > 1) {
// If the descriptor sets are not contiguous, bind the previous ones and start a new batch
Suggested change
// If the descriptor sets are not contiguous, bind the previous ones and start a new batch
// If the descriptor sets are not contiguous, bind the previous ones and start a new batch.

cl->state.sets[i].bound = true;
}
}

// Bind the remaining batch
Suggested change
// Bind the remaining batch
// Bind the remaining batch.

if (!cl->state.sets[i].bound) {
// All good, see if this requires re-binding.
draw_graph.add_compute_list_bind_uniform_set(cl->state.pipeline_shader_driver_id, cl->state.sets[i].uniform_set_driver_id, i);
if (i - last_set_index > 1) {
// If the descriptor sets are not contiguous, bind the previous ones and start a new batch
Suggested change
// If the descriptor sets are not contiguous, bind the previous ones and start a new batch
// If the descriptor sets are not contiguous, bind the previous ones and start a new batch.

valid_set_count = 1;
valid_descriptor_ids[0] = cl->state.sets[i].uniform_set_driver_id;
} else {
// Otherwise, keep storing in the current batch
Suggested change
// Otherwise, keep storing in the current batch
// Otherwise, keep storing in the current batch.

cl->state.sets[i].bound = true;
}
}

// Bind the remaining batch
Suggested change
// Bind the remaining batch
// Bind the remaining batch.

@@ -191,7 +191,12 @@ class RenderingDevice : public RenderingDeviceCommons {
Error _buffer_initialize(Buffer *p_buffer, const uint8_t *p_data, size_t p_data_size, uint32_t p_required_align = 32);

void update_perf_report();

// flag for batching descriptor sets
Suggested change
// flag for batching descriptor sets
// Flag for batching descriptor sets.

@AThousandShips
Member

Please don't tag me randomly though, it just creates a lot of unnecessary noise (also I had already made these corrections lol)

@@ -3798,7 +3798,9 @@ static void _add_descriptor_count_for_uniform(RenderingDevice::UniformType p_typ
}
}

RDD::UniformSetID RenderingDeviceDriverD3D12::uniform_set_create(VectorView<BoundUniform> p_uniforms, ShaderID p_shader, uint32_t p_set_index) {
RDD::UniformSetID RenderingDeviceDriverD3D12::uniform_set_create(VectorView<BoundUniform> p_uniforms, ShaderID p_shader, uint32_t p_set_index, int p_linear_pool_index) {
// p_linear_pool_index = -1; // TODO:? Linear pools not implemented or not supported by API backend.
Suggested change
// p_linear_pool_index = -1; // TODO:? Linear pools not implemented or not supported by API backend.
//p_linear_pool_index = -1; // TODO:? Linear pools not implemented or not supported by API backend.

Commented-out code should have no space.

@@ -2524,7 +2524,9 @@ void deserialize(BufReader &p_reader) {
/**** UNIFORM SET ****/
/*********************/

RDD::UniformSetID RenderingDeviceDriverMetal::uniform_set_create(VectorView<BoundUniform> p_uniforms, ShaderID p_shader, uint32_t p_set_index) {
RDD::UniformSetID RenderingDeviceDriverMetal::uniform_set_create(VectorView<BoundUniform> p_uniforms, ShaderID p_shader, uint32_t p_set_index, int p_linear_pool_index) {
// p_linear_pool_index = -1; // TODO:? Linear pools not implemented or not supported by API backend.
Suggested change
// p_linear_pool_index = -1; // TODO:? Linear pools not implemented or not supported by API backend.
//p_linear_pool_index = -1; // TODO:? Linear pools not implemented or not supported by API backend.

@@ -5728,6 +5871,15 @@ RenderingDeviceDriverVulkan::~RenderingDeviceDriverVulkan() {
}
vmaDestroyAllocator(allocator);

// Destroy linearly allocated descriptor pools
Suggested change
// Destroy linearly allocated descriptor pools
// Destroy linearly allocated descriptor pools.

@Repiteo Repiteo left a comment

Expediting the merge process on this one; style nitpicks can be handled in a follow-up PR.

@Repiteo Repiteo merged commit 66dea15 into godotengine:master Dec 10, 2024
20 checks passed
@Repiteo
Contributor

Repiteo commented Dec 10, 2024

Thanks!

darksylinc added a commit to darksylinc/godot that referenced this pull request Dec 12, 2024
Minor fixes for changes introduced in godotengine#99257 that could not be fixed in
time as the PR needed to be expedited.
tGautot pushed a commit to tGautot/godot that referenced this pull request Feb 5, 2025
Minor fixes for changes introduced in godotengine#99257 that could not be fixed in
time as the PR needed to be expedited.