Add support for particle recording and playback #7085
Here are some ideas for implementation:

1st: Implement particle recording and playback in RenderingServer

The first thing to note is that we have to store the particle capture in a format that is efficient to compress to disk and efficient to read into memory. If we use raw floats, the amount of particle information we can capture will be severely limited and memory usage will be enormous. So, a small FAQ:

Q: Do we need to capture just the particle transform or the whole particle state?
Q: Particles run independent of framerate in Godot; do we need to capture them at a fixed FPS?
Q: How will these be saved to disk?

Proposed capture format:

For memory (replay): the idea here is to save normalized values in a range of 0-1 and encode each channel to 16 bits for the position, scale, velocity, and color of each particle lifetime track. Format in detail: RGBA16UI.

(Note: for 2D particles, scale can be just 32:32 uncompressed and velocity can be just 16:16.) Add to this the normalization ranges (min/max of particle position, scale, velocity, and color); this is an RGBA32F of size 2 × [number of particles]. This means that a particle recording is these two textures. So, without userdata, a particle snapshot in memory takes up: 8+8+8+8+16 = 48 bytes.

One doubt that may still remain is how many particles to capture (the Y height of the texture). We know the total particles that are spawned in a lifetime, so we can see how many lifetimes fit in the time requested for capture and roughly estimate the amount of particles (and allocate a bit more just in case).

For saving to disk, conversion to bitwidth delta compression can be used, which results in very large savings (almost 10x). This code already exists in the animation compressor, so it just needs to be copied over. This means that a 15 MB capture will be only 1.5 MB.

APIs to be added:

// Begin recording particles, returns an RID. This RID is a texture that contains recording information: on the Y axis the particles, on the X axis the time. The format is RGBA32F.
RID RenderingServer::particles_recording_create();
void RenderingServer::particles_recording_set_data(RID p_particles_recording, const Vector<uint8_t>& p_compressed_buffer);
Vector<uint8_t> RenderingServer::particles_recording_get_data(RID p_particles_recording) const;
void RenderingServer::particles_recording_begin(RID p_particles_recording,RID p_particles_instance,double p_length_sec,int p_hz);
void RenderingServer::particles_recording_end(RID p_particles);
////
void RenderingServer::instance_play_particles_recording(RID p_particles_instance, RID p_particles_recording,float p_time_scale);
void RenderingServer::instance_pause_particles_recording(RID p_particles_instance, bool p_paused);
void RenderingServer::instance_seek_particles_recording(RID p_particles_instance, double p_time);
void RenderingServer::instance_stop_particles_recording(RID p_particles_instance);
// (Note: a similar API will have to be implemented for 2D particles)
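The normalized 16-bit channel encoding described above can be sketched on the CPU. This is a minimal, hypothetical illustration of the idea (the names `encode_channel`/`decode_channel` are not part of the proposal or the engine); each channel is remapped into the recording's min/max range and stored as an unsigned 16-bit integer, as it would be in an RGBA16UI texture:

```cpp
#include <algorithm>
#include <cstdint>

// Quantize a float channel into 16 bits using the track's min/max
// normalization range (stored separately in the RGBA32F ranges texture).
inline uint16_t encode_channel(float p_value, float p_min, float p_max) {
    if (p_max <= p_min) {
        return 0; // Degenerate range; every sample is identical.
    }
    float normalized = (p_value - p_min) / (p_max - p_min); // Map to 0..1.
    normalized = std::clamp(normalized, 0.0f, 1.0f);
    return (uint16_t)(normalized * 65535.0f + 0.5f); // Round to 16 bits.
}

// Inverse mapping used at playback time.
inline float decode_channel(uint16_t p_encoded, float p_min, float p_max) {
    float normalized = (float)p_encoded / 65535.0f;
    return p_min + normalized * (p_max - p_min);
}
```

The round trip loses at most half a quantization step, which is why the per-track min/max ranges matter: the tighter the range, the better the effective precision.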
The capture process should be more or less straightforward: capture happens uncompressed to an RGBA32F texture (we know the length in seconds and the hz, so the texture can be preallocated). Once finished (particles_recording_end), what was recorded gets compressed using a compute shader that packs it into the normalized 16-bit format described above.
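The bitwidth delta compression mentioned for disk storage can also be illustrated. This is only a sketch of the general idea behind the animation compressor's technique, not the engine's actual code, and the `DeltaTrack`/`delta_compress` names are hypothetical: successive quantized samples are stored as deltas, and the whole track only needs enough bits per delta to hold the largest one, which is where the large on-disk savings come from.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Hypothetical delta-compressed track for one 16-bit channel.
struct DeltaTrack {
    uint16_t first = 0;          // First sample stored verbatim.
    uint8_t bits_per_delta = 0;  // Bit width needed to store any delta.
    std::vector<int32_t> deltas; // Signed deltas between adjacent samples.
};

inline DeltaTrack delta_compress(const std::vector<uint16_t> &p_samples) {
    DeltaTrack track;
    if (p_samples.empty()) {
        return track;
    }
    track.first = p_samples[0];
    int32_t max_abs = 0;
    for (size_t i = 1; i < p_samples.size(); i++) {
        int32_t delta = (int32_t)p_samples[i] - (int32_t)p_samples[i - 1];
        track.deltas.push_back(delta);
        max_abs = std::max(max_abs, std::abs(delta));
    }
    // Smallest signed width that holds the largest magnitude plus a sign bit.
    uint8_t bits = 1;
    while ((1 << (bits - 1)) <= max_abs) {
        bits++;
    }
    track.bits_per_delta = bits;
    return track;
}
```

Since particle state changes slowly between adjacent samples, deltas are usually tiny, so tracks that would take 16 bits per sample raw often fit in a few bits per sample after this step.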
Bitwidth compression/decompression can happen when getting/setting data from RenderingDevice. Initially this can be left unimplemented for a first PR, as long as the format contemplates it being enabled later on.

2nd: Recording resource
3rd: Implement particle recording in Animation

The Animation resource will need an extra type of track: Particles. During playback, if no recording exists, the track will start and stop the particle emission. If a recording exists, the track uses the recording. Seeking will not work on this track until there is a recording, in which case the recording will be seeked.

4th: Editor

The editor will allow you to create particle tracks and insert keyframes (which define a playback range during which the particle system is active). As mentioned before, when baking, if not already the case, the editor will ask the user to save the AnimationLibrary to a separate file, similar to when baking lightmaps.
I can't comment on the technical implementation part, but I want to comment on the final workflow goals.
Describe the project you are working on
The Godot Engine and VFX
Describe the problem or limitation you are having in your project
The art workflow is very limiting (e.g. particles just go off when testing animations). We need to be able to pause particles together with animations to ensure they're properly synced.
To develop advanced particle effects with multiple Particles nodes, you need to be able to scrub through the particle effect, going backward and forward and pausing as necessary, for all particles at the same time.
Running CPU particles is often needed for performance reasons and to target low-end devices, but falling back to CPUParticles is only possible when using the built-in ParticlesProcessMaterial, which severely limits art workflows.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
We will create a new Resource Type called "BakedParticlesData" which will contain prerecorded data for a GPUParticles node.
The GPUParticles nodes will have an extra property for the BakedParticlesData. When the BakedParticlesData is set, instead of running the ProcessMaterial, the particles will update from the baked data. This will allow the GPUParticles to trivially update on either the GPU or the CPU.
Using BakedParticlesData also allows the artist to trivially scroll through the particle effect as they work on the design of the particle system.
The editor needs to contain a button to bake the particle effect into a BakedParticlesData resource. This would work the same as baking a VoxelGI or LightmapGI node. To support this button, a new rendering server function will need to be added that allows running and recording a particle system with specified frames and frame timing.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
The BakedParticlesData resource will contain 3 arrays: 1) transform, 2) color, 3) custom. This data will be expressed in terms of the particles' lifetime, with positions expressed as offsets from the initial position (i.e. deltas). This will allow compatibility with local/global space and offer flexibility in the initial spawn position (there are many cases where you want to keep the same particle motion but change the emission size; separating color in the baking could be considered as well).
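The offset-from-spawn idea above can be shown with a tiny sketch (the `Vec3` alias and `reconstruct_position` helper are hypothetical, purely for illustration): because the bake stores only the delta from the spawn point, the same baked motion can be replayed from any spawn position, in local or global space.

```cpp
#include <array>

// Minimal 3-component vector for illustration only.
using Vec3 = std::array<float, 3>;

// A particle's position at playback is its spawn position plus the
// baked per-frame offset; changing the emission shape only changes
// the spawn position, not the baked data.
inline Vec3 reconstruct_position(const Vec3 &p_spawn, const Vec3 &p_baked_delta) {
    return { p_spawn[0] + p_baked_delta[0],
             p_spawn[1] + p_baked_delta[1],
             p_spawn[2] + p_baked_delta[2] };
}
```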
The GPUParticles will have new added properties:
- baked_particle_data
- process_time (for scrolling through the effect)
- playback_mode: (simulate, playback)
- process_time_mode: (automatic, manual). This will allow particles to be scrubbed back and forth, or to just advance their process time automatically when emitting is set to on. This will be useful when creating editor tools to work with the node.
- target_mode: (GPU, CPU)
When using the CPU backend, GPUParticles will read from those arrays and interpolate between them based on the current process time. Internally, a MultiMesh will be updated with the relevant transform, color, and custom values.
When using the GPU backend, the arrays will be uploaded as storage buffers and the particle data can be interpolated and set entirely on the GPU which will be much faster on modern devices.
The interpolation can happen within the normal particle copy step (usually this is only done if we need to adjust the space of the particle, or sort the particles) to avoid any extra compute shader passes.
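The CPU-side playback described above can be sketched for a single channel. This is a hypothetical helper under assumed conditions (baked frames sampled at a fixed rate, here `p_frames_per_sec`), not part of the proposal's API: the two frames bracketing the current process time are linearly interpolated, and the result would feed the MultiMesh (or, on the GPU backend, the same math would run in the copy step).

```cpp
#include <cstddef>
#include <vector>

// Sample one baked scalar channel at an arbitrary process time by
// linearly interpolating the two nearest fixed-rate frames.
inline float sample_baked_channel(const std::vector<float> &p_frames,
        float p_process_time, float p_frames_per_sec) {
    if (p_frames.empty()) {
        return 0.0f;
    }
    float frame_pos = p_process_time * p_frames_per_sec;
    if (frame_pos <= 0.0f) {
        return p_frames.front(); // Clamp before the first frame.
    }
    size_t i0 = (size_t)frame_pos;
    if (i0 + 1 >= p_frames.size()) {
        return p_frames.back(); // Clamp past the last frame.
    }
    float t = frame_pos - (float)i0; // Fractional position between frames.
    return p_frames[i0] * (1.0f - t) + p_frames[i0 + 1] * t;
}
```

Scrubbing backward is free with this scheme, since any process time maps directly to a pair of frames with no dependence on previous simulation state.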
Editor tooling:
Particles will be represented in the animation player in a similar way to audio, marking each loop of the emitter.

1. When scrubbing the timeline in the animation player, particles will play baked data. If no data has been baked, the particles will bake on the fly and keep the baked data until parameters that affect baked properties are changed.
2. It should be possible to enable particles in one animation and have them continue to emit even after the animation is over.
3. Looping in the animation player should not cause the particles to be cut off in their playback. If the animation finishes at half of the loop of the emitter, the animation looping should not cause the particle playback time to snap back. However ..
4. It should be possible to keyframe process time on its own. This should auto-set the particle system to playback + manual time mode.
5. None of the editor's internal functioning should compromise the saved scene. If I am scrubbing the timeline and I hit save while my particles are in simulate mode, they should not be set to manual just because I was scrubbing the timeline at save time.
Considerations on scene interaction
GPUParticles have the ability to collide with objects, spawn sub-emitters, and interact with attractors. Collisions and sub-emitters should be left out of the scope of this initial work. Attractors will not be supported at any point. Alternatively, collision and attraction can be baked in, but there will be no
For collision, only destroy-on-collision or stop-on-collision should be supported. Baking should support baking both with and without collisions.
If this enhancement will not be used often, can it be worked around with a few lines of script?
This cannot be worked around with a script. However, the full CPU-only implementation could be prototyped as an extension/addon.
Is there a reason why this should be core and not an add-on in the asset library?
This proposal requires modifications to the GPUParticles node and internal particle processing, so it can't be an add-on. It could be prototyped as an add-on, however.