[WIP] Hybrid PIC External Magnetic Field #5329

Closed

Conversation

@kli-jfp (Contributor) commented Sep 25, 2024

Feature Description: External Magnetic Field in Hybrid Solver

A static curl-free external magnetic field has been integrated into the Hybrid solver, analogous to the external fields applied to particles in the fully kinetic case.

Key Features:

  1. Analytical Function Parser for J and B: Parses analytical functions for both current density (J) and magnetic field (B). Issues a warning if the magnetic field is time-dependent (this is future work).
  2. File Input for J and B: Allows reading current density (J) and magnetic field (B) from a file. Read-from-file does not support time-dependent fields.

CI test (ohm_solver_guiding_field): This is a copy of the magnetic reconnection test, except that the guiding field has been removed from the initial magnetic field and applied as an external field instead.
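For context, here is a minimal sketch of how such an external guiding field might be specified on the Python (PICMI) side of the new test. The grid and solver values are placeholders, and the *_external_function keyword names are illustrative assumptions rather than the PR's confirmed interface:

from pywarpx import picmi

# Illustrative 2D grid, structured like the other Ohm-solver examples
grid = picmi.Cartesian2DGrid(
    number_of_cells=[64, 64],
    lower_bound=[-1.0, -1.0],
    upper_bound=[1.0, 1.0],
    lower_boundary_conditions=["periodic", "periodic"],
    upper_boundary_conditions=["periodic", "periodic"],
)

solver = picmi.HybridPICSolver(
    grid=grid,
    n0=1e19,                  # reference plasma density [m^-3]
    Te=10.0,                  # electron temperature [eV]
    plasma_resistivity=1e-7,  # [Ohm m]
    # hypothetical keywords for the static external field added in this PR
    Bx_external_function="0.",
    By_external_function="0.",
    Bz_external_function="0.1",  # uniform 0.1 T guiding field along z
)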

@kli-jfp kli-jfp assigned kli-jfp and unassigned kli-jfp Sep 25, 2024
@kli-jfp kli-jfp force-pushed the kli/hybrid_external_particle_field branch from 52ce954 to 08b79b3 on September 25, 2024 16:15
@kli-jfp (Contributor, Author) commented Sep 25, 2024

I have to fix the CI Ohm solver tests that have broken. If you have an idea of what is wrong, let me know. I also don't see the new CI guiding field test in the Azure pipeline.

@kli-jfp kli-jfp changed the title Hybrid PIC External Magnetic Field [WIP]Hybrid PIC External Magnetic Field Sep 25, 2024
@kli-jfp kli-jfp changed the title [WIP]Hybrid PIC External Magnetic Field [WIP] Hybrid PIC External Magnetic Field Sep 25, 2024
@EZoni (Member) commented Sep 25, 2024

I have not looked into why some CI tests are failing in this PR yet, but to answer this

I also don't see the new CI guiding field test in the azure pipeline.

I think you need to add the line

add_subdirectory(ohm_solver_guiding_field)

in https://github.com/ECP-WarpX/WarpX/blob/development/Examples/Tests/CMakeLists.txt, because Examples/Tests/ohm_solver_guiding_field is a new test directory that did not exist before.

We recently added this to the docs: https://warpx.readthedocs.io/en/latest/developers/testing.html#how-to-add-automated-tests.

This should also allow you to run the test locally through CTest, which might make it easier to debug.

@roelof-groenewald (Member) left a comment

Thanks for this PR! I've left a couple of comments / suggestions below.

I also want to make sure I understand the use case for which this is needed. My reading of the code changes is that it adds functionality for the user to specify an additional B-field which is included in both the Lorentz force applied to the particles and in the calculation of the E-field from Ohm's law, but not in the calculation of the total current from $\mu_0 J = \nabla\times B$. Am I missing anything in that?
If not, assuming the additionally applied B-field has zero curl, it wouldn't make any difference to apply it directly to the initial B-field, right? And if it has non-zero curl, the same result can be obtained by setting $J_{ext} = \nabla\times B_{ext} / \mu_0$, right? Considering only the affected term in Ohm's law, the above basically says:

$enE = (\nabla\times B_{sim}/\mu_0 - J_i)\times(B_{sim} + B_{ext}) = (\nabla\times(B_{sim} + B_{ext})/\mu_0 - J_i - J_{ext})\times(B_{sim} + B_{ext}) $

if $ \nabla\times B_{ext} / \mu_0 = J_{ext} $.

I think it would make more sense to instead create functionality that calculates J_ext given a B_ext and keeps the simulation B-field as the full B. Does that make sense? What do you think? In that case we would allow the user to either specify B_ext or J_ext (but not both), and if B_ext is specified the code would calculate J_ext during initialization. This would then make it so that we don't need to add the auxiliary B-field.
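To make the suggested initialization concrete, here is a rough numpy sketch of computing J_ext = curl(B_ext) / mu_0 from a gridded external field. It uses central differences on a collocated grid, whereas WarpX works on staggered (Yee) grids, so this only illustrates the idea rather than the actual implementation:

import numpy as np

mu0 = 4.0e-7 * np.pi

def external_current_from_b(Bx, By, Bz, dx, dy, dz):
    # J_ext = curl(B_ext) / mu0 via central differences on a collocated 3D grid
    # (array axes: 0 = x, 1 = y, 2 = z)
    dBz_dy = np.gradient(Bz, dy, axis=1)
    dBy_dz = np.gradient(By, dz, axis=2)
    dBx_dz = np.gradient(Bx, dz, axis=2)
    dBz_dx = np.gradient(Bz, dx, axis=0)
    dBy_dx = np.gradient(By, dx, axis=0)
    dBx_dy = np.gradient(Bx, dy, axis=1)
    Jx = (dBz_dy - dBy_dz) / mu0
    Jy = (dBx_dz - dBz_dx) / mu0
    Jz = (dBy_dx - dBx_dy) / mu0
    return Jx, Jy, Jz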

Btw, I'm hoping to implement finite electron mass terms in the not-too-distant future, in which case we would add a term $dJ_{ext}/dt$ which would easily capture inductive effects. Implementing that from a time-varying external B-field would be more trouble (I think).

Comment on lines +347 to +345
template <FieldType T>
void HybridPICModel::GetExternalField (
std::array< amrex::ParserExecutor<4>, 3> const& F_external,

This is a nice generalization! With it we can move all of this repeated functionality out of the hybrid-PIC class and instead replace WarpX::InitializeExternalFieldsOnGridUsingParser with this more general function and then just call it directly there. Would you mind making that change?

Comment on lines 574 to +563
Efield, current_fp_ampere, Jfield, current_fp_external,
Bfield, rhofield,
Bfield, bfield_fp_external, rhofield,

The idea with creating the m_fields container was that functions like this would now be able to just take m_fields as an argument and extract all the fields they need themselves. We don't have to make this change as part of this PR, but if you were so inclined you could switch this to just take m_fields and then grab all the needed fields where they are actually used.

@kli-jfp (Contributor, Author) commented Sep 26, 2024

I think you need to add the line

add_subdirectory(ohm_solver_guiding_field)

Thanks EZoni for the tip. I can see it now.

@kli-jfp (Contributor, Author) commented Sep 26, 2024

I also want to make sure I understand the use case for which this is needed. My reading of the code changes is that it adds functionality for the user to specify an additional B-field which is included in both the Lorentz force applied to the particles and in the calculation of the E-field from Ohm's law, but not in the calculation of the total current from mu_0 J = curl x B. Am I missing anything in that?

Yes, that's correct, roelof-groenewald! This feature assumes a static, curl-free B-field, which is specifically interesting for magnetically confined fusion where your coils are outside of the vacuum vessel. This allows you to only simulate the plasma region of the device with the PIC algorithm.

I'm concerned that including B_ext in the initial condition would not give the same results as explicitly setting it as a static field throughout the simulation. If I set up a high-beta plasma in a mirror, and then the plasma escapes, would the resulting total field be B_sim = B_ext? I haven't really understood this issue properly yet :) My current understanding is that if you put the external field in the initial conditions, then the external field can change, unless the coils and current are included to make sure that a static field is sustained throughout the simulation.

@kli-jfp (Contributor, Author) commented Sep 26, 2024

Btw, I'm hoping to implement finite electron mass terms in the not too distant future in which case we would add a term
... which would easily capture inductive effects. Implementing that from a time-varying external B-field would be more trouble (I think).

That's great! I think a time-dependent B_ext can be included if Faraday's law is changed to

$\partial B_{sim}/\partial t = -\nabla\times E - \partial B_{ext}/\partial t$

@clarkse (Member) commented Sep 26, 2024

I have a PR working toward time-dependent split fields in the Hybrid solver here: #5275. It is getting closer to prime time and I think a gridded version with an associated time function could be added fairly easily. Additionally, AMReX can now compute complete elliptic functions on GPU for doing analytical current loops and sheets, etc. We should sync up on how we want to proceed with adding this functionality.

@roelof-groenewald (Member) commented Sep 26, 2024

I'm concerned that including B_ext in the initial condition would not give the same results as explicitly setting it as a static field throughout the simulation. If I set a high beta plasma in a mirror, and then the plasma escapes, would the resulting total field be B_sim = B_ext?

Yes, only the part of the B-field that comes from plasma current will decay. That is to say that you can consider the system to have a total B-field $B = B_{plasma} + B_{external}$ where $B_{plasma}$ is the B-field due to the current given by $J_{plasma} = \nabla\times B / \mu_0 - J_{ext}$ and only $B_{plasma}$ will change during the simulation. This requires, though, that $J_{ext} = \nabla\times B_{external} / \mu_0$. If you consider the coils that generate the external field to be outside of your domain that is totally fine - you just have $J_{ext} = 0$, and that external field still would not change as the simulation advances (this is because in that case $\nabla\times B_{external} = 0$). This is why I suggest we change this PR to be able to calculate $J_{ext} = \nabla\times B_{external} / \mu_0$ given an external field, instead of applying the external field as a separate field - the two approaches are mathematically equivalent but, I think, the former is cleaner.
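Spelling out the identity behind this, under the assumption $\nabla\times B_{ext} = \mu_0 J_{ext}$:

$J_{plasma} = \nabla\times B / \mu_0 - J_{ext} = \nabla\times (B_{plasma} + B_{ext}) / \mu_0 - \nabla\times B_{ext} / \mu_0 = \nabla\times B_{plasma} / \mu_0$

so the externally applied part of the field drops out of the plasma current entirely, which is why only $B_{plasma}$ evolves during the simulation.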

@kli-jfp (Contributor, Author) commented Sep 27, 2024

Yes, only the part of the B-field that comes from plasma current will decay. That is to say that you can consider the system to have a total B-field $B = B_{plasma} + B_{external}$ where $B_{plasma}$ is the B-field due to the current given by $J_{plasma} = \nabla\times B / \mu_0 - J_{ext}$ and only $B_{plasma}$ will change during the simulation. This requires, though, that $J_{ext} = \nabla\times B_{external} / \mu_0$. If you consider the coils that generate the external field to be outside of your domain that is totally fine - you just have $J_{ext} = 0$, and that external field still would not change as the simulation advances (this is because in that case $\nabla\times B_{external} = 0$). This is why I suggest we change this PR to be able to calculate $J_{ext} = \nabla\times B_{external} / \mu_0$ given an external field, instead of applying the external field as a separate field - the two approaches are mathematically equivalent but, I think, the former is cleaner.

If this is true, then I 100% agree that B_ext should be in the initial conditions and J_ext should be computed as you said. I will see if I can verify this (if you have any reference, let me know, I'm curious). If a substantial increase in spatial and temporal resolution is required in order for B_ext to remain static throughout the simulation in that case, then perhaps it is still worth including a split-field implementation as a user-defined feature to decrease the computational cost.

I noticed that the fully kinetic case has a split-field implementation. Would the external particle fields be unnecessary there as well?

P.S. I tried implementing J_ext from the external field etc., but I failed to get a stable simulation going. A lot of numerical noise occurs around the coils. I will continue to look into this.

@kli-jfp (Contributor, Author) commented Sep 27, 2024

I have a PR working toward time-dependent split fields in the Hybrid solver here: #5275. It is getting closer to prime time and I think a gridded version with an associated time function could be added fairly easily. Additionally, AMReX can now compute complete elliptic functions on GPU for doing analytical current loops and sheets, etc. We should sync up on how we want to proceed with adding this functionality.

That's great! What are your thoughts on using split fields vs using the total field as initial conditions (and using J_ext)?

@kli-jfp kli-jfp force-pushed the kli/hybrid_external_particle_field branch from 1a233ab to 0fed0ea on September 27, 2024 13:59
@clarkse (Member) commented Sep 27, 2024

I have a PR working toward time-dependent split fields in the Hybrid solver here: #5275. It is getting closer to prime time and I think a gridded version with an associated time function could be added fairly easily. Additionally, AMReX can now compute complete elliptic functions on GPU for doing analytical current loops and sheets, etc. We should sync up on how we want to proceed with adding this functionality.

That's great! What are your thoughts on using split fields vs using the total field as initial conditions (and using J_ext)?

I kind of had to go the route of the split field for a time-varying external field. This is due to the improper propagation of the EM wave through the vacuum/low-density region in the hybrid code. There are ways to handle this via other methods described in Lipatov's monograph, e.g. a magnetoinductive model evolving A with appropriate vacuum operators to handle time-varying BCs, or potentially using a solver with electron inertia or displacement currents. With the split field and a slowly varying external field, the assumption of instantaneous propagation works out, and the plasma then responds to it via Ohm's law. I think for a static case the split field vs. non-split should be exactly the same. Since Ohm's law is only valid for the total E and B that the plasma sees, split vs. not for a static case should yield the exact same solution.

Where the split static external field may be useful is when you have a PEC boundary, but you have a B-field that is static and has soaked through the metal. That is, only the fast-varying plasma dynamics obey the PEC boundary, which includes effective induced mirror currents and charges. You can do this when the timescale of the simulation is much faster than the timescale of B-field diffusion through your metal. Then the external magnets can be present and the internal dynamics obey the BCs.

I currently plan to augment my time-varying fields to automatically compute the induced E-field instead of having to input it manually, but it needs a bit more development work and should be ready by next week.

Would you mind if I cherry-pick some of your changes for adding the OpenPMD loader into my PR? I will add an additional input field for a time-varying analytical function that could be coupled with a loaded file. This will be useful for scrubbing B-field divergence from the input, so I will probably use that route. I think if it accepts a list of external fields and a list of corresponding time values, a series of external fields with different time variations could be added that way.

After this gets pushed through, I am planning to add time-varying solenoids to the Accelerator Lattice and integrate it with the split-field handling.

@kli-jfp kli-jfp closed this Oct 3, 2024