
Implement moving to device and changing dtype #209

Merged: 9 commits into master on Jul 7, 2024

Conversation

@jank324 (Member) commented Jul 4, 2024

Description

Implements the ability to easily move elements and beams to different devices and to change their dtypes, as you would expect from a normal torch.nn.Module.
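
For illustration, a usage sketch of what this enables (the exact constructor call is illustrative; the .to() semantics are what this PR adds):

import torch
import cheetah

quad = cheetah.Quadrupole(length=torch.tensor(0.3))  # illustrative constructor
quad = quad.to("cuda")         # move the element's tensors to the GPU
quad = quad.to(torch.float64)  # change dtype, like any torch.nn.Module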

Motivation and Context

This is normal for torch.nn.Module and would make working with different devices and dtypes in Cheetah much easier. Closes #113.

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)

Checklist

  • I have updated the changelog accordingly (required).
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.
  • I have reformatted the code and checked that formatting passes (required).
  • I have fixed all issues found by flake8 (required).
  • I have ensured that all pytest tests pass (required).
  • I have run pytest on a machine with a CUDA GPU and made sure all tests pass (required).
  • I have checked that the documentation builds (required).

Note: We are using a maximum line length of 88 characters.

@jank324 jank324 added the enhancement New feature or request label Jul 4, 2024
@jank324 jank324 linked an issue Jul 4, 2024 that may be closed by this pull request
@jank324 (Member, Author) commented Jul 4, 2024

@cr-xu @ansantam if we do something like this in an Element's __init__

# Registering k1 as a buffer makes it follow Module.to() / .cuda() / .float().
self.register_buffer(
    "k1",
    (
        # Use the user-supplied value on the requested device and dtype ...
        torch.as_tensor(k1, **factory_kwargs)
        if k1 is not None
        # ... or default to zero, matching the device and dtype of `length`.
        else torch.zeros_like(self.length)
    ),
)

everything just works. It's almost like the PyTorch people thought of this. 😄💡
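
Here is a minimal, self-contained sketch of why this works (MinimalQuad is a toy stand-in, not Cheetah's actual Quadrupole): tensors registered via register_buffer are converted and moved by the usual nn.Module.to() machinery.

import torch
from torch import nn

class MinimalQuad(nn.Module):
    """Toy stand-in for a Cheetah element, showing the buffer pattern."""

    def __init__(self, length, k1=None, device=None, dtype=None):
        super().__init__()
        factory_kwargs = {"device": device, "dtype": dtype}
        self.register_buffer("length", torch.as_tensor(length, **factory_kwargs))
        self.register_buffer(
            "k1",
            (
                torch.as_tensor(k1, **factory_kwargs)
                if k1 is not None
                else torch.zeros_like(self.length)
            ),
        )

quad = MinimalQuad(length=0.3, k1=4.2)
quad = quad.to(torch.float64)  # nn.Module.to() converts all registered buffers
assert quad.k1.dtype == torch.float64
if torch.cuda.is_available():
    quad = quad.to("cuda")  # buffers are moved to the GPU as well
    assert quad.k1.is_cuda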

@jank324 (Member, Author) commented Jul 4, 2024

Can lines like

factory_kwargs = {"device": device, "dtype": dtype}

then be removed?

@jank324 (Member, Author) commented Jul 4, 2024

Should Aperture.is_active, Aperture.shape and Aperture.lost_particles be registered as buffers?

In general, should is_active be registered if it's not dynamically computed?

Should BPM.reading be registered?

Also what about read_beam and cached_reading of Screen?
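
For context on why this matters (Demo is a made-up module, not a Cheetah class): only tensors registered as buffers (or parameters) follow Module.to(); plain tensor attributes keep their original device and dtype.

import torch
from torch import nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("reading", torch.zeros(2))  # follows .to()
        self.cached_reading = torch.zeros(2)  # plain attribute, ignored by .to()

demo = Demo().to(torch.float64)
print(demo.reading.dtype)         # torch.float64
print(demo.cached_reading.dtype)  # torch.float32 (unchanged)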

@jank324 (Member, Author) commented Jul 4, 2024

> Can lines like
>
> factory_kwargs = {"device": device, "dtype": dtype}
>
> then be removed?

In torch.nn.Linear, factory_kwargs is also used (in fact, that's where I got it from), so I guess it should be considered PyTorch best practice.
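
For reference, a simplified sketch of the pattern as it appears in torch.nn.Linear-style constructors (not the verbatim PyTorch source). The point of factory_kwargs is to allocate tensors directly on the target device and dtype instead of creating them on the CPU and moving them afterwards:

import torch
from torch import nn

class TinyLinear(nn.Module):
    def __init__(self, in_features, out_features, device=None, dtype=None):
        super().__init__()
        factory_kwargs = {"device": device, "dtype": dtype}
        # Allocate the weight directly on the requested device and dtype.
        self.weight = nn.Parameter(
            torch.empty((out_features, in_features), **factory_kwargs)
        )

    def forward(self, x):
        return x @ self.weight.T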

@jank324 jank324 requested a review from cr-xu July 4, 2024 13:51
@jank324 jank324 marked this pull request as ready for review July 4, 2024 13:51
@jank324 (Member, Author) commented Jul 4, 2024

@cr-xu please review carefully. This seems too easy.

@cr-xu (Member) left a comment:

Looks good to me. Since all the tests are passing, let's go with it. I guess we'll see in practice if anything breaks.

@cr-xu cr-xu merged commit 383ff83 into master Jul 7, 2024
11 checks passed
@cr-xu cr-xu mentioned this pull request Jul 10, 2024
Labels: enhancement (New feature or request)
Linked issue: Moving elements and beams to devices doesn't work (#113)
2 participants