
Potential GPU and CUDA support? #7060

Open
zazabap opened this issue Mar 8, 2025 · 1 comment
Labels
enhancement ✨ New feature or request

Comments

zazabap (Contributor) commented Mar 8, 2025

Feature details

If I understand correctly, the number of qubits available for emulation largely depends on the amount of RAM available. Since more and more people have access to GPU computing clusters (with NVIDIA GPUs such as the RTX 3090 Ti, A6000, H100, or B200), would it be possible to shift some of the computation to the GPU, or to provide a GPU version of the computation?
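To make the RAM constraint concrete: a dense statevector stores 2^n complex amplitudes, so memory doubles with every added qubit. A minimal sketch of that arithmetic (the `complex128` dtype and 16-byte amplitude size are standard NumPy assumptions, not from this thread):

```python
def statevector_bytes(n_qubits: int) -> int:
    """Memory needed for a dense statevector of n_qubits.

    Each of the 2**n amplitudes is a complex128 value (16 bytes).
    """
    return (2 ** n_qubits) * 16

# 30 qubits already require 16 GiB of RAM; each extra qubit doubles that.
gib = statevector_bytes(30) / 2**30
```

This is why GPU memory (24 GB on an RTX 3090 Ti, 80 GB on an H100) sets a similar ceiling on simulable qubit counts, just with much faster linear algebra.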

Implementation

One possible solution is CuPy as a drop-in replacement for the NumPy backend. A previous issue also mentioned PyTorch for tensor computation. For more details, an overall review of the code structure might be necessary.
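Because CuPy mirrors the NumPy API, gate application on a dense statevector can often be ported by swapping the import. A minimal sketch (the `apply_hadamard` helper is a hypothetical illustration, not PennyLane's implementation):

```python
import numpy as np  # swap for `import cupy as np` to run the same code on a GPU


def apply_hadamard(state, qubit, n_qubits):
    """Apply a Hadamard gate to one qubit of a dense statevector."""
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    # Reshape so each qubit gets its own length-2 axis, contract the
    # gate against the target axis, then restore the flat layout.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(h, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)


n = 3
state = np.zeros(2**n)
state[0] = 1.0  # |000>
out = apply_hadamard(state, 0, n)  # (|000> + |100>) / sqrt(2)
```

Since `cupy.tensordot` and `cupy.moveaxis` exist with the same signatures, the heavy contraction would run on the GPU without restructuring the simulator loop; the main porting cost is managing host/device transfers at the boundaries.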

How important would you say this feature is?

2: Somewhat important. Needed this quarter.

Additional information

No response

@zazabap zazabap added the enhancement ✨ New feature or request label Mar 8, 2025
jzaia18 commented Mar 10, 2025

Hi @zazabap thank you for opening an issue! Have you looked into PennyLane-Lightning yet? I believe it has the features you're looking for.

These links may be of interest:

That said, the default.qubit JAX and PyTorch interfaces also work with GPU data and (depending on the type of problem) may be a better fit.
