Feature details
If I understand correctly, the number of qubits available for emulation largely depends on the available RAM. Since more and more people have access to GPU computing clusters (e.g. NVIDIA RTX 3090 Ti, A6000, H100, B200), would it be possible to shift some of the computation to the GPU, or to provide a GPU version of the computation?
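To make the RAM constraint concrete, here is a short back-of-the-envelope sketch: a dense statevector of n qubits holds 2**n amplitudes, and at complex128 precision (16 bytes each, an assumption about the simulator's dtype) the footprint doubles with every added qubit.

```python
# Memory footprint of a dense n-qubit statevector, assuming
# complex128 amplitudes (16 bytes each). This is why RAM -- or
# GPU VRAM -- caps the number of emulable qubits.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (20, 30, 33):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:g} GiB")
```

For example, 30 qubits already need 16 GiB, and 33 qubits need 128 GiB, which is beyond most single GPUs but within reach of multi-GPU clusters.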
Implementation
A possible solution is CuPy as a drop-in replacement for the NumPy wrapper. A previous issue also mentioned PyTorch for tensor computation. Working out the details would likely require an overall review of the code structure.
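Since CuPy mirrors most of the NumPy API, the swap can often be done with a single import-level dispatch. The sketch below is a minimal illustration, not this project's code: the gate-application function and its names are hypothetical, and it assumes the simulator's core is plain NumPy array math.

```python
import numpy as np

# Try CuPy (GPU) first and fall back to NumPy (CPU). Because CuPy
# mirrors the NumPy API, the rest of the code can use `xp` without
# caring which backend is active.
try:
    import cupy as xp
    ON_GPU = True
except ImportError:
    xp = np
    ON_GPU = False


def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit statevector.

    Hypothetical example routine -- not the project's actual API.
    """
    # View the flat statevector as an n-dimensional (2, 2, ..., 2) tensor
    # so the target qubit has its own axis, then contract the gate
    # along that axis.
    psi = xp.asarray(state).reshape((2,) * n_qubits)
    psi = xp.tensordot(xp.asarray(gate), psi, axes=([1], [target]))
    # tensordot moves the contracted axis to the front; restore the order.
    psi = xp.moveaxis(psi, 0, target)
    return psi.reshape(-1)
```

For example, applying a Hadamard gate to qubit 0 of the three-qubit state |000> yields equal amplitudes on |000> and |100>, and the same call runs unchanged on GPU when CuPy is installed.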
How important would you say this feature is?
2: Somewhat important. Needed this quarter.
Additional information
No response