
Update matplotlib #1413

Merged
merged 2 commits into master from matplotlib-update on Aug 7, 2024

Conversation

@alecandido (Member)

@alecandido alecandido marked this pull request as draft August 7, 2024 06:43
@alecandido alecandido marked this pull request as ready for review August 7, 2024 06:47
@alecandido (Member Author)

Tests are not passing, but the problems are all caused by PyTorch on Windows.

Any idea? @BrunoLiegiBastonLiegi @renatomello @Simone-Bordoni

@alecandido (Member Author)

Note that this branch bumped PyTorch from 2.3.1 to 2.4.0, but it was allowed by the specified range in pyproject.toml (which is unchanged).

If we do not support PyTorch 2.4.0 (or do not support it on Windows), we should update pyproject.toml accordingly.
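In case 2.4.0 needs to be excluded, the constraint could look something like this (a sketch only; the actual dependency entry and version range in pyproject.toml may differ):

```toml
[tool.poetry.dependencies]
# Hypothetical cap excluding torch 2.4 until the Windows failures are understood.
torch = ">=2.3.1,<2.4"
```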

@BrunoLiegiBastonLiegi (Contributor)

I was experiencing similar problems with some qiboml tests involving expectation_from_samples on Windows. In my case, a similar discrepancy in the expectation value was present on other platforms as well, but with a smaller magnitude, small enough to be covered by atol=1e-1. However, this was happening for all of the PytorchBackend, TensorflowBackend, and JaxBackend backends. It was not really due to the torch version, since I was seeing the same error with 2.3.1. I am not sure this is relevant to this case, though...
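To make the tolerance mentioned above concrete (the numbers here are hypothetical placeholders, not values from qiboml's tests), atol=1e-1 accepts any absolute deviation up to 0.1:

```python
def close(observed, expected, atol=1e-1):
    # Absolute-tolerance comparison, the kind of check that atol=1e-1 relaxes.
    return abs(observed - expected) <= atol

# A deviation of 0.05 passes with atol=1e-1 but not with a tighter atol=1e-2,
# so the loose tolerance can hide a real discrepancy of that size.
print(close(0.25, 0.30))              # True
print(close(0.25, 0.30, atol=1e-2))  # False
```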

@alecandido (Member Author)

@BrunoLiegiBastonLiegi I remembered your troubles with Windows in QiboML (though I also remember you ended up deactivating the tests on Windows in qiboteam/qiboml#20 😞). That's why I asked you as well.

In any case, thanks for your answer. Let's wait for @renatomello and @Simone-Bordoni, who spent the most time with the torch backend.

@BrunoLiegiBastonLiegi (Contributor) commented Aug 7, 2024

Just to add to my previous comment, I tested this simple example:

```python
from qibo import gates, hamiltonians
from qibo.quantum_info import random_clifford
from qibo.symbols import Z
from qibo.backends import PyTorchBackend

backend = PyTorchBackend()
nqubits = 5

# Random Clifford circuit with measurements on all qubits.
c = random_clifford(nqubits, backend=backend)
c.add(gates.M(*range(nqubits)))

# Weighted sum of single-qubit Z observables.
observable = hamiltonians.SymbolicHamiltonian(
    sum([(i + 1) ** 2 * Z(i) for i in range(nqubits)]),
    nqubits=nqubits,
    backend=backend,
)

# Estimate the expectation value from measurement samples ten times.
for _ in range(10):
    print(observable.expectation_from_samples(backend.execute_circuit(c).frequencies()))
```

which results in a widely different expectation value every time:

```
tensor(0.0820+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.3380+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.3940+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.8480+0.j, dtype=torch.complex128, requires_grad=True)
tensor(1.2320+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.4580+0.j, dtype=torch.complex128, requires_grad=True)
tensor(0.3940+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.6740+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-1.5440+0.j, dtype=torch.complex128, requires_grad=True)
tensor(-0.5180+0.j, dtype=torch.complex128, requires_grad=True)
```

The TensorflowBackend seems slightly more stable, but I'd say still problematic:

```
tf.Tensor((-0.25000000000000067+0j), shape=(), dtype=complex128)
tf.Tensor((0.40000000000000013+0j), shape=(), dtype=complex128)
tf.Tensor((-0.4860000000000004+0j), shape=(), dtype=complex128)
tf.Tensor((-0.08000000000000018+0j), shape=(), dtype=complex128)
tf.Tensor((0.784+0j), shape=(), dtype=complex128)
tf.Tensor((0.006000000000000075+0j), shape=(), dtype=complex128)
tf.Tensor((-0.8439999999999996+0j), shape=(), dtype=complex128)
tf.Tensor((0.19599999999999973+0j), shape=(), dtype=complex128)
tf.Tensor((0.07400000000000027+0j), shape=(), dtype=complex128)
tf.Tensor((0.01999999999999999+0j), shape=(), dtype=complex128)
```
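For context (this back-of-the-envelope reasoning is mine, not from the thread): an expectation value estimated from a finite number of shots fluctuates between runs. For the observable sum((i + 1) ** 2 * Z(i)) above, taking each ⟨Z_i⟩ ≈ 0 and ignoring correlations between qubits (an assumption; a Clifford state generally has correlated qubits), the estimator variance is roughly bounded by the sum of squared coefficients:

```python
import math

# Coefficients of the observable sum((i + 1) ** 2 * Z(i)) for 5 qubits.
coeffs = [(i + 1) ** 2 for i in range(5)]

# Rough variance bound for a single shot, under the assumptions above.
var_bound = sum(c ** 2 for c in coeffs)  # 1 + 16 + 81 + 256 + 625 = 979

# Standard deviation of the shot-averaged estimator for a few shot counts.
for nshots in (1000, 10000, 100000):
    print(nshots, math.sqrt(var_bound / nshots))
```

If the runs above used a shot count around 1000 (an assumption, not stated in the thread), this estimate predicts a spread on the order of 1, which is comparable to the printed values, so the run-to-run variation may simply be sampling noise rather than a backend-specific bug.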

This is a rough patch attempting to limit the issues discussed in #1413 (comment).
@alecandido (Member Author)

@scarrazza should we keep an open issue about the PyTorch version?

To be fair, if yes, we should even move it to Qiboml, since the torch backend won't be here forever...

@scarrazza (Member)

Yes, I think so.

codecov bot commented Aug 7, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.94%. Comparing base (6d67625) to head (eb19847).
Report is 7 commits behind head on master.

Additional details and impacted files
```
@@           Coverage Diff           @@
##           master    #1413   +/-   ##
=======================================
  Coverage   99.94%   99.94%           
=======================================
  Files          78       78           
  Lines       11222    11225    +3     
=======================================
+ Hits        11216    11219    +3     
  Misses          6        6           
```
Flag Coverage Δ
unittests 99.94% <ø> (+<0.01%) ⬆️


@alecandido (Member Author)

> Yes, I think so.

Issue opened: qiboteam/qiboml#31

@scarrazza (Member)

Thanks.

@scarrazza scarrazza merged commit 751b544 into master Aug 7, 2024
27 checks passed
@stavros11 stavros11 deleted the matplotlib-update branch August 7, 2024 19:19