Implements the structure of a general qiboml model #20

Merged, +1,528 −363, 100 commits (all authored by BrunoLiegiBastonLiegi).
420ac26  feat: sketched the general structure of a qml model
7e828ff  fix: removed cache files
0e8656b  feat: implemented some sample encoding, decoding and ansatz layers
2c79d18  fix: small modifications to layers
e015465  feat: added phase encoding and more examples + drafted torch interface
12c53e8  Update src/qiboml/models/abstract.py
aa38656  Update src/qiboml/models/abstract.py
06d4a16  feat: drafted a torch module factory
a6ac851  feat: minor refinements to the torch factory
3c09554  feat: drafted keras factory
82a7079  feat: added pytorch tf tutorials
ddec82a  fix: removed cache files
10f533b  fix: minor fixes to keras interface
76a2b2b  feat: implemented keras Model
86969bd  fix: finally made keras interface working
e10ffe6  feat: implemented QuantumModel for torch as well
5ab8583  test: started implementing tests
6aeaa1b  fix: various fixes + further tests
3f166e3  fix: various fixes
8fff5b8  build: added test dependencies
c27cfdf  build: changed poetry rules
542f64a  test: added backend and frontend fixtures
c9e48b8  build: lock update
d69b130  fix: disabled pylint check
2e0c1bd  fix: some fixes
42c0a1b  fix: small fix to expectation layer
87ea094  fix: fixed exp layer tests and commented amp encoding
474ad93  fix: small fixes to test_models.py
ce108f9  test: testing expectation from samples as well
2469037  fix: test fix
7b6b3a2  fix: added qubit_map to expectation_from_samples + casting to double
2a2d881  build: lock update
1c049d1  fix: skipping tf test with windows
9bffae7  fix: skipping tf test with windows
13f6a31  fix: import sys
c78f189  fix: commented tf import
2ac5168  build: made tf optional
6213bfe  build: added deploy workflow
3dacd8d  build: changed python dependency
7b6b7af  test: added runtest setup
0313bb2  fix: various fixes to keras interface + testing both keras and torch …
ffc37ea  fix: pylint disable import-error
6916600  fix: removed some leftovers + moved tf import inside functions
c0120ff  test: added tests for errors and other things
8b8db4d  fix: added pytest_configure
af8001e  fix: updated workflow
71286eb  build: pylint update
59aadf7  build: updated pytest packages and removed pytest-env
26944a6  build: merge main, replaced numpy backend with jax backend for testing
e9f3e32  fix: using qibo's pytorch and small fix to jax backend
3e2a0e5  fix: rename issparse in pytorch backend
bf116f0  fix: moved backend import in conftest
eb39ca8  fix: removed keras and pytorch interface import from __init__.py
2726853  fix: removed super().__post_init__ call
ef91fc2  fix: added super().__init__() to interface __post_init__
a610cf4  fix: fixed seed of random_clifford in test_decoding
606cbb9  fix: added analytic argument exp layer in test_models_decoding.py
a44942e  fix: trying to mitigate numerical instability
97ef336  fix: fix to test exp layer
e516fb5  fix: small improvements to coverage
87d0f02  feat: added the qiboml.ndarray dtype and fixed various type hints
bd7d1a0  fix: pylint disable for keras import
f1f43fd  fix: using the parameters property when needed
6820b25  fix: removed cache files
c0fd516  fix: replacing c() with backend.execute
51037fd  build: lock update
a09f9b2  feat: drafted a compatible PSR and the custom pytorch autograd
ac117d3  feat: various updates to autograd for torch
6bd798b  fix: loss.backward() runs without errors, gradients to be tested...
bdd803d  fix: update
f0c8455  feat: random things that don't work anyway...
0857d1e  feat: various changes to fix gradient flow
1b6ee4a  fix: finally the parameters are updating with pytorch
e1fd300  feat: started introducing backpropagation in tests
8f14b37  feat: added backprop tests to test_models_interfaces
11b54ff  feat: trying out the non-layer approach
a4f6b32  fix: updated keras interface
9f4b4ac  feat: reimplemented some layers under the new schema
44f76dd  feat: some cleanup
5d2af1a  feat: working on jax differentiation
9c115d7  feat: working on jax differentiation
6b5769b  feat: still working on jax differentiation...
a42a9d8  fix: small fix to jax differentiation
2ec6606  fix: some fixes to tests and jax differentiation
b3a94b2  feat: improvement to jax differentiation
40ba22c  fix: fix integration of jax differentiation and pytorch interface, ha…
bb5543e  fix: split jacobian with and without gradients
a56a671  fix: split jacobian with and without gradients
123f053  fix: some cleanup
e4e62c1  fix: fixed grad shape in QuantumModelAutograd.backward
f6510f0  fix: using einsum in backward
d01ff34  fix: updated shape in test state decoding
e57a0ec  fix: small fix + found a problem
1118ca5  fix: fixing test seed
920889f  fix: replaced rz with ry in phase encoding
f96e6ff  Merge pull request #37 from qiboteam/pytorch_autodiff
e328f44  build: merge main
2f7d9c2  fix: fix to test metabackend load
3dc291e  fix: ignoring cov
4bb6a5c  build: merge main
Viewing changes from a single commit, ac117d38656bf72ada05b3bd7dbd20e30f0d6138 ("feat: various updates to autograd for torch").
The commit adds a new 41-line file:

```python
import torch
from qibo import hamiltonians
from qibo.backends import NumpyBackend, PyTorchBackend
from qibo.symbols import Z

from qiboml import pytorch as pt
from qiboml.models import ansatze as ans
from qiboml.models import encoding_decoding as ed

# The torch interface is exercised here on top of a non-torch backend.
# backend = PyTorchBackend()
backend = NumpyBackend()

nqubits = 5
dim = 4  # unused in this script

# Trainable ansatz and data-encoding layers.
training_layer = ans.ReuploadingLayer(nqubits, backend=backend)
encoding_layer = ed.PhaseEncodingLayer(nqubits, backend=backend)

# Decode by measuring the expectation value of sum_i Z_i over all qubits.
kwargs = {"backend": backend}
decoding_qubits = range(nqubits)
observable = hamiltonians.SymbolicHamiltonian(
    sum([Z(int(i)) for i in decoding_qubits]),
    nqubits=nqubits,
    backend=backend,
)
kwargs["observable"] = observable
kwargs["analytic"] = True
decoding_layer = ed.ExpectationLayer(nqubits, decoding_qubits, **kwargs)

# Assemble the layers into a torch-facing quantum model.
q_model = pt.QuantumModel(
    layers=[
        encoding_layer,
        training_layer,
        decoding_layer,
    ]
)
print(list(q_model.parameters()))

# Sanity-check that gradients flow through the model.
data = torch.randn(1, 5)
data.requires_grad = True
out = q_model(data)
print(out.requires_grad)
loss = (out - 1.0) ** 2
print(loss.requires_grad)
loss.backward()
```
Review conversation:

Comment: Any reason to use PyTorch with a non-PyTorch backend?
Reply: Similarly to the comment above, this is the `pytorch` interface, which consumes `torch.Tensor`s, but this is separate from the backend you are using. If the backend is the `PyTorchBackend`, then you don't need to do anything; otherwise you have to cast to the appropriate type. For example, if you want to run on hardware using the `pytorch` framework, you are going to use the `QibolabBackend`.
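To make that cast concrete, here is a minimal sketch of what the reply describes; `to_backend_array` is a hypothetical helper, and the `backend.name`/`backend.cast` attributes are assumed from qibo's usual backend interface:

```python
import torch

def to_backend_array(x: torch.Tensor, backend):
    """Hypothetical helper: hand a torch tensor to an arbitrary backend.

    If the backend is already torch-native there is nothing to do;
    otherwise detach from the autograd graph, convert to numpy, and
    let the backend cast to its own native array type.
    """
    if backend.name == "pytorch":  # assumed attribute on qibo backends
        return x
    return backend.cast(x.detach().cpu().numpy())
```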
Reply: This is the role of the `evaluate()` function: to interface the front end and the back end (gradients included). I would argue that the conversion should happen in a single place (and that's `evaluate()`, which works for every `(frontend, backend)` pair). Instead, the model should just play with the front end. Thus, if you're using a PyTorch model, you can safely assume your front end is PyTorch; otherwise it is good to just fail (and leave the back-end conversion somewhere else).
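A sketch of that single-conversion-point design; the `frontend.to_numpy`/`frontend.cast` hooks are assumptions for illustration, while `circuit.set_parameters`, `backend.execute_circuit`, and `observable.expectation` follow qibo's usual API:

```python
def evaluate(x, circuit, observable, frontend, backend):
    """Illustrative sketch: the single place where a (frontend, backend)
    pair meets, so models only ever see frontend arrays."""
    # frontend -> backend: encode the input as circuit parameters
    circuit.set_parameters(backend.cast(frontend.to_numpy(x)))
    result = backend.execute_circuit(circuit)
    value = observable.expectation(result.state())
    # backend -> frontend: return an array the frontend understands
    return frontend.cast(value)
```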
Reply: Yes indeed, but since there is currently no implementation of the `evaluate` function, as gradients are still not in main yet, I had to take care of that here in order to test. As I said in one of the previous meetings, this object is probably going to change when we incorporate the automatic differentiation, as pytorch wants you to write a custom autograd function, which is probably going to perform what is done here in the `forward`.
Reply: Ok, maybe it's worth committing a bit of time to the PyTorch implementation, since (in principle) `expectation()` is in `main`:

qiboml/src/qiboml/operations/expectation.py, lines 12 to 19 in 0e906ee

(sorry, I messed up, it's not `evaluate()`)

Working on top of that should make the model structure more compatible. In any case, if you prefer doing it in a different PR I'm not completely against it. But you might end up reverting there part of what is done here.
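The embedded snippet ("lines 12 to 19 in 0e906ee") did not survive extraction, so the actual signature of `expectation()` is not reproduced here. Purely as a hypothetical illustration of "working on top of that", a model's forward pass might delegate along these lines:

```python
# Hypothetical sketch only: the argument names and the circuit
# composition below are assumptions, not the real expectation() API;
# check src/qiboml/operations/expectation.py before relying on this.
from qiboml.operations.expectation import expectation

def forward(self, x):
    # assumed: the encoding layer yields a circuit, composed with the ansatz
    circuit = self.encoding(x) + self.training_circuit
    return expectation(self.observable, circuit, backend=self.backend)
```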