@@ -78,19 +78,8 @@ or programmatically, during runtime, as follows:
# retrieve the current number of threads
current_threads = qibo.get_threads()
- On the other hand, when using the ``tensorflow`` backend Qibo inherits
- Tensorflow's defaults for CPU thread configuration.
- Tensorflow allows restricting the number of threads as follows:
-
- .. code-block:: python
-
- import tensorflow as tf
- tf.config.threading.set_inter_op_parallelism_threads(1)
- tf.config.threading.set_intra_op_parallelism_threads(1)
- import qibo
-
- Note that this should be run during Tensorflow initialization in the beginning
- of the script and before creating the qibo backend.
+ For similar considerations when using a machine learning backend (such as TensorFlow or PyTorch)
+ please refer to the Qiboml documentation.
Using multiple GPUs
^^^^^^^^^^^^^^^^^^^
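Both the removed TensorFlow instructions and the new Qiboml pointer rest on the same constraint: thread limits must be fixed before any computation is dispatched. As a generic, standard-library sketch of that pattern (illustrative only, not Qibo's or TensorFlow's API; the cap value is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: like qibo.set_threads() or TensorFlow's
# inter/intra-op settings, a thread cap must be chosen before any
# work is dispatched; here it is fixed when the pool is created.
MAX_THREADS = 1  # hypothetical cap, analogous to restricting threads to 1

with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
    results = list(pool.map(lambda x: x * x, range(4)))

print(results)
```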
@@ -707,13 +696,18 @@ circuit output matches a target state using the fidelity as the corresponding lo
Note that, as in the following example, the rotation angles have to assume real values
to ensure the rotational gates represent unitary operators.
+ Qibo does not provide TensorFlow and PyTorch as native backends; Qiboml has to be
+ installed and used as the provider of these quantum machine learning backends.
+
.. code-block:: python
import qibo
- qibo.set_backend("tensorflow")
- import tensorflow as tf
+ qibo.set_backend(backend="qiboml", platform="tensorflow")
from qibo import gates, models
+ backend = qibo.get_backend()
+ tf = backend.tf
+
# Optimization parameters
nepochs = 1000
optimizer = tf.keras.optimizers.Adam()
@@ -737,8 +731,9 @@ to ensure the rotational gates are representing unitary operators.
optimizer.apply_gradients(zip([grads], [params]))
- Note that the ``"tensorflow"`` backend has to be used here because other custom
- backends do not support automatic differentiation.
+ Note that the ``"tensorflow"`` platform has to be used here since it provides
+ automatic differentiation tools. To construct this backend, the Qiboml package
+ has to be installed and used.
The optimization procedure may also be compiled, however in this case it is not
possible to use :meth:`qibo.circuit.Circuit.set_parameters` as the
@@ -748,10 +743,12 @@ For example:
.. code-block:: python
import qibo
- qibo.set_backend("tensorflow")
- import tensorflow as tf
+ qibo.set_backend(backend="qiboml", platform="tensorflow")
from qibo import gates, models
+ backend = qibo.get_backend()
+ tf = backend.tf
+
nepochs = 1000
optimizer = tf.keras.optimizers.Adam()
target_state = tf.ones(4, dtype=tf.complex128) / 2.0
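As a backend-agnostic illustration of the optimization the hunks above set up, the following NumPy sketch maximizes the same kind of fidelity objective for a single RY(theta) rotation, replacing automatic differentiation with the parameter-shift rule (the one-qubit setup and all names are illustrative, not taken from Qibo's API):

```python
import numpy as np

# Sketch: tune theta so that |psi(theta)> = RY(theta)|0> matches the
# target |+> state, using the parameter-shift rule instead of tf.GradientTape.
target = np.array([1.0, 1.0]) / np.sqrt(2.0)

def fidelity(theta):
    # |psi(theta)> = RY(theta)|0> = [cos(theta/2), sin(theta/2)]
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return abs(np.dot(target, psi)) ** 2

theta, lr = 0.0, 0.5
for _ in range(200):
    # parameter-shift rule gives the exact gradient for rotation gates
    grad = (fidelity(theta + np.pi / 2) - fidelity(theta - np.pi / 2)) / 2
    theta += lr * grad  # gradient ascent on the fidelity

final_fidelity = fidelity(theta)
```

The loop converges to theta close to pi/2, where the prepared state coincides with the target and the fidelity approaches 1.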