Add artifacts interface #1342
/* Styling for pandas dataframes in documentation */

div.output table {
  border: none;
  border-collapse: collapse;
  border-spacing: 0;
  color: black;
  font-size: 12px;
  table-layout: fixed;
  width: 100%;
}
div.output thead {
  border-bottom: 1px solid black;
  vertical-align: bottom;
}
div.output tr,
div.output th,
div.output td {
  text-align: right;
  vertical-align: middle;
  padding: 0.5em 0.5em;
  line-height: normal;
  white-space: normal;
  max-width: none;
  border: none;
}
div.output th {
  font-weight: bold;
}
div.output tbody tr:nth-child(odd) {
  background: #f5f5f5;
}
div.output tbody tr:hover {
  background: rgba(66, 165, 245, 0.2);
}
Work with experiment artifacts
==============================

Problem
-------

You want to view, add, remove, and save artifacts associated with your :class:`.ExperimentData` instance.

Solution
--------

Artifacts are used to store auxiliary data for an experiment that don't fit neatly in the
:class:`.AnalysisResult` model. Any data that can be serialized, such as fit data, can be added as
:class:`.ArtifactData` artifacts to :class:`.ExperimentData`.

For example, after an experiment that uses :class:`.CurveAnalysis` is run, its :class:`.ExperimentData`
object is automatically populated with ``fit_summary`` and ``curve_data`` artifacts. The ``fit_summary``
artifact has one or more :class:`.CurveFitResult` objects that contain parameters from the fit. The
``curve_data`` artifact has a :class:`.ScatterTable` object that contains raw and fitted data in a pandas
:class:`~pandas:pandas.DataFrame`.

Viewing artifacts
~~~~~~~~~~~~~~~~~

Here we run two :class:`.T1` experiments in parallel and then view the output
artifacts as a list of :class:`.ArtifactData` objects accessed by :meth:`.ExperimentData.artifacts`:

.. jupyter-execute::

    from qiskit_ibm_runtime.fake_provider import FakePerth
    from qiskit_aer import AerSimulator
    from qiskit_experiments.library import T1
    from qiskit_experiments.framework import ParallelExperiment
    import numpy as np

    backend = AerSimulator.from_backend(FakePerth())
    exp1 = T1(physical_qubits=[0], delays=np.arange(1e-6, 6e-4, 5e-5))
    exp2 = T1(physical_qubits=[1], delays=np.arange(1e-6, 6e-4, 5e-5))
    data = ParallelExperiment([exp1, exp2], flatten_results=True).run(backend).block_for_results()
    data.artifacts()

Artifacts can be accessed using either the artifact ID, which must be unique within each
:class:`.ExperimentData` object, or the artifact name, which does not have to be unique and will return
all artifacts with the same name:

.. jupyter-execute::

    print("Number of curve_data artifacts:", len(data.artifacts("curve_data")))
    # retrieve an artifact by name and index
    curve_data_id = data.artifacts("curve_data")[0].artifact_id
    # retrieve an artifact by ID
    scatter_table = data.artifacts(curve_data_id).data
    print("The first curve_data artifact:\n")
    scatter_table.dataframe

In composite experiments, artifacts behave like analysis results and figures: if
``flatten_results`` isn't ``True``, they are accessible through the :meth:`.artifacts` method of each
:meth:`.child_data`. The artifacts in a large composite experiment with ``flatten_results=True`` can be
distinguished from each other using the :attr:`~.ArtifactData.experiment` and
:attr:`~.ArtifactData.device_components` attributes.

One useful pattern is to load raw or fitted data from ``curve_data`` for further data manipulation. You
can work with the dataframe using standard pandas dataframe methods or the built-in
:class:`.ScatterTable` methods:

.. jupyter-execute::

    import matplotlib.pyplot as plt

    exp_type = data.artifacts(curve_data_id).experiment
    component = data.artifacts(curve_data_id).device_components[0]

    raw_data = scatter_table.filter(category="raw")
    fitted_data = scatter_table.filter(category="fitted")

    # visualize the raw and fitted data
    plt.figure()
    plt.errorbar(raw_data.x, raw_data.y, yerr=raw_data.y_err, capsize=5, label="raw data")
    plt.errorbar(fitted_data.x, fitted_data.y, yerr=fitted_data.y_err, capsize=5, label="fitted data")
    plt.title(f"{exp_type} experiment on {component}")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.show()

Adding artifacts
~~~~~~~~~~~~~~~~

You can add arbitrary data as an artifact as long as it's serializable with :class:`.ExperimentEncoder`,
which extends Python's default JSON serialization with support for other data types commonly used with
Qiskit Experiments.

.. jupyter-execute::

    from qiskit_experiments.framework import ArtifactData

    new_artifact = ArtifactData(name="experiment_notes", data={"content": "Testing some new ideas."})
    data.add_artifacts(new_artifact)
    data.artifacts("experiment_notes")

.. jupyter-execute::

    print(data.artifacts("experiment_notes").data)

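To see the general pattern that an encoder like :class:`.ExperimentEncoder` follows, here is a minimal,
hypothetical sketch using only the standard library. ``SketchEncoder`` is *not* the real
:class:`.ExperimentEncoder`; it only illustrates the idea of extending default JSON serialization with
an extra type (here, ``complex``):

```python
import json


# Hypothetical illustration only -- not the actual ExperimentEncoder.
# It subclasses json.JSONEncoder and adds handling for a type that
# default JSON serialization would otherwise reject.
class SketchEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, complex):
            # encode the complex number as a tagged [real, imag] pair
            return {"__type__": "complex", "value": [obj.real, obj.imag]}
        return super().default(obj)


payload = {"content": "Testing some new ideas.", "fit_value": 1 + 2j}
encoded = json.dumps(payload, cls=SketchEncoder)
print(encoded)
```

A matching decoder would look for the ``"__type__"`` tag and rebuild the original object on load.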
Saving and loading artifacts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::
    This feature is only available to those who have access to the cloud service. You can
    check whether you do by logging into the IBM Quantum interface
    and seeing whether you can see the `database <https://quantum.ibm.com/experiments>`__.

Artifacts are saved to and loaded from the cloud service along with the rest of the
:class:`.ExperimentData` object. Artifacts are stored as ``.zip`` files in the cloud service, grouped by
artifact name. For example, the composite experiment above will generate two artifact files,
``fit_summary.zip`` and ``curve_data.zip``. Each of these zip files will contain serialized artifact
data in JSON format, named by its unique artifact ID:

.. jupyter-execute::
    :hide-code:

    print("fit_summary.zip")
    print(f"|- {data.artifacts('fit_summary')[0].artifact_id}.json")
    print(f"|- {data.artifacts('fit_summary')[1].artifact_id}.json")
    print("curve_data.zip")
    print(f"|- {data.artifacts('curve_data')[0].artifact_id}.json")
    print(f"|- {data.artifacts('curve_data')[1].artifact_id}.json")
    print("experiment_notes.zip")
    print(f"|- {data.artifacts('experiment_notes').artifact_id}.json")

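The layout shown above can be mimicked with a short standard-library sketch. This is not how the client
actually uploads data; it only illustrates the described file structure, and the artifact IDs below are
made up for the example (the service assigns real, unique IDs):

```python
import io
import json
import zipfile

# Sketch of the storage layout described above: one zip file per artifact
# name, containing one JSON file per artifact ID. The IDs are fabricated.
artifacts = {
    "3d2c1b0a": {"content": "Testing some new ideas."},
    "9f8e7d6c": {"content": "A second note."},
}

# write each artifact as "<artifact_id>.json" inside one in-memory zip
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as zf:
    for artifact_id, artifact_data in artifacts.items():
        zf.writestr(f"{artifact_id}.json", json.dumps(artifact_data))

# loading reverses the process: open the zip and decode each member
with zipfile.ZipFile(buffer) as zf:
    restored = {
        name.removesuffix(".json"): json.loads(zf.read(name))
        for name in zf.namelist()
    }

print(sorted(restored))
```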
Note that for performance reasons, the auto-save feature does not apply to artifacts. You must still
call :meth:`.ExperimentData.save` once the experiment analysis has completed to upload artifacts to the
cloud service.

Note also that although individual artifacts can be deleted, artifact files currently cannot be removed
from the cloud service. Instead, you can delete all artifacts of a given name
using :meth:`~.delete_artifact` and then call :meth:`.ExperimentData.save`.
This will save an empty file to the service, and the loaded experiment data will not contain
these artifacts.

See Also
--------

* :ref:`Curve Analysis: Data management with scatter table <data_management_with_scatter_table>` tutorial
* :class:`.ArtifactData` API documentation
* :class:`.ScatterTable` API documentation
* :class:`.CurveFitResult` API documentation