<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<title>masK'IT by cirKITers</title>
<link rel="stylesheet" href="reveal.js/dist/reset.css">
<link rel="stylesheet" href="reveal.js/dist/reveal.css">
<link rel="stylesheet" href="reveal.js/dist/theme/black.css" id="theme">
<!-- Theme used for syntax highlighted code -->
<link rel="stylesheet" href="reveal.js/plugin/highlight/monokai.css" id="highlight-theme">
</head>
<body>
<div class="reveal">
<div class="slides">
<section>
<h1>masK'IT</h1>
<em>Tackling plateaus in parameterised quantum circuits with even more randomness</em>
<br/><br/>
<p>
<small>Team: cirKITers</small><br/>
<small>GitHub: <a href="https://github.com/cirKITers/masKIT">https://github.com/cirKITers/masKIT</a></small>
</p>
</section>
<section>
<h2>Motivation</h2>
<ul>
<li>The difficulty is not climbing plateaus, but leaving them during the optimization process.</li>
</ul>
<img src="images/Plateau-Mountain--Y6-Gymnastics-Rivers-and-Mountains-Twinkl-Move-PE-KS2.png"/>
</section>
<section>
<h2>Motivation</h2>
<ul>
<li>Barren plateaus (but also local minima) can have a severe impact on the trainability of parameterized quantum circuits</li>
</ul>
<img src="images/no_training.png" height="350">
</section>
<section>
<h2>Idea</h2>
<ul>
<li>Dropouts are an established technique in Machine Learning</li>
<ul>
<li>Preventing overfitting</li>
<li>Helping to train all parts of the Neural Network</li>
</ul>
</ul>
<img src="images/dropout.png" alt="Source: https://towardsdatascience.com/the-less-is-more-of-machine-learning-1f571c0d4481"/>
</section>
<section>
<h2>Questions</h2>
<p>Do dropouts have a similar effect on quantum circuits as they do in Machine Learning?</p>
<p>How do they influence circuits of varying depth and width?</p>
<p>To which ansätze are dropouts applicable, if at all?</p>
</section>
<section>
<h2>Experimental Setup</h2>
<ul>
<li>PennyLane to implement the hybrid quantum-classical use case (a minimal sketch follows on the next slide)</li>
<li>Gradient Descent implementation for parameter optimization</li>
<li>Layered circuit ansatz as a template for scaling</li>
</ul>
<img src="images/pennylane.png" height="50" />
</section>
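<section>
<h2>Experimental Setup: Sketch</h2>
<p><small>A minimal PennyLane sketch of the setup listed above; the device, circuit width and step size are illustrative choices, not the exact masKIT configuration.</small></p>
<pre data-id="code-animation"><code class="hljs" data-trim>
# illustrative setup: statevector simulator plus plain gradient descent
import pennylane as qml
from pennylane import numpy as np

n_wires = 5    # circuit width (number of qubits)
n_layers = 10  # circuit depth (number of layers)

dev = qml.device("default.qubit", wires=n_wires)
opt = qml.GradientDescentOptimizer(stepsize=0.01)
</code></pre>
</section>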
<section>
<h2>Circuit Ansatz</h2>
<ul>
<li><small>Circuit based on the ansatz from McClean, J.R., Boixo, S., Smelyanskiy, V.N. et al. <em>Barren plateaus in quantum neural network training landscapes</em>. Nat Commun 9, 4812 (2018). https://doi.org/10.1038/s41467-018-07090-4 (a simplified PennyLane sketch follows on the next slide)</small></li>
</ul>
<img src="images/circuit.png"/>
</section>
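<section>
<h2>Circuit Ansatz: Sketch</h2>
<p><small>A simplified layered ansatz in the spirit of McClean et al.: parameterized single-qubit rotations followed by a ladder of CZ gates in every layer. The gate choice is illustrative, not the exact masKIT circuit; the snippet continues the setup from the previous sketch.</small></p>
<pre data-id="code-animation"><code class="hljs" data-trim>
@qml.qnode(dev)
def circuit(params):
    for layer in range(n_layers):
        # parameterized rotation on every wire
        for wire in range(n_wires):
            qml.RY(params[layer, wire], wires=wire)
        # entangling ladder between neighbouring wires
        for wire in range(n_wires - 1):
            qml.CZ(wires=[wire, wire + 1])
    return qml.expval(qml.PauliZ(0))

params = np.array(np.random.uniform(0, np.pi, (n_layers, n_wires)), requires_grad=True)
</code></pre>
</section>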
<section>
<h2>Algorithm</h2>
<pre data-id="code-animation"><code class="hljs" data-trim>
if cost does not change for several steps:
    create variations of current circuit
    perform another training step
    measure costs for variations
    pick circuit with lowest cost
else:
    perform another training step
    measure current cost
</code></pre>
</section>
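<section>
<h2>Algorithm: Sketch</h2>
<p><small>A sketch of the loop on the previous slide, continuing the earlier snippets. <code>random_dropout_variants</code> is a hypothetical helper that returns masked copies of the circuit; it is not part of the PennyLane or masKIT API.</small></p>
<pre data-id="code-animation"><code class="hljs" data-trim>
PATIENCE, TOLERANCE = 5, 1e-4
costs = [circuit(params)]

for step in range(200):
    stalled = (len(costs) &gt; PATIENCE
               and abs(costs[-1] - costs[-1 - PATIENCE]) &lt; TOLERANCE)
    if stalled:
        # cost has plateaued: train each dropout variant for one step
        # and keep the variant with the lowest cost
        candidates = []
        for variant in random_dropout_variants(circuit, n=3):  # hypothetical helper
            p, c = opt.step_and_cost(variant, params)
            candidates.append((c, variant, p))
        cost, circuit, params = min(candidates, key=lambda t: t[0])
    else:
        params, cost = opt.step_and_cost(circuit, params)
    costs.append(cost)
</code></pre>
</section>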
<section>
<h2>Ensemble-based Dropout</h2>
<img src="images/do_graph.png"/>
</section>
<section>
<h2>Results</h2>
<ul>
<li>Improved learning speed</li>
<li>The plot even <em>includes</em> the penalty in the number of steps spent on evaluating the dropout ensembles</li>
</ul>
<img src="images/w10_l10.png" height="350" />
</section>
<section>
<h2>Results</h2>
<ul>
<li>Fast learning is possible with pretrained parameters when dropped gates are added back into the circuit</li>
</ul>
<img src="images/w5_l10.png" width="400"/>
<img src="images/w5_l10_do.png" width="400"/>
</section>
<section>
<h2>Results</h2>
<ul>
<li>Good learning also for larger numbers of layers</li>
<li>A better understanding is needed to improve performance further</li>
</ul>
<img src="images/w5_l20.png" height="350">
</section>
<section>
<h2>Conclusions so far</h2>
<ul>
<li>Dropouts can be applied in QML, but they behave differently than in classical ML</li>
<li>Ensemble-based dropouts for parameterized quantum circuits</li>
<ul>
<li>Rapid learning can be achieved</li>
<li>The method is able to escape plateaus where gradient descent converges very slowly or not at all</li>
<li>Offers great tuning potential</li>
</ul>
</ul>
</section>
<section>
<h2>Future Work</h2>
<ul>
<li>Exploration of varying sizes of circuits</li>
<li>Exploration of varying circuit ansätze</li>
<li>Exploration of varying learning methods</li>
<li>Exploration of further dropout mechanisms</li>
<li>Developing better understanding of interdependencies</li>
<li>Targeted dropout of defined parts of a circuit/layer</li>
</ul>
</section>
<section>
<small>All contents presented here and in the <a href="https://github.com/cirKITers/masKIT/tree/v1.0">accompanying GitHub repository</a> have been developed during the open hackathon at QHACK 2021.</small>
<img src="images/qhack-sun-cutout.svg">
</section>
</div>
</div>
<script src="reveal.js/dist/reveal.js"></script>
<script src="reveal.js/plugin/notes/notes.js"></script>
<script src="reveal.js/plugin/markdown/markdown.js"></script>
<script src="reveal.js/plugin/highlight/highlight.js"></script>
<script>
// More info about initialization & config:
// - https://revealjs.com/initialization/
// - https://revealjs.com/config/
Reveal.initialize({
hash: true,
// Learn about plugins: https://revealjs.com/plugins/
plugins: [ RevealMarkdown, RevealHighlight, RevealNotes ]
});
</script>
</body>
</html>