@@ -72,6 +72,24 @@ In both cases, audit events structure is defined by the API in the
`audit.k8s.io` API group. The current version of the API is
[`v1beta1`][auditing-api].

+ **Note:** In case of patches, the request body is a JSON array with patch operations, not
+ a JSON object with the appropriate Kubernetes API object. For example, the following request body
+ is a valid patch request to `/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`.
+
+ ```json
+ [
+   {
+     "op": "replace",
+     "path": "/spec/parallelism",
+     "value": 0
+   },
+   {
+     "op": "remove",
+     "path": "/spec/template/spec/containers/0/terminationMessagePolicy"
+   }
+ ]
+ ```
+

### Log backend

Log backend writes audit events to a file in JSON format. You can configure
@@ -91,14 +109,62 @@ audit backend using the following kube-apiserver flags:

- `--audit-webhook-config-file` specifies the path to a file with a webhook
configuration. Webhook configuration is effectively a [kubeconfig][kubeconfig].
- - `--audit-webhook-mode` define the buffering strategy, one of the following :
-   - `batch` - buffer events and asynchronously send the set of events to the external service
-     This is the default
-   - `blocking` - block API server responses on sending each event to the external service
+ - `--audit-webhook-initial-backoff` specifies the amount of time to wait after the first failed
+   request before retrying. Subsequent retries use exponential backoff.

The webhook config file uses the kubeconfig format to specify the remote address of
the service and credentials used to connect to it.
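
As an illustration, here is a minimal sketch of such a webhook kubeconfig, written to disk with a heredoc. The server URL and certificate paths are hypothetical placeholders, not values from this page:

```shell
# Sketch of a webhook kubeconfig for --audit-webhook-config-file.
# All names, URLs, and paths below are illustrative placeholders.
cat > audit-webhook.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: audit-webhook
  cluster:
    # Hypothetical endpoint of the external audit service
    server: https://audit.example.com/events
contexts:
- name: default
  context:
    cluster: audit-webhook
    user: kube-apiserver
current-context: default
users:
- name: kube-apiserver
  user:
    # Hypothetical client credentials presented to the audit service
    client-certificate: /etc/kubernetes/pki/apiserver-audit.crt
    client-key: /etc/kubernetes/pki/apiserver-audit.key
EOF
```

The file would then be passed to the API server via `--audit-webhook-config-file=audit-webhook.kubeconfig`.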

+ ### Batching
+
+ Both log and webhook backends support batching. Using webhook as an example, here's the list of
+ available flags. To get the same flag for the log backend, replace `webhook` with `log` in the
+ flag name. By default, batching is enabled in `webhook` and disabled in `log`. Similarly, by
+ default throttling is enabled in `webhook` and disabled in `log`.
+
+ - `--audit-webhook-mode` defines the buffering strategy; one of the following:
+   - `batch` - buffer events and asynchronously process them in batches. This is the default.
+   - `blocking` - block API server responses on processing each individual event.
+
+ The following flags are only used in the `batch` mode.
+
+ - `--audit-webhook-batch-buffer-size` defines the number of events to buffer before batching.
+   If the rate of incoming events is too high and the buffer overflows, events are dropped.
+ - `--audit-webhook-batch-max-size` defines the maximum number of events in one batch.
+ - `--audit-webhook-batch-max-wait` defines the maximum amount of time to wait before
+   unconditionally batching events in the queue.
+ - `--audit-webhook-batch-throttle-qps` defines the maximum average number of batches generated
+   per second.
+ - `--audit-webhook-batch-throttle-burst` defines the maximum number of batches generated at the
+   same moment if the allowed QPS was underutilized previously.
+
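Putting the flags above together, a batching configuration might look like the following sketch; the numeric values are illustrative, not recommendations from this page:

```
kube-apiserver \
  --audit-webhook-config-file=/etc/kubernetes/audit-webhook.kubeconfig \
  --audit-webhook-mode=batch \
  --audit-webhook-batch-buffer-size=1000 \
  --audit-webhook-batch-max-size=100 \
  --audit-webhook-batch-max-wait=1s \
  --audit-webhook-batch-throttle-qps=10 \
  --audit-webhook-batch-throttle-burst=15
```
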
+ #### Parameter tuning
+
+ Parameters should be set to accommodate the load on the apiserver.
+
+ For example, if you receive 100 requests to the kube-apiserver each second, and each request is
+ audited only on the `StageResponseStarted` and `StageResponseComplete` stages, you should account
+ for ~200 audit events being generated each second. Assuming up to 100 events in a batch, you
+ should set the throttling level to at least 2 QPS. Assuming that the backend can take up to
+ 5 seconds to write events, you should set the buffer size to hold up to 5 seconds of events, i.e.
+ 10 batches, i.e. 1000 events.
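
The arithmetic above can be sketched as a quick back-of-envelope calculation, using the numbers from this example:

```shell
# Back-of-envelope sizing for audit batching, using the example's numbers.
requests_per_second=100      # load on kube-apiserver
audited_stages=2             # ResponseStarted + ResponseComplete
events_per_second=$((requests_per_second * audited_stages))   # ~200 events/s

batch_max_size=100           # --audit-webhook-batch-max-size
min_throttle_qps=$((events_per_second / batch_max_size))      # at least 2 QPS

backend_write_seconds=5      # worst-case time for the backend to write a batch
buffer_size=$((events_per_second * backend_write_seconds))    # 1000 events
batches_buffered=$((buffer_size / batch_max_size))            # 10 batches

echo "throttle QPS >= ${min_throttle_qps}, buffer = ${buffer_size} events (${batches_buffered} batches)"
```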
+
+ In most cases, however, the default parameters should be sufficient and you don't have to worry
+ about setting them manually. You can monitor the state of the auditing subsystem using the
+ following Prometheus metrics exposed by kube-apiserver, as well as its logs.
+
+ - `apiserver_audit_event_total` metric contains the total number of audit events exported.
+ - `apiserver_audit_error_total` metric contains the total number of events dropped due to an
+   error during exporting.
+
+ ## Multi-cluster setup
+
+ If you're extending the Kubernetes API with the [aggregation layer][kube-aggregator], you can also
+ set up audit logging for the aggregated apiserver. To do this, pass the configuration options in
+ the same format as described above to the aggregated apiserver and set up the log ingesting
+ pipeline to pick up audit logs. Different apiservers can have different audit configurations and
+ different audit policies.
+

## Log Collector Examples

### Use fluentd to collect and distribute audit events from log file
@@ -250,8 +316,8 @@ plugin which supports full-text search and analytics.

## Legacy Audit

- __Note:__ Legacy Audit is deprecated and is disabled by default since Kubernetes 1.8.
- To fallback to this legacy audit, disable the advanced auditing feature
+ __Note:__ Legacy Audit is deprecated and has been disabled by default since Kubernetes 1.8. It
+ will be removed in 1.12. To fall back to legacy audit, disable the advanced auditing feature
using the `AdvancedAuditing` feature gate in [kube-apiserver][kube-apiserver]:

```
@@ -299,3 +365,4 @@ and `audit-log-maxage` options.
[fluentd_install_doc]: http://docs.fluentd.org/v0.12/articles/quickstart#step1-installing-fluentd
[logstash]: https://www.elastic.co/products/logstash
[logstash_install_doc]: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
+ [kube-aggregator]: /docs/concepts/api-extension/apiserver-aggregation