Rework Admin guide and air gap for Helm 3 default (Closes #1047)
Markus Napp committed Nov 23, 2020
1 parent 0496238 commit bc44724
Showing 15 changed files with 20 additions and 471 deletions.
14 changes: 2 additions & 12 deletions adoc/admin-logging-centralized.adoc
@@ -9,10 +9,10 @@ Collecting logs in a central location can be useful for audit or debug purposes

== Prerequisites

In order to successfully use Centralized Logging, you first need to install `Helm`, and `Tiller` if using Helm 2.
In order to successfully use Centralized Logging, you first need to install `Helm`.
Helm is used to install the log agents and provide custom logging settings.

Refer to <<helm-tiller-install>>.
Refer to <<helm-install>>.

== Types of Logs

@@ -81,12 +81,6 @@ See <<log-optional_settings>> for the optional parameters and their default values

- Running the following will create the minimal working setup:

[source,bash]
----
helm repo add suse https://kubernetes-charts.suse.com
helm install suse/log-agent-rsyslog --name <RELEASE_NAME> --namespace kube-system --set server.host=${SERVER_HOST} --set server.port=${SERVER_PORT}
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
[source,bash]
----
helm repo add suse https://kubernetes-charts.suse.com
@@ -134,10 +128,6 @@ helm status <RELEASE_NAME> --namespace kube-system

- To uninstall log agents, use the `helm uninstall` command:
----
helm delete --purge <RELEASE_NAME> --namespace kube-system
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
----
helm uninstall <RELEASE_NAME> --namespace kube-system
----
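
If you want to confirm that the log agents are gone after uninstalling, listing the remaining releases in the namespace is a quick check. This is a minimal sketch; the `kube-system` namespace is assumed from the install step above:

[source,bash]
----
# List the remaining releases in kube-system; the uninstalled release should no longer appear
helm list --namespace kube-system
----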

35 changes: 0 additions & 35 deletions adoc/admin-monitoring-stack.adoc
@@ -276,15 +276,6 @@ helm repo add suse https://kubernetes-charts.suse.com
+
[source,bash]
----
helm install --name prometheus suse/prometheus \
--namespace monitoring \
--values prometheus-config-values.yaml
----
+
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install prometheus suse/prometheus \
--namespace monitoring \
--values prometheus-config-values.yaml
@@ -638,14 +629,6 @@ helm repo add suse https://kubernetes-charts.suse.com
+
[source,bash]
----
helm install --name grafana suse/grafana \
--namespace monitoring \
--values grafana-config-values.yaml
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install grafana suse/grafana \
--namespace monitoring \
--values grafana-config-values.yaml
@@ -911,15 +894,6 @@ helm repo add suse https://kubernetes-charts.suse.com
+
[source,bash]
----
helm install --name prometheus suse/prometheus \
--namespace monitoring \
--values prometheus-config-values.yaml
----
+
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install prometheus suse/prometheus \
--namespace monitoring \
--values prometheus-config-values.yaml
@@ -1055,15 +1029,6 @@ helm repo add suse https://kubernetes-charts.suse.com
+
[source,bash]
----
helm install --name grafana suse/grafana \
--namespace monitoring \
--values grafana-config-values.yaml
----
+
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install grafana suse/grafana \
--namespace monitoring \
--values grafana-config-values.yaml
83 changes: 0 additions & 83 deletions adoc/admin-security-certificates.adoc
@@ -178,12 +178,6 @@ The addon certificates stored in the {kube} cluster Secret resource:

We use cert-exporter to monitor the nodes' on-host certificates and the addons' secret certificates. cert-exporter periodically collects certificate expiration metrics (every hour by default) and exposes them through the `/metrics` endpoint. The Prometheus server can then scrape these metrics from that endpoint.
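
Once the exporter is deployed (see the install commands below), you can inspect the endpoint directly to see the exposed expiration metrics. The following is a minimal sketch; the namespace, service name, and metrics port are placeholders that depend on how the chart was deployed:

[source,bash]
----
# Forward a local port to the cert-exporter service (names and port are assumptions)
kubectl port-forward --namespace <NAMESPACE> service/<CERT_EXPORTER_SERVICE> 9090:<METRICS_PORT> &

# Inspect the certificate expiration metrics that Prometheus will scrape
curl -s http://localhost:9090/metrics | grep -i cert
----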

[source,bash]
----
helm repo add suse https://kubernetes-charts.suse.com
helm install suse/cert-exporter --name <RELEASE_NAME>
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
[source,bash]
----
helm repo add suse https://kubernetes-charts.suse.com
@@ -234,18 +228,6 @@ For example:
+
[source,bash]
----
helm install suse/cert-exporter \
--name <RELEASE_NAME> \
--set customSecret.enabled=true \
--set customSecret.certs[0].name=cert-manager \
--set customSecret.certs[0].namespace=cert-manager-test \
--set customSecret.certs[0].includeKeys="{*.crt,*.pem}" \
--set customSecret.certs[0].annotationSelector="{cert-manager.io/certificate-name}"
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install <RELEASE_NAME> suse/cert-exporter \
--set customSecret.enabled=true \
--set customSecret.certs[0].name=cert-manager \
@@ -258,17 +240,6 @@ helm install <RELEASE_NAME> suse/cert-exporter \
+
[source,bash]
----
helm install suse/cert-exporter \
--name <RELEASE_NAME> \
--set customSecret.enabled=true \
--set customSecret.certs[0].name=self-signed-cert \
--set customSecret.certs[0].includeKeys="{*.crt,*.pem}" \
--set customSecret.certs[0].labelSelector="{key=value}"
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install <RELEASE_NAME> suse/cert-exporter \
--set customSecret.enabled=true \
--set customSecret.certs[0].name=self-signed-cert \
@@ -280,21 +251,6 @@ helm install <RELEASE_NAME> suse/cert-exporter \
+
[source,bash]
----
helm install suse/cert-exporter \
--name <RELEASE_NAME> \
--set customSecret.enabled=true \
--set customSecret.certs[0].name=cert-manager \
--set customSecret.certs[0].namespace=cert-manager-test \
--set customSecret.certs[0].includeKeys="{*.crt,*.pem}" \
--set customSecret.certs[0].annotationSelector="{cert-manager.io/certificate-name}" \
--set customSecret.certs[1].name=self-signed-cert \
--set customSecret.certs[1].includeKeys="{*.crt,*.pem}" \
--set customSecret.certs[1].labelSelector="{key=value}"
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install <RELEASE_NAME> suse/cert-exporter \
--set customSecret.enabled=true \
--set customSecret.certs[0].name=cert-manager \
@@ -310,23 +266,6 @@ helm install <RELEASE_NAME> suse/cert-exporter \
+
[source,bash]
----
helm install suse/cert-exporter \
--name <RELEASE_NAME> \
--set node.enabled=false \
--set addon.enabled=false \
--set customSecret.enabled=true \
--set customSecret.certs[0].name=cert-manager \
--set customSecret.certs[0].namespace=cert-manager-test \
--set customSecret.certs[0].includeKeys="{*.crt,*.pem}" \
--set customSecret.certs[0].annotationSelector="{cert-manager.io/certificate-name}" \
--set customSecret.certs[1].name=self-signed-cert \
--set customSecret.certs[1].includeKeys="{*.crt,*.pem}" \
--set customSecret.certs[1].labelSelector="{key=value}"
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install <RELEASE_NAME> suse/cert-exporter \
--set node.enabled=false \
--set addon.enabled=false \
@@ -673,16 +612,6 @@ The addon certificates can be automatically rotated by leveraging the functions
+
[source,bash]
----
helm install \
--name <RELEASE_NAME> \
--namespace cert-manager \
suse/reloader
----
+
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install <RELEASE_NAME> \
--namespace cert-manager \
--create-namespace \
@@ -693,18 +622,6 @@ helm install <RELEASE_NAME> \
+
[source,bash]
----
helm install \
--name <RELEASE_NAME> \
--namespace cert-manager \
--set global.leaderElection.namespace=cert-manager \
--set installCRDs=true \
suse/cert-manager
----
+
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
+
[source,bash]
----
helm install <RELEASE_NAME> \
--namespace cert-manager \
--create-namespace \
9 changes: 1 addition & 8 deletions adoc/admin-security-nginx-ingress.adoc
@@ -63,7 +63,7 @@ controller:

=== Deploy ingress controller from helm chart

TIP: For complete instructions on how to install Helm and Tiller refer to <<helm-tiller-install>>.
TIP: For complete instructions on how to install Helm, refer to <<helm-install>>.

Add the link:https://kubernetes-charts.suse.com/[SUSE helm charts repository] by running:

@@ -73,13 +73,6 @@ helm repo add suse https://kubernetes-charts.suse.com

Then you can deploy the ingress controller and use the previously created configuration file to configure the networking type.
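
After the deployment commands below have completed, you can confirm that the controller came up. This is a minimal sketch; the `nginx-ingress` namespace matches the commands below:

[source,bash]
----
# The ingress controller pods should reach the Running state
kubectl get pods --namespace nginx-ingress

# The controller service reflects the networking type set in the values file
kubectl get svc --namespace nginx-ingress
----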

[source,bash]
----
helm install --name nginx-ingress suse/nginx-ingress \
--namespace nginx-ingress \
--values nginx-ingress-config-values.yaml
----
Or if you have selected the Helm 3 alternative also see <<helm-tiller-install>>:
[source,bash]
----
kubectl create namespace nginx-ingress
112 changes: 5 additions & 107 deletions adoc/admin-software-installation.adoc
@@ -95,120 +95,18 @@ sudo zypper in ceph-common xfsprogs

== {kube} stack

[[helm-tiller-install]]
[#helm-install]
=== Installing Helm

As of {productname} {productversion}, Helm 2 is part of the {productname} package repository, so to use this,
you only need to run the following command from the location where you normally run `skuba` commands:

[source,bash]
----
sudo zypper install helm
----

Helm 2 is the default for {productname} 5. Helm 3 is offered as an alternate tool and may be installed in parallel to aid migration.
As of {productname} {productversion}, Helm 3 is the default and is provided by the package repository.
To install it, run the following commands from the location where you normally run `skuba` commands:

[source,bash]
----
sudo zypper install helm3
sudo update-alternatives --set helm /usr/bin/helm3
----
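
Once the packages are installed and the `update-alternatives` link is set, a quick check confirms that the `helm` command now points at a Helm 3 client. This is a minimal sketch; no further configuration is assumed:

[source,bash]
----
# Should report a v3.x client version
helm version --short

# Shows which helm binary the alternatives system currently selects
sudo update-alternatives --display helm
----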

[WARNING]
====
Unless you are migrating from {productname} 4.2 with Helm charts already deployed or have legacy Helm charts that only work with Helm 2, please use Helm 3.
Helm 2 is planned to end support in November 2020.
Helm 3 is offered as an alternative in {productname} 4.5.0 and will become the default tool in the following release.
Please see <<helm-2to3-migration>> for upgrade instructions and upgrade as soon as feasible.
====

=== Installing Tiller

[NOTE]
Tiller is only a requirement for Helm 2 and has been removed from Helm 3. If using Helm 3, please skip this section.

As of {productname} {productversion}, Tiller is not part of the {productname} package repository, but it is available as a
Helm chart from the chart repository. To install the Tiller server, choose one of the following deployment methods:

==== Unsecured Tiller Deployment

This will install Tiller without additional certificate security.

[source,bash,subs='attributes']
----
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init \
--tiller-image registry.suse.com/caasp/v4.5/helm-tiller:{helm_tiller_version} \
--service-account tiller
----

==== Secured Tiller Deployment with TLS certificate

This installs Tiller with TLS certificate security.

===== Trusted Certificates

Please refer to <<trusted-server-certificate>> and <<trusted-client-certificate>> for how to sign the trusted Tiller and Helm certificates.
The `server.conf` entry for IP.1 is `127.0.0.1`.

Then, import the trusted certificates into the {kube} cluster. In this example, the trusted certificates are `ca.crt`, `tiller.crt`, `tiller.key`, `helm.crt`, and `helm.key`.

===== Self-signed Certificates (optional)

Please refer to <<self-signed-server-certificate>> and <<self-signed-client-certificate>> for how to sign the self-signed Tiller and Helm certificates.
The `server.conf` entry for IP.1 is `127.0.0.1`.

Then, import the trusted certificates into the {kube} cluster. In this example, the trusted certificates are `ca.crt`, `tiller.crt`, `tiller.key`, `helm.crt`, and `helm.key`.

. Deploy Tiller server with TLS certificate
+
[source,bash,subs="attributes"]
----
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init \
--tiller-tls \
--tiller-tls-verify \
--tiller-tls-cert tiller.crt \
--tiller-tls-key tiller.key \
--tls-ca-cert ca.crt \
--tiller-image registry.suse.com/caasp/v4.5/helm-tiller:{helm_tiller_version} \
--service-account tiller
----

. Configure Helm client with TLS certificate
+
Set up the `$HELM_HOME` environment variable and copy the CA certificate, Helm client certificate, and key to the `$HELM_HOME` path.
+
[source,bash]
----
export HELM_HOME=<path/to/helm/home>
cp ca.crt $HELM_HOME/ca.pem
cp helm.crt $HELM_HOME/cert.pem
cp helm.key $HELM_HOME/key.pem
----
+
Then, for helm commands, pass the `--tls` flag. For example:
+
[source,bash]
----
helm ls --tls [flags]
helm install --tls <CHART> [flags]
helm upgrade --tls <RELEASE_NAME> <CHART> [flags]
helm del --tls <RELEASE_NAME> [flags]
----

[[helm-2to3-migration]]
[#helm-2to3-migration]
=== Helm 2 to 3 Migration
[NOTE]
====
Expand All @@ -223,7 +121,7 @@ Refer to:

==== Preconditions

* A healthy {productname} 4.5 installation with applications deployed using Helm 2 and Tiller.
* A healthy {productname} 4.5.x installation with applications deployed using Helm 2 and Tiller.
* A system on which `skuba` and `helm` version 2 have previously been run.
** The procedure below requires an internet connection to install the `2to3` plugin (see the sketch after this list). If the installation is in an air-gapped environment, the system may need to be temporarily moved out of it.
* These instructions are written for a single cluster managed from a single Helm 2 installation. If more than one cluster is being managed by this installation of Helm 2, please refer to https://github.com/helm/helm-2to3 for further details and do not perform the clean-up step until all clusters have been migrated.
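
The following is a minimal sketch of installing the plugin referenced above and previewing a migration without changing anything. The release name is a placeholder, and the full migration procedure is covered in the linked upstream documentation:

[source,bash]
----
# Install the helm-2to3 plugin (requires internet access)
helm plugin install https://github.com/helm/helm-2to3

# Preview the conversion of a single Helm 2 release without applying any changes
helm 2to3 convert <RELEASE_NAME> --dry-run
----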
