Remove obsolete CAP integration and link to CAP docs (Closes #757)
Markus Napp committed Apr 22, 2020
1 parent f6b0add commit 1dd3257
Showing 1 changed file with 1 addition and 139 deletions.
140 changes: 1 addition & 139 deletions adoc/admin-cap-integration.adoc
@@ -1,141 +1,3 @@
= {cap} Integration

{productname} offers {cap} for modern application delivery.
This chapter describes the steps required for successful integration.

== Prerequisites

Before you start integrating {cap}, you need to ensure the following:

* The {productname} cluster did not use the `--strict-capability-defaults` option
during the initial setup when you ran `skuba cluster init`.
This ensures the presence of the extra CRI-O capabilities required by Docker-compatible containers.
For more details refer to the
_{productname} Deployment Guide, Transitioning from Docker to CRI-O_.
* The {productname} cluster has `swapaccount=1` set on all worker nodes.
+
----
grep -q "swapaccount=1" /etc/default/grub || \
  sudo sed -i -r 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)"(.*)"|\1"\2 swapaccount=1"|' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo systemctl reboot
----
* The {productname} cluster has no restrictions for {cap} ports.
For more details refer to the {cap} documentation: https://documentation.suse.com/suse-cap/1.5.1/.
* `Helm` and `Tiller` are installed on the node where you run the `skuba` and `kubectl` commands. For instructions on how to install Helm and Tiller refer to <<helm_tiller_install>>.
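
The `swapaccount=1` change above appends the flag to `GRUB_CMDLINE_LINUX_DEFAULT` only when it is not already present. As a sketch (not part of the official procedure), you can exercise the substitution against a scratch copy before touching `/etc/default/grub`; the file contents here are illustrative:

----
# Scratch copy with a typical GRUB_CMDLINE_LINUX_DEFAULT line
printf 'GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet"\n' > /tmp/grub.test

# Same substitution as above, pointed at the scratch file
grep -q "swapaccount=1" /tmp/grub.test || \
  sed -i -r 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)"(.*)"|\1"\2 swapaccount=1"|' /tmp/grub.test

cat /tmp/grub.test
----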

== Procedures
. Create a storage class. For precise steps, refer to <<_RBD-dynamic-persistent-volumes>>.

. Add the `Helm` chart repository.
+
----
helm repo add suse https://kubernetes-charts.suse.com/
----

. Map the {productname} master node external IP address to the `<CAP_DOMAIN>` and
`uaa.<CAP_DOMAIN>` on your DNS server.
For testing purposes you can also use `/etc/hosts`; the example below uses the
`omg.howdoi.website` wildcard DNS service, so `<CAP_DOMAIN>` is
`<CAASP_MASTER_NODE_EXTERNAL_IP>.omg.howdoi.website`.
+
----
<CAASP_MASTER_NODE_EXTERNAL_IP> <CAASP_MASTER_NODE_EXTERNAL_IP>.omg.howdoi.website
<CAASP_MASTER_NODE_EXTERNAL_IP> uaa.<CAASP_MASTER_NODE_EXTERNAL_IP>.omg.howdoi.website
----

. Create a shared values file. This file will be used by the {cap} `uaa`, `cf`, and
`console` charts. Substitute the values enclosed in `< >` with your specific values.
+
----
cat << EOF > custom_values.yaml
env:
  DOMAIN: <CAP_DOMAIN>
  UAA_HOST: uaa.<CAP_DOMAIN>
kube:
  external_ips:
  - <CAASP_MASTER_NODE_EXTERNAL_IP>
  - <CAASP_MASTER_NODE_INTERNAL_IP>
  storage_class:
    persistent: <STORAGE_CLASS_NAME>
secrets:
  # CLUSTER_ADMIN_PASSWORD is for user login access
  CLUSTER_ADMIN_PASSWORD: <SECURE_PASSWORD>
  # UAA_ADMIN_CLIENT_SECRET is for OAuth client
  UAA_ADMIN_CLIENT_SECRET: <SECURE_SECRET>
EOF
----
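+
As an illustrative sanity check (a sketch, not part of the official procedure), you can confirm that no `<PLACEHOLDER>` values were left unreplaced before passing the file to Helm:
+
----
# List any unreplaced <PLACEHOLDER> values; grep fails when none are found
grep -nE '<[A-Z_]+>' custom_values.yaml \
  && echo "custom_values.yaml still contains placeholders" >&2 \
  || echo "custom_values.yaml looks complete"
----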

. Deploy `uaa`:
+
----
helm install suse/uaa --name uaa --namespace uaa --values custom_values.yaml
kubectl -n uaa get pod
NAME READY STATUS RESTARTS AGE
mysql-0 1/1 Running 0 21h
secret-generation-1-wr76g 0/1 Completed 0 21h
uaa-0 1/1 Running 1 21h
...
----
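+
Wait until every pod in the `uaa` namespace is `Running` or `Completed` before continuing. A small `awk` filter over the `kubectl` output makes this scriptable; the sketch below runs against sample output so it can be tried anywhere (in practice, substitute `kubectl -n uaa get pod --no-headers`):
+
----
# Sample output; in practice use: kubectl -n uaa get pod --no-headers
SAMPLE_OUTPUT='mysql-0 1/1 Running 0 21h
secret-generation-1-wr76g 0/1 Completed 0 21h
uaa-0 1/1 Running 1 21h'

# Column 3 is the pod status; count pods that are neither Running nor Completed
NOT_READY=$(printf '%s\n' "$SAMPLE_OUTPUT" | awk '$3 != "Running" && $3 != "Completed"' | wc -l)
echo "pods not ready: $NOT_READY"
----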

. Verify the `uaa` OAuth endpoint -- the following should return a JSON object:
+
----
curl --insecure https://uaa.<CAP_DOMAIN>:2793/.well-known/openid-configuration
----
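+
If you prefer a pass/fail check over eyeballing the response, pipe it through the Python standard library JSON parser. The sketch below uses an inline sample response so it can be tried without a cluster; the commented `curl` line shows how to capture the real one:
+
----
# In practice, capture the real response with:
#   RESPONSE=$(curl --insecure --silent https://uaa.<CAP_DOMAIN>:2793/.well-known/openid-configuration)
# Sample response used here for illustration:
RESPONSE='{"issuer": "https://uaa.example.com:2793/oauth/token"}'

printf '%s' "$RESPONSE" | python3 -m json.tool > /dev/null \
  && echo "uaa returned valid JSON"
----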

. Deploy `cf`:
+
----
SECRET=$(kubectl get pods --namespace uaa -o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
CA_CERT="$(kubectl get secret $SECRET --namespace uaa -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
helm install suse/cf --name scf --namespace scf --values custom_values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"
kubectl -n scf get pod
NAME READY STATUS RESTARTS AGE
adapter-0 2/2 Running 0 56m
api-group-0 2/2 Running 0 49m
bits-0 1/1 Running 0 57m
blobstore-0 2/2 Running 0 56m
cc-clock-0 2/2 Running 0 61m
cc-uploader-0 2/2 Running 0 61m
cc-worker-0 2/2 Running 0 61m
cf-usb-group-0 1/1 Running 0 53m
diego-api-0 2/2 Running 0 61m
diego-brain-0 2/2 Running 0 61m
diego-cell-0 2/2 Running 0 57m
diego-ssh-0 2/2 Running 0 61m
doppler-0 2/2 Running 0 56m
locket-0 2/2 Running 0 61m
log-api-0 2/2 Running 0 55m
log-cache-scheduler-0 2/2 Running 0 56m
mysql-0 1/1 Running 0 55m
nats-0 2/2 Running 0 57m
nfs-broker-0 1/1 Running 0 61m
post-deployment-setup-1-vrbcv 0/1 Completed 0 61m
router-0 2/2 Running 0 57m
routing-api-0 2/2 Running 0 61m
secret-generation-1-l9bf7 0/1 Completed 0 61m
syslog-scheduler-0 2/2 Running 0 61m
tcp-router-0 2/2 Running 0 61m
...
----
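+
The two shell variables above chain together: the first `kubectl` query looks up the name of the secret holding the internal CA certificate, and the second reads that secret and base64-decodes its value (Kubernetes stores secret data base64-encoded). The decode step can be exercised without a cluster, using a stand-in value:
+
----
# Stand-in for the secret payload; the real value comes from kubectl
SAMPLE_VALUE=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)

# Same decode step as in the CA_CERT line above
DECODED="$(printf '%s' "$SAMPLE_VALUE" | base64 --decode)"
echo "$DECODED"
----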

. Deploy `console`:
+
----
helm install suse/console --name console --namespace console --values custom_values.yaml
kubectl -n console get pod
NAME READY STATUS RESTARTS AGE
stratos-0 3/3 Running 1 54m
stratos-db-8d658bbf5-nsng6 1/1 Running 0 54m
volume-migration-1-s96cc 0/1 Completed 0 54m
...
----

A successful deployment allows you to access the {cap} console in a Web browser at
https://<CAP_DOMAIN>:8443/login. The default user name is `admin`; the password is
the `<SECURE_PASSWORD>` you set in the shared values file above.
For integration with {cap}, refer to: link:https://documentation.suse.com/suse-cap/{cap_version}/single-html/cap-guides/#cha-cap-depl-caasp[Deploying SUSE Cloud Application Platform on SUSE CaaS Platform].
