
Rework architecture diagram for v4 #734

Closed
r0ckarong opened this issue Mar 10, 2020 · 7 comments · Fixed by #740
r0ckarong commented Mar 10, 2020

Relates to:

#571
#716
#717

Replace the old diagram:

(image: architecture-caasp-components)

@r0ckarong r0ckarong added v4 CaaSP v4 ArchitectureGuide Fix will change the Architecture Guide labels Mar 10, 2020
@r0ckarong r0ckarong added this to the Sprint 25 milestone Mar 10, 2020
@r0ckarong r0ckarong self-assigned this Mar 10, 2020
r0ckarong commented Mar 13, 2020

@flavio @ereslibre @kkaempf This is the current state. Is this any better? Are we missing something?

(image: caasp_cluster_components)

(image: caasp_cluster_software)

ereslibre commented Mar 13, 2020

Some comments:

  • I would move etcd out of System workloads; it belongs in the Control Plane group, since the apiserver connects to it.

  • kube-proxy actually runs as a DaemonSet, so I would place it under System workloads on all nodes.

  • kubelet talks to CRI-O.

  • The load balancer is not very clear from my perspective. It should connect to the apiserver on every control plane node, and it should not connect to kubelet or SSH. I would remove any layer in front of SSH connections entirely; we target nodes directly, except in very specific customer setups that use bastion hosts (I wouldn't draw that case).

  • I don't fully understand what "Invoke Kubernetes" means, where it comes from, and what its purpose is in going through "Internet".

  • kubectl does not talk to the kubelet but to the load balancer, which in turn talks to the apiserver on every control plane node.
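To make the last point concrete, a kubeconfig points `kubectl` at the load balancer's address rather than at any single node. A minimal sketch (the cluster name, address, and user below are hypothetical, not taken from the diagram):

```yaml
# Hypothetical kubeconfig excerpt: kubectl talks only to the load
# balancer, which forwards to the apiserver on every control plane node.
apiVersion: v1
kind: Config
clusters:
  - name: caasp
    cluster:
      # VIP or DNS name of the external load balancer (made-up name)
      server: https://lb.example.com:6443
contexts:
  - name: default
    context:
      cluster: caasp
      user: admin
current-context: default
users:
  - name: admin
    user: {}
```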

@r0ckarong

* I don't fully understand what "Invoke Kubernetes" means, where it comes from, and what its purpose is in going through "Internet".

This was an orphaned line from a previous iteration of the diagram. I've changed it now.

* `kubectl` does not talk to the `kubelet` but to the `loadbalancer`, which in turn talks to the `apiserver` on every control plane node.

Changed.

@r0ckarong

@Martin-Weiss Any thoughts on this?

@Martin-Weiss

@Martin-Weiss Any thoughts on this ...

Looks great - but yes, there is something missing that is mandatory for all enterprise customers with on-premise deployments: airgap and staging.

  • SUSE provides RPMs via SCC -> mirrored on premise with SMT/RMT/SUSE Manager
  • SUSE provides container images -> copied with skopeo to an on-premise registry (docker-distribution registry + Portus, connected to LDAP/AD)
  • SUSE provides Helm charts -> helm fetch, copy the files to a web server, then helm index

For all these things we have to have staging and different namespaces for test/reference/production.

All internal components then fetch their artifacts from these internal sources, depending on the environment.
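The mirroring flow described above could be sketched roughly as follows; every registry name, image path, and chart name here is invented for illustration and would need to be replaced with the real ones for a given release:

```shell
# Sketch only -- all hostnames and artifact names below are hypothetical.

# Container images: copy from the SUSE registry to the on-premise
# registry with skopeo.
skopeo copy \
  docker://registry.suse.com/caasp/v4/example-image:1.0 \
  docker://registry.internal.example.com/caasp/v4/example-image:1.0

# Helm charts: fetch the chart, copy it to the internal web server,
# then regenerate the repository index.
helm fetch suse/example-chart
cp example-chart-*.tgz /srv/www/charts/
helm repo index /srv/www/charts/ --url https://charts.internal.example.com
```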

Some other thoughts:

  • The LDAP server should not be hosted on the admin machine; it is external, fault tolerant, and probably Active Directory.
  • The load balancer should also be separate from the admin machine; it is external, fault tolerant, and probably HAProxy, F5, or similar.
  • The storage should not be "within the master / worker", as it is an externally connected service - most often NFS on NetApp or SLES, or an external Ceph or vSphere provisioner.


r0ckarong commented Mar 19, 2020

* The LDAP server should not be hosted on the admin machine - this is external / fault tolerant and probably Active-Directory

* The load balancer should also be separated from the admin machine - this is external / fault tolerant and probably HAPROXY or F5 or similar

It's not: there are two boxes, one labeled "Customer Infrastructure" and a smaller one labeled "Management Workstation"; neither LDAP nor the load balancer is in the smaller box.

* The storage should not be "within the master / worker" as this is an external connected service - most often NFS to NetApp or SLES, or external Ceph or VSphere provisioner

Moved the storage service in the new version.

For all these things we have to have staging and different namespaces for test/reference/production.

I don't think I can produce anything meaningful by representing namespaces in the same diagram as a high-level component overview. The staging setup and cluster-specific namespaces belong to two entirely different levels of diagram that are not in scope of this task.

The airgap diagram is still in the works; I'm trying to find the right level of detail between "airgap" as a concept and "all components" as in the detailed component diagram.

@r0ckarong r0ckarong linked a pull request Mar 19, 2020 that will close this issue
r0ckarong pushed a commit that referenced this issue Mar 24, 2020
* Start rework of architecture diagram

* Update archi diagram without misleading color scheme

* Add components diagram rework state

* Update diagrams formatting and colors

* Update diagrams with icon outlines, incorporate first round of comments

* Fix arrow attachment in software diagram

* Remove old architecture diagram

* Replace architecture diagram in architecture doc

* Add default deployment scenario to deployment guide

* Add draft for airgap scenario diagram

* Update draft for airgap diag

* Update components diagram according to comments, move storage service, increase visibility of sub-components

* Add missing anchor to system reqs

* Update components diagram

* Update airgap draft

* Add draft for airgap network diagram
r0ckarong pushed a commit that referenced this issue Apr 6, 2020