This repository was archived by the owner on Jun 28, 2023. It is now read-only.

Standalone workload on Windows with Docker does not come up after windows restart #1463

Closed
thesteve0 opened this issue Aug 26, 2021 · 2 comments
Labels
triage/duplicate A duplicate issue that should be closed

Comments

@thesteve0

Bug Report

If I install TCE standalone on Docker for Windows, the TCE instance does not come back up after I restart my computer.
Even if I manually restart the containers, it does not work.
The problem appears to be with networking in the load-balancer container:

   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[NOTICE] 235/192652 (1) : New worker #1 (8) forked
[WARNING] 235/192657 (1) : Reexecuting Master process
[NOTICE] 235/192657 (1) : haproxy version is 2.2.9-2~bpo10+1
[NOTICE] 235/192657 (1) : path to executable is /usr/sbin/haproxy
[ALERT] 235/192657 (1) : sendmsg()/writev() failed in logger #1: No such file or directory (errno=2)
[WARNING] 235/192657 (8) : Stopping frontend controlPlane in 0 ms.
[WARNING] 235/192657 (8) : Stopping backend kube-apiservers in 0 ms.
[WARNING] 235/192657 (8) : Stopping frontend GLOBAL in 0 ms.
[WARNING] 235/192657 (8) : Proxy controlPlane stopped (cumulated conns: FE: 66, BE: 0).
[WARNING] 235/192657 (8) : Proxy kube-apiservers stopped (cumulated conns: FE: 0, BE: 66).
[NOTICE] 235/192657 (1) : New worker #1 (34) forked
[WARNING] 235/192657 (8) : Proxy GLOBAL stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] 235/192657 (1) : Former worker #1 (8) exited with code 0 (Exit)
[WARNING] 235/192657 (34) : Server kube-apiservers/josh-better-be-right-control-plane-g5kcz is DOWN, reason: Layer4 connection problem, info: "SSL handshake failure", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 235/192657 (34) : backend 'kube-apiservers' has no server available!
[WARNING] 235/192716 (34) : Server kube-apiservers/josh-better-be-right-control-plane-g5kcz is UP, reason: Layer7 check passed, code: 200, check duration: 2ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[NOTICE] 236/220939 (1) : haproxy version is 2.2.9-2~bpo10+1
[NOTICE] 236/220939 (1) : path to executable is /usr/sbin/haproxy
[ALERT] 236/220939 (1) : sendmsg()/writev() failed in logger #1: No such file or directory (errno=2)
[NOTICE] 236/220939 (1) : New worker #1 (9) forked
[WARNING] 236/220942 (9) : Server kube-apiservers/josh-better-be-right-control-plane-g5kcz is DOWN, reason: Layer6 timeout, check duration: 2000ms. 0 active and 0 backup servers left. 14089 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 236/220942 (9) : backend 'kube-apiservers' has no server available!
[WARNING] 236/221126 (1) : Exiting Master process...
[WARNING] 236/221126 (9) : Stopping frontend control-plane in 0 ms.
[WARNING] 236/221126 (9) : Stopping backend kube-apiservers in 0 ms.
[WARNING] 236/221126 (9) : Stopping frontend GLOBAL in 0 ms.
[WARNING] 236/221126 (9) : Proxy control-plane stopped (cumulated conns: FE: 14169, BE: 0).
[WARNING] 236/221126 (9) : Proxy kube-apiservers stopped (cumulated conns: FE: 0, BE: 14169).
[WARNING] 236/221126 (9) : Proxy GLOBAL stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] 236/221126 (1) : Current worker #1 (9) exited with code 0 (Exit)
[WARNING] 236/221126 (1) : All workers exited. Exiting... (0)
[NOTICE] 236/221127 (1) : haproxy version is 2.2.9-2~bpo10+1
[NOTICE] 236/221127 (1) : path to executable is /usr/sbin/haproxy
[ALERT] 236/221127 (1) : sendmsg()/writev() failed in logger #1: No such file or directory (errno=2)
[NOTICE] 236/221127 (1) : New worker #1 (8) forked
[WARNING] 236/221129 (8) : Server kube-apiservers/josh-better-be-right-control-plane-g5kcz is DOWN, reason: Layer6 timeout, check duration: 2000ms. 0 active and 0 backup servers left. 8205 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 236/221129 (8) : backend 'kube-apiservers' has no server available!
[WARNING] 236/221141 (1) : Exiting Master process...
[WARNING] 236/221141 (8) : Stopping frontend control-plane in 0 ms.
[WARNING] 236/221141 (8) : Stopping backend kube-apiservers in 0 ms.
[WARNING] 236/221141 (8) : Stopping frontend GLOBAL in 0 ms.
[WARNING] 236/221141 (8) : Proxy control-plane stopped (cumulated conns: FE: 8213, BE: 0).
[WARNING] 236/221141 (8) : Proxy kube-apiservers stopped (cumulated conns: FE: 0, BE: 8213).
[WARNING] 236/221141 (8) : Proxy GLOBAL stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] 236/221141 (1) : Current worker #1 (8) exited with code 0 (Exit)
[WARNING] 236/221141 (1) : All workers exited. Exiting... (0)
[NOTICE] 236/221219 (1) : haproxy version is 2.2.9-2~bpo10+1
[NOTICE] 236/221219 (1) : path to executable is /usr/sbin/haproxy
[ALERT] 236/221219 (1) : sendmsg()/writev() failed in logger #1: No such file or directory (errno=2)
[NOTICE] 236/221219 (1) : New worker #1 (8) forked
[WARNING] 236/221221 (8) : Server kube-apiservers/josh-better-be-right-control-plane-g5kcz is DOWN, reason: Layer6 timeout, check duration: 2000ms. 0 active and 0 backup servers left. 8150 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 236/221221 (8) : backend 'kube-apiservers' has no server available!
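The truncated warning at the top of the log is HAProxy complaining that no `client`, `connect`, or `server` timeouts are configured. A minimal sketch of the three directives it asks for, in HAProxy's `defaults` section, looks like this (values are illustrative, and note this warning is separate from the Layer6 check timeouts that actually take the backend down):

```haproxy
# Illustrative fragment only; the kind/CAPD-generated haproxy.cfg would
# need equivalents of these three directives to silence the warning.
defaults
  timeout client  30s
  timeout connect 10s
  timeout server  30s
```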

Expected Behavior

At the least, when I restart my computer I should be able to manually bring my cluster back up.
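For context, the manual restart attempt amounts to something like the sketch below. The container names are assumptions: kind/CAPD clusters create an external load balancer (`<cluster>-lb`) and control-plane containers (`<cluster>-control-plane-*`); `DRY_RUN=1` only prints the commands instead of invoking Docker.

```shell
# Sketch of manually bringing the cluster containers back up.
# CLUSTER is a hypothetical name; adjust to match `docker ps -a` output.
CLUSTER="my-standalone-cluster"
DRY_RUN=1

restart() {
  if [ "$DRY_RUN" = "1" ]; then
    # Print what would run, without requiring Docker to be present.
    echo "docker start \$(docker ps -aq --filter name=$1)"
  else
    docker start "$(docker ps -aq --filter "name=$1")"
  fi
}

restart "${CLUSTER}-lb"
restart "${CLUSTER}-control-plane"
```

Even with the containers running again, the load balancer fails its health checks as shown in the log above.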

Steps to Reproduce the Bug

This is known not to work; I am just filing it here so it can get on the roadmap.

Environment Details

  • Build version (tanzu version): 0.7
  • Operating System (client): Windows 10 with Docker Desktop
@jorgemoralespou
Contributor

Sounds like a duplicate: #832

@joshrosso
Contributor

Closing as this is a duplicate of #832.

Please note that we do not support host restarts for any CAPD-based clusters at this time.

@joshrosso added the triage/duplicate label and removed the kind/bug and triage/needs-triage labels on Aug 26, 2021