Add CLI option --kube-context to override the kubecontext in Skaffold #2447

Merged: 9 commits merged into GoogleContainerTools:master from w/kubecontext/cli on Aug 15, 2019

Conversation

@corneliusweig (Contributor) commented Jul 10, 2019

This PR adds a new CLI option --kube-context to Skaffold run, dev, debug, delete, deploy, and build. It's the first step in providing several options to override the kubecontext Skaffold uses.

See #2384
Depends on #2509
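For readers unfamiliar with how such a flag is typically wired up, here is a minimal, hedged sketch using cobra/pflag; the variable and command names are illustrative, not Skaffold's actual flag registration:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// kubeContext holds the value of the --kube-context flag; an empty value
// means "use the current context from the kubeconfig".
var kubeContext string

func main() {
	cmd := &cobra.Command{
		Use: "skaffold",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("kube-context override:", kubeContext)
		},
	}
	// Skaffold attaches the flag to run, dev, debug, delete, deploy, and
	// build; here a single command stands in for all of them.
	cmd.Flags().StringVar(&kubeContext, "kube-context", "",
		"Deploy to this Kubernetes context")
	if err := cmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```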

@corneliusweig corneliusweig changed the title Add CLI option to override the kubecontext Add CLI option --kube-context to override the kubecontext in Skaffold Jul 10, 2019
```go
currentConfigErr error
kubeConfigOnce   sync.Once
kubeConfig       clientcmd.ClientConfig
kubeContext      string
```
@corneliusweig (Contributor, Author):

I'm a bit sad about introducing some global state here. However, I considered several alternatives, and this appeared to be the one with the least headache overall. Let me know your thoughts...

Contributor:

It was already global state, so it's fine.
Maybe in the future we can think about how to combine it with RunContext.

@corneliusweig (Contributor, Author):

I was already investigating in that direction. It requires passing the RunContext down in quite a few places, so I decided against it. But it's a good idea nevertheless.

@codecov (bot) commented Jul 10, 2019

Codecov Report

Merging #2447 into master will increase coverage by 0.03%.
The diff coverage is 96.15%.

| Impacted Files | Coverage Δ |
|---|---|
| pkg/skaffold/config/options.go | 100% <ø> (ø) ⬆️ |
| cmd/skaffold/app/cmd/flags.go | 100% <ø> (ø) ⬆️ |
| pkg/skaffold/kubernetes/context/context.go | 100% <100%> (ø) ⬆️ |
| pkg/skaffold/docker/parse.go | 87.36% <100%> (ø) ⬆️ |
| cmd/skaffold/app/cmd/cmd.go | 68.25% <100%> (+0.25%) ⬆️ |
| pkg/skaffold/kubernetes/client.go | 40% <50%> (-18.83%) ⬇️ |

@balopat balopat added the kokoro:run runs the kokoro jobs on a PR label Jul 11, 2019
@kokoro-team kokoro-team removed the kokoro:run runs the kokoro jobs on a PR label Jul 11, 2019

```go
func getCurrentConfig() (clientcmdapi.Config, error) {
	currentConfigOnce.Do(func() {
```
Contributor:

Why did we change the semantics of the currentConfigOnce?
Doesn't this open up the possibility of inconsistencies, where a skaffold process starts off with version 1 of a kubeconfig file, someone then mutates that file, and a second read picks up version 2?

@corneliusweig (Contributor, Author):

The once-semantics are now encapsulated in getKubeConfig, so for the moment we are safe here. However, in the future, when the kube-context can also be overridden from skaffold.yaml, we will have to re-read the kubeconfig. This will occur during the skaffold.yaml parsing/profile phase and should not take very long. Do you think Skaffold should take extra precautions to work with a kubeconfig identical to the first read?

@corneliusweig (Contributor, Author):

I found a nice way to ensure that Skaffold always uses the kubeconfig from the first read. Note that this has not been the case for the kubernetes REST client so far, so it is much more consistent now.

Note that there is still the possibility that the kubecontext changes due to a change in skaffold.yaml (via UseKubeContext). However, that is the desired behavior.
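To make the once-semantics concrete, here is a minimal sketch of the pattern being discussed, assuming the variable names from the excerpt above; the exact Skaffold wiring may differ:

```go
package context

import (
	"sync"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

var (
	kubeConfigOnce sync.Once
	kubeConfig     clientcmd.ClientConfig
	kubeContext    string // value of --kube-context, set once at startup
)

// getKubeConfig builds the kubeconfig loader exactly once, applying the
// --kube-context override, so every later caller sees the same view even
// if the file on disk changes afterwards.
func getKubeConfig() clientcmd.ClientConfig {
	kubeConfigOnce.Do(func() {
		loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
		kubeConfig = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			loadingRules,
			&clientcmd.ConfigOverrides{CurrentContext: kubeContext},
		)
	})
	return kubeConfig
}

// getCurrentConfig returns the raw merged config from the cached loader.
func getCurrentConfig() (clientcmdapi.Config, error) {
	return getKubeConfig().RawConfig()
}
```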

@corneliusweig corneliusweig force-pushed the w/kubecontext/cli branch 4 times, most recently from af067a1 to f756516 Compare July 16, 2019 10:53

@balopat (Contributor) commented Jul 18, 2019

I will have a look tomorrow, but as a tip: can you rebase on top of master? Changes can sometimes behave differently on Travis than locally, because Travis merges master into your branch. It does look odd that test-dev is still around; I see you added logs to check whether it gets cleaned up by the other test...

@corneliusweig (Contributor, Author):

@balopat 🤦‍♂️ Thanks, that gives me a lead as to what is going wrong. You don't have to waste your time; I can debug further now.

@kokoro-team kokoro-team removed the kokoro:run runs the kokoro jobs on a PR label Jul 18, 2019
@corneliusweig (Contributor, Author) commented Jul 18, 2019

Alright, I found out what is going on:
When my new test case runs, the deployment test-dev from the previous test is still around. This deployment is erroneously picked up by pkg/skaffold/deploy/status_check.go:80 (the namespace in my test is "", which means all namespaces). From there on, the test is doomed.

I think this is mainly a problem with how the status_check code determines the deployments to consider for watching. It would be better if the status check looked for a UUID label generated for each Skaffold run (see the sketch below). @tejal29 WDYT?

EDIT: I opened #2496 with more details about this issue.
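For illustration, the suggested fix might look like the following sketch; the label key skaffold.dev/run-id and the helper are hypothetical, showing only the idea of filtering watched deployments by a per-run UUID label:

```go
package statuscheck

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// RunIDLabel is a hypothetical label that Skaffold would stamp onto every
// resource it deploys, with a fresh UUID per run.
const RunIDLabel = "skaffold.dev/run-id"

// deploymentsForRun lists only the deployments created by this run instead
// of every deployment in the (possibly cluster-wide) namespace.
func deploymentsForRun(ctx context.Context, client kubernetes.Interface, ns, runID string) ([]appsv1.Deployment, error) {
	list, err := client.AppsV1().Deployments(ns).List(ctx, metav1.ListOptions{
		LabelSelector: RunIDLabel + "=" + runID,
	})
	if err != nil {
		return nil, err
	}
	return list.Items, nil
}
```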

@balopat balopat added the kokoro:run runs the kokoro jobs on a PR label Jul 19, 2019
@kokoro-team kokoro-team removed the kokoro:run runs the kokoro jobs on a PR label Jul 19, 2019
@balopat (Contributor) left a review comment:

I tried it out locally, merging in the latest master, and I'm having port-forwarding issues.

skaffold dev --default-repo gcr.io/balintp-gcp-lab --rpc-http-port 8080 --port-forward=true on the microservices example times out. Maybe we're missing a kubecontext passed down there somewhere?

@balopat (Contributor) commented Jul 19, 2019

Yup, we're missing the kubecontext here: https://github.com/GoogleContainerTools/skaffold/blob/master/pkg/skaffold/kubernetes/portforward/kubectl_forwarder.go#L54

It could be useful to do a mechanical search for anything kubectl-related and double-check whether the kubecontext is missing.

@corneliusweig (Contributor, Author):

Oh no! I wasn't aware that so many places called kubectl directly without going through pkg/skaffold/deploy/kubectl. I think the cleaner approach would be to consistently use our wrapper to call kubectl (a sketch of the idea follows below), but I'll have to see how much refactoring that requires.

Thanks for catching this early!
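As an illustration of that wrapper idea, here is a simplified sketch (not Skaffold's actual pkg/skaffold/deploy/kubectl API) that makes it impossible for a call site to forget the context:

```go
package kubectl

import "os/exec"

// CLI is a minimal wrapper that remembers the kubecontext once, so that no
// call site can forget to pass it down to kubectl.
type CLI struct {
	KubeContext string
}

// Command builds a kubectl invocation with --context prepended whenever a
// context override is configured.
func (c CLI) Command(args ...string) *exec.Cmd {
	if c.KubeContext != "" {
		args = append([]string{"--context", c.KubeContext}, args...)
	}
	return exec.Command("kubectl", args...)
}
```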

@balopat balopat added the kokoro:run runs the kokoro jobs on a PR label Aug 14, 2019
@kokoro-team kokoro-team removed the kokoro:run runs the kokoro jobs on a PR label Aug 14, 2019
@balopat (Contributor) left a review comment:

LGTM after rebase.

@balopat balopat added the kokoro:run runs the kokoro jobs on a PR label Aug 14, 2019
@kokoro-team kokoro-team removed the kokoro:run runs the kokoro jobs on a PR label Aug 14, 2019
@corneliusweig (Contributor, Author):

Rebase is done. Thanks everyone for reviewing and helping out!

@balopat balopat closed this Aug 14, 2019
@balopat balopat reopened this Aug 14, 2019
@balopat balopat added the kokoro:run runs the kokoro jobs on a PR label Aug 14, 2019
@kokoro-team kokoro-team removed the kokoro:run runs the kokoro jobs on a PR label Aug 14, 2019
Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
Many places in the code assume that kubeConfig.CurrentContext is the correct
kubeContext to use. To avoid passing the configured value down everywhere,
the cached kubeConfig is modified with the true kubeContext to use.
For that, an override kubeContext is configured early on.

Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
This ensures that overridden defaults are consistent when using the kubeconfig and the REST client.

Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
… kubeconfig

Signed-off-by: Cornelius Weig <22861411+corneliusweig@users.noreply.github.com>
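To make the consistency claim in the commit message above concrete, here is a hedged sketch of deriving the REST client config from the same cached, context-overridden loader as the kubeconfig reads (building on the getKubeConfig sketch earlier; the names are assumptions):

```go
package context

import (
	"k8s.io/client-go/rest"
)

// getRestConfig derives the REST client configuration from the same cached,
// context-overridden loader (getKubeConfig, sketched earlier), so the REST
// client and all kubeconfig reads agree on which context is active.
func getRestConfig() (*rest.Config, error) {
	return getKubeConfig().ClientConfig()
}
```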
@corneliusweig (Contributor, Author):

Looks like another glitch in the CI. I've rebased one more time :)

@tejal29 tejal29 added the kokoro:run runs the kokoro jobs on a PR label Aug 15, 2019
@kokoro-team kokoro-team removed the kokoro:run runs the kokoro jobs on a PR label Aug 15, 2019
@tejal29 (Contributor) commented Aug 15, 2019

Sorry @corneliusweig, we changed from travis-ci.org to the premium plan and cancelled all jobs on the old free Travis, hence the failure you saw.
Your job is green on Travis. I just started the kokoro job.

@balopat balopat added the kokoro:run runs the kokoro jobs on a PR label Aug 15, 2019
@balopat balopat merged commit ba24064 into GoogleContainerTools:master Aug 15, 2019
@corneliusweig corneliusweig deleted the w/kubecontext/cli branch August 15, 2019 20:31
Labels: cla: yes, kokoro:run