
Add NFS to VolumeSources in workspaces #3467

Closed

veggiemonk opened this issue Oct 28, 2020 · 6 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@veggiemonk

Hi,

Nice work on the API design and the naming of things.
Coming from Cloud Build (GCP), it seems more verbose but also gives more control.
Kudos for that 😄

I would like to use an NFS drive to share data among tasks. VolumeSources seems like the correct place for that; correct me if this is a wrong assumption.

I saw the comment in the documentation (at the end of the section) saying to open an issue if I wanted more types supported:
https://github.com/tektoncd/pipeline/blob/master/docs/workspaces.md#specifying-volumesources-in-workspaces
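
For context, the VolumeSources documented there today are emptyDir, configMap, secret, persistentVolumeClaim, and volumeClaimTemplate, e.g.:

workspaces:
- name: myworkspace
  persistentVolumeClaim:
    claimName: mypvc  # an existing PVC
- name: scratch       # name chosen here just for illustration
  emptyDir: {}        # discarded when the TaskRun's pod is deleted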

Basic example of using an NFS volume in a Kubernetes Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-example
spec:
  containers:
    - image: ...
      volumeMounts:
        - mountPath: /mnt/logs
          name: logging-mount
  volumes:
    - name: logging-mount
      nfs:
        path: /logs
        server: 172.100.101.102

Example of usage in a workspace:

workspaces:
- name: myworkspace
  nfs: 
    path: /logs
    server: 172.100.101.102

After looking at the code, it seems this is the place to add it:

type WorkspaceBinding struct {
    // ... an NFS volume source field could be added alongside the existing ones
}

Maybe there is a simpler way to do exactly that, but I'm new to Tekton; any guidance is appreciated.
I'm running on GKE and the NFS drive is Filestore.

Let me know what you think about this,

Julien

@ghost commented Oct 28, 2020

You may be able to do this today using a PersistentVolume and a volumeClaimTemplate. Does the following work for you?

# PersistentVolume config taken from
# https://cloud.google.com/filestore/docs/accessing-fileshares#create_a_persistent_volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: <storage size>
  accessModes:
  - ReadWriteMany
  nfs:
    path: <file share name>
    server: <FileStore instance IP>
---
# In the TaskRun Workspaces list
workspaces:
- name: myworkspace
  volumeClaimTemplate:
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: ""
      volumeName: mypv
      resources:
        requests:
          storage: <storage amount for taskrun>
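
For reference, the claim template above should make Tekton generate a PVC per TaskRun that binds to mypv by name, roughly like this (the claim name is generated by Tekton; shown only for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-<generated>     # actual name is generated per TaskRun
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""      # empty string disables dynamic provisioning
  volumeName: mypv          # binds directly to the PersistentVolume above
  resources:
    requests:
      storage: <storage amount for taskrun>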

@ghost commented Oct 28, 2020

/kind feature

@tekton-robot added the kind/feature label Oct 28, 2020
@veggiemonk (Author) commented Oct 30, 2020

Hi @sbwsg

I tried what you suggested. Thanks a lot for providing the example.

The issue I ran into was that all the PVCs got stuck in "Pending", and kubectl describe pod XXX reported "pod has unbound PersistentVolumeClaims". I could not understand why.

Found a simple way:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mytask
spec:
  steps:
    - name: writesomething
      image: ubuntu
      command: ["bash", "-c"]
      args: ["mkdir -p /mnt/storage/tekton/ && echo 'hello tekton!' > /mnt/storage/tekton/test.txt"]
      volumeMounts:
        - mountPath: /mnt/storage
          name: logging-mount
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: mytaskrun-
spec:
  serviceAccountName: default
  taskRef:
    name: mytask
  podTemplate:
    volumes:
      - name: logging-mount
        nfs:
          path: <file share name>
          server: <FileStore instance IP>

The main difference is that every pod has access to the whole disk.
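
If sharing the whole disk becomes a problem, I think a subPath on the volumeMount could scope each task to its own subdirectory, something like this (untested sketch; the subPath value is just an example):

volumeMounts:
  - mountPath: /mnt/storage
    name: logging-mount
    subPath: tekton/mytask   # mount only this subdirectory of the share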

What do you think about this approach?

What features do I lose by not using a workspace?

@veggiemonk
Copy link
Author

Hi @sbwsg

I still would like to have a workspace with NFS. The use case I have is to share a build cache among pipelineruns. Basically to decouple the lifetime of the workspace from the lifetime of the pipelinerun.

Does that make sense?
What is the way forward? Can I make a PR? Join a meeting? 😄

Highly appreciated

@ghost commented Nov 4, 2020

> Hi @sbwsg
>
> I tried what you suggested. Thanks a lot for providing the example.
>
> The issue I ran into was that all the PVCs got stuck in "Pending", and kubectl describe pod XXX reported "pod has unbound PersistentVolumeClaims". I could not understand why.

I have just tested this against Filestore and was able to get it working. For the Pending issue I recommend trying again with a new PersistentVolume object and looking at its state when you submit the TaskRun.

I noticed that creating a new PersistentVolume is required for every TaskRun. The PersistentVolume starts out in the Available state, moves to Bound when the TaskRun executes, and finally enters the Released state when the TaskRun is deleted. Once the PersistentVolume is Released, it's not possible to start a new TaskRun using the same PersistentVolume - you have to delete the existing mypv and create a new one. Here are the resources I created:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1T
  accessModes:
  - ReadWriteMany
  nfs:
    path: /test1
    server: <ip address>
---
kind: Task
apiVersion: tekton.dev/v1beta1
metadata:
  name: foo
spec:
  workspaces:
  - name: data
  steps:
  - name: check-data
    image: alpine:3.12.0
    script: |
      echo "foo" >> /workspace/data/foo
      cat /workspace/data/foo
---
kind: TaskRun
apiVersion: tekton.dev/v1beta1
metadata:
  name: run-foo
spec:
  taskRef:
    name: foo
  workspaces:
  - name: data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteMany
        storageClassName: ""
        volumeName: mypv
        resources:
          requests:
            storage: 500Mi
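
(As an aside, I believe the reason a Released PV can't be re-bound is that it still carries the claimRef of the deleted claim; clearing spec.claimRef on the PV, e.g. with kubectl edit pv mypv, should return it to Available. The leftover field looks roughly like this, with illustrative names:)

spec:
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: pvc-abc123   # the claim generated for the previous TaskRun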

The problem with nfs specifically appears to be that Dynamic Provisioning of nfs PersistentVolumes is not supported out-of-the-box with kubernetes. See the note about nfs in this section of the docs: https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner

> For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.

What this effectively means is that you must create a new nfs PersistentVolume every time you run a TaskRun. There are external third-party dynamic provisioners for nfs volumes that can apparently help you work around this limitation. Here's a doc specific to FileStore that describes setting one up: https://cloud.google.com/community/tutorials/gke-filestore-dynamic-provisioning
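
With such a provisioner installed, the volumeClaimTemplate could point at its StorageClass instead of a pre-created PV. A rough sketch - the class name is whatever your provisioner registers (nfs-client is just a common default, not something Tekton ships):

workspaces:
- name: data
  volumeClaimTemplate:
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: nfs-client   # assumption: class from the external provisioner
      resources:
        requests:
          storage: 500Mi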

> I still would like to have a workspace with NFS. The use case I have is to share a build cache among pipelineruns. Basically to decouple the lifetime of the workspace from the lifetime of the pipelinerun.
>
> Does that make sense?
> What is the way forward?

The first thing to do is try to debug why it didn't work on your first attempt. I would really like to know what's not working, and why, before modifying Tekton. If you create a new nfs PersistentVolume, see that it's in the Available state, then submit a TaskRun and it does not work, I'd really like to see the YAMLs of the resources (the PV, the PVC created by the volume claim template, the Pod, and the TaskRun) showing their state.

> Can I make a PR? Join a meeting? 😄

The meeting where we discuss changes to Tekton's API is the API WG (documented here). You can find links to our calendar and mailing list here. The process for proposing changes in Tekton Pipelines is the TEP process, which is outlined with further links to follow here.

@ghost mentioned this issue Nov 4, 2020
@veggiemonk (Author)

Thanks a lot for taking the time to reply.

I'm going to close this issue as my problem is currently fixed.
