Add NFS to VolumeSources in workspaces #3467
You may be able to do this today using a PersistentVolume and a `volumeClaimTemplate`. Does the following work for you?

```yaml
# PersistentVolume config taken from
# https://cloud.google.com/filestore/docs/accessing-fileshares#create_a_persistent_volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: <storage size>
  accessModes:
  - ReadWriteMany
  nfs:
    path: <file share name>
    server: <FileStore instance IP>
---
# In the TaskRun Workspaces list
workspaces:
- name: myworkspace
  volumeClaimTemplate:
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: ""
      volumeName: mypv
      resources:
        requests:
          storage: <storage amount for taskrun>
```
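For context, Tekton turns a `volumeClaimTemplate` like the one above into an ordinary PersistentVolumeClaim for each run. The claim it creates looks roughly like this sketch (the claim name is illustrative; Tekton derives the real one from the run):

```yaml
# Sketch of the PVC Tekton generates from the volumeClaimTemplate above.
# The name is illustrative -- Tekton generates it from the owning TaskRun.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-myworkspace-mytaskrun   # illustrative name
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""              # empty string skips dynamic provisioning
  volumeName: mypv                  # binds directly to the static nfs PV
  resources:
    requests:
      storage: <storage amount for taskrun>
```

Because `volumeName` pins the claim to one specific PV, only one such claim can be bound to `mypv` at a time.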
/kind feature
Hi @sbwsg, I tried what you suggested. Thanks a lot for providing the example. The issue I ran into was that all the PVCs got stuck in "Pending". I found a simple way instead:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mytask
spec:
  steps:
  - name: writesomething
    image: ubuntu
    command: ["bash", "-c"]
    args: ["mkdir -p /mnt/storage/tekton/ && echo 'hello tekton!' > /mnt/storage/tekton/test.txt"]
    volumeMounts:
    - mountPath: /mnt/storage
      name: logging-mount
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: mytaskrun-
spec:
  serviceAccountName: default
  taskRef:
    name: mytask
  podTemplate:
    volumes:
    - name: logging-mount
      nfs:
        path: <file share name>
        server: <FileStore instance IP>
```

The main difference is that every pod has access to the whole disk. What do you think about this approach? What features do I lose by not using a workspace?
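The same `podTemplate` approach should also work at the PipelineRun level, so every task in the pipeline sees the shared disk. A sketch under the same assumptions (the `<file share name>` and `<FileStore instance IP>` placeholders, plus a hypothetical Pipeline named `mypipeline` whose Tasks mount a volume named `logging-mount`):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: mypipelinerun-
spec:
  pipelineRef:
    name: mypipeline          # hypothetical Pipeline whose Tasks mount "logging-mount"
  # podTemplate volumes are added to every Task pod created by this run
  podTemplate:
    volumes:
    - name: logging-mount
      nfs:
        path: <file share name>
        server: <FileStore instance IP>
```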
Hi @sbwsg, I would still like to have a workspace backed by NFS. The use case I have is sharing a build cache among PipelineRuns: basically, decoupling the lifetime of the workspace from the lifetime of the PipelineRun. Does that make sense? Any guidance is highly appreciated.
I have just tested this against Filestore and was able to get it working with the following:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1T
  accessModes:
  - ReadWriteMany
  nfs:
    path: /test1
    server: <ip address>
---
kind: Task
apiVersion: tekton.dev/v1beta1
metadata:
  name: foo
spec:
  workspaces:
  - name: data
  steps:
  - name: check-data
    image: alpine:3.12.0
    script: |
      echo "foo" >> /workspace/data/foo
      cat /workspace/data/foo
---
kind: TaskRun
apiVersion: tekton.dev/v1beta1
metadata:
  name: run-foo
spec:
  taskRef:
    name: foo
  workspaces:
  - name: data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteMany
        storageClassName: ""
        volumeName: mypv
        resources:
          requests:
            storage: 500Mi
```

The problem is that a statically created PersistentVolume like this is not reusable: what this effectively means is that you must create a new nfs PersistentVolume every time you run a TaskRun. There are external third-party dynamic provisioners for nfs volumes that can apparently help you work around this limitation. Here's a doc specific to FileStore that describes setting one up: https://cloud.google.com/community/tutorials/gke-filestore-dynamic-provisioning
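With such a provisioner installed, the per-run `volumeClaimTemplate` no longer needs to pin a `volumeName`; it only names the provisioner's StorageClass. A sketch, assuming the provisioner registered a StorageClass called `nfs` (the class name is deployment-specific, not a Tekton default):

```yaml
# In the TaskRun Workspaces list, with a dynamic NFS provisioner installed
workspaces:
- name: data
  volumeClaimTemplate:
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: nfs   # assumed class created by the provisioner
      resources:
        requests:
          storage: 500Mi
```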
The first thing to do is to try and debug why it didn't work on your first attempt. I would really like to know what's not working, and why, before modifying Tekton. If you create a new nfs PersistentVolume, check that it's in an Available state: a PV that is already Bound or Released cannot be claimed again, which leaves the PVC stuck in Pending.
The meeting where we discuss changes to Tekton's API is the API WG (documented here). You can find links to our calendar and mailing list here. The process for proposing changes in Tekton Pipelines is the TEP process, which is outlined with further links to follow here.
Thanks a lot for taking the time to reply. I'm going to close this issue as my problem is currently fixed.
Hi,
Nice work on the API design and the naming of things.
Coming from Cloud Build (GCP), Tekton seems more verbose but also gives more control.
Kudos for that 😄
I would like to use an NFS drive to share data among tasks. VolumeSources seems like the correct place for that; correct me if this is a wrong assumption.
I saw the comment in the documentation to open an issue in case I wanted more types supported (see the end of the section):
https://github.com/tektoncd/pipeline/blob/master/docs/workspaces.md#specifying-volumesources-in-workspaces
Basic example of using NFS volume in a kubernetes Pod:
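A minimal sketch of such a Pod (placeholder share path and server; adapt to your Filestore instance):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-example
spec:
  containers:
  - name: app
    image: alpine:3.12.0
    command: ["sh", "-c", "ls /mnt/nfs && sleep 3600"]
    volumeMounts:
    - mountPath: /mnt/nfs
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      path: <file share name>
      server: <FileStore instance IP>
```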
Example of usage in workspace:
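Hypothetically, the proposal amounts to something like the following (this syntax is not supported by Tekton today; `nfs` would be a new field alongside the existing workspace volume sources):

```yaml
# Hypothetical syntax -- not supported by Tekton; this is what the issue proposes
workspaces:
- name: myworkspace
  nfs:
    path: <file share name>
    server: <FileStore instance IP>
```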
After looking at the code, it seems this is the place to add it: `pipeline/pkg/apis/pipeline/v1beta1/workspace_types.go`, line 54 (at commit 0df5c32).
Maybe there is a simpler way to do exactly that, but I'm new to Tekton; any guidance is appreciated.
I'm running on GKE and the NFS drive is FileStore.
Let me know what you think about this,
Julien