improve sriov operator must-gather information #480

Open · wants to merge 1 commit into master from improve_sriov_operator_gather
Conversation


@SchSeba SchSeba commented Feb 16, 2025

No description provided.

@@ -110,9 +112,16 @@ for CONFIG_DAEMON_POD in ${CONFIG_DAEMON_PODS[@]}; do
out=$(oc exec -n ${operator_ns} "${CONFIG_DAEMON_POD}" -c sriov-network-config-daemon -- chroot /host \
/bin/bash -c "cat var/log/multus.log" 2>/dev/null) && echo "$out" 1> "${MULTUS_LOG_PATH}" & PIDS+=($!)

# collect kernel lockdown mode
oc exec -n ${operator_ns} "${CONFIG_DAEMON_POD}" -c sriov-network-config-daemon -- chroot /host \
/bin/bash -c "/sys/kernel/security/lockdown" > "${KERNEL_LOCKDOWN_FILE_PATH}" & PIDS+=($!)
Contributor
Suggested change
-    /bin/bash -c "/sys/kernel/security/lockdown" > "${KERNEL_LOCKDOWN_FILE_PATH}" & PIDS+=($!)
+    /bin/bash -c "cat /sys/kernel/security/lockdown" > "${KERNEL_LOCKDOWN_FILE_PATH}" & PIDS+=($!)
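The suggestion fixes a real bug: `/sys/kernel/security/lockdown` is a sysfs file, not an executable, so passing the bare path to `bash -c` tries to run it as a command and fails; it must be read with `cat`. A minimal local sketch of the failure mode, using a temp file as a stand-in for the sysfs entry (which isn't available outside a host with lockdown support):

```shell
# Illustrative only: a temp file stands in for /sys/kernel/security/lockdown.
tmp=$(mktemp)
echo "none [integrity] confidentiality" > "$tmp"

# Invoking the path as a command fails: the file has no execute bit and is
# not a program, so bash cannot run it (exit status 126).
bash -c "$tmp" 2>/dev/null || echo "direct invocation failed"

# Reading it with cat works and yields the contents (the lockdown mode):
cat "$tmp"

rm -f "$tmp"
```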

Author

nice catch!

Signed-off-by: Sebastian Sch <sebassch@gmail.com>
@SchSeba SchSeba force-pushed the improve_sriov_operator_gather branch from a4780d7 to 353155a Compare February 17, 2025 13:13
@zeeke
Contributor

zeeke commented Feb 17, 2025

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 17, 2025
Contributor

openshift-ci bot commented Feb 17, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: SchSeba, zeeke
Once this PR has been reviewed and has the lgtm label, please assign sferich888 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

openshift-ci bot commented Feb 17, 2025

@SchSeba: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@sferich888
Contributor

In general (and this isn't caused by this PR, per se) I see 2 issues with this collection script.

  1. We are collecting data serially (with a loop) and executing a lot of operations to complete the full collection.
  2. (this is an issue with this PR) We are bulk-collecting (more than one file) data provided by a DaemonSet (a collection of pods), which is unbounded (i.e. we could have anywhere between 1 and 500+ instances to operate against).

In both situations the time to collect and the size of the data returned are potential problems to consider.
must-gather has an unofficial goal of generally keeping data collections under 1GB in size and having the collection complete within 10 minutes.

Code paths like this one in this collection script make meeting these SLOs potentially difficult, given that the collection operations aren't optimized and/or are unbounded (not targeted at specific nodes/instances).
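One way to address the unbounded fan-out described above is to cap both the number of pods sampled and the number of concurrent collections. A hypothetical sketch, assuming `MAX_PODS` and `MAX_PARALLEL` knobs that are not part of the existing script, and a `collect_from_pod` stand-in for the real `oc exec` / `chroot` calls:

```shell
# Hypothetical sketch of bounded, parallel collection. MAX_PODS and
# MAX_PARALLEL are illustrative knobs, not part of the current script.
MAX_PODS=10       # sample at most this many config-daemon pods
MAX_PARALLEL=4    # run at most this many collections concurrently

# Stand-in for the real per-pod work (the oc exec / chroot calls).
collect_from_pod() { echo "collected from $1"; }
export -f collect_from_pod

# In the real script the pod list would come from `oc get pods`;
# a fixed list stands in here so the sketch runs anywhere.
printf 'sriov-network-config-daemon-%s\n' a b c d e f g h i j k l \
  | head -n "$MAX_PODS" \
  | xargs -P "$MAX_PARALLEL" -I{} bash -c 'collect_from_pod "$1"' _ {}
```

With `head -n` bounding the sample and `xargs -P` bounding concurrency, the collection time stays roughly constant regardless of how many daemon pods the cluster runs, at the cost of not gathering from every node.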

Labels
lgtm Indicates that a PR is ready to be merged.