Automation to deploy the sigstore ecosystem on RHEL
The automation within this repository establishes the components of the Sigstore project on a single Red Hat Enterprise Linux (RHEL) machine using a standalone containerized deployment. Containers are launched from Kubernetes-based manifests using podman kube play.
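As a rough illustration of this model, a Kubernetes-style manifest can be launched and inspected with podman as shown below; the manifest path is a placeholder, since the real manifests are generated by the automation.

# Illustration only: the manifest path below is hypothetical
podman kube play /path/to/sigstore-pod.yaml

# Inspect the pods and containers that were created
podman pod ps
podman ps --pod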
The following components are deployed as part of this architecture:
- An NGINX frontend that serves as the entrypoint to the various backend components. Communication is secured with a set of self-signed certificates generated at runtime.
- Keycloak, used as an OIDC issuer to facilitate keyless signing.
Use the steps below to set up and execute the provisioning.
Ansible must be installed and configured on a control node that will be used to perform the automation.
NOTE: Future improvements will make use of an Execution Environment.
Perform the following steps to prepare the control node for execution.
Install the required Ansible collections by executing the following:
ansible-galaxy collection install -r requirements.yml
Populate the `sigstore` group within the inventory file with details related to the target host.
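A minimal sketch of that inventory entry is shown below; the host alias, address, and connection variables are placeholders, not values shipped with this repository.

[sigstore]
sigstore01 ansible_host=192.0.2.10 ansible_user=cloud-user ansible_become=true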
Keycloak is deployed to enable keyless (OIDC) signing. A dedicated realm called `sigstore` is configured by default using a client called `sigstore`.
To be able to sign containers, you will need to authenticate to the Keycloak instance. By default, a single user (jdoe) is created. This can be customized by specifying the `keycloak_sigstore_users` variable. The default value is shown below and can be used to authenticate to Keycloak if no modifications are made:
keycloak_sigstore_users:
  - username: jdoe
    first_name: John
    last_name: Doe
    email: jdoe@redhat.com
    password: mysecurepassword
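If different credentials are desired, one way to override the default is an extra-vars file passed at run time; the file name and user details below are illustrative only.

# Hypothetical override file and values
cat > custom-users.yml <<'EOF'
keycloak_sigstore_users:
  - username: asmith
    first_name: Alice
    last_name: Smith
    email: asmith@example.com
    password: anothersecurepassword
EOF

ansible-playbook -vv -i inventory playbooks/install.yml -e base_hostname=sigstore-dev.ez -e @custom-users.yml -K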
The automation deploys and configures a software load balancer as a central point of ingress. Multiple hostnames underneath a base hostname are configured, including the following:
- https://rekor.<base_hostname>
- https://fulcio.<base_hostname>
- https://keycloak.<base_hostname>
- https://tuf.<base_hostname>
Each of these hostnames must be configured in DNS to resolve to the target machine. The `base_hostname` parameter must be provided when executing the provisioning. To resolve the hostnames locally instead of via DNS, edit `/etc/hosts` with the following content:
<REMOTE_IP_ADDRESS> keycloak.<base_hostname>
<REMOTE_IP_ADDRESS> fulcio.<base_hostname> fulcio
<REMOTE_IP_ADDRESS> rekor.<base_hostname> rekor
<REMOTE_IP_ADDRESS> tuf.<base_hostname> tuf
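A quick way to confirm the entries resolve as expected from the machine where you will run the tooling (assumes getent is available):

# Each hostname should resolve to the target machine's address
for name in keycloak fulcio rekor tuf; do
  getent hosts "${name}.<base_hostname>"
done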
cosign is used to test and validate the setup and configuration. Installing it is optional if you do not wish to perform the validation described below.
Execute the following command to run the automation:
# Run the playbook from your local system
ansible-playbook -vv -i inventory playbooks/install.yml -e base_hostname=sigstore-dev.ez -K
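Once the playbook finishes, you can optionally confirm on the target host that the containers started by podman kube play are running; the SSH user and host below are placeholders.

# Hypothetical check on the target machine
ssh <user>@<target_host> 'podman pod ps && podman ps'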
The certificate can be downloaded from the browser Certificate Viewer by navigating to https://rekor.<base_hostname>. Download the root certificate that issued the Rekor certificate.
On Red Hat-based systems, the following commands add a CA to the system truststore:
$ sudo openssl x509 -in ~/Downloads/root-cert-from-browser -out sigstore-ca.pem --outform PEM
$ sudo mv sigstore-ca.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
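To confirm the CA was picked up, a TLS connection to the Rekor endpoint should now verify cleanly against the system truststore; substitute your base hostname.

# curl verifies the server certificate against the system truststore by default
curl -sS -o /dev/null -w '%{http_code}\n' https://rekor.<base_hostname>/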
Use the following steps to sign a container that has been published to an OCI registry:
- Export the following environment variables, substituting `base_hostname` with the value used as part of the provisioning:
export KEYCLOAK_REALM=sigstore
export BASE_HOSTNAME=<base_hostname>
export FULCIO_URL=https://fulcio.$BASE_HOSTNAME
export KEYCLOAK_URL=https://keycloak.$BASE_HOSTNAME
export REKOR_URL=https://rekor.$BASE_HOSTNAME
export TUF_URL=https://tuf.$BASE_HOSTNAME
export KEYCLOAK_OIDC_ISSUER=$KEYCLOAK_URL/realms/$KEYCLOAK_REALM
- Initialize the TUF roots
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Note: If you have used `cosign` previously, you may need to delete the `~/.sigstore` directory.
- Sign the desired container
cosign sign -y --fulcio-url=$FULCIO_URL --rekor-url=$REKOR_URL --oidc-issuer=$KEYCLOAK_OIDC_ISSUER <image>
Authenticate with the Keycloak instance using the desired credentials.
- Verify the signed image
The following example verifies an image whose signing certificate has an identity matching `sigstore-user` (for example, sigstore-user@email.com) and an OIDC issuer matching `keycloak`:
cosign verify \
--rekor-url=$REKOR_URL \
--certificate-identity-regexp sigstore-user \
--certificate-oidc-issuer-regexp keycloak \
<image>
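cosign also accepts exact matches instead of regular expressions. A sketch using the default jdoe account created earlier is shown below; adjust the identity to whichever account you authenticated with.

cosign verify \
  --rekor-url=$REKOR_URL \
  --certificate-identity jdoe@redhat.com \
  --certificate-oidc-issuer $KEYCLOAK_OIDC_ISSUER \
  <image>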
If the signature verification did not result in an error, the deployment of Sigstore was successful!
If running on macOS, execute the following before launching the Terraform install:
sudo killall -HUP mDNSResponder
Terraform code is included within this repository. To test the functionality, run the following:
NOTE: You will be prompted to provide the base domain and VPC at launch time.
terraform init
terraform apply --auto-approve
If you need to remove the assets and run Terraform again, run the following to ensure you are starting clean:
git checkout inventory && rm -f aws_keys_pairs.pem && terraform destroy --auto-approve && terraform apply --auto-approve
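If you prefer not to be prompted, the values can be passed with -var flags; the variable names base_domain and vpc below are assumptions, so confirm them against the variable definitions in the Terraform code.

# Hypothetical: variable names and values are placeholders
terraform apply --auto-approve -var 'base_domain=<base_domain>' -var 'vpc=<vpc_id>'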
The following assumes that cosign has been installed on the system.
NOTE: Replace octo-emerging.redhataicoe.com with your base domain.
rm -rf ./*.pem &&
openssl s_client -showcerts -verify 5 -connect rekor.octo-emerging.redhataicoe.com:443 < /dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/{ if(/BEGIN CERTIFICATE/){a++}; out="cert"a".pem"; print >out}' &&
for cert in *.pem; do newname=$(openssl x509 -noout -subject -in $cert | sed -nE 's/.*CN ?= ?(.*)/\1/; s/[ ,.*]/_/g; s/__/_/g; s/_-_/-/; s/^_//g;p' | tr '[:upper:]' '[:lower:]').pem; echo "${newname}"; mv "${cert}" "${newname}"; done &&
sudo mv octo-emerging_redhataicoe_com.pem /etc/pki/ca-trust/source/anchors/ &&
sudo update-ca-trust &&
export KEYCLOAK_REALM=sigstore &&
export BASE_HOSTNAME=octo-emerging.redhataicoe.com &&
export FULCIO_URL=https://fulcio.$BASE_HOSTNAME &&
export KEYCLOAK_URL=https://keycloak.$BASE_HOSTNAME &&
export REKOR_URL=https://rekor.$BASE_HOSTNAME &&
export TUF_URL=https://tuf.$BASE_HOSTNAME &&
export KEYCLOAK_OIDC_ISSUER=$KEYCLOAK_URL/realms/$KEYCLOAK_REALM &&
/usr/bin/cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Next, ensure that an image has been tagged and pushed to your Quay repository, then run the following. For this example, the image `quay.io/rcook/tools:awxy-runner2` is used:
/usr/bin/cosign sign -y --fulcio-url=$FULCIO_URL --rekor-url=$REKOR_URL --oidc-issuer=$KEYCLOAK_OIDC_ISSUER quay.io/rcook/tools:awxy-runner2
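To close the loop, the signature created above can be verified with the same environment variables. The identity pattern below assumes you authenticated to Keycloak as the default jdoe user; adjust it to match your account.

/usr/bin/cosign verify \
  --rekor-url=$REKOR_URL \
  --certificate-identity-regexp jdoe \
  --certificate-oidc-issuer-regexp keycloak \
  quay.io/rcook/tools:awxy-runner2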
This deployment can be run inside an Ansible Execution Environment. To build an Execution Environment and run this deployment inside a custom container, follow these steps:
- Populate the `execution-environment.yml` file with the base image to use as the value for the `EE_BASE_IMAGE` variable in `build_arg_defaults`. For more information on how to write an `execution-environment.yml` file and the available options, refer to the ansible-builder documentation. A minimal sketch of such a file is shown after this list.
- If `ansible-builder` is not present in your environment, install it with `python3 -m pip install ansible-builder`. Run `ansible-builder build --tag my_ee` to build the Execution Environment. This command creates a `context/` directory containing `_build` information about the image requirements and a Containerfile.
- Ensure that you are logged in to the target container image registry, then tag and push the created image to the registry:
docker tag my_ee:latest quay.io/myusername/my_ee:latest
docker push quay.io/myusername/my_ee:latest
- To run this deployment inside the created Execution Environment, use the `ansible-navigator` command line. It can be installed via the `python3 -m pip install ansible-navigator` command, or refer to the installation instructions. `ansible-navigator` supports `ansible-playbook` commands to run automation jobs, but adds more capabilities like Execution Environment support.
- Create an `env/` directory at the root of the repository, and create an `env/extravars` file. Populate it with your base hostname as follows:
---
base_hostname: base_hostname
- Run the deployment job inside the Execution Environment:
ansible-navigator run -i inventory --execution-environment-image=quay.io/myusername/my_ee:latest playbooks/install.yml
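For reference, a minimal `execution-environment.yml` sketch is shown below. The base image and dependency file are illustrative assumptions; adjust them to your registry access and to the requirements files present in this repository.

---
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'quay.io/ansible/ansible-runner:latest'
dependencies:
  galaxy: requirements.yml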
The following are planned next steps:
- Update `Pod` manifests to `Deployment` manifests
- Configure pods as systemd services
Any and all feedback is welcome. Submit an Issue or Pull Request as desired.