sigstore-ansible

Automation to deploy the sigstore ecosystem on RHEL

⚠️ The contents of this repository are a Work in Progress.

Overview

The automation within this repository establishes the components of the Sigstore project on a single Red Hat Enterprise Linux (RHEL) machine using a standalone containerized deployment. Containers are spawned from Kubernetes-based manifests using podman kube play.

The following Sigstore components are deployed as part of this architecture:

  • Fulcio, the certificate authority that issues short-lived signing certificates
  • Rekor, the signature transparency log
  • TUF, which distributes the trust root metadata

An NGINX frontend is placed as an entrypoint to the various backend components. Communication is secured via a set of self-signed certificates that are generated at runtime.

Keycloak is used as an OIDC issuer to facilitate keyless signing.

Use the steps below to set up and execute the provisioning.

Prerequisites

Ansible must be installed and configured on a control node that will be used to perform the automation.

NOTE: Future improvements will make use of an Execution environment

Perform the following steps to prepare the control node for execution.

Dependencies

Install the required Ansible collections by executing the following:

ansible-galaxy collection install -r requirements.yml 

Inventory

Populate the sigstore group within the inventory file with details related to the target host.
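The inventory uses an INI-style sigstore group; a hypothetical example is shown below (the hostname, address, and user are placeholders to replace with your own values):

```ini
# Hypothetical example entry; substitute your own host details.
[sigstore]
sigstore.example.com ansible_host=192.0.2.10 ansible_user=cloud-user
```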

Keycloak

Keycloak is deployed to enable keyless (OIDC) signing. A dedicated realm called sigstore is configured by default, along with a client called sigstore.

To be able to sign containers, you will need to authenticate to the Keycloak instance. By default, a single user (jdoe) is created. This can be customized by specifying the keycloak_sigstore_users variable. The default value is shown below and can be used to authenticate to Keycloak if no modifications are made:

keycloak_sigstore_users:
 - username: jdoe
   first_name: John
   last_name: Doe
   email: jdoe@redhat.com
   password: mysecurepassword

Ingress

The automation deploys and configures a software load balancer as a central point of ingress. Multiple hostnames underneath a base hostname are configured, including keycloak, fulcio, rekor, and tuf.

Each of these hostnames must be configured in DNS to resolve to the target machine. The base_hostname parameter must be provided when executing the provisioning. To configure the hostnames locally, edit /etc/hosts with the following content:

<REMOTE_IP_ADDRESS> keycloak.<base_hostname>
<REMOTE_IP_ADDRESS> fulcio.<base_hostname> fulcio
<REMOTE_IP_ADDRESS> rekor.<base_hostname> rekor
<REMOTE_IP_ADDRESS> tuf.<base_hostname> tuf
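If you provision several environments, the entries above can be generated with a short script. REMOTE_IP and BASE_HOSTNAME below are placeholder values; substitute your own:

```shell
# Generate /etc/hosts entries for the sigstore ingress hostnames.
# REMOTE_IP and BASE_HOSTNAME are placeholder values.
REMOTE_IP="192.0.2.10"
BASE_HOSTNAME="sigstore-dev.ez"
for svc in keycloak fulcio rekor tuf; do
  printf '%s %s.%s\n' "$REMOTE_IP" "$svc" "$BASE_HOSTNAME"
done
```

Append the output to /etc/hosts on the machine you sign from.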

Cosign

cosign is used as part of testing and validating the setup and configuration. Installing it is optional if you do not wish to perform the validation described below.

Provision

Run the following command to execute the automation:

# Run the playbook from your local system
ansible-playbook -vv -i inventory playbooks/install.yml -e base_hostname=sigstore-dev.ez -K

Add the root CA that was created to your local truststore.

The certificate can be downloaded from the browser Certificate Viewer by navigating to https://rekor.<base_domain>. Download the root certificate that issued the Rekor certificate. On Red Hat-based systems, the following commands will add a CA to the system truststore.

$ openssl x509 -in ~/Downloads/root-cert-from-browser -out sigstore-ca.pem -outform PEM
$ sudo mv sigstore-ca.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust

Signing a Container

Use the following steps to sign a container that has been published to an OCI registry.

  1. Export the following environment variables, substituting base_hostname with the value used as part of the provisioning:
export KEYCLOAK_REALM=sigstore
export BASE_HOSTNAME=<base_hostname>
export FULCIO_URL=https://fulcio.$BASE_HOSTNAME
export KEYCLOAK_URL=https://keycloak.$BASE_HOSTNAME
export REKOR_URL=https://rekor.$BASE_HOSTNAME
export TUF_URL=https://tuf.$BASE_HOSTNAME
export KEYCLOAK_OIDC_ISSUER=$KEYCLOAK_URL/realms/$KEYCLOAK_REALM
  2. Initialize the TUF roots:
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json

Note: If you have used cosign previously, you may need to delete the ~/.sigstore directory.

  3. Sign the desired container:
cosign sign -y --fulcio-url=$FULCIO_URL --rekor-url=$REKOR_URL --oidc-issuer=$KEYCLOAK_OIDC_ISSUER  <image>

Authenticate with the Keycloak instance using the desired credentials.

  4. Verify the signed image

Refer to the following example, which verifies an image whose signing certificate identity matches the regular expression sigstore-user and whose OIDC issuer matches keycloak:

cosign verify \
--rekor-url=$REKOR_URL \
--certificate-identity-regexp sigstore-user \
--certificate-oidc-issuer-regexp keycloak  \
<image>

If the signature verification did not result in an error, the deployment of Sigstore was successful!

Terraform

If running on macOS, execute the following before launching the Terraform install to flush cached DNS entries:

sudo killall -HUP mDNSResponder

Terraform code is included within this repository. To test the functionality, run the following.

NOTE: You will be prompted to provide the base domain and VPC at launch time.

terraform init
terraform apply --auto-approve
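The note above mentions prompts for the base domain and VPC; a hypothetical sketch of the corresponding variable declarations follows (the names and types are assumptions — check the variables file in this repository for the actual definitions):

```hcl
# Hypothetical variable declarations; actual names live in the repository's Terraform code.
variable "base_domain" {
  type        = string
  description = "Base hostname under which the sigstore services are exposed"
}

variable "vpc" {
  type        = string
  description = "ID of the VPC to deploy the instance into"
}
```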

If you need to remove the assets and run Terraform again, run the following to ensure you are starting clean:

git checkout inventory && rm -f aws_keys_pairs.pem && terraform destroy --auto-approve && terraform apply --auto-approve

Testing

The following assumes that cosign has been installed on the system.

NOTE: Replace octo-emerging.redhataicoe.com with your base domain.

rm -rf ./*.pem

# Fetch the certificate chain presented by Rekor and split it into one file per certificate
openssl s_client -showcerts -verify 5 -connect rekor.octo-emerging.redhataicoe.com:443 < /dev/null | \
  awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/{ if(/BEGIN CERTIFICATE/){a++}; out="cert"a".pem"; print >out}'

# Rename each certificate after its subject CN
for cert in *.pem; do
  newname=$(openssl x509 -noout -subject -in $cert | sed -nE 's/.*CN ?= ?(.*)/\1/; s/[ ,.*]/_/g; s/__/_/g; s/_-_/-/; s/^_//g;p' | tr '[:upper:]' '[:lower:]').pem
  echo "${newname}"
  mv "${cert}" "${newname}"
done

# Trust the root CA
sudo mv octo-emerging_redhataicoe_com.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

# Configure the environment and initialize the TUF roots
export KEYCLOAK_REALM=sigstore
export BASE_HOSTNAME=octo-emerging.redhataicoe.com
export FULCIO_URL=https://fulcio.$BASE_HOSTNAME
export KEYCLOAK_URL=https://keycloak.$BASE_HOSTNAME
export REKOR_URL=https://rekor.$BASE_HOSTNAME
export TUF_URL=https://tuf.$BASE_HOSTNAME
export KEYCLOAK_OIDC_ISSUER=$KEYCLOAK_URL/realms/$KEYCLOAK_REALM
/usr/bin/cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
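The renaming step above derives a filename from the certificate subject CN. A minimal sketch of just that transformation, using a hypothetical subject line in the format printed by openssl x509 -noout -subject:

```shell
# Demonstrate the CN-to-filename transformation used in the renaming loop:
# extract the CN, replace spaces, commas, and dots with underscores, lowercase it.
subject='subject=O = sigstore, CN = octo-emerging.redhataicoe.com'
newname=$(printf '%s\n' "$subject" \
  | sed -nE 's/.*CN ?= ?(.*)/\1/; s/[ ,.*]/_/g; s/__/_/g; s/_-_/-/; s/^_//g;p' \
  | tr '[:upper:]' '[:lower:]').pem
echo "$newname"   # prints octo-emerging_redhataicoe_com.pem
```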

Next, ensure that an image has been tagged and pushed to your Quay repository, then run the following. For this example, the image quay.io/rcook/tools:awxy-runner2 is used.

/usr/bin/cosign sign -y --fulcio-url=$FULCIO_URL --rekor-url=$REKOR_URL --oidc-issuer=$KEYCLOAK_OIDC_ISSUER  quay.io/rcook/tools:awxy-runner2

Execution Environments support

This deployment can be run inside an Ansible Execution Environment. To build an Execution Environment and run this deployment inside a custom container, follow these steps:

  1. Populate the execution-environment.yml file with the base image to use as the value for the EE_BASE_IMAGE variable in build_arg_defaults. For more information on writing an execution-environment.yml file and the available options, refer to the ansible-builder documentation.

  2. If ansible-builder is not present in your environment, install it with python3 -m pip install ansible-builder. Run ansible-builder build --tag my_ee to build the Execution Environment. This command creates a context/ directory containing a Containerfile and _build information about the image requirements.

  3. Ensure that you are logged in to the target container image registry, then tag and push the created image to the registry:

docker tag my_ee:latest quay.io/myusername/my_ee:latest
docker push quay.io/myusername/my_ee:latest

  4. To run this deployment inside the created Execution Environment, use the ansible-navigator command line. It can be installed via python3 -m pip install ansible-navigator; refer to the installation instructions for other methods. ansible-navigator supports ansible-playbook commands to run automation jobs, but adds capabilities such as Execution Environment support.

  5. Create an env/ directory at the root of the repository, and create an env/extravars file. Populate it with your base hostname as follows:

---
base_hostname: <base_hostname>

  6. Run the deployment job inside the Execution Environment:

ansible-navigator run -i inventory --execution-environment-image=quay.io/myusername/my_ee:latest playbooks/install.yml
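Step 1 above populates execution-environment.yml; a minimal sketch is shown below, assuming the version 1 ansible-builder schema (the base image is an example, not a recommendation):

```yaml
---
# Hypothetical minimal execution-environment.yml; adjust to your registry and needs.
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'quay.io/ansible/ansible-runner:latest'
dependencies:
  galaxy: requirements.yml
```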

Future Efforts

The following are planned next steps:

  • Update Pod manifests to Deployment manifests
  • Configure pods as systemd services

Feedback

Any and all feedback is welcome. Submit an Issue or Pull Request as desired.
