This project uses semantic versioning.
- Use a boolean type for the `installCRDs` cert-manager value to support v1.15.0+. Eventually, this value should be migrated to `crds.keep: true` and `crds.enabled: true`.
- Set `allowSnippetAnnotations: true` to allow user snippets (see Disable user snippets per default).
- Requires Ansible v6.0+
- Switch to `kubernetes.core` for Ansible 6.x+ support. The `community.kubernetes` collection was renamed to `kubernetes.core` in v2.0.0 of the kubernetes.core collection. Since Ansible v3.0.0, both the `kubernetes.core` and `community.kubernetes` namespaced collections were included for convenience, but Ansible v6.0.0 removed the `community.kubernetes` convenience package.
- Use fully qualified collection names (FQCNs) to be explicit (see the task sketch after this list).
- Add `k8s_cert_manager_release_values` variable to allow per-project customization of Helm chart values (see the variables sketch after this list).
- Add optional Descheduler for Kubernetes support. Enable with `k8s_install_descheduler` and reference `defaults/main.yml` for configuration options.
- Set a default nginx `podAntiAffinity` that prefers (but does not require) scheduling pods on different nodes. Override with `k8s_ingress_nginx_affinity`.
- Allow configuration of the nginx service `loadBalancerIP` via `k8s_ingress_nginx_load_balancer_ip`.
- Add extendable `k8s_cert_manager_solvers` variable to support configuring a DNS01 challenge provider (see the variables sketch after this list).
- Update API version for the echotest ingress.
- Default to 2 replicas for ingress controller
- Fix typo in `k8s_digitalocean_load_balancer_hostname`
- Move CI user creation to the caktus.django-k8s role since it is project- or environment-specific.
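As a reference for the FQCN change above, the following is a minimal task sketch, not one of this role's actual tasks; the release name, chart, and namespace shown are illustrative placeholders:

```yaml
# Minimal sketch of calling a module by its fully qualified collection name
# (kubernetes.core.helm) under Ansible 6.x+. All values below are placeholders.
- name: Install an example Helm release using the kubernetes.core collection
  kubernetes.core.helm:            # formerly community.kubernetes.helm
    name: ingress-nginx
    chart_ref: ingress-nginx/ingress-nginx
    release_namespace: ingress-nginx
    create_namespace: true
```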
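The new variables mentioned above are set in a project's variables file. A minimal sketch follows, assuming `k8s_cert_manager_release_values` is passed through to the cert-manager chart and `k8s_cert_manager_solvers` follows cert-manager's ACME solver schema; all values are placeholders, and `defaults/main.yml` remains the authoritative reference for the exact structure this role expects:

```yaml
# group_vars sketch; every value below is an illustrative placeholder.
k8s_install_descheduler: yes
k8s_ingress_nginx_load_balancer_ip: 203.0.113.10

k8s_cert_manager_release_values:
  installCRDs: true    # boolean, not the string "true", for cert-manager v1.15.0+

# One DNS01 solver entry in cert-manager's ACME solver format.
k8s_cert_manager_solvers:
  - dns01:
      digitalocean:
        tokenSecretRef:
          name: digitalocean-dns
          key: access-token
```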
BACKWARDS INCOMPATIBLE CHANGES:
- Use Helm to install `ingress-nginx`
  - `ingress-nginx` controller upgraded from `0.26.1` to `0.44.0` (via the `3.23.0` Helm chart release)
- Use Helm to install `cert-manager`
  - `cert-manager` controller upgraded from `v0.10.1` to `v1.2.0`
- The accompanying caktus.django-k8s role must also be updated to > `v0.0.11` to restore certificate validation.
- You must follow the Digital Ocean instructions and set a hostname via `k8s_digitalocean_loadbalancer_hostname` to keep PROXY protocol enabled on Digital Ocean (required to see real client IP addresses). See the example after this list.
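A minimal sketch of the Digital Ocean hostname setting; the hostname itself is a placeholder for a DNS name you control that points at the load balancer:

```yaml
# Keeps PROXY protocol enabled on the Digital Ocean load balancer.
k8s_digitalocean_loadbalancer_hostname: kube.example.com
```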
Upgrade instructions:
1. First, purge the old cert-manager and create a new ingress controller in a new namespace:

   ```yaml
   # Install new ingress controller, but don't delete the old one yet
   k8s_install_ingress_controller: yes
   k8s_ingress_nginx_namespace: ingress-nginx-temp
   k8s_purge_ingress_controller: no
   # Don't install a new cert-manager, but do delete the old one
   k8s_install_cert_manager: no
   k8s_purge_cert_manager: yes
   ```

   If you don't wish to make two DNS changes, you may find it helpful to set `k8s_ingress_nginx_namespace` to a more permanent name.

   ```sh
   $ ansible-playbook -l <host/group> deploy.yaml -vv
   ```
2. Look up the IP or hostname for the new ingress controller:

   ```sh
   $ kubectl -n ingress-nginx-temp get svc
   ```
3. Change the DNS for all domains that point to this cluster to use the new IP or hostname. You may find it helpful to watch the logs of both ingress controllers during this time to see the traffic switch to the new ingress controller (a log-watching sketch follows these instructions). The post Kubernetes: Nginx and Zero Downtime in Production has a more detailed overview of this approach.
4. Next, add `k8s_purge_ingress_controller: yes` to your variables file and re-run `deploy.yaml`. Note that you will now have both `k8s_install_ingress_controller: yes` and `k8s_purge_ingress_controller: yes`; however, the former refers to the new namespace and the latter refers only to the old namespace. This should clear out the old ingress controller. Note that you may need to run this a few times if Ansible times out attempting to delete everything the first time.
5. If you want to switch everything to use the original `ingress-nginx` namespace again, make the change in your variables file and re-run `deploy.yaml` with your final configuration. Otherwise, simply set `k8s_install_cert_manager: yes` and do not change the namespace.

   ```yaml
   # your variables file (e.g., group_vars/all.yaml)
   k8s_install_ingress_controller: yes
   k8s_ingress_nginx_namespace: ingress-nginx
   k8s_install_cert_manager: yes
   ```

   Make sure to remove the two `k8s_purge_*` variables, as they are no longer needed and will be removed in a future release.
6. If you elected to switch namespaces again:

   - Change the DNS to the new service address as in step 3 and wait for traffic to stop going to the temporary ingress controller.
   - Remove the `ingress-nginx-temp` namespace as follows:

     ```sh
     helm -n ingress-nginx-temp uninstall ingress-nginx
     kubectl delete ns ingress-nginx-temp
     ```
7. Test that cert-manager is working properly by deploying a new echotest pod as described in the README (a sketch of one way to verify the certificate follows these instructions).
8. Update any projects that deploy to the cluster to use the corresponding 1.0 release of ansible-role-django-k8s.
Please note that the `k8s_purge_*` variables are intended only for removing the previously-installed versions of these resources. If you need to remove the newly installed cert-manager or ingress-nginx for any reason, you should use the `helm uninstall` method described above.
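As referenced in step 3, the commands below sketch one way to watch both controllers' logs during the DNS cutover. The deployment names are assumptions (the Helm-installed controller is typically named `ingress-nginx-controller`; the old controller's name depends on how it was originally deployed), so adjust them to what `kubectl get deploy` shows in each namespace:

```sh
# Watch the new (Helm-installed) controller in the temporary namespace
kubectl -n ingress-nginx-temp logs -f deploy/ingress-nginx-controller

# Watch the old controller in its original namespace (name and namespace may differ)
kubectl -n ingress-nginx logs -f deploy/nginx-ingress-controller
```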
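For step 7, assuming the echotest deployment requests its TLS certificate through cert-manager, one way to verify issuance is to inspect the Certificate resource; the namespace and resource names below are placeholders:

```sh
# Confirm the Certificate resource reports READY=True
kubectl -n <echotest-namespace> get certificate

# If it is not becoming ready, check its events for details
kubectl -n <echotest-namespace> describe certificate <certificate-name>
```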
Other Changes:
- Move Papertrail and New Relic to caktus.k8s-hosting-services. The existing deployments will not be automatically removed, but they are no longer managed from this role. To take advantage of future changes to those deployments, add the caktus.k8s-hosting-services role to your requirements.yaml file (a sketch appears at the end of this changelog).
- Retire `k8s_cluster_name` variable.
- Allow Papertrail memory resources to be configurable
- Support creation of an AWS IAM user with limited permissions that can be used in CI to push images and deploy.
- Introduce `k8s_cluster_name` variable
- On AWS, grant cluster access to IAM users in `k8s_iam_users`.
- Re-enable validation in cert-manager workspace after installing or updating cert-manager and Let's Encrypt.
- Add New Relic Infrastructure support
- Added Logspout for Papertrail
- Initial release
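For reference, a minimal `requirements.yaml` entry for the caktus.k8s-hosting-services role mentioned under Other Changes might look like the sketch below; the `src` URL and `version` are assumptions and should be checked against that role's repository:

```yaml
# requirements.yaml sketch; confirm the repository URL and pin a real tag or branch.
- name: caktus.k8s-hosting-services
  src: https://github.com/caktus/ansible-role-k8s-hosting-services
  version: main
```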