Deploying Workflow with the Helm Operator

This post shows how to use Weaveworks' fantastic Helm Operator, an open source component of Weave Flux, on a new cluster to deploy Hephy Workflow v2.21.0, the latest version, with some minor modifications that let the cluster operate in a LoadBalancer-free mode.  That special configuration serves as an example of how one might configure a Helm chart through Weave Flux.  Workflow's Experimental Native Ingress feature is exercised instead of the default router mode, and these instructions assume you want to use the ingress-nginx controller and that you haven't set one up already.

These instructions mostly assume that you have already installed helm-operator, either with Weave Cloud or from the standalone instructions at weaveworks/flux (helm-operator.md#flux-helm-operator).  An operational helm-operator is the basic prerequisite, whether you have mastered it on your own or you follow these instructions through to an operational Workflow Helm release and Workflow deployment on your cluster.  In the notes that follow, my preferred basic staging cluster configuration is documented and explored.

If you don't care about Helm Operator and already have nginx and cert-manager on lock, you will find a shorter version of this post, which leaves all of those things out, here on the Hephy blog.

Take-aways

When you are finished with this article, you will have a basic staging cluster and, hopefully, an understanding of how to make simple changes to the configuration of Hephy Workflow.  You will have nginx-ingress and cert-manager namespaces, plus HelmRelease resources for nginx, cert-manager, and hephy, which a properly configured helm-operator will enforce, communicating changes through Tiller and managing release updates on your behalf through the GitOps workflow.

Beyond the server artifacts this guide will enable you to create, you should also gain a basic understanding of how Hephy Workflow's experimental native ingress creates and manages ingress rules for new applications that you create through the Workflow Controller API, and of how you can use Let's Encrypt SSL-enabled ingresses on any Kubernetes service via annotations for cert-manager and nginx-ingress, however your apps are deployed.

As a final bonus, we will modify the default configuration so that the nginx-ingress-controller, builder SSH, and controller API are all reachable from outside of the cluster without any platform Load Balancers (as I mentioned, there is a shorter version of this post with only this part).  You can use Load Balancers provided by infrastructure services like AWS or DigitalOcean, or you can configure your own nginx-ingress deployment in hostNetwork and DaemonSet mode as I have done, so that each worker node in the cluster runs nginx and the node itself doubles as both an L7 and L4 load balancer.  No batteries (or cloud provider) required!

This post was tested on DigitalOcean's Managed Kubernetes, and following the steps provided below should enable you to retrace my steps on any bare Kubernetes cluster v1.13 or above.

Configuring Helm Operator

Full details of the helm-operator configuration can be found in the flux/helm-operator repo and you can skip this section if your Helm Operator is already configured.  The basic steps are:

  1. Create a basic independent SSL CA hierarchy with cfssl, so your Helm Tiller can authenticate Helm Operator using TLS, which is the recommended configuration of Helm v2 and Helm Operator.
  2. Deploy Helm v2 with RBAC and TLS enabled.  Following the steps in the document linked above will take you through this; I recommend following the guide directly from Helm Operator if you haven't already done so.
  3. Test Tiller with TLS certificates that you generated, to confirm that the certificates are properly recognized in the Helm client and Tiller before handing them off to Helm Operator.
  4. Deploy the Weave Flux Helm Operator, which includes creating a new Kubernetes TLS secret for the Helm client certs.  Use the environment settings provided here with the credentials in the generated ./tls directory (a short sketch of steps 3 and 4 follows these settings):
export CLUSTER_HOME=/home/kingdon/projects/hephy.rocks
export HELM_TLS_ENABLE=1
export HELM_TLS_VERIFY=1
export HELM_TLS_CA_CERT=$CLUSTER_HOME/tls/ca.pem
export HELM_TLS_CERT=$CLUSTER_HOME/tls/flux-helm-operator.pem
export HELM_TLS_KEY=$CLUSTER_HOME/tls/flux-helm-operator-key.pem
export HELM_TLS_HOSTNAME=tiller-deploy.kube-system
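
With those variables exported, a minimal sketch of steps 3 and 4 might look like the following.  The secret name helm-client matches the helmOperator.tls.secretName used later in this post; the namespace is an assumption, so create the secret wherever your flux/helm-operator release will live:

# Step 3: with the HELM_TLS_* variables exported, the Helm client should reach
# Tiller over TLS; seeing both Client and Server versions confirms the certs work.
helm version

# Step 4 (partial): create the Kubernetes TLS secret that Helm Operator will mount
# for its Helm client certs.  The name "helm-client" matches the
# helmOperator.tls.secretName setting used later in this post.
kubectl create secret tls helm-client \
  --cert=$CLUSTER_HOME/tls/flux-helm-operator.pem \
  --key=$CLUSTER_HOME/tls/flux-helm-operator-key.pem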

When you have done this TLS dance successfully and Helm Operator is communicating with Tiller, you can either destroy the ./tls directory that was created on your local machine (you will use Flux to manage Helm releases from here forward), or preserve it so that you can override Helm Operator's control loop in case something goes wrong.

The basic workflow that we will follow to install any Helm chart, which can be automated through Helm Operator (and I'll show you as this post goes on...) is as follows:

VENDOR=weaveworks
CHART_REPO_URL=https://weaveworks.github.io/flux
CHART_NAME=flux
RELEASE_NAME=flux

helm repo add $VENDOR $CHART_REPO_URL

helm upgrade --install \
  --set some.settingHere=true \
  $RELEASE_NAME \
  $VENDOR/$CHART_NAME

# Additional settings needed to install flux are documented in helm-operator.md

I recommend that you use this process to install Flux by hand, especially if it's unfamiliar to you.  Flux Helm Operator is a relatively thin wrapper around the Helm client.  The more familiar you are with Helm's manual operation, the better equipped you will be to dig yourself out of any sticky situations you might land in with badly configured or more exotic setups of flux and helm-operator, or of other Helm charts.

We will not ask the question, "how can I use Flux to manage itself?", at this time, even though it's fairly easy to bootstrap that configuration.  The rest of this article assumes you have already configured and tested Helm Operator, and that your setup is good.  If there are reader questions about how this works, it may be something to cover in another article.  In the section that follows, your working Helm Operator is used to install and configure nginx-ingress and cert-manager, which are prerequisites for a Hephy Workflow deployment configured in Experimental Native Ingress mode.

Flux Conventions Used

As hinted at before, we will use a generic workflow to install any kind of Helm chart using Helm Operator.  The operator can take the chart referenced by a HelmRelease from either a Git repository or a Helm chart repository.  To promote absolute clarity and transparency for newcomers and those who might be less familiar with Helm, a Git repository source is used herein.

The convention used with Flux and Helm Operator is to manage all manifests inside of a single Git repository.  Any HelmRelease and Namespace manifests that are needed will be created through flux itself, housed in a directory within the repo called yamls/, and any HelmRelease spec.chart.git refs will be defined similarly to the spec that follows here:

spec:
  releaseName: nginx-ingress
  chart:
    git: git@github.com:kingdonb/hephy.rocks.git
    path: nginx-ingress
    ref: master

The flux repo should perhaps be kept private, in case you need to store any secrets in there.  (Ha ha, just checking if you're paying attention... please don't do that at all!)
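
For reference, flux needs to know to watch that yamls/ directory for manifests.  A minimal sketch, assuming the git.url, git.path, and git.branch parameters of the Weaveworks flux chart (substitute your own repository):

# These flags go alongside the other flux chart settings (for example the TLS
# options shown later in this post) in your flux install command:
helm upgrade --install \
  --set git.url=git@github.com:kingdonb/hephy.rocks.git \
  --set git.path=yamls \
  --set git.branch=master \
  flux \
  weaveworks/flux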

Configuring nginx-ingress and cert-manager

Within your flux repo source dir, fetch and untar the two charts to install:

helm fetch --untar stable/nginx-ingress
helm repo add jetstack https://charts.jetstack.io
helm fetch --untar jetstack/cert-manager

Make some modifications to the default values provided by the nginx-ingress chart:

diff --git a/nginx-ingress/values.yaml b/nginx-ingress/values.yaml
index 80bc1a6..c30db3c 100644
--- a/nginx-ingress/values.yaml
+++ b/nginx-ingress/values.yaml
@@ -17,7 +17,7 @@ controller:
   # Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
   # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
   # is merged
-  hostNetwork: false
+  hostNetwork: true

   # Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
   # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
@@ -26,7 +26,7 @@ controller:

   ## Use host ports 80 and 443
   daemonset:
-    useHostPort: false
+    useHostPort: true

     hostPorts:
       http: 80
@@ -83,7 +83,7 @@ controller:
   ## DaemonSet or Deployment
   ##
-  kind: Deployment
+  kind: DaemonSet

   # The update strategy to apply to the Deployment or DaemonSet
   ##
@@ -186,7 +186,7 @@ controller:
       http: http
       https: https

-    type: LoadBalancer
+    type: NodePort

     # type: NodePort
     # nodePorts:

Nginx in HostPort mode will not create any platform Load Balancers as mentioned before.  (This setting is optional.)

Next, make some modifications to cert-manager defaults, enabling cert-manager to use a default ClusterIssuer:

diff --git a/cert-manager/values.yaml b/cert-manager/values.yaml
index 826c49e..5a9c12f 100644
--- a/cert-manager/values.yaml
+++ b/cert-manager/values.yaml
@@ -80,9 +80,9 @@ podLabels: {}

 nodeSelector: {}

-ingressShim: {}
-  # defaultIssuerName: ""
-  # defaultIssuerKind: ""
+ingressShim:
+  defaultIssuerName: "letsencrypt-prod"
+  defaultIssuerKind: "ClusterIssuer"
   # defaultACMEChallengeType: ""
   # defaultACMEDNS01ChallengeProvider: ""

You may prefer to set the default issuer to letsencrypt-staging; we will configure both ClusterIssuers, and the CRDs that support them, through our yamls/ manifests directory, diverging slightly from the upstream docs while noting the additional steps described in Step 5 - Deploy Cert Manager on the cert-manager tutorial page.  The instructions for cert-manager are more complicated because of its use of CRDs, but we can still use regular YAML manifests and flux to handle all of this, which we will do when creating the HelmRelease CRs for helm-operator in the next section.

Make one further modification to the settings in nginx-ingress' values.yaml:

diff --git a/nginx-ingress/values.yaml b/nginx-ingress/values.yaml
index f412476..2f09026 100644
--- a/nginx-ingress/values.yaml
+++ b/nginx-ingress/values.yaml
@@ -380,8 +380,8 @@ imagePullSecrets: []
 # TCP service key:value pairs
 # Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tcp
 ##
-tcp: {}
-#  8080: "default/example-tcp-svc:9000"
+tcp:
+  2222: "deis/deis-builder:2222"

 # UDP service key:value pairs
 # Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp

Including all of these settings up-front ensures that we will not need to roll our nginx-ingress-controller pods manually later on in order to apply this change in configuration.  Nginx will discover the deis-builder service once we have installed it, and the ingress controllers will load balance L4/TCP port 2222 to the SSH port exposed by deis-builder.

At this point you may want to ensure that no node firewall or Security Group prevents clients from reaching the nodes on ports 80, 443, or 2222.  DigitalOcean users will find this on the console under Networking / Firewalls, which shows the ports that are allowed.  Create an Inbound rule for HTTP, HTTPS, and Custom TCP 2222 (one each) allowing All IPv4 and All IPv6, or whichever networks should be able to access hosted applications and Workflow through Ingress.  You can restrict 2222 to only the networks where developers should be, if that is preferable for your environment.
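
Once nginx-ingress is running (after the next section), a quick way to sanity-check that the firewall rules took effect is to probe the open ports from your workstation; a minimal sketch, assuming a netcat that supports -vz, and using one of my node IPs as a stand-in for yours:

NODE=157.230.81.203   # substitute one of your worker nodes' public IPs
nc -vz $NODE 80
nc -vz $NODE 443
nc -vz $NODE 2222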

Commit your changes and push them now if you like; they won't be made effective by helm-operator until we create a HelmRelease, which lets the operator know that it should monitor each chart directory for new releases to install.

Next, we will create some YAML manifests that cause flux to install nginx-ingress and cert-manager from the configurations we've just provided!

Installing nginx-ingress and cert-manager

Create a YAML file for each namespace and HelmRelease in the git directory yamls/ which you have configured flux to monitor for new YAML files (use your own git repo here instead of git: git@github.com:kingdonb/hephy.rocks.git):

# nginx-ingress.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
spec:
  finalizers:
  - kubernetes
---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: ingress
  namespace: nginx-ingress
spec:
  releaseName: ingress
  chart:
    git: git@github.com:kingdonb/hephy.rocks.git
    path: nginx-ingress
    ref: master
# cert-manager.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    certmanager.k8s.io/disable-validation: "true"
  name: cert-manager
spec:
  finalizers:
  - kubernetes
---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  releaseName: cert-manager
  chart:
    git: git@github.com:kingdonb/hephy.rocks.git
    path: cert-manager
    ref: master
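
As an aside: checking the chart's values.yaml into your flux repo, as we did above, is only one way to customize a release.  The HelmRelease spec also accepts a values: block, so the same overrides could be kept inline instead; a minimal sketch for the nginx-ingress release (not used in this post, shown only to illustrate the option) might look like:

---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: ingress
  namespace: nginx-ingress
spec:
  releaseName: ingress
  chart:
    git: git@github.com:kingdonb/hephy.rocks.git
    path: nginx-ingress
    ref: master
  # Equivalent to the values.yaml edits made earlier in this post:
  values:
    controller:
      kind: DaemonSet
      hostNetwork: true
      daemonset:
        useHostPort: true
      service:
        type: NodePort
    tcp:
      "2222": "deis/deis-builder:2222"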

Create one more yaml for the ClusterIssuer resources letsencrypt-prod and letsencrypt-staging:

# clusterissuers.yaml
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: yourname@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: hephy-rocks-staging-account-key
    # Enable HTTP01 validations
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: yourname@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: hephy-rocks-account-key
    # Enable HTTP01 validations
    http01: {}
---

Be sure to include your email address in place of email: yourname@example.com so that Let's Encrypt's certificate expiry warnings can reach you, in case this automation should ever fail to renew the certificates on time.

Use kubectl to install the CRD list for cert-manager, by hand:

## IMPORTANT: you MUST install the cert-manager CRDs **before** installing the
## cert-manager Helm chart
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml

# from: https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html

At the time of this writing, cert-manager 0.7 was the latest stable version; if this has changed, you may need to adjust the steps described here.  With the CRDs installed from the cert-manager tutorial as described above, you now have everything needed for Flux and Helm Operator to install both nginx-ingress and cert-manager.
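
You can verify that the CRDs landed before pushing anything:

# The cert-manager 0.7 CRDs all live in the certmanager.k8s.io API group; you
# should see entries such as certificates.certmanager.k8s.io and
# clusterissuers.certmanager.k8s.io in the output.
kubectl get crd | grep certmanager.k8s.io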

Commit your changes and push them so that Helm Operator can process the changes and install everything for you (fingers crossed!):

kingdon@localhost:~/projects/hephy.rocks$ helm ls -q
cert-manager
flux
ingress

kingdon@localhost:~/projects/hephy.rocks$ helm ls
NAME        	REVISION	UPDATED                 	STATUS  	CHART              	APP VERSION	NAMESPACE    
cert-manager	1       	Sat May 18 18:09:14 2019	DEPLOYED	cert-manager-v0.7.2	v0.7.2     	cert-manager 
flux        	1       	Thu May  9 18:29:03 2019	DEPLOYED	flux-0.9.4         	1.12.2     	default      
ingress     	1       	Sat May 18 18:09:18 2019	DEPLOYED	nginx-ingress-1.6.0	0.24.1     	nginx-ingress

It worked!  Hopefully your output looks like this, and it has worked just as well for you.  Next, Hephy Workflow!
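
Before moving on, you can also confirm that the hostNetwork and DaemonSet settings took effect, namely that one nginx-ingress-controller pod runs per node and that each pod shares its node's IP:

# With hostNetwork: true and kind: DaemonSet, expect one controller pod per
# worker node, each reporting its node's IP address.
kubectl -n nginx-ingress get daemonset
kubectl -n nginx-ingress get pods -o wide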

Installing Hephy Workflow with Helm Operator

Assuming that all went well, this part should be a snap... we simply repeat the generic chart install procedure as before.

Add the Hephy chart repo to your helm repository cache, fetch the chart, make some modifications, and commit:

helm repo add hephy https://charts.teamhephy.com 
helm fetch --untar hephy/workflow

The changes below enable Experimental Native Ingress and set the Builder to NodePort, relying on the L4 routing support that we configured earlier with nginx-ingress.  Assuming your cluster provides some CNI and has RBAC enabled, the only other setting which you must provide yourself is a valid platform_domain value (ours is a subdomain of hephy.rocks).

diff --git a/workflow/charts/builder/templates/builder-service.yaml b/workflow/charts/builder/templates/builder-service.yaml
index c4fad80..277b661 100644
--- a/workflow/charts/builder/templates/builder-service.yaml
+++ b/workflow/charts/builder/templates/builder-service.yaml
@@ -12,5 +12,5 @@ spec:
   selector:
     app: deis-builder
 {{ if .Values.global.experimental_native_ingress }}
-  type: "LoadBalancer"
+  type: "NodePort"
 {{ end }}
diff --git a/workflow/values.yaml b/workflow/values.yaml
index 2bb44af..d448512 100644
--- a/workflow/values.yaml
+++ b/workflow/values.yaml
@@ -56,9 +56,9 @@ global:
   # Valid values are:
   # - true: deis-router will not be deployed. Workflow will not be usable until a Kubernetes ingress controller is installed.
   # - false: deis-router will be deployed (default).
-  experimental_native_ingress: false
+  experimental_native_ingress: true
   # If the Kubernetes cluster uses CNI
-  # use_cni: true
+  use_cni: true
   # Set the `listen` variable for registry-proxy's NGINX
   #
   # Valid values are:
@@ -73,7 +73,7 @@ global:
   # Valid values are:
   # - true: all RBAC-related manifests will be installed (in case your cluster supports RBAC)
   # - false: no RBAC-related manifests will be installed
-  use_rbac: false
+  use_rbac: true
 
 
 s3:
@@ -133,7 +133,7 @@ controller:
   # The publicly resolvable hostname to build your cluster with.
   #
   # This will be the hostname that is used to build endpoints such as "deis.$HOSTNAME"
-  platform_domain: ""
+  platform_domain: "team.hephy.rocks"
 
 database:
   # The username and password to be used by the on-cluster database.

This configuration also assumes that you have configured, in this case, *.team.hephy.rocks as a round-robin DNS entry, pointing one A record at each node in your cluster, since the nodes are running nginx and acting as L4 and L7 load balancers.
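
A quick way to sanity-check that wildcard DNS (any name under the wildcard should resolve to the full set of node IPs):

# Both of these should return the public IP of every worker node in the cluster:
dig +short deis.team.hephy.rocks
dig +short anything-at-all.team.hephy.rocks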

Create this manifest in yamls/ (again remembering to set spec.chart.git to point at your own flux repo):

# hephy-workflow.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: deis
spec:
  finalizers:
  - kubernetes
---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: hephy
  namespace: deis
spec:
  releaseName: hephy
  chart:
    git: git@github.com:kingdonb/hephy.rocks.git
    path: workflow
    ref: master

One more change is required before this configuration will work: if your helm-operator deployment is to be able to resolve the dependencies of Hephy Workflow, it must be aware of the Hephy repo too.  If you are using the flux helm chart, this step is fairly easy: just add the following block to flux/values.yaml:

diff --git a/flux/values.yaml b/flux/values.yaml
index 7b0e929..89b3693 100644
--- a/flux/values.yaml
+++ b/flux/values.yaml
@@ -72,15 +72,13 @@ helmOperator:
     hostname: ""
   # Mount repositories.yaml configuration in a volume
   configureRepositories:
-    enable: false
+    enable: true
     volumeName: repositories-yaml
     secretName: flux-helm-repositories
     cacheVolumeName: repositories-cache
     repositories:
-      # - name: bitnami
-      #   url: https://charts.bitnami.com
-      #   username:
-      #   password:
+      - name: hephy
+        url: https://charts.teamhephy.com
   # Override Flux git settings
   git:
     pollInterval: ""

You may also want to add the Kubernetes charts/stable repo here, so that other charts with dependencies in the stable repo, which you may want to install later, will work with helm-operator too.  The kubernetes-charts stable repository listing is provided here for reference:

     repositories:
       - name: hephy
         url: https://charts.teamhephy.com
+      - name: kubernetes-charts
+        url: https://kubernetes-charts.storage.googleapis.com/

If you have installed flux without fetching the helm chart, your configuration change will look something like this:

diff --git a/scripts/flux-deploy.sh b/scripts/flux-deploy.sh
index 05b0bd1..74deabf 100644
--- a/scripts/flux-deploy.sh
+++ b/scripts/flux-deploy.sh
@@ -8,5 +8,8 @@ helm upgrade --install \
     --set helmOperator.tls.verify=true \
     --set helmOperator.tls.secretName=helm-client \
     --set helmOperator.tls.caContent="$(cat ./tls/ca.pem)" \
+    --set helmOperator.configureRepositories.enable=true \
+    --set 'helmOperator.configureRepositories.repositories[0].name=hephy' \
+    --set 'helmOperator.configureRepositories.repositories[0].url="https://charts.teamhephy.com"' \
     flux \
     weaveworks/flux
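
Whichever way you configure the repositories, once everything is committed and pushed you can watch the rollout from the operator's side; a minimal sketch (the hr shortname and the flux-helm-operator deployment name are as installed by the flux chart in my cluster):

# Watch the HelmRelease and the operator logs while the workflow chart installs:
kubectl -n deis get hr hephy
kubectl logs -f deploy/flux-helm-operator
kubectl -n deis get pods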

If everything is lined up perfectly, you will now have a mostly-configured Hephy Workflow with one LoadBalancer.  (Why not zero LoadBalancers?  Something is subtly wrong with our configuration, but everything else is presumably OK here, so long as your cluster is sufficiently similar to mine.)

Confirming Workflow+Ingress HTTP configuration

You can confirm your DNS and Ingress config is appropriate by running three curl commands and checking what response you get, if any:

$ curl deis.team.hephy.rocks; echo
<h1>Not Found</h1><p>The requested resource was not found on this server.</p>
$ curl 157.230.81.203; echo
default backend - 404
$ curl deis-builder.team.hephy.rocks; echo
default backend - 404

Here I have checked all of my relevant infrastructure endpoints and am seeing appropriate responses.  The first <h1>Not Found</h1> is a response from the Deis Controller API, now online and waiting for an Admin user to register.

The other two responses are from the node IP (the nginx-ingress default backend, which serves up a generic 404 for every request that isn't handled by an ingress) and from deis-builder, which can use either the Load Balancer's IP or a node port.  If you have configured DNS appropriately, then both of these should work.

Configuring SSL with Let's Encrypt through cert-manager

Add some annotations to your Kubernetes Ingress resource, which points to the Hephy Controller API server:

$ kubectl -n deis edit ing controller-api-server-ingress-http

metadata:
  annotations:
    flux.weave.works/antecedent: deis:helmrelease/hephy
+    kubernetes.io/tls-acme: "true"
+    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
  creationTimestamp: "..."

...you can override the default ClusterIssuer as above, and add a tls section to the Ingress spec:

spec:
+  tls:
+  - hosts:
+    - deis.team.hephy.rocks
+    secretName: deis.team.hephy.rocks
  rules:
  - host: deis.team.hephy.rocks
    http:
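
If you would rather not make these edits interactively, the same two changes can be applied non-interactively; a minimal sketch, using the ingress name and hostname from my cluster (substitute your own):

# Add the annotations that cert-manager's ingress-shim looks for:
kubectl -n deis annotate ingress controller-api-server-ingress-http \
  kubernetes.io/tls-acme="true" \
  certmanager.k8s.io/cluster-issuer=letsencrypt-prod

# Add the tls section to the Ingress spec:
kubectl -n deis patch ingress controller-api-server-ingress-http --type=json -p='[
  {"op": "add", "path": "/spec/tls", "value": [
    {"hosts": ["deis.team.hephy.rocks"], "secretName": "deis.team.hephy.rocks"}
  ]}
]'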

If all goes well, you will find a successfully issued Cert in about 30 seconds:

$ kubectl -n deis get certs
NAME                    READY   SECRET                  AGE
deis.team.hephy.rocks   True    deis.team.hephy.rocks   30s

At this point, you may safely register the Admin user and confirm the controller is SSL-terminated:

$ deis register https://deis.team.hephy.rocks
username: kingdon
password: 
password (confirm): 
email: 
Registered kingdon
Logged in as kingdon
Configuration file written to /home/kingdon/.deis/client.json

Now that everything is working, you may proceed with the regular Hephy Workflow quickstart and Deploy an App, or keep reading to understand why our change to the Builder LoadBalancer configuration didn't prevent the Load Balancer from coming online, and what we can do now to fix it:

$ kubectl -n deis get svc
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)             AGE
deis-builder             LoadBalancer   10.245.106.20    174.138.108.62   2222:30139/TCP      26m
deis-controller          ClusterIP      10.245.216.148   <none>           80/TCP              26m
deis-database            ClusterIP      10.245.250.18    <none>           5432/TCP            26m
deis-logger              ClusterIP      10.245.154.28    <none>           80/TCP              26m
deis-logger-redis        ClusterIP      10.245.50.127    <none>           6379/TCP            26m
deis-minio               ClusterIP      10.245.220.209   <none>           9000/TCP            26m
deis-monitor-grafana     ClusterIP      10.245.240.145   <none>           80/TCP              26m
deis-monitor-influxapi   ClusterIP      10.245.242.249   <none>           80/TCP              26m
deis-monitor-influxui    ClusterIP      10.245.13.247    <none>           80/TCP              26m
deis-nsqd                ClusterIP      10.245.122.135   <none>           4151/TCP,4150/TCP   26m
deis-registry            ClusterIP      10.245.8.47      <none>           80/TCP              26m
deis-workflow-manager    ClusterIP      10.245.174.230   <none>           80/TCP              26m
$ kubectl -n deis delete svc deis-builder
service "deis-builder" deleted
$ helm upgrade --install hephy --namespace deis workflow/

This strategy goes around helm-operator, since we have made a change to a dependency chart.  It demonstrates an important difference between the way that the Helm client operates on an unpacked chart in the working directory, and the way that helm-operator treats a chart directory referenced in a HelmRelease custom resource.

Unfortunately, since v2.21.0 does not provide an option in workflow/values.yaml which can effect this change, it cannot simply be applied by the Helm Operator in the way we were expecting.  Changes to the builder chart and the umbrella chart are necessary before Helm Operator will be able to apply this change.

We saw that Helm Operator would throw an error if we did not tell it about the hephy chart repo.  That is because the operator reads the top chart (workflow) and values.yaml directly from the repo dir, then fetches the subordinate charts for itself in order to enforce the versions of chart dependencies described by the chart's requirements.yaml and requirements.lock files.  It may not be immediately clear why this is desirable behavior, as at the moment it is simply, unfortunately, in our way.  (Edit: there's good news, you can resolve this already in Helm Operator 0.7.0+; the HelmRelease CRD now supports a skipDepUpdate option to instruct the operator not to update dependencies for charts from a git source!)
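
For reference, a minimal sketch of what that might look like in the HelmRelease from earlier, assuming Helm Operator 0.7.0+ (not applied in this post; check the Helm Operator docs for the exact field placement in your version):

---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: hephy
  namespace: deis
spec:
  releaseName: hephy
  chart:
    git: git@github.com:kingdonb/hephy.rocks.git
    path: workflow
    ref: master
    # Use the dependencies already vendored under workflow/charts instead of
    # re-fetching them (assumed placement; available in Helm Operator 0.7.0+):
    skipDepUpdate: true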

The Type field of a Service is not mutable when it is LoadBalancer, so we could not simply edit the deis-builder service and change its Type to NodePort.  Helm Operator cannot reach equilibrium, and it's abundantly clear upon closer inspection of the Helm releases with the Helm client that we've apparently done a bad thing, and should put it back:

kingdon:~/projects/ob-mirror$ helm ls
NAME        	REVISION	UPDATED                 	STATUS  	CHART              	APP VERSION	NAMESPACE    
cert-manager	1       	Sat May 18 18:09:14 2019	DEPLOYED	cert-manager-v0.7.2	v0.7.2     	cert-manager 
flux        	3       	Sat May 18 19:18:00 2019	DEPLOYED	flux-0.9.4         	1.12.2     	default      
hephy       	17      	Sat May 18 20:07:14 2019	DEPLOYED	workflow-v2.21.0   	           	deis         
ingress     	1       	Sat May 18 18:09:18 2019	DEPLOYED	nginx-ingress-1.6.0	0.24.1     	nginx-ingress

Helm-operator is now in a loop, and will continue creating new revisions of Hephy until we force it to stop.  The ineffective releases will rack up at a rate of about 20 revisions per hour if corrective action is not taken.  I will first delete the HelmRelease from my yamls/hephy-workflow.yaml, commit, push, wait for flux to sync... and then remove the HelmRelease from the cluster manually, since flux won't really delete it for us in this case.

NB: removing a HelmRelease while Helm Operator is still running will cause the operator to purge the installed chart from your cluster.  If you want to keep Hephy installed but stop Helm Operator from tracking it, scale down helm-operator before the following steps, so that the Hephy HelmRelease resource can be removed safely.  (If, instead, you prefer to delete Hephy Workflow so that you can try installing it again, leave the operator running while you delete the HelmRelease "hephy".)

$ kubectl scale --replicas=0 deploy/flux-helm-operator
deployment.extensions/flux-helm-operator scaled
$ kubectl --namespace deis delete hr hephy
helmrelease.flux.weave.works "hephy" deleted
$ kubectl scale --replicas=1 deploy/flux-helm-operator
deployment.extensions/flux-helm-operator scaled

That's all for this week's installment of the Team Hephy Info Blog!  Thanks for tuning in!
