Hector Sab

Managing multiple k8s config files

Published on 2023-01-06

So it finally happened! I got myself a blog that’s up and running! And I am inaugurating it with a small post about how I make my life easier when dealing with, and connecting to, multiple Kubernetes clusters.

What is the problem?

Imagine you are asked to deploy some fancy service in the k8s (Kubernetes) cluster A and handed the kubeconfig needed to connect to it. Later in the day, you are asked to deploy another fancy service in k8s cluster B, for which you are also given its respective kubeconfig. The next day you are asked to yet again deploy a different fancy service, this time into cluster C, and to fix an issue on cluster B. Ah, and you also heard from your boss that cluster A has been decommissioned, so now you need to clean up your configs so the (now useless) information of cluster A doesn’t stay around unnecessarily.

Hopefully, you see my point now. Over time it becomes tedious to deal with these config files, especially if the way you do it is through a single config file (a.k.a. ~/.kube/config), as you need some way of adding, removing, and even sharing cluster details.

For the sake of completeness, let’s visualize the problem with some dummy kubeconfigs. The file below represents cluster A, and at this point, we have already saved it to the default location.

# ~/.kube/config
apiVersion: v1
kind: Config

clusters:
- cluster:
    server: https://k8s.example.org/k8s/clusters/example-a
  name: example-a

users:
- name: example-a
  user:
    client-certificate-data: certificate-data-example-a
    client-key-data: key-data-example-a

contexts:
- context:
    cluster: example-a
    user: example-a
  name: example-a

The file below represents cluster B, and it’s saved in $HOME. Note that kubectl is not aware of files in this location.

# ~/example-b.yaml
apiVersion: v1
kind: Config

clusters:
- cluster:
    server: https://k8s.example.org/k8s/clusters/example-b
  name: example-b

users:
- name: example-b
  user:
    client-certificate-data: certificate-data-example-b
    client-key-data: key-data-example-b

contexts:
- context:
    cluster: example-b
    user: example-b
  name: example-b
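Until this file is merged or listed in KUBECONFIG, you have to point kubectl at it yourself. A quick sketch (the pod listing is just an example command):

```shell
# One-off: pass the file explicitly on each invocation:
#   kubectl --kubeconfig "$HOME/example-b.yaml" get pods
# Or make it the config for the whole shell session:
export KUBECONFIG="$HOME/example-b.yaml"
```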

Our options

Now let’s talk about our options. So far, I have seen and used three ways of handling this problem, the last one listed here being the most effective for me.

Replace default kubeconfig

The first way I dealt with this was as a caveman. I would move whatever kubeconfig I needed to ~/.kube/config. So, with our example, I would move ~/.kube/config out to ~/example-a and then move ~/example-b.yaml into ~/.kube/config.
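In commands, that shuffle looks something like this (simulated in a throwaway directory so it is safe to run; in real life every path would live under $HOME):

```shell
# Stand-in for $HOME so the demo doesn't touch real configs.
home=$(mktemp -d)
mkdir -p "$home/.kube"
printf 'kind: Config  # cluster A\n' > "$home/.kube/config"
printf 'kind: Config  # cluster B\n' > "$home/example-b.yaml"

# Park cluster A's config somewhere, then promote cluster B's file.
mv "$home/.kube/config" "$home/example-a"
mv "$home/example-b.yaml" "$home/.kube/config"
```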

As you can imagine, the problem with this approach is that it is way too manual and prone to errors. One wrong move and you can say goodbye to a cluster’s config, or worse, you have mixed up the clusters’ configs without being aware of it. It also adds the mental overhead of having to remember where all the files are backed up.

This is a big no.

Merge files

The second option is to merge both files (and any subsequent one) into ~/.kube/config. That would look like the file below. This is much nicer than the first option: now we can use something like kubectx to switch clusters with ease. However, I still have two problems with it: 1) having to add new clusters’ info to the file, and 2) having to remove a cluster’s info when I no longer need it.

For the first problem, I know there’s some bash magic that can merge all new kubeconfig files into one, but I haven’t set that up. And for the second one, I am unaware of how to do the cleanup in one click.
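One version of that bash magic (a sketch, not something I run) is to let kubectl do the merging itself: list the files in KUBECONFIG and print the flattened view. The helper below only defines a function; actually running it requires kubectl on your PATH:

```shell
# Merge every kubeconfig listed in the argument into one self-contained file.
# `kubectl config view --flatten` merges the files in KUBECONFIG and inlines
# certificate data, so the output works as a standalone config.
merge_kubeconfigs() {
  KUBECONFIG="$1" kubectl config view --flatten
}

# Usage (on a machine with kubectl):
#   merge_kubeconfigs "$HOME/.kube/config:$HOME/example-b.yaml" > /tmp/config
#   mv /tmp/config "$HOME/.kube/config"
```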

So, while this is a better option, it’s still a no for me.

# ~/.kube/config
apiVersion: v1
kind: Config

clusters:
- cluster:
    server: https://k8s.example.org/k8s/clusters/example-a
  name: example-a
- cluster:
    server: https://k8s.example.org/k8s/clusters/example-b
  name: example-b

users:
- name: example-a
  user:
    client-certificate-data: certificate-data-example-a
    client-key-data: key-data-example-a
- name: example-b
  user:
    client-certificate-data: certificate-data-example-b
    client-key-data: key-data-example-b

contexts:
- context:
    cluster: example-a
    user: example-a
  name: example-a
- context:
    cluster: example-b
    user: example-b
  name: example-b

Edit the KUBECONFIG env var

This third way is the one I use to this day. It consists of modifying the KUBECONFIG env var to contain all the file paths that we want kubectl to be aware of. Just this time, not as a caveman.

Tools like kubectl/kubectx/kubens work like this: if no KUBECONFIG env var is defined, they read ~/.kube/config; if it is defined, they interpret its value as a list of paths separated by colons (:). All we need to automate the loading of kubeconfigs is one assumption and a few lines of bash in our ~/.bashrc or ~/.zshrc.

The assumption we make is that all the config files are going to be saved under the directory ~/.kube/configs/, and every file in it will be added to KUBECONFIG. For setting up the variable I have two recipes, one “clear” and another that is a one-liner. Both result in KUBECONFIG=~/.kube/config:~/.kube/configs/example-a:~/.kube/configs/example-b, so you can choose whichever you prefer.

Bash magic the clear way

# Start from the default kubeconfig, then append every file in ~/.kube/configs
CONFIGS_DIR="$HOME/.kube/configs"
KUBECONFIG="$HOME/.kube/config"
for entry in "$CONFIGS_DIR"/*
do
  [ -e "$entry" ] || continue  # skip the unexpanded glob when the dir is empty
  if [ -z "$KUBECONFIG" ]
  then
    KUBECONFIG="$entry"
  else
    KUBECONFIG="$KUBECONFIG:$entry"
  fi
done
export KUBECONFIG

Bash magic the one-liner way

export KUBECONFIG="$HOME/.kube/config:$(find "$HOME/.kube/configs" -type f | paste -sd ":" -)"
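With either recipe in place, cluster lifecycle management is reduced to plain file management. A hypothetical session (simulated in a temp directory so it is safe to run):

```shell
# Stand-in for $HOME so the demo doesn't touch your real configs.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/.kube/configs"

# A new cluster arrives: drop its kubeconfig into the directory.
printf 'apiVersion: v1\nkind: Config\n' > "$demo_home/.kube/configs/example-b"

# Cluster A gets decommissioned: delete its file, nothing else to clean up.
rm -f "$demo_home/.kube/configs/example-a"

# Then open a new shell (or re-source ~/.bashrc) so KUBECONFIG is rebuilt.
ls "$demo_home/.kube/configs"
```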
#k8s #shell