How to connect to multiple Google Kubernetes clusters easily in parallel

Alex Moss
5 min read · Feb 27, 2021

This was first posted on my personal website on 2nd Feb 2021. That article also includes some screen captures, so head there if you want to see this stuff in action!

In this post I’m going to talk through the approach I use to switch between multiple Google Kubernetes Engine clusters on the command line. I’d expect a lot of the stuff in here has some benefit for non-GCP Kubernetes clusters too, but the ones I use on a day-to-day basis are all hosted there.

Key outcomes for me were:

  1. To be able to switch from one cluster to another with only a small number of commands, even if I have to authenticate with different user accounts (as primarily a GCP user, this means different email addresses / GCP projects).
  2. To be connected to different Kubernetes clusters in different windows — for example, tailing logs in a Prod and Non-Prod cluster at the same time — and for that connection to persist across multiple kubectl commands.

Spoiler Alert: I solved this with a bit of fiddling with ~/.kube/config plus the marvellous kubie.


I switched to this approach probably around six months ago. At work we have a relatively small number of clusters, so up until that point I was pretty comfortable using what I think is the most common approach of kubectx + kubens, and this worked well enough. However, I found I was increasingly getting an inconsistent experience when switching between the clusters I use for work and the ones I use for running personal websites (like my blog) and for fiddling with things — so I started looking for alternatives.

For the purposes of explaining things, let’s assume a hypothetical setup like this:

  • A collection of GKE clusters across several GCP projects at work. Access to these is through a common Production email account, but they are split by project/cluster.
  • A sandbox GKE cluster in a different Google organisation where the Prod email account doesn’t have access.

The hierarchy would therefore look a bit like this:

Made up hierarchy of users/projects/cluster for illustration

I feel I should point out here that, at work, we do not name our clusters after types of cheese. I just really fancied some cheese when writing this, okay?

Enough background — on with how I set things up.

First, Multiple Google Accounts


My approach here was massively inspired by this blog post by Googler Daz Wilkin. I’m not going to repeat what is already explained really well there, so go have a read if you want to understand why the following works!

For a brand-new setup, you will need to run gcloud init once to set up the default configuration. It will also be necessary to gcloud auth login with each account at least once, and this may need refreshing once in a while (but not often enough for me to really notice).

I threw away my pre-saved gcloud configurations — not gonna need ‘em! All I have in ~/.config/gcloud/ is a config_default, which gets updated with a simple bash script when I need to switch between Google Accounts/Projects.
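
A minimal sketch of what such a script can look like — the alias names, project IDs, and email addresses below are placeholders, and writing config_default directly like this is an assumption about the approach:

```shell
# switch — point gcloud at a different account/project by rewriting
# ~/.config/gcloud/config_default. Aliases, project IDs, and emails
# below are made up — substitute your own.
switch() {
  local project account
  case "${1:-}" in
    dev)     project="my-dev-project";     account="me@work.example.com" ;;
    prod)    project="my-prod-project";    account="me@work.example.com" ;;
    sandbox) project="my-sandbox-project"; account="me@personal.example.com" ;;
    "")      echo "usage: switch <alias> | switch <project> <email>" >&2; return 1 ;;
    *)       project="$1"
             account="${2:?usage: switch <project> <email>}" ;;
  esac
  # Overwrite the single default config — gcloud picks this up on its
  # next invocation, so the change takes effect immediately.
  mkdir -p "${HOME}/.config/gcloud"
  printf '[core]\naccount = %s\nproject = %s\n' "${account}" "${project}" \
    > "${HOME}/.config/gcloud/config_default"
  echo "switched to ${project} as ${account}"
}
```

switch dev covers the pre-saved case; switch some-project me@example.com covers the new/rarely-used one.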

The switching script looks like this — with the bits wrapped in << >> to be replaced. I alias this so I would simply do switch dev for a pre-saved project, or switch gcp-project user-email to activate a new/rarely-used project.

Bash script for switching gcloud config on the fly

Second, Multiple Kubernetes Contexts

Here, my ~/.kube/config file does not exist and I set up new configs under ~/.kube/configs/ whenever I have a new cluster I need to deal with. For the number I have to worry about, this is quite manageable, but it could be automated if you had a frequently-changing enough list to be worthwhile. The steps look like this (this must be done in a brand new shell not using kubie — see below!):

  1. Auth to the new cluster as normal: gcloud container clusters get-credentials ${cluster} --project=${project} --zone=${zone}. This adds an entry to your blank ~/.kube/config.
  2. Copy a template config file (see below) into ~/.kube/configs/ with a unique name.
  3. Take the values for clusters.cluster.certificate-authority-data and clusters.cluster.server from the no-longer blank ~/.kube/config and put them into your new file created from the template.
  4. Update the name: fields — for the cluster, the context, and contexts.context.cluster — to reflect what you want the cluster to be known as when you list your contexts. It does not have to match the cluster name exactly if you want to save typing.
  5. Delete ~/.kube/config (unless you want to have a default cluster for when not using kubie — but you’ll need to keep this file tidy to avoid confusion!).
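
The copy-the-values part of the steps above (2–4) can be sketched as a small helper — the function name is made up, and awk stands in for a proper YAML parser, which works here because gcloud writes these fields on flat, single lines (the user/auth stanza from the template is omitted for brevity):

```shell
# Hypothetical helper: pull certificate-authority-data and server out of
# the freshly generated ~/.kube/config and stamp them into a uniquely
# named file under ~/.kube/configs/.
capture_cluster() {
  local alias="$1" src="${2:-${HOME}/.kube/config}"
  local ca server
  ca=$(awk '$1 == "certificate-authority-data:" {print $2; exit}' "${src}")
  server=$(awk '$1 == "server:" {print $2; exit}' "${src}")
  mkdir -p "${HOME}/.kube/configs"
  cat > "${HOME}/.kube/configs/${alias}.yaml" <<EOF
apiVersion: v1
kind: Config
clusters:
- name: ${alias}
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: ${alias}
  context:
    cluster: ${alias}
    user: gke-user
current-context: ${alias}
EOF
}
```

After gcloud container clusters get-credentials, something like capture_cluster cheddar would save the cluster away under its new alias, leaving you free to delete ~/.kube/config (step 5).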

The template I mentioned for this looks as follows:

Template used to create new configs under ~/.kube/configs/
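
A template along these lines works — the <<...>> placeholders get filled in per cluster following the steps above. This sketch assumes the legacy gcp auth-provider that GKE kubeconfigs used at the time of writing (newer GKE versions use the gke-gcloud-auth-plugin instead):

```yaml
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: <<cluster-alias>>
  cluster:
    certificate-authority-data: <<from the generated ~/.kube/config>>
    server: <<https://cluster-endpoint>>
contexts:
- name: <<cluster-alias>>
  context:
    cluster: <<cluster-alias>>
    user: gke-user
current-context: <<cluster-alias>>
users:
- name: gke-user
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
```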

Finally, Loading Parallel Kubernetes Contexts


To make use of this shiny new config, we bring in kubie. This tool works in a similar way to kubectx + kubens — you specify kubie ctx to set your current cluster, and kubie ns to select a namespace. The difference is that when you run kubie ctx, it spawns a new shell within your terminal window, with the context loaded into that shell.

What that means in practice is that you can, for example, have a terminal on the left of your screen connected to prod and a terminal on the right connected to dev, and both continue to work independently of each other. This is really marvellous.

I have sufficient muscle memory that I had to alias kctx='kubie ctx' and kns='kubie ns' to save re-learning / more typing!

There’s also a kubie exec to run just one command using a different context without swapping out the whole shell if you prefer — for example kubie exec cheddar kube-system kubectl get pods. This is really handy if you want to use this in scripts across multiple clusters.
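
For instance, a hypothetical helper (the function name is made up) looping over the made-up clusters from earlier — each iteration runs under that cluster’s context without touching the current shell:

```shell
# Run one kubectl command against several clusters via kubie exec,
# without swapping out the current shell. Cluster names are the
# made-up examples from earlier.
check_all_clusters() {
  local cluster
  for cluster in brie cheddar chutney; do
    echo "--- ${cluster} ---"
    kubie exec "${cluster}" kube-system kubectl get pods
  done
}
```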

There’s way more info/options available — see the project on GitHub for more ideas.

How This Works in Practice

If working with two clusters and a shared user account, then I simply issue kctx brie and kctx cheddar in separate terminals and I’m away.

If the second cluster needs a separate user account, then I would switch sandbox first, then kctx chutney, and I’m sorted. The only thing I need to keep in mind here is that my gcloud context has switched globally (no equivalent of kubie here), so any gcloud SDK commands are going to be against the sandbox project in both terminal windows (unless I switch again) — but my kubectl commands are fine (I suspect until the refresh token expires, but in practice I’ve never had an issue).

If you’d like to see this working in practice, I recorded a couple of screen captures in the equivalent article on my own blog.

And that’s a wrap — hopefully this inspires you to give kubie a try!



Alex Moss

Engineering Lead for the John Lewis & Partners Digital Cloud Platform