In the 2.3 release, we introduced the ability to manage the lifecycle of your workload clusters with our kubefirst console application. When you install the kubefirst platform for the first time, you'll receive a management, development, staging, and production cluster.
The management cluster is a physical cluster that lives in your preferred cloud. The three default workload clusters are virtual clusters that live inside your management cluster. They have different names but are otherwise identical: each has its own ingress controller, cert-manager, secrets operator, and so on. And when you create additional workload clusters, yours will be identical too, thanks to the new fleet templating we've introduced to the platform.
When you work in an organization with multiple Kubernetes clusters, you'll discover a lot of overlap in how your clusters are managed, which apps belong to which clusters, and how those clusters are provisioned. The right way to organize your clusters varies from shop to shop, but it's quite common to want multiple clusters that are virtually identical, and to be able to depend on that identical configuration, so you can assert that a passing test in staging means production will be fine, or guarantee that different tenants get an identical experience. You need a space where you can model your Kubernetes cluster and app configurations so that your provisioned clusters deliver that identical experience.
This is the core concept of a Kubernetes fleet. It's a day-0 modeling space where you can establish the template of how your clusters are created and what they will consist of. If you can establish the model of what a cluster should be, you can build a fleet of instances that implement the model.
As of 2.3, when you install a new kubefirst platform, your new gitops git repository will have a templates directory in its root. That directory contains a workload-vcluster directory that defines the model for a physical cluster fleet or a virtual cluster fleet.
The directory is a simple layout of an Argo CD app-of-apps implementation. The numeric prefix on the files indicates the order of operations, so that your cluster is created first, then your Argo CD instance connects to the cluster, then your first app is installed, and so on.
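To make the ordering convention concrete, here is a minimal Python sketch of how numeric filename prefixes imply an order of operations. The file names are illustrative placeholders, not kubefirst's actual template files:

```python
import re

# Hypothetical template file names following the numbered
# app-of-apps convention described above (names are made up).
files = [
    "20-argocd-cluster-secret.yaml",
    "10-vcluster.yaml",
    "30-first-app.yaml",
]

def sync_order(filename: str) -> int:
    """Extract the numeric prefix that determines when a file applies."""
    match = re.match(r"(\d+)-", filename)
    return int(match.group(1)) if match else 0

# Sorting by prefix recovers the intended order: the cluster is created
# first, then Argo CD connects to it, then the first app installs.
ordered = sorted(files, key=sync_order)
print(ordered)
```

The prefix is just a naming convention, but it keeps the order of operations visible at a glance in the repository tree.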
Now you may be wondering how to add variability to your instances. For example, you can't create two instances of a cluster with the same name in the same account, or you'll run into a naming collision in the cloud. For this, we've implemented the concept of tokens in our modeling, so you can make each instance of your fleet distinct based on the name you give the cluster.
Our kubefirst console conducts these string replacements on your behalf when you provision a new cluster from your gitops repo templates. As a result, the instances in your fleet are virtually identical and only minimally distinct, through simple string replacement.
Our fleet management implementation is an MVP in the kubefirst 2.3 release and supports the following token replacement in your modeling:
The <WORKLOAD_CLUSTER_NAME> token is almost always the token to use in your modeling to make the names of your cloud resources distinct. This simple modeling technique has the added benefit of ensuring that your naming conventions are implemented consistently across instances.
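The mechanics amount to substituting the token everywhere it appears in a template before committing the rendered instance. Here is a minimal Python sketch of that idea; the Argo CD Application fragment is an invented example, not an excerpt from the actual kubefirst templates:

```python
TOKEN = "<WORKLOAD_CLUSTER_NAME>"

# Hypothetical template fragment; real templates live in the
# gitops repo's templates/workload-vcluster directory.
template = """\
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: <WORKLOAD_CLUSTER_NAME>
spec:
  destination:
    namespace: vcluster-<WORKLOAD_CLUSTER_NAME>
"""

def render(template: str, cluster_name: str) -> str:
    """Replace every occurrence of the token with this instance's name."""
    return template.replace(TOKEN, cluster_name)

print(render(template, "dev-tenant-a"))
```

Because every cloud resource name derives from the one token, two instances rendered with different cluster names can coexist in the same account without naming collisions.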
Standing up a brand-new multi-cluster fleet management ecosystem may sound intimidating. Doing it from scratch using only open source tools takes most organizations over a year to get everything set up correctly. Kubefirst gives it to you for free in minutes with our instant gitops platforms.