This phrase might conjure up an image of childhood games or harmless pranks, but for the [Kubefirst](https://kubefirst.io/) team, it’s how we describe a practice that hamstrings even the most well-meaning engineering teams.
To avoid what feels like insurmountable complexity in properly managing the database credentials and API authorization keys that help cloud native applications run, developers/engineers end up storing secrets wherever is most convenient for them. That could be directly on their machine, in a CI pipeline no one will audit, or inadvertently checked in and forever discoverable via their project’s Git history.
Some teams try cloud-based solutions for stashing secrets safely, like [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) (SSM), but when faced with the overhead of having to properly configure their service or application to use a cloud-based solution, engineers often choose convenience.
Secret-scattering accumulates into big problems for platform engineering or DevOps/GitOps teams. When they have no single source of truth for their organization’s secrets, they lose the ability to quickly rotate any single secret, or _all_ of them, for any reason. When they don’t know where all their infrastructure secrets are hidden, they can only do so much to harden their systems and mitigate security vulnerabilities.
For now, the cloud native community deals with secret-scattering by pushing their teams toward [HashiCorp Vault](https://www.vaultproject.io/), an open-source secrets manager and identity provider. And while we love Vault—it’s [built directly](https://docs.kubefirst.io/common/vault.html) into every standard Kubernetes infrastructure deployed by kubefirst when you run `kubefirst cluster create`—we’re always eager to push beyond what someone might say is “good enough.” We can get all the benefits of Vault, and stop the threat of secret-scattering altogether, by using External Secrets Operator as a bridge between your secrets and your applications.
## Why Vault isn’t a standalone solution to secret-scattering
Vault’s value proposition is a unified interface for storing and accessing any secret your organization might need, with tools for key rolling, auditing logs, and revocation, which can lock down entire systems during an ongoing cyberattack. And Vault has quickly become the de facto standard for secrets management _anywhere_, particularly in the cloud native/Kubernetes community, because once you set Vault up, developers have an easy interface for creating and referencing secrets in their resource configurations.
But Vault only solves one part of the troubling habit of secret-scattering—the convenience factor. And this assumes your engineering team can handle the challenges around getting Vault working correctly in your cluster at all. For many, it’s not a project they can gainfully prioritize.
The other challenge is that your entire infrastructure becomes dependent on Vault.
How? The most common way to pull secrets into your application is with a [sidecar container](https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods) using the [vault injector](https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar). When a pod starts on your cluster, a container that requires a secret checks in with Vault to find the secret at a specific path. Vault provides the secret, which is added to a file mount the pod/container can access, thus giving it access to secrets that were never hard-coded into the container image.
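As a rough sketch of what this pattern looks like, the Vault Agent Injector is driven by pod annotations. The pod name, role, and secret path below are illustrative, not from a real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                              # hypothetical application pod
  annotations:
    # Tell the injector to add a Vault agent sidecar to this pod
    vault.hashicorp.com/agent-inject: "true"
    # Vault role the pod authenticates as (assumed to exist in Vault)
    vault.hashicorp.com/role: "my-app-role"
    # Render the secret at this Vault path into /vault/secrets/config
    vault.hashicorp.com/agent-inject-secret-config: "secret/data/my-app/config"
spec:
  containers:
    - name: my-app
      image: my-app:1.0                     # illustrative image
```

Every pod created with these annotations must reach Vault at startup to render its secrets file, which is exactly the real-time dependency discussed below.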
That’s great in theory, but what happens when you need to bring your Vault instance down for maintenance, or worse, when there’s an unexpected outage?
The containers _already running_ in your cluster can still use the secrets they accessed when they were first scheduled and initialized, but if your cluster wants to recreate any pods during a Vault outage, they’ll no longer be able to access your secrets ecosystem. Pods fail to start, applications crash, and you end up with major user-facing outages.
We love Vault at Kubefirst, but no engineering team should do all the challenging work to deploy a “secrets infrastructure” on Kubernetes only to create a real-time dependency that can bring your entire infrastructure down. That dependency, in turn, inevitably leads to more secret-scattering as teams prioritize uptime over security.
## Overcoming a single point of failure with External Secrets Operator
What we find so confounding about this outage scenario is that there is no reason your application needs a real-time dependency on Vault. If the content of your secrets hasn’t changed during an outage, they’re still valid, and your pods shouldn’t have to connect to Vault again to retrieve them safely.
We imagined a better solution: What if you had a second layer of secret storage, a bridge between Vault and your pods, that solves the root cause of secret-scattering _and_ mitigates the danger of Vault downtime?
A quick aside before we jump in. You can build this system using the tools and platforms we mention, without using kubefirst, which is indicative of what we love about the cloud native landscape. Or you can skip the hassle of reinventing the wheel and use [kubefirst](https://github.com/kubefirst/kubefirst), which instantly deploys an open-source and GitOps-powered ecosystem of application delivery and infrastructure management platforms to your cluster.
Let’s jump in with an example that’s probably familiar to anyone on a platform/DevOps/GitOps engineering team: managing the secrets that connect a pod to Datadog for observability. In this scenario, you have [External Secrets Operator](https://external-secrets.io/) running as a second layer between your application and Vault. You ask the Operator to define a native Kubernetes secret object for later usage. At this point, you’re not declaring which parts of your application will need to access said secrets, just laying the foundation in a GitOps-friendly `external-secrets.yaml` resource.
Let’s look at an example, and then we’ll walk through its parts:
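A minimal sketch of such an `ExternalSecret` resource, assuming the External Secrets Operator `v1beta1` API and a `ClusterSecretStore` named `vaults-secrets-backend` backed by Vault (store kind and API version are assumptions, not prescriptions):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: datadog-secret
  namespace: datadog
spec:
  # How often the Operator re-checks Vault for changes
  refreshInterval: 10s
  secretStoreRef:
    name: vaults-secrets-backend
    kind: ClusterSecretStore         # assumed store kind
  target:
    # Name of the native Kubernetes secret the Operator creates
    name: datadog-secret
  data:
    - secretKey: api-key             # key in the Kubernetes secret
      remoteRef:
        key: datadog                 # secret path in Vault
        property: DD_API_KEY         # field within that Vault secret
    - secretKey: app-key
      remoteRef:
        key: datadog
        property: DD_APP_KEY
```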
The `ExternalSecret`, named `datadog-secret`, lives in the `datadog` namespace and points at the `vaults-secrets-backend` secret store. When the External Secrets Operator requests the `datadog` secret path from Vault, it looks for the `DD_API_KEY` and `DD_APP_KEY` secrets, then maps the values it finds onto `api-key` and `app-key` in a native Kubernetes secret.
The External Secrets Operator checks in with Vault every ten seconds to request the `DD_API_KEY` and `DD_APP_KEY` secrets. If the values it receives match what’s already stored in the native Kubernetes secret keys `api-key` and `app-key`, the Operator does nothing. If the values have changed, because of a rotated key, for example, the Operator automatically updates the Kubernetes secret.
You can now configure your applications to reference these native Kubernetes secrets instead of requesting secrets from Vault directly. Minimal changes to your resource manifests result in critical redundancy—if your Vault instance crashes, your application relies on the native Kubernetes secret, stored safely within your cluster, without strict reliance on Vault.
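For instance, a container can read those values as environment variables through a standard `secretKeyRef`, with no Vault sidecar involved (the pod name and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: datadog-agent               # hypothetical pod name
  namespace: datadog
spec:
  containers:
    - name: agent
      image: datadog/agent:7       # illustrative image tag
      env:
        # Both references point at the native Kubernetes secret
        # that External Secrets Operator keeps in sync with Vault
        - name: DD_API_KEY
          valueFrom:
            secretKeyRef:
              name: datadog-secret
              key: api-key
        - name: DD_APP_KEY
          valueFrom:
            secretKeyRef:
              name: datadog-secret
              key: app-key
```

Because the pod only reads a native Kubernetes secret, it schedules and starts normally even while Vault is unreachable.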
The instant Vault comes back online, the External Secrets Operator seamlessly rolls in any changes to the secrets you depend on.
## The additional wins of decoupling your application from Vault
Our goal with leveraging the External Secrets Operator was to stop secret-scattering by providing a single, dependency-free interface for applications to get their secrets. Along the way, we’ve unlocked other wins we hadn’t thought of initially.
### Better GitOps practices
When you simplify how secrets are stored, and how developers access them in the pods and containers they create, you’re better able to put all your infrastructure as code (IaC) into a single `gitops` repository—without putting your actual secrets into Git. That packages into a bigger story about asset management, where you help your organization drop its reliance on self-service ScriptOps/ClickOps deployments and storing secrets in CLIs, CI/CD pipelines, or SSM (Systems Manager) Parameter Store, all of which lead to lost inventory and enormous headaches.
### No lock-in to HashiCorp Vault
By using External Secrets Operator as your interface for getting secrets into your application, and abstracting away the use of Vault directly, you’re actually free to change your secrets source altogether without having to re-configure each container. You can move from Vault to Bitnami’s [SealedSecrets](https://github.com/bitnami-labs/sealed-secrets) or [AWS Key Management Service](https://aws.amazon.com/kms/), for example, if they meet your needs better.
### More disaster recovery options
If your Vault instance becomes inaccessible, whether that’s because of a cyberattack or irrevocable destruction, you’re no longer completely lost. The External Secrets Operator keeps pinging Vault every ten seconds looking for updates, but in the meantime it retains the secrets it already synced. Your team can recover those secrets from within the native Kubernetes secrets store and rehydrate your recovered Vault instance with them.
### Improved policy compliance
If you operate in specific domains where governance rules require you to rotate your secrets frequently, for example, you can do so quickly and without editing your containers or affecting your cluster’s ability to schedule new pods.
## The secrets you won’t scatter (or spill)
As much as we might fear secret-scattering as platform/DevOps/GitOps engineers, we also need to recognize that it’s a natural result of the headwinds every organization faces. Deploying faster to beat out the competition. Navigating new cultures and processes for developing software. Working remotely and often in very disparate time zones. Using GitHub, for better or worse, for almost every aspect of developing and deploying to Kubernetes.
At Kubefirst, we prioritize building bridges over placing blame. We put cloud native tools together to solve operational problems—in the case of secrets, that’s using External Secrets Operator in coordination with Vault to give your entire organization a simple interface for defining and requesting secrets.
As we said before, you can build this bridge using External Secrets Operator, Vault, and any additional glue to connect them on your Kubernetes cluster, but if you’d rather prioritize bigger operational projects or further a GitOps-friendly culture, give Kubefirst a try. Our registry of vendor-agnostic Kubernetes products, all of which work together seamlessly, gives you a massive head start in your Kubernetes journey, and it’s [entirely free to use](https://kubefirst.io/download) locally or deploy directly to a production-ready AWS cluster.
In about five minutes, you’ll have an operational Kubernetes platform and, more importantly, no more excuses for scattering or spilling secrets.