Introduction
Kubernetes is now the standard for container orchestration. Yet even as organizations adopt container-first development practices, many workloads still run on virtual machines in the public cloud or in private data centers, so many companies now face migrating those workloads to Kubernetes.
Migrating to Kubernetes touches the entire DevOps process, including monitoring, logging, CI/CD, and, most importantly, security. Security can be handled at both the cluster level and the application level.
In this post, we will try to gain more insight into how we can manage application secrets effectively in Kubernetes.
In Kubernetes, a secret object manages sensitive information such as API integration tokens, OAuth tokens, and database passwords. These secrets can be made accessible to pods as mounted volumes or as environment variables.
If you run separate Kubernetes clusters for different environments (highly recommended), you might want to store all your environment-specific secrets in a single place. Then, make sure you have a secret management tool that can smartly identify the environment a pod is deployed in and fetch secrets accordingly. We will learn more about doing this later in this post.
Key Takeaways
- Kubernetes uses secret objects to manage sensitive information, such as API tokens, OAuth tokens, and passwords. These secrets are accessible to pods as mounted volumes.
- Secrets can be created using kubectl and mounted in pods as volume mounts. They can be created from literal values or files containing secret information.
- Secrets can also be made available as environment variables for pod applications. The secret data is encoded and stored in a secret object manifest and then accessed using the key to map it to an environment variable.
- To manage secrets across multiple Kubernetes clusters, it is recommended to use a centralized secret store and a secure mechanism for populating pod environments with the required secrets.
- Two solutions for managing secrets across multiple Kubernetes clusters are covered: using Vault with Kubernetes-based authentication, and integrating Chamber into your Dockerfiles to populate secrets from AWS Parameter Store (for Kubernetes clusters on AWS). Both provide secure and scalable ways to store and retrieve secrets.
Getting Started with Kubernetes Secrets
Creating Secrets Using Kubectl and Mounting Secrets as Volume Mounts
Let’s create a secret for a token to be used by an application to authenticate with a third-party service.
We can either create a secret from a literal value or a file. In this case, we have put our secret information in access.txt.
$ cat access.txt
APP_AUTH_TOKEN=WEj4VmNF755uc9vZdz98zvPXB6DkHp
$ kubectl create secret generic auth-token --from-file=./access.txt
secret "auth-token" created
$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
auth-token            Opaque                                1         14s
default-token-k7vmv   kubernetes.io/service-account-token   3         17m
$ kubectl describe secret auth-token
Name: auth-token
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
access.txt: 46 bytes
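Note that kubectl describe intentionally hides the secret's value. If you need to inspect it, you can pull the base64-encoded data out of the object and decode it (the dot in the access.txt key must be escaped in the jsonpath expression):
$ kubectl get secret auth-token -o jsonpath='{.data.access\.txt}' | base64 --decode
APP_AUTH_TOKEN=WEj4VmNF755uc9vZdz98zvPXB6DkHp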
Let’s now use this secret in our pod as a mounted volume.
First, we create a demo pod and apply the manifest.
$ cat pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo-pod
      image: ubuntu
      command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5; done"]
      volumeMounts:
        - name: myvolume
          mountPath: "/tmp"
          readOnly: true
  volumes:
    - name: myvolume
      secret:
        secretName: auth-token
$ kubectl apply -f pod.yaml
pod "demo-pod" created
$ kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
demo-pod   1/1       Running   0          1m
Now, if we exec into this pod, we should find our secret mounted in the /tmp directory.
$ kubectl exec -it demo-pod -- /bin/bash
root@demo-pod:/# ls /tmp
access.txt
root@demo-pod:/# cat /tmp/access.txt
APP_AUTH_TOKEN=WEj4VmNF755uc9vZdz98zvPXB6DkHp
root@demo-pod:/#
We can use this method to safely mount configuration files that contain sensitive data; the application can then read them from the mounted directory. Sometimes, however, we want sensitive data to be available as an environment variable for our application. Let’s do that in the next section.
How To Make Secrets Available as Pod Environment Variables
To make a secret available as an environment variable, we will create and apply a secret object manifest. The secret data must be base64-encoded before it is placed in the manifest.
$ echo WEj4VmNF755uc9vZdz98zvPXB6DkHp | base64
V0VqNFZtTkY3NTV1Yzl2WmR6OTh6dlBYQjZEa0hwCg==
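Note that echo appends a trailing newline, which becomes part of the encoded value (this is why kubectl reports 31 bytes below for a 30-character token). To encode the token exactly as-is, use echo -n:
$ echo -n WEj4VmNF755uc9vZdz98zvPXB6DkHp | base64
V0VqNFZtTkY3NTV1Yzl2WmR6OTh6dlBYQjZEa0hw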
$ cat auth_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  auth_token: V0VqNFZtTkY3NTV1Yzl2WmR6OTh6dlBYQjZEa0hwCg==
$ kubectl apply -f auth_secret.yaml
secret "mysecret" created
$ kubectl describe secret mysecret
Name: mysecret
Namespace: default
Labels: <none>
Annotations:
Type: Opaque
Data
====
auth_token: 31 bytes
To inject the secret as an environment variable, we reference the key under which it is stored and map it to an environment variable in the pod spec. In the following example, we do this with another demo pod, demo-pod-2.
$ cat pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-2
spec:
  containers:
    - name: demo-pod-2
      image: ubuntu
      command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5; done"]
      env:
        - name: APP_AUTH_TOKEN
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: auth_token
$ kubectl apply -f pod-2.yaml
pod "demo-pod-2" created
Now, if we exec into demo-pod-2 and check the value of the APP_AUTH_TOKEN environment variable, we should see the decoded value of our secret.
$ kubectl exec -it demo-pod-2 -- /bin/bash
root@demo-pod-2:/# echo $APP_AUTH_TOKEN
WEj4VmNF755uc9vZdz98zvPXB6DkHp
root@demo-pod-2:/#
Further Reading
Kubernetes has released a feature for encrypting secrets at rest, and I highly recommend reading up on it. Although the feature is relatively new, that shouldn’t stop us from trying it out.
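As a taste of what this involves, here is a minimal sketch of an API server EncryptionConfiguration that encrypts secret resources with AES-CBC before they are written to etcd (the key value is a placeholder; see the Kubernetes documentation for generating a key and wiring this file into the API server):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}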
Advanced Management of Kubernetes Secrets
So far, we have learned how Kubernetes secrets work and how to consume them in a pod. However, when running multiple Kubernetes clusters (development/staging/production), we need a centralized secret store and a secure mechanism to populate our pod environment with the required secrets.
Today, we will explore two solutions for this:
- Using Vault with Kubernetes-based auth
- Integrating Chamber in your Dockerfiles to populate secrets from AWS Parameter Store (for Kubernetes clusters on AWS)
Using Vault with Kubernetes-Based Auth
The workflow for Vault-based authentication can be summarized as follows:
- A pod is deployed in a particular namespace and associated with a specific service account.
- The namespace and the associated service account are tied to a Kubernetes authentication role in the Vault backend.
- The pod uses the token from the service account to authenticate with Vault and get the VAULT_TOKEN.
- Using the VAULT_TOKEN and the VAULT_ADDRESS, we can retrieve secrets from Vault securely.
Furthermore, we have a few options for injecting those secrets as environment variables into the pod, which we will discuss later.
What Is Vault?
Vault is a lightweight tool for securely storing and managing secrets, with excellent support for Kubernetes-based authentication. Consul is commonly used as Vault's storage backend, which makes the setup highly reliable and resilient to node failures.
Setting Up Vault
We have a couple of ways to set up Vault for our Kubernetes applications:
- Setting up Vault in a Kubernetes cluster, which can be accessed across environments.
- Using a hosted version of Vault.
How to set up Vault on Kubernetes is beyond the scope of this blog, but below are some resources that you can use:
- https://www.hashicorp.com/blog/announcing-the-vault-helm-chart
- https://github.com/hashicorp/consul-helm
Once you have Vault set up, you can hook it up with your Kubernetes cluster. As stated previously, the goal is to ensure your pods can effectively authenticate with Vault so the application can retrieve the secrets.
Authenticating Kubernetes Pods with Vault
First, create a vault-auth service account and grant it token-review permissions by binding it to the system:auth-delegator cluster role.
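If the vault-auth service account does not already exist, create it first:
$ kubectl create serviceaccount vault-auth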
$ cat vault_sa.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault-auth
    namespace: default
$ kubectl apply -f vault_sa.yaml
clusterrolebinding.rbac.authorization.k8s.io "role-tokenreview-binding" created
Then, retrieve the values needed to enable Kubernetes auth in the Vault backend.
Kubernetes host — the address at which Vault can reach the Kubernetes API server.
k8s_host="$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")"
Cluster certificate authority data — the CA certificate used to verify the connection to the API server.
k8s_cacert="$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 --decode)"
Service account token — the token of the vault-auth service account, which holds the token reviewer role; Vault will use it to interact with the cluster.
secret_name="$(kubectl get serviceaccount vault-auth -o go-template='{{ (index .secrets 0).name }}')"
account_token="$(kubectl get secret ${secret_name} -o go-template='{{ .data.token }}' | base64 --decode)"
Once we have all of these values, we can enable Kubernetes auth in our Vault backend.
vault auth enable kubernetes
vault write auth/kubernetes/config \
token_reviewer_jwt=${account_token} \
kubernetes_host=${k8s_host} \
kubernetes_ca_cert=${k8s_cacert}
Now, we need to ensure that our newly launched pods can authenticate with our Vault server. To this end, we bind the service account and its namespace to a Vault role; pods will later present the service account JWT from that namespace for authentication.
vault write auth/kubernetes/role/demo \
    bound_service_account_names=vault-auth \
    bound_service_account_namespaces=default \
    policies=demo-policy \
    ttl=1h
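The role references a Vault policy named demo-policy, which must exist. As a minimal sketch, assuming the application's secrets live at secret/demo in a KV v1 mount:
vault policy write demo-policy - <<EOF
path "secret/demo" {
  capabilities = ["read"]
}
EOF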
Let’s test it!
Create a demo pod and try to authenticate with Vault.
kubectl run -it --rm --image=ubuntu --serviceaccount=vault-auth test -- /bin/bash
root$ apt-get update -y && apt-get install vim curl jq mysql-client -y
#Let's get the service account JWT token
root$ JWT="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
#Now we can use this to get the vault token
root$ VAULT_TOKEN="$(curl --request POST --data '{"jwt": "'"$JWT"'", "role": "demo"}' -s -k https://${VAULT_ADDRESS}/v1/auth/kubernetes/login | jq -r '.auth.client_token')"
We can now use VAULT_TOKEN to authenticate with Vault and retrieve our secrets. However, this is only half the battle. We aimed to populate the environment variables with Vault secrets so the application could consume them.
If your application has logic to read directly from Vault using the VAULT_TOKEN and VAULT_ADDRESS environment variables, you can skip this part completely. Otherwise, you can do the following (see the sketch after this list):
- Integrate vaultenv with your Docker container.
- Use vaultenv at the beginning of your entrypoint script to populate the environment.
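If you would rather not add another tool, here is a minimal entrypoint sketch that reuses the curl-based login from above (assuming a KV v1 secret at secret/demo with an auth_token key; adjust the path and role to your Vault layout):
#!/bin/bash
# Hypothetical entrypoint: authenticate with Vault using the pod's service
# account JWT, export the secret, then hand off to the application.
set -euo pipefail

JWT="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"

# Log in with the Kubernetes auth method, using the "demo" role from earlier.
VAULT_TOKEN="$(curl -s --request POST \
  --data '{"jwt": "'"$JWT"'", "role": "demo"}' \
  "https://${VAULT_ADDRESS}/v1/auth/kubernetes/login" | jq -r '.auth.client_token')"

# Read the secret from the KV v1 mount and export it for the application.
export APP_AUTH_TOKEN="$(curl -s --header "X-Vault-Token: ${VAULT_TOKEN}" \
  "https://${VAULT_ADDRESS}/v1/secret/demo" | jq -r '.data.auth_token')"

exec "$@"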
Storing Secrets in AWS Parameter Store
What Is AWS Parameter Store?
AWS Systems Manager Parameter Store is a highly scalable and secure service for storing configuration data and secrets. You can read and write secrets using the AWS CLI. If you access it from an EC2 instance or an ECS task, appropriate IAM roles must be configured.
What Is Chamber?
Chamber is a command-line utility that helps you read and write secrets from the AWS Parameter Store. It also supports several other commands to populate the execution environment with secrets or export them in various formats.
To get started with this, you will need the following information handy:
- AWS_DEFAULT_REGION
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
Installing Chamber Locally
If you’re on a Mac, Chamber can be installed locally using the following commands.
brew update
brew install chamber
Make sure you have AWS CLI configured on your laptop.
Use the following to write secrets to the AWS Parameter Store.
chamber write <service> <key> <value>
Use the following to read secrets from the AWS Parameter Store.
chamber read <service> <key>
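For example, to store and retrieve the auth token from earlier under a hypothetical demo-app service (chamber stores it at /demo-app/auth_token in the Parameter Store):
$ chamber write demo-app auth_token WEj4VmNF755uc9vZdz98zvPXB6DkHp
$ chamber read demo-app auth_token
You can also run a command with the service's secrets injected into its environment using chamber exec:
$ chamber exec demo-app -- env | grep AUTH_TOKEN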
Integrating Chamber with Your Kubernetes Apps
To integrate Chamber with your Kubernetes apps, you will need to make some minor changes to the following:
- Dockerfile of your app
- Entrypoint script of your app
- Deployment manifest file
At the top of your Dockerfile, add the following content.
# Build stage: compile a static chamber binary
FROM golang:1.10.4 AS build
RUN CGO_ENABLED=0 GOOS=linux go get -v github.com/segmentio/chamber

FROM <existing base image>
# Copy the chamber binary from the build stage into the application image
COPY --from=build /go/bin/chamber /chamber
…
This will build and integrate the Chamber binary into your container.
In the entrypoint script, before the main entrypoint logic kicks in, add the following statement to populate the environment variables.
eval "$(chamber env $SERVICE)"
The following environment variables must be set in the manifest for the deployment.
- AWS_DEFAULT_REGION: the default region for your access credentials.
- SERVICE: the service whose secrets you want to read from the Parameter Store using Chamber.
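For example, the container spec in your deployment manifest might include (the image name and values are hypothetical):
spec:
  containers:
    - name: demo-app
      image: <your application image>
      env:
        - name: AWS_DEFAULT_REGION
          value: us-east-1
        - name: SERVICE
          value: demo-app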
Chamber also needs access to AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; however, we do not recommend setting those in your pod manifest file. It is highly recommended that you use IAM role-based permissions to authenticate with the AWS Parameter Store. To do that, make sure the IAM role assigned to your Kubernetes worker nodes allows the ssm:GetParameters action.
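As a sketch, an IAM policy granting read access to a single service's parameters might look like the following (the /demo-app/* path is an assumption based on chamber's /<service>/<key> layout):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameters", "ssm:GetParametersByPath"],
      "Resource": "arn:aws:ssm:*:*:parameter/demo-app/*"
    }
  ]
}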
Once you have all this set up, the container in your pod will read the service name from the SERVICE environment variable, authenticate with the AWS Parameter Store, and populate its environment with all of the secrets needed for that service.
How MetricFire Can Help
If managed correctly, Kubernetes secrets can greatly simplify the deployment process. You can choose to inject them into your application’s execution environment or read them on the fly using custom-built logic.
If you have deployed your Kubernetes cluster in the AWS cloud, we highly recommend using the Chamber and AWS Parameter Store integration, as it is the easiest to get started with and very secure.
You can also manage access to the Parameter Store with fine-grained IAM permissions using Kiam. This ensures that only certain pods in your cluster can retrieve and use secrets from the AWS Parameter Store. However, that is a discussion for another time.
When you need to monitor your Kubernetes setup, try out the MetricFire free trial, or book a demo and talk to us directly. A lot of our customers at MetricFire are monitoring Kubernetes clusters - we may have the expertise you're looking for.
This post was written by our guest blogger Vaibhav Thakur. If you liked his stuff, check out his LinkedIn profile.