Personal Insights
Use Kubernetes Service Accounts in Combination with OIDC Identity Federation for imagePullSecrets
In this blog, I will share how you can use Kubernetes service accounts and their OIDC tokens to securely pull container images from private registries without having to copy secrets around. I will focus on how to set it up using a Kubernetes cluster provisioned by Gardener and container images hosted on Google Cloud Platform's (GCP) Artifact Registry, but the same concept works with a generic Kubernetes cluster in combination with any registry that supports retrieving access credentials using OIDC.
What are imagePullSecrets?
Kubernetes can pull container images from private registries using a special Kubernetes secret of type kubernetes.io/dockerconfigjson containing authentication credentials for the registry. The imagePullSecret normally contains a long-lived access credential in the form of a username and password/access token. The imagePullSecrets are stored in each namespace and can then be referenced by pods:
spec:
  imagePullSecrets:
  - name: regcred
A common approach on GCP, which we used previously as well, is to use the complete service account key for access by creating a GCP service account, downloading the key file, and then using the full key file as password in combination with the special username _json_key. While this is a straightforward approach, it has the disadvantage that you send the long-lived service account credentials over the internet to the registry on every pull. The GCP documentation even has a big warning that you should avoid this method:
Note: When possible, use an access token or credential helper to reduce the risk of unauthorized access to your artifacts.
In addition, rotating long-lived imagePullSecrets across multiple namespaces takes effort and is error-prone, as they need to be replicated into each cluster and namespace.
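For reference, such a long-lived secret based on the service account key would be created roughly like this (a sketch; key.json is the downloaded service account key file and europe-docker.pkg.dev is an example Artifact Registry host):

```shell
# Legacy approach (discouraged): the downloaded service account key file is
# used verbatim as the registry password for the special user _json_key.
DOCKER_USERNAME="_json_key"

# key.json is the downloaded GCP service account key;
# europe-docker.pkg.dev is an example Artifact Registry host.
if [ -f key.json ] && command -v kubectl >/dev/null; then
  kubectl create secret docker-registry regcred \
    --docker-server=europe-docker.pkg.dev \
    --docker-username="${DOCKER_USERNAME}" \
    --docker-password="$(cat key.json)"
fi
```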
As Kubernetes has OIDC-issuing capabilities and GCP allows retrieving access credentials with OIDC tokens, there is a better option available: using short-lived access tokens as imagePullSecrets.
What is OpenID Connect (OIDC)?
Before diving in, let’s quickly look at what OIDC is and how it works. OIDC (OpenID Connect) is a layer on top of OAuth2 that allows clients to be identified in a standard way. For this, an authority creates signed identity tokens, which can then be verified by a third party using the publicly available OIDC metadata and public signing keys of the authority.
In Kubernetes, this is an integral part of the built-in service accounts. The service accounts are represented by identity tokens and the Kubernetes API-server verifies them and thus allows the service accounts access to the Kubernetes APIs. In addition, the identity tokens can be used by external services to validate if a request originated from a specific Kubernetes cluster and includes additional information like the workload and service account name.
Decoding a service account token, which gets injected at /var/run/secrets/kubernetes.io/serviceaccount/token in a pod, shows what information is available:
{
  "aud": ["kubernetes", "gardener"],
  "exp": 1693292880,
  "iat": 1661756880,
  "iss": "https://api.cluster.project.gardener.cloud",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "test-pod",
      "uid": "b38f5a1e-87c3-4009-b2c6-755d83c4283d"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "97c400e9-fd0c-4d6d-a456-79c4fe27ac39"
    },
    "warnafter": 1661760487
  },
  "nbf": 1661756880,
  "sub": "system:serviceaccount:default:default"
}
The interesting information is:
- Issuer (iss): who created the identity token
- Subject (sub): whom the identity token represents
- Audience (aud): for whom these tokens are intended
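If you want to inspect these claims yourself, the payload of a JWT can be decoded with a few standard shell commands (a sketch; the token file path is the default service account mount inside a pod):

```shell
#!/bin/bash
# Decode the payload (the second, base64url-encoded segment) of a JWT.
decode_jwt_payload() {
  local payload
  # Extract the middle segment and convert base64url to plain base64
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64 -d needs the input padded to a multiple of 4 characters
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Inside a pod, decode the mounted service account token:
TOKEN_FILE=/var/run/secrets/kubernetes.io/serviceaccount/token
if [ -f "$TOKEN_FILE" ]; then
  decode_jwt_payload "$(cat "$TOKEN_FILE")"
fi
```

Note that this only decodes the claims for inspection; it does not verify the signature, which is what the OIDC metadata and signing keys are for.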
Instead of the Kubernetes API-server, other parties like GCP can validate the identity tokens as well. In GCP this is called identity federation: a workload identity token signed by a Kubernetes API-server is exchanged first for a federated access token and then for a short-lived access token of a GCP service account.
Exposing and Adjusting the OIDC Metadata
For an external service, like GCP, to validate identity tokens, it needs to be able to query the public OIDC metadata. By default, Kubernetes exposes the OIDC metadata under <API-server url>/.well-known/openid-configuration and the associated public signing keys under <API-server url>/openid/v1/jwks. Depending on the Kubernetes API-server configuration, these endpoints require authentication or, if your API-server runs in a corporate network, are not accessible at all from the outside. If your OIDC metadata is already available anonymously over the internet, you can continue with Configuring Workload Identity Federation.
There are multiple options to ensure that an external service can retrieve them without authentication:
- Enabling anonymous authentication to the Kubernetes API-server and allowing unauthenticated users to access the OIDC metadata
- Making them available via a proxy like the gardener/service-account-issuer-discovery for anonymous consumption
- Hosting a copy of the (modified) metadata files on an independent static page
We use the third option, as our API-servers are hosted in an internal network and couldn’t be exposed either directly or via a proxy. To set this up, the OIDC metadata needs to be exposed on a public static page. An easy way to do this is to host it in a public Google Cloud Storage bucket, as that allows it to be consumed directly without additional infrastructure.
Before uploading the configuration, you need to update the OIDC issuer URL in the cluster, as GCP expects the issuer URL to match the URL it retrieves the configuration from. In Kubernetes, this can easily be done by setting the API-server flag --service-account-issuer to the desired issuer URL. In Gardener, this can be done via the .spec.kubernetes.kubeAPIServer.serviceAccountConfig.issuer field of the cluster. For Google Cloud Storage the URL is https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster> and this URL can then be set as issuer in the cluster.
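For a Gardener cluster, the corresponding shoot spec fragment would look roughly like this (bucket name and cluster path are placeholders):

```yaml
spec:
  kubernetes:
    kubeAPIServer:
      serviceAccountConfig:
        issuer: https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>
```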
After the issuer is configured, start a kubectl proxy and retrieve the OIDC metadata from localhost:8001/.well-known/openid-configuration. It should look like this:
{
  "issuer": "https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>",
  "jwks_uri": "https://api.cluster.project.gardener.cloud:443/openid/v1/jwks",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
Before uploading it to the bucket, modify the jwks_uri to match the bucket URL where the signing keys will be stored. The final openid-configuration should then look like this:
{
  "issuer": "https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>",
  "jwks_uri": "https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>/openid/v1/jwks",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
It can then be uploaded to the bucket at https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>/.well-known/openid-configuration.
Afterwards, the signing keys (jwks) can be retrieved from localhost:8001/openid/v1/jwks and uploaded unmodified to https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>/openid/v1/jwks. Notice that when the signing keys are rotated in the Kubernetes API-server, the new signing keys need to be uploaded again, otherwise the OIDC federation will break.
The OIDC configuration is now publicly available and can be consumed by the OIDC federation service of GCP.
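The manual steps above can be sketched as a small script (assumptions: jq and gsutil are installed, a kubectl proxy is running on localhost:8001, and the bucket/cluster names are placeholders):

```shell
#!/bin/bash
# Sketch: publish the cluster's OIDC metadata to a public GCS bucket.
BUCKET_URL="https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>"

# Rewrite the jwks_uri of the discovery document to point at the bucket copy.
rewrite_jwks_uri() { # usage: rewrite_jwks_uri <issuer-base-url>, JSON on stdin
  jq --arg base "$1" '.jwks_uri = $base + "/openid/v1/jwks"'
}

# With `kubectl proxy` running on localhost:8001, fetch, rewrite and upload:
if curl -sf localhost:8001/.well-known/openid-configuration -o /tmp/discovery.json; then
  rewrite_jwks_uri "$BUCKET_URL" < /tmp/discovery.json > /tmp/openid-configuration
  curl -sf localhost:8001/openid/v1/jwks -o /tmp/jwks
  gsutil cp /tmp/openid-configuration "gs://<public_oidc_bucket>/<our_cluster>/.well-known/openid-configuration"
  gsutil cp /tmp/jwks "gs://<public_oidc_bucket>/<our_cluster>/openid/v1/jwks"
fi
```

Remember that this needs to be re-run whenever the signing keys rotate.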
Configuring Workload Identity Federation
In GCP, the trust relationship for workload identity federation needs to be configured. There, create a pool that serves all clusters; you can then add a provider for each Kubernetes cluster. In the provider, choose as issuer the issuer URL configured earlier, and in the attribute mapping, map google.subject to assertion.sub. assertion.sub will contain a value like system:serviceaccount:<namespace>:<serviceAccount>, as seen in the earlier decoded identity token. Finally, note down the audience URL, which looks like https://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_name>/providers/<provider_name>, for later.
After creating the pool and providers, you need to grant the pool access to a GCP service account. If you haven’t created a GCP service account and granted it read access to your container images, you should do that now. In the pool, select Grant Access, select your GCP service account, and ensure that Only identities matching the filter is selected and the subject is restricted to your Kubernetes service account. To restrict it to the default service account in the default namespace, use system:serviceaccount:default:default.
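The same configuration can be done from the command line; a sketch with gcloud might look like this (the flag names should be verified against the current gcloud documentation, and all angle-bracket values are placeholders):

```shell
#!/bin/bash
# Sketch: configure workload identity federation via gcloud instead of the console.
PROJECT_NUMBER="<project_number>"
POOL_ID="<pool_name>"
PROVIDER_ID="<provider_name>"
ISSUER_URL="https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>"
SERVICE_ACCOUNT_EMAIL="<gcp-service-account-email>"

# Member string restricting impersonation to the default/default Kubernetes service account
member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/subject/system:serviceaccount:default:default"

if command -v gcloud >/dev/null; then
  gcloud iam workload-identity-pools create "${POOL_ID}" --location=global
  gcloud iam workload-identity-pools providers create-oidc "${PROVIDER_ID}" \
    --location=global \
    --workload-identity-pool="${POOL_ID}" \
    --issuer-uri="${ISSUER_URL}" \
    --attribute-mapping="google.subject=assertion.sub"
  # Allow only this Kubernetes service account to impersonate the GCP service account
  gcloud iam service-accounts add-iam-policy-binding "${SERVICE_ACCOUNT_EMAIL}" \
    --role=roles/iam.workloadIdentityUser \
    --member="${member}"
fi
```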
Now you have the OIDC metadata exposed and the identity federation in GCP configured.
Creating Access Tokens
Start a pod in your Kubernetes cluster and try to retrieve GCP access credentials. You can use the alpine/k8s container image, as it already has both curl and kubectl preinstalled. When starting the pod, let Kubernetes inject a serviceAccountToken with the previously retrieved audience from the GCP pool configuration:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: oidc-test
spec:
  containers:
  - name: oidc-test
    # In production pin image to kubernetes version
    image: alpine/k8s:1.25.2
    command: ["sleep", "3600"]
    volumeMounts:
    - name: oidc-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: oidc-token
    projected:
      sources:
      - serviceAccountToken:
          path: oidc-token
          # The audience retrieved from the GCP pool configuration
          audience: "https://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_name>/providers/<provider_name>"
EOF
After creating the pod, connect to it using kubectl exec -it oidc-test -- bash. The identity token is stored at /var/run/secrets/tokens/oidc-token. When decoding the identity token, you can see it should have the correct issuer and audience:
{
  "aud": [
    "https://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_name>/providers/<provider_name>"
  ],
  "exp": 1661766985,
  "iat": 1661763385,
  "iss": "<issuer URL e.g. API-server URL or custom configured issuer URL>",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "oidc-test",
      "uid": "c779016e-f832-4d28-bba2-5c90dd03a215"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "99e2eb6d-52ca-47d0-8b57-040d867921c3"
    }
  },
  "nbf": 1661763385,
  "sub": "system:serviceaccount:default:default"
}
With this identity token and some additional requests, you can first retrieve a short-lived federated access token from the GCP Security Token Service API using this script:
# Needs to be adjusted with the values from the audience
PROJECT_NUMBER="<the project number>"
POOL_ID="<the pool id>"
PROVIDER_ID="<the provider id>"
TOKEN=$(cat /var/run/secrets/tokens/oidc-token)
PAYLOAD=$(cat <<EOF
{
  "audience": "//iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/providers/${PROVIDER_ID}",
  "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
  "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
  "scope": "https://www.googleapis.com/auth/cloud-platform",
  "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
  "subjectToken": "${TOKEN}"
}
EOF
)
curl -X POST "https://sts.googleapis.com/v1/token" \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --data "${PAYLOAD}"
If everything works, this request returns a short-lived federated access token in the .access_token response field. With this federated access token, you can retrieve the actual GCP access token for a GCP service account using this request:
# Needs to be adjusted with the actual service account email
SERVICE_ACCOUNT_EMAIL="<the email of the GCP service account>"
FEDERATED_TOKEN=$(curl -X POST "https://sts.googleapis.com/v1/token" \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --data "${PAYLOAD}" \
  | jq -r '.access_token'
)
echo "STS token retrieved"
curl -X POST "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${SERVICE_ACCOUNT_EMAIL}:generateAccessToken" \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${FEDERATED_TOKEN}" \
  --data '{"scope": ["https://www.googleapis.com/auth/cloud-platform"]}'
With this request, a short-lived (1h) access token for the GCP service account is created, which can be used to interact with the GCP APIs, including the Artifact Registry where the container images are stored. A normal application could use the access token directly to interact with all GCP services, but for pulling images Kubernetes requires the registry credentials to be stored in a secret. This can be done by simply updating the imagePullSecret with the retrieved GCP service account access token:
ACCESS_TOKEN=$(curl -X POST "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${SERVICE_ACCOUNT_EMAIL}:generateAccessToken" \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${FEDERATED_TOKEN}" \
  --data '{"scope": ["https://www.googleapis.com/auth/cloud-platform"]}' \
  | jq -r '.accessToken'
)
echo "Service Account Token retrieved"
kubectl create secret docker-registry regcred \
  --docker-server=europe-docker.pkg.dev \
  --docker-username=oauth2accesstoken \
  --docker-password="${ACCESS_TOKEN}" \
  --dry-run=client -o yaml \
  | kubectl apply -f - --server-side=true
To finally test it, you can create a pod referencing the imagePullSecret:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: europe-docker.pkg.dev/<private-image>
    imagePullPolicy: Always
  imagePullSecrets:
  - name: regcred
EOF
And you should see that the image is pulled successfully:
$ kubectl describe pod private-reg
...
Normal Pulling 3s kubelet Pulling image "europe-docker.pkg.dev/<private-image>"
Normal Pulled 3s kubelet Successfully pulled image "europe-docker.pkg.dev/<private-image>" in 316.801958ms
Putting It All Together
The generated imagePullSecret, which can be used to pull an image, is only valid for one hour before expiring. So it needs to be refreshed regularly, which can be achieved natively by automating the above manual steps in a CronJob.
Putting it all together in a Kubernetes spec file, the result is a CronJob that runs in a dedicated image-system namespace every 15 minutes to refresh the imagePullSecrets in a list of namespaces.
This Kubernetes spec file provides the full setup, but requires adjusting the trust relationship to trust the Kubernetes service account system:serviceaccount:image-system:default. Depending on the naming of the secret, the ClusterRole needs to be updated as well to allow modification of the secret.
apiVersion: v1
kind: Namespace
metadata:
  name: image-system
---
# Allow the service account access to all `regcred` secrets in all namespaces
# Alternatively use Role and RoleBinding to only grant permissions to a single namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: regcred-secret-editor
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["regcred"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: regcred-secret-editor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: regcred-secret-editor
subjects:
- kind: ServiceAccount
  name: default
  namespace: image-system
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: oidc-imagepullsecret-refresher
  namespace: image-system
spec:
  # At every 15th minute
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: oidc-imagepullsecret-refresher
            # In production pin image to kubernetes version
            image: alpine/k8s:1.25.2
            command: ["/scripts/oidc-script.sh"]
            resources:
              limits:
                memory: 256Mi
                cpu: 100m
              requests:
                memory: 128Mi
                cpu: 50m
            env:
            # the GCP project number (notice that is different from the project id/name)
            - name: PROJECT_NUMBER
              value: "PROJECT_NUMBER"
            # the ID of the GCP workload identity pool
            - name: POOL_ID
              value: POOL_ID
            # the ID of the GCP workload identity provider
            - name: PROVIDER_ID
              value: PROVIDER_ID
            # the GCP service account to impersonate
            - name: SERVICE_ACCOUNT_EMAIL
              value: SERVICE_ACCOUNT_EMAIL
            # name of the secret where the service account access should be stored
            - name: REGISTRY_SECRET_NAME
              value: regcred
            # comma-separated list of namespaces where secrets should be created/updated
            - name: REGISTRY_SECRET_NAMESPACES
              value: default
            volumeMounts:
            - name: oidc-script
              mountPath: /scripts
            - name: oidc-token
              mountPath: /var/run/secrets/tokens
          restartPolicy: OnFailure
          volumes:
          - name: oidc-script
            configMap:
              name: oidc-script
              defaultMode: 0744
          - name: oidc-token
            projected:
              sources:
              - serviceAccountToken:
                  path: oidc-token
                  audience: https://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_name>/providers/<provider_name>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oidc-script
  namespace: image-system
data:
  oidc-script.sh: |
    #! /bin/bash
    # filename: oidc-script.sh
    set -eo pipefail
    TOKEN=$(cat /var/run/secrets/tokens/oidc-token)
    PAYLOAD=$(cat <<EOF
    {
      "audience": "//iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/providers/${PROVIDER_ID}",
      "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
      "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
      "scope": "https://www.googleapis.com/auth/cloud-platform",
      "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
      "subjectToken": "${TOKEN}"
    }
    EOF
    )
    FEDERATED_TOKEN=$(curl --fail -X POST "https://sts.googleapis.com/v1/token" \
      --header "Accept: application/json" \
      --header "Content-Type: application/json" \
      --data "${PAYLOAD}" \
      | jq -r '.access_token'
    )
    echo "STS token retrieved"
    ACCESS_TOKEN=$(curl --fail -X POST "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${SERVICE_ACCOUNT_EMAIL}:generateAccessToken" \
      --header "Accept: application/json" \
      --header "Content-Type: application/json" \
      --header "Authorization: Bearer ${FEDERATED_TOKEN}" \
      --data '{"scope": ["https://www.googleapis.com/auth/cloud-platform"]}' \
      | jq -r '.accessToken'
    )
    echo "Service Account Token retrieved"
    set +eo pipefail
    EXIT_CODE=0
    export IFS=","
    for REGISTRY_SECRET_NAMESPACE in $REGISTRY_SECRET_NAMESPACES; do
      echo "Namespace: $REGISTRY_SECRET_NAMESPACE"
      kubectl create secret docker-registry "$REGISTRY_SECRET_NAME" \
        -n "$REGISTRY_SECRET_NAMESPACE" \
        --docker-server=europe-docker.pkg.dev \
        --docker-username=oauth2accesstoken \
        --docker-password="${ACCESS_TOKEN}" \
        --dry-run=client -o yaml | \
        kubectl apply -f - --server-side=true || EXIT_CODE=1
    done
    exit $EXIT_CODE
Conclusion
In summary, we did the following three things to enable OIDC identity federation in a cluster:
- Ensure that the Kubernetes OIDC metadata is retrievable from the internet
- Configure the trust relationship and identity federation in GCP
- Create a Kubernetes CronJob to retrieve short-lived access tokens and store them in imagePullSecrets in the cluster
After the initial setup, this is a fully automated way to retrieve short-lived access credentials for a GCP registry via OIDC tokens issued by Kubernetes, removing the need for long-lived credentials inside the cluster to access private images from the GCP Artifact Registry.
Let me know in the comments if you found this blog helpful and could use it. Have an awesome day!