
Using GitHub Actions OpenID Connect in Kubernetes

Update (April 12, 2023): Another approach leveraging Kubernetes’ native credential plugin mechanism is now available at the end of this post.

Insufficient credential hygiene is one of the top security threats to automated CI/CD pipelines and the environments they connect to (Top 10 CI/CD Security Risks – cidersecurity.io). Automated pipelines that carry updates from merged pull requests through automated tests to production in Kubernetes often require high-privileged, long-lived service account credentials. As a result, multiple workflows might share the same service account credentials, making them hard to track and keep secure. In this blog, I show how you can move from storing static service account credentials in the GitHub Actions secrets store to using dynamic workload identities for authentication in Kubernetes. I outline the required configuration in GitHub and Kubernetes, concluding with a showcase of an end-to-end demo deployment without static credentials.

Both GitHub and Kubernetes implement OpenID Connect (OIDC), an open standard for decentralized authentication. You can leverage the GitHub Actions OIDC issuing capabilities and the Kubernetes OIDC authentication strategy to eliminate manually distributing and managing long-lived credentials. Instead of multiple GitHub Actions workflows in a repository getting access to the same long-lived service account token, with OIDC, a workflow can request a short-lived identity token representing exactly that workflow run. The cryptographically signed identity token includes information about the current workflow, such as repository, branch, and workflow name. The Kubernetes API server can verify requests carrying the identity token against the public cryptographic signing keys of GitHub Actions. The API server then determines the Kubernetes-internal identity based on the information in the identity token and resolves the associated permissions.
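For illustration, the decoded payload of such an identity token contains, among others, the following claims (abbreviated; the values are illustrative and match the demo setup used throughout this post):

{
  "iss": "https://token.actions.githubusercontent.com",
  "sub": "repo:myOrg/myRepo:ref:refs/heads/main",
  "aud": "my-kubernetes-cluster",
  "repository": "myOrg/myRepo",
  "workflow": "deploy-kubernetes",
  "ref": "refs/heads/main",
  "event_name": "push",
  "exp": 1669906800
}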

The following diagram shows how a workflow first requests an identity token and then uses it to interact with the Kubernetes API server. The Kubernetes API server validates the token signature using the GitHub Actions public information, checks permissions, and executes the request.

Diagram modified from the Kubernetes OpenID Connect Tokens diagram licensed under CC BY 4.0.

In the following section, I will outline how you configure OIDC trust in Kubernetes and assign permissions to the OIDC identity. Afterward, I show different options for how your GitHub Actions workflows can request identity tokens and use them to execute Kubernetes commands.

Setting Up OIDC Trust in Kubernetes

First, the Kubernetes API server must trust GitHub Actions as an OIDC identity provider. For this, configure the trust in the Kubernetes API server using its --oidc-* flags:

--oidc-issuer-url=https://token.actions.githubusercontent.com
--oidc-client-id=my-kubernetes-cluster
--oidc-username-claim=sub
--oidc-username-prefix=actions-oidc:
--oidc-required-claim=repository=myOrg/myRepo
--oidc-required-claim=workflow=deploy-kubernetes
--oidc-required-claim=ref=refs/heads/main
  • issuer-url: Unique identifier for the OIDC identity provider. In the case of GitHub Actions, this is always https://token.actions.githubusercontent.com.
  • client-id: Unique identifier for the Kubernetes cluster (e.g. your Kubernetes API server URL).
  • username-claim: Identity token attribute to use as a username. It should uniquely represent the workflow to allow granular authorization. GitHub includes a reasonable, auto-created subject (sub) attribute in the identity token. For most scenarios, this subject attribute is a good choice. For example, inside a workflow triggered on a push event, the subject would be repo:myOrg/myRepo:ref:refs/heads/main containing GitHub organization, repository, and branch name.
  • username-prefix: The prefix used for all identities issued by this OIDC provider. A unique prefix prevents unwanted impersonation of users inside your Kubernetes cluster.
  • required-claim: Multiple key-value pairs restrict which identities have access. Allow only workflows inside your organization and repository. You can restrict it to specific refs (e.g., branches) or workflow names. Without any restriction, any workflow on GitHub could access your cluster.
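Where exactly you set these flags depends on how your cluster is provisioned. As a minimal sketch, assuming a kubeadm-managed control plane (not covered further in this post), you would add them to the kube-apiserver static pod manifest:

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        # ...existing flags...
        - --oidc-issuer-url=https://token.actions.githubusercontent.com
        - --oidc-client-id=my-kubernetes-cluster
        - --oidc-username-claim=sub
        - --oidc-username-prefix=actions-oidc:
        - --oidc-required-claim=repository=myOrg/myRepo
        - --oidc-required-claim=workflow=deploy-kubernetes
        - --oidc-required-claim=ref=refs/heads/main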

After configuring the OIDC trust inside your Kubernetes API server, workflows can use the GitHub Actions-issued identity tokens to authenticate against the Kubernetes API server. Kubernetes extracts the user information from the identity token and uses the mapped Kubernetes username to determine authorization. With the configuration above, a token with the subject repo:myOrg/myRepo:ref:refs/heads/main maps to the Kubernetes username actions-oidc:repo:myOrg/myRepo:ref:refs/heads/main.

Note: Multiple OIDC issuers, e.g., separate ones for user accounts and automation, cannot be configured in the Kubernetes API server. However, the Gardener project provides a “Webhook Authenticator for dynamic registration of OpenID Connect providers”, which you can deploy inside a generic Kubernetes cluster.

If you have a Gardener Kubernetes cluster, the OIDC Webhook Authenticator is also available as a managed shoot service, and you can enable it by adding .spec.extensions[].type: shoot-oidc-service to your shoot configuration YAML, as sketched below.
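A minimal sketch of the relevant excerpt of the shoot specification (assuming no other extensions are configured) looks like this:

# Shoot resource (excerpt)
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
spec:
  extensions:
    - type: shoot-oidc-service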

With the OIDC Webhook Authenticator, you can create an OpenIDConnect resource to establish the trust relationship.

apiVersion: authentication.gardener.cloud/v1alpha1
kind: OpenIDConnect
metadata:
  name: actions-oidc
spec:
  issuerURL: https://token.actions.githubusercontent.com
  clientID: my-kubernetes-cluster
  usernameClaim: sub
  usernamePrefix: "actions-oidc:"
  requiredClaims:
    repository: myOrg/myRepo
    workflow: deploy-kubernetes
    ref: refs/heads/main

Providing Identities Access in Kubernetes

In Kubernetes, every user needs authorization via roles and role bindings to perform any action. Following the principle of least privilege, roles should grant as few permissions as possible.

The deployment of the demo application only requires permission to modify deployments and list pods.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: actions-oidc-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "watch", "list", "create", "update", "delete"]

After you create the role, bind it to the mapped workload user. The username consists of the username-prefix followed by the extracted username-claim attribute from the identity token.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: actions-oidc-binding
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: actions-oidc-role
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: actions-oidc:repo:myOrg/myRepo:ref:refs/heads/main

The workflow identity now has permission to perform actions inside the specific Kubernetes namespace.
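Before wiring up the workflow, you can sanity-check the binding from an admin context by impersonating the mapped username (a quick verification sketch, assuming your current user is allowed to impersonate):

kubectl auth can-i --list --namespace demo \
  --as "actions-oidc:repo:myOrg/myRepo:ref:refs/heads/main"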

Requesting an OIDC Token within a Workflow

After setting up Kubernetes to authenticate and authorize requests with GitHub Actions-issued identity tokens, you must modify an existing workflow or create a new one. The workflow must first request an identity token and then use it to authenticate against the Kubernetes API.

To be able to request identity tokens, you must explicitly add the id-token permission, either to a single job inside the workflow or to the complete workflow. The workflow name must match the value specified in the Kubernetes API server required-claim option if you have restricted your cluster to a specific workflow name.

name: deploy-kubernetes

permissions:
  id-token: write # Required to receive OIDC tokens

Request the identity token from the GitHub Actions OIDC issuer service via curl inside a workflow step. When requesting the token, you must specify an audience for which the identity token should be usable. The value must match the client-id configured in the Kubernetes API server.

jobs:
  oidc-demo:
    steps:
      - name: Create OIDC Token
        id: create-oidc-token
        run: |
          AUDIENCE="my-kubernetes-cluster"
          OIDC_URL_WITH_AUDIENCE="$ACTIONS_ID_TOKEN_REQUEST_URL&audience=$AUDIENCE"
          IDTOKEN=$(curl \
            -H "Authorization: Bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            -H "Accept: application/json; api-version=2.0" \
            "$OIDC_URL_WITH_AUDIENCE" | jq -r .value)
          echo "::add-mask::${IDTOKEN}"
          echo "::set-output name=idToken::${IDTOKEN}"

          # Print decoded token information for debugging purposes
          echo $IDTOKEN | jq -R 'split(".") | .[1] | @base64d | fromjson'

The above step requests the identity token for the audience my-kubernetes-cluster and exposes it to subsequent workflow steps as ${{ steps.create-oidc-token.outputs.IDTOKEN }}.

Executing Kubernetes Commands

Now use the generated identity token to authenticate Kubernetes API calls in subsequent steps. You can use the kubectl binary, preinstalled on the GitHub-hosted runners, to interact with your Kubernetes cluster by passing the token, API server address, and CA certificate. With these parameters set, you can execute Kubernetes API calls, like listing the current user’s permissions.

jobs:
  oidc-demo:
    steps:
      - name: Check Permissions in Kubernetes
        run: |
          kubectl \
          --token=${{ steps.create-oidc-token.outputs.IDTOKEN }} \
          --server="<API Server address>" \
          --certificate-authority="<path to API Server CA file>" \
          auth can-i --list --namespace demo

If you use multiple kubectl commands or reuse pre-built Kubernetes Actions, prefer creating a KUBECONFIG once instead of passing the options to each command. Use the Azure/k8s-set-context action to create a KUBECONFIG and automatically set it as the active KUBECONFIG for subsequent steps. Below are two workflow steps that create the KUBECONFIG based on a template and run the same kubectl command without additional options to show the current user’s permissions.

jobs:
  oidc-demo:
    steps:
      - name: Setup Kube Context
        uses: azure/k8s-set-context@v2
        with:
          method: kubeconfig
          kubeconfig: |
            kind: Config
            apiVersion: v1
            current-context: default
            clusters:
            - name: my-kubernetes-cluster
              cluster:
                certificate-authority-data: <API Server CA data>
                server: <API Server address>
            users:
            - name: oidc-token
              user:
                token: ${{ steps.create-oidc-token.outputs.IDTOKEN }}
            contexts:
            - name: default
              context:
                cluster: my-kubernetes-cluster
                namespace: demo
                user: oidc-token

      - name: Check permissions in Kubernetes
        run: kubectl auth can-i --list --namespace demo

In the demo setup, both variants should show that you have the correct permissions to the deployments and pods in the demo namespace:

Resources          Non-Resource URLs   Resource Names   Verbs
deployments.apps   []                  []               [get watch list create update delete]
pods               []                  []               [get watch list]
...

Your workflow can now access your Kubernetes cluster and deploy the demo application. To test a deployment from the workflow, add two additional steps, which create a hello-oidc deployment using the k8s.gcr.io/echoserver:1.4 image and list the started pods afterward.

jobs:
  oidc-demo:
    steps:
      - name: Deploy demo application
        run: kubectl create deployment hello-oidc --image=k8s.gcr.io/echoserver:1.4

      - name: Check the starting pods
        run: kubectl get pods

The above GitHub Actions workflow steps show different approaches to set up authentication with the created identity token and use it to deploy a demo application.

Using Composite Actions to Make It Reusable

The previous steps already allow robust interaction with a Kubernetes cluster. However, if you have multiple clusters or workflows, you can create a reusable composite GitHub Action to avoid duplicating the OIDC token retrieval and KUBECONFIG setup in each workflow.

name: Kubernetes KUBECONFIG with OIDC token
description: Use a GitHub-issued OpenID Connect token in a KUBECONFIG for a cluster

inputs:
  server:
    description: URL of the Kubernetes API Server
    required: true
  certificate-authority-data:
    description: Certificate Authority Data of the Kubernetes API Server
    required: false
    default: "null"
  namespace:
    description: Active Namespace to use in Kubernetes cluster
    required: false
    default: default
  audience:
    description: Audience of the OIDC token. Must match the configured OIDC client id of the Kubernetes cluster
    required: true

runs:
  using: composite
  steps:
    - name: Create OIDC Token
      id: create-oidc-token
      shell: bash
      run: |
        AUDIENCE="my-kubernetes-cluster"
        OIDC_URL_WITH_AUDIENCE="$ACTIONS_ID_TOKEN_REQUEST_URL&audience=$AUDIENCE"
        IDTOKEN=$(curl \
          -H "Authorization: Bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
          -H "Accept: application/json; api-version=2.0" \
          "$OIDC_URL_WITH_AUDIENCE" | jq -r .value)
        echo "::add-mask::${IDTOKEN}"
        echo "::set-output name=idToken::${IDTOKEN}"

        # Print decoded token information for debugging purposes
        echo $IDTOKEN | jq -R 'split(".") | .[1] | @base64d | fromjson'

    - name: Setup Kube Context
      uses: azure/k8s-set-context@v2
      with:
        method: kubeconfig
        kubeconfig: |
          kind: Config
          apiVersion: v1
          current-context: default
          clusters:
          - name: default
            cluster:
              certificate-authority-data: ${{ inputs.certificate-authority-data }}
              server: ${{ inputs.server }}
          users:
          - name: oidc-token
            user:
              token: ${{ steps.create-oidc-token.outputs.IDTOKEN }}
          contexts:
          - name: default
            context:
              cluster: default
              namespace: ${{ inputs.namespace }}
              user: oidc-token

Create the composite Action inside your repository, e.g., as .github/actions/k8s-set-context-with-id-token/action.yaml. Reference it in workflows in the same repository with uses: ./.github/actions/k8s-set-context-with-id-token. If you want to use it from other repositories, reference it with uses: $ORG/$REPO/.github/actions/k8s-set-context-with-id-token@main, replacing $ORG and $REPO with your GitHub organization and repository and replacing main with the branch or tag of the version of the Action you want to use.
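The resulting repository layout could look like this (the workflow file name deploy-kubernetes.yaml is only an assumption; any file name works as long as the workflow's name matches the required claim):

.github/
  actions/
    k8s-set-context-with-id-token/
      action.yaml              # the composite Action from above
  workflows/
    deploy-kubernetes.yaml     # workflow that uses the composite Action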

With the composite Action, you can condense your workflow significantly:

jobs:
  oidc-demo:
    steps:
      - name: Setup Kube Context
        uses: ./.github/actions/k8s-set-context-with-id-token
        # Alternatively, if the Action is in a different repository
        # uses: $ORG/$REPO/.github/actions/k8s-set-context-with-id-token@main
        with:
          certificate-authority-data: <API Server CA data>
          server: <API Server address>
          namespace: demo
          audience: my-kubernetes-cluster

Utilizing a Kubernetes Credential Plugin

This section was added on April 12, 2023

Recently, a reader reached out to me, expressing concerns about their lengthy Helm deployments failing due to the GitHub Actions OIDC token’s short 5-minute validity. They wondered if there was an alternative approach.

Upon further investigation, we discovered that Kubernetes natively supports credential plugins. These plugins are invoked before executing a kubectl command and can retrieve a dedicated token for that specific command. The plugin mechanism also ensures that tokens are cached and refreshed as needed. Utilizing a credential plugin not only makes token handling more robust but also keeps the token retrieval logic itself unchanged. Additionally, we can integrate the credential plugin directly into the KUBECONFIG, requiring only a single file.

Here’s how you can use the credential helper within the users section of a KUBECONFIG (the snippet is taken from the composite Action below, which is why it references ${{ inputs.audience }}):

users:
  - name: oidc-token
    user:
      exec:
        apiVersion: "client.authentication.k8s.io/v1"
        interactiveMode: "Never"
        command: "bash"
        args:
          - "-c"
          - |
            set -e -o pipefail
            OIDC_URL_WITH_AUDIENCE="$ACTIONS_ID_TOKEN_REQUEST_URL&audience=${{ inputs.audience }}"

            IDTOKEN=$(curl -sS \
              -H "Authorization: Bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
              -H "Accept: application/json; api-version=2.0" \
              "$OIDC_URL_WITH_AUDIENCE" | jq -r .value)

            # Print decoded token information for debugging purposes
            echo ::debug:: JWT content: "$(echo "$IDTOKEN" | jq -c -R 'split(".") | .[1] | @base64d | fromjson')" >&2

            EXP_TS=$(echo $IDTOKEN | jq -R 'split(".") | .[1] | @base64d | fromjson | .exp')
            EXP_DATE=$(date -d @$EXP_TS --iso-8601=seconds)
            # return token back to the credential plugin
            cat << EOF
            {
              "apiVersion": "client.authentication.k8s.io/v1",
              "kind": "ExecCredential",
              "status": {
                "token": "$IDTOKEN",
                "expirationTimestamp": "$EXP_DATE"
              }
            }
            EOF

Below is the complete reusable composite Action (with the same inputs as before), which you can use to implement this approach:

name: "Kubernetes Set Context with OIDC token"
description: "Use GitHub issued OpenId Connct token in KUBECONFIG for a cluster"
inputs:
  server:
    description: "URL of the Kubernetes API Server"
    required: true
  certificate-authority-data:
    description: "Certificate Authority Data of the Kubernetes API Server"
    required: false
    default: "null"
  namespace:
    description: "Active Namespace to use in Kubernetes cluster"
    required: false
    default: "default"
  audience:
    description: "Audience of the OIDC token. Must match the configured OIDC client id of the Kubernetes cluster"
    required: true

runs:
  using: "composite"
  steps:
    - name: Setup Kube Context
      uses: azure/k8s-set-context@v2
      with:
        method: kubeconfig
        kubeconfig: |
          kind: Config
          apiVersion: v1
          current-context: default
          clusters:
            - name: default
              cluster:
                certificate-authority-data: ${{ inputs.certificate-authority-data }}
                server: ${{ inputs.server }}
          users:
            - name: oidc-token
              user:
                exec:
                  apiVersion: "client.authentication.k8s.io/v1"
                  interactiveMode: "Never"
                  command: "bash"
                  args:
                    - "-c"
                    - |
                      set -e -o pipefail
                      OIDC_URL_WITH_AUDIENCE="$ACTIONS_ID_TOKEN_REQUEST_URL&audience=${{ inputs.audience }}"

                      IDTOKEN=$(curl -sS \
                        -H "Authorization: Bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
                        -H "Accept: application/json; api-version=2.0" \
                        "$OIDC_URL_WITH_AUDIENCE" | jq -r .value)

                      # Print decoded token information for debugging purposes
                      echo ::debug:: JWT content: "$(echo "$IDTOKEN" | jq -c -R 'split(".") | .[1] | @base64d | fromjson')" >&2

                      EXP_TS=$(echo $IDTOKEN | jq -R 'split(".") | .[1] | @base64d | fromjson | .exp')
                      EXP_DATE=$(date -d @$EXP_TS --iso-8601=seconds)
                      # return token back to the credential plugin
                      cat << EOF
                      {
                        "apiVersion": "client.authentication.k8s.io/v1",
                        "kind": "ExecCredential",
                        "status": {
                          "token": "$IDTOKEN",
                          "expirationTimestamp": "$EXP_DATE"
                        }
                      }
                      EOF
          contexts:
            - name: default
              context:
                cluster: default
                namespace: ${{ inputs.namespace }}
                user: oidc-token

Conclusion

To summarize, there are only three steps required to move from long-lived service account credentials to leveraging GitHub Actions OIDC in Kubernetes:

  1. Configure the Kubernetes cluster to trust the GitHub Actions OIDC issuer
  2. Authorize the workflow identity with a rolebinding inside the Kubernetes cluster
  3. Adjust your existing GitHub Actions workflow to fetch an identity token and use it in requests to Kubernetes

After an initial setup effort, using OIDC allows you to ditch static credentials inside your GitHub Actions workflows entirely and makes your CI/CD pipelines easier to manage and to keep secure.

Please share your experience and comments below. Have a fantastic day, and happy hacking!

Full Demo GitHub Actions Workflow

Below is the complete workflow file showcasing the above-described steps. First, you need to replace <API Server address> and <API Server CA data> with your values and create the .github/actions/k8s-set-context-with-id-token/action.yaml with the composite Action from above.

name: deploy-kubernetes

permissions:
  id-token: write # Required to receive OIDC tokens
  contents: read

on:
  push:
    branches: ["main"]
  workflow_dispatch:

jobs:
  oidc-demo:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Kube Context
        uses: ./.github/actions/k8s-set-context-with-id-token
        # Alternatively, if the Action is in a different repository
        # uses: $ORG/$REPO/.github/actions/k8s-set-context-with-id-token@main
        with:
          certificate-authority-data: <API Server CA data>
          server: <API Server address>
          namespace: demo
          audience: my-kubernetes-cluster

      - name: Check permissions in Kubernetes cluster with existing Kube context
        run: kubectl auth can-i --list

      - name: Deploy Demo Application
        run: kubectl create deployment hello-oidc --image=k8s.gcr.io/echoserver:1.4

      - name: Check the starting pods
        run: kubectl get pods
