Aydin Ozcan

Kyma’s Transition to Modular Architecture

Kyma is transitioning to a modular architecture, moving away from its current component-based structure. With this change, the software that Kyma manages on Kubernetes is no longer all installed and activated automatically from the start.

This transformation aims to conserve resources when operating a Kyma cluster, especially when certain components are unnecessary for your specific use case. This streamlined approach also simplifies Day 2 operations, allowing you to activate only the components you truly require.

Currently, two Kyma components have started this modular journey:

  • BTP Operator: This module facilitates the integration of SAP BTP services within your Kubernetes cluster.
  • Keda: Scales your applications based on events.

Further down the line, additional components will adopt this modular structure:

  • Istio: Service mesh with Kyma-specific settings.
  • Serverless: Empowers you to execute code without concerning yourself with the underlying infrastructure.
  • Telemetry: Collects observability data (logs, traces, metrics).
  • Eventing & NATS: Manages the distribution of events.
  • Application Connector: Connects SAP and non-SAP systems.
  • API Gateway: Manages API traffic.

Impact on Us

Let’s consider the case of the BTP Operator: if you don’t activate this module, you can’t create service instances for SAP BTP services such as ‘xsuaa’ and ‘destination’, and creating service bindings for those services will fail as well.
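For example, applying a ServiceInstance manifest like the sketch below would fail without the module, because the ServiceInstance CRD itself is only installed along with the BTP Operator (the instance name, namespace and plan here are purely illustrative):

apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: my-xsuaa          # illustrative name
  namespace: default
spec:
  serviceOfferingName: xsuaa
  servicePlanName: application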

In addition, running a command like the following to obtain the cluster ID:

kubectl get cm sap-btp-operator-config -n kyma-system -o jsonpath='{.data.CLUSTER_ID}'

won’t yield any result if the BTP Operator is not activated.
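A quick way to check whether the module is active is to look for the resources it installs, for example its CRDs and the ConfigMap above (a rough check; exact resource names can vary between module versions):

kubectl get crd serviceinstances.services.cloud.sap.com servicebindings.services.cloud.sap.com
kubectl get cm sap-btp-operator-config -n kyma-system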

Activating Modules

To activate these modules:

  1. Once your Kyma cluster is provisioned, open the Kyma dashboard and go to the ‘kyma-system’ namespace.
  2. In the left navigation panel, locate ‘Kyma’ at the bottom.
  3. Select the ‘default’ resource.
  4. In the ‘default’ resource, press the edit button; a dialog opens with options to activate specific modules.
  5. Keep the ‘regular’ channel if you are provisioning for production.
  6. It can take up to 5 minutes for the modules to activate, and you may need to log out of the dashboard and back in; you can also follow the activation from the terminal, as shown below.
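To follow the activation from the terminal, you can watch the state of the Kyma resource (a small sketch, assuming the resource is named ‘default’ as on BTP Kyma):

# overall state of the Kyma resource: Processing while modules are being installed, Ready when done
kubectl get kyma default -n kyma-system -o jsonpath='{.status.state}'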

There are a couple of other options for enabling modules via the CLI or configuration YAML files.
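For the YAML route, the enabled modules are part of the spec of the Kyma resource, so you can add them there directly (a sketch of the relevant part of the spec; the exact structure may differ between Kyma versions):

kubectl edit kyma default -n kyma-system

Then list the modules you need under spec, for example:

spec:
  channel: regular
  modules:
    - name: btp-operator
    - name: keda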

If you’re working with the Kyma CLI and you’ve provisioned your own cluster in an environment like k3s, the following command should work:

kyma alpha enable module btp-operator --channel fast --wait

However, to make this work with the BTP Kyma runtime, you need to pass the Kyma resource name as well:

kyma alpha enable module btp-operator --channel fast --wait --kyma-name default

The Kyma CLI expects the Kyma resource to be named “kyma-default” by default, but BTP Kyma names it “default” instead.
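If you are not sure what the Kyma resource is called in your cluster, you can list it first (assuming it lives in the ‘kyma-system’ namespace):

kubectl get kyma -n kyma-system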

This shift towards modularity gives you more control and efficiency: you can tailor Kyma precisely to your requirements and, more importantly, reduce costs.

Comments

Jeremy Harisch

If you want to use the CLI to enable a Kyma module on the BTP Kyma runtime, you can pass the Kyma CR name using the `--kyma-name` flag:

      kyma alpha enable module btp-operator --channel fast --wait --kyma-name default

Aydin Ozcan (Blog Post Author)

Thanks Jeremy. As I stated in the blog, the Kyma CLI works fine with the open-source version of Kyma.

I'll update the blog to add the --kyma-name default switch so it works with BTP Kyma.

Even though it gives the error below, it still manages to enable the module on BTP Kyma.

      - Successfully connected to cluster (1.282s)
      - Modules patched! (332ms)
      X kyma did not get ready: All attempts fail:
      #1: waiting for all modules to become ready at 2023-08-29 18:44:41 +0300 +03
      #2: waiting for all modules to become ready at 2023-08-29 18:45:09 +0300 +03 (58.028s)
      Error: All attempts fail:
      #1: waiting for all modules to become ready at 2023-08-29 18:44:41 +0300 +03
      #2: waiting for all modules to become ready at 2023-08-29 18:45:09 +0300 +03

      It seems like there might be an issue with timeouts occurring before the command returns.