Optimising Istio sidecar usage in Kyma runtime
Kyma & Istio
The Istio service mesh is part of the Kyma runtime. What does this mean?
It means that every workload you run in the Kyma runtime has a sidecar proxy container running next to the application container. This sidecar container is istio-proxy, and it:
- Proxies inbound and outbound traffic for your application
- Handles service discovery
- Solves various security concerns out of the box, such as ensuring secure mTLS communication inside the cluster and enabling access management
- Takes care of non-functional aspects such as monitoring, rate limiting, traffic management, and so on
You can discover the various benefits and features of Istio on the official page. If you have worked with Hystrix or similar libraries in the past, think of Istio as providing the same features and a lot more. On top of that, the logic runs in a separate sidecar container inside the same pod, so your application logic does not need to implement the non-functional concerns of traffic management, network security, or observability.
This also implies:
- A second container (istio-proxy) runs alongside your application container.
- It consumes additional memory and CPU.
This blog focuses on providing guidelines for optimizing the resource consumption of the Istio proxy. It is based on the Istio at Scale: Sidecar blog and tailored to the Kyma runtime.
By default, Istio enables communication from each service in the service mesh to every other service in the mesh.
This implies that each sidecar proxy receives the configuration for every other service in the mesh, and this configuration contains a whole lot of Istio resource details. The same applies when a service scales or a pod restarts: istiod always propagates the configuration about all other services to the istio-proxy sidecar.
This is not a problem when there are only a few services in your Kyma runtime. However, as your microservice architecture expands and the number of services grows, the following effects manifest:
- Increased network traffic and CPU usage for the Istio control plane (istiod), since it now has to sync more configuration to all sidecar proxies.
- As the number of services increases, so does the configuration to be stored in each sidecar proxy, which increases its memory usage.
- Since the control plane (istiod) is doing more work, it becomes slower to propagate changes to services.
Typical use cases
It is very unlikely that your workloads or business logic need network access to all the other services running in the Kyma runtime. In most use cases, the logic is:
- Mostly confined to a single namespace
- Sending events
- Accessing external services
Istio recognizes this as a valid requirement and provides configuration options to fine-tune which services a proxy needs to know about.
There is an Istio resource called Sidecar. You can use this resource to specify what configurations the proxy should get from the control plane.
In rudimentary terms, it is a way of saying:
There are 500 Services in my Kyma runtime, but my application/scenario needs to access only 5.
I need details only about those 5.
An example of a Sidecar configuration:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace # no workload selector, applies to all pods in the namespace
spec:
  egress:
  - hosts:
    - "./*"              # Access all services in this namespace
    - "istio-system/*"   # Access all services in the istio-system namespace
    - "kyma-system/eventing-event-publisher-proxy.kyma-system.svc.cluster.local" # Access the eventing-event-publisher-proxy service to publish events
```
The same can also be done via UI:
Note: You can also configure the workload selector to apply the configuration to specific workloads. Check out the official Istio documentation for further details.
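As a sketch of that option, the manifest below narrows the configuration to the proxies of specific pods. The names `my-app-sidecar`, `my-namespace`, and the `app: my-app` label are illustrative placeholders, not values from this post:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: my-app-sidecar    # illustrative name
  namespace: my-namespace
spec:
  workloadSelector:
    labels:
      app: my-app         # applies only to pods carrying this label
  egress:
  - hosts:
    - "./*"               # services in the same namespace
    - "istio-system/*"    # services in the istio-system namespace
```

Pods in the namespace that do not match the selector keep falling back to the namespace-wide or mesh-wide Sidecar configuration, so you can combine a broad default with stricter per-workload rules.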
Summary
- Inspect and understand the network and access requirements of your application.
- Configure the Istio Sidecar resource accordingly to optimize istio-proxy resource usage in the Kyma runtime.
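A related pattern worth knowing: per the Istio documentation, a Sidecar resource without a workloadSelector created in the root namespace (istio-system in a default installation) acts as the mesh-wide default for all namespaces that do not define their own. A sketch, assuming a default Istio installation:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: istio-system # root namespace; assumes a default Istio installation
spec:
  egress:
  - hosts:
    - "./*"               # each workload sees only services in its own namespace
    - "istio-system/*"    # plus the Istio control-plane namespace
```

Namespace-level Sidecar resources, such as the example shown earlier, override this default for their namespace.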