[SIT Belgium 2019 Recap] Debugging Node.js Applications in SCP CF. Part 3: Accessing Service Instance
Intro
This is a recap of the session “Troubleshooting Node.js applications: Debugging in SAP Cloud Platform Cloud Foundry” that took place at the recent SAP Inside Track Belgium 2019 event.
The slide deck of the session is available on SlideShare.
Overview of the session recap blog series:
Part 1: Remote Debugging – Environment Preparation and General Techniques
Part 2: Remote Debugging in Production Environment
Part 3: Accessing Service Instance
Part 4: Time Travel Debugging
Port forwarding through an SSH tunnel to an application container in SCP CF: a look at the networking aspects
The first blog of this series described how port forwarding can be set up through an SSH tunnel to an application container in SCP CF. Now we will take a closer look at what happens under the hood and how we can use this knowledge to access other services within SCP CF – such as service instances.
As mentioned earlier, assuming SSH access has been enabled for the application in SCP CF, port forwarding for the Inspector port of a Node.js application can be set up using the CF CLI command cf ssh:
cf ssh [-N] -L [bind host:]{bind port}:{host}:{port} {app}
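For example, to forward the default Node.js Inspector port 9229 – here assuming the application name weather-demo (the same application used later in this blog) and an Inspector listening on its default port inside the container – the command could look like this:
cf ssh -N -L 9229:localhost:9229 weather-demo
# a local debugger can now attach to localhost:9229 on the developer's machine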
With the above setup in place, when a debugger on the developer's local machine attempts to attach to the Inspector of the Node.js application running in SCP CF, the following communication flow takes place from a network perspective:
A debugger accesses an exposed port of the SSH proxy, which invokes the SSH daemon of the target application container on SCP CF. Here, we use that SSH connection as a tunnel and can forward any port through it. In the above example, we forward the port of the Inspector of the Node.js application that runs locally within the application container. This does not necessarily need to be a local port within the application container (that is, a service that runs locally and listens on a local port of the application container) – it can be any published port of any container that is accessible from the application container we have SSH access to.
This feature becomes very handy when we need to access service instances that have been created in SCP CF. It is not possible to enable SSH access to a service instance container the way it can be done for an application container, but since service instance containers are reachable over the network from application containers, we can access the services exposed by a service instance through an SSH tunnel set up to an application container in SCP CF.
In this blog, I will use a MongoDB service instance in SCP CF as an example, and we will see how we can access it using an SSH tunnel to an application container in SCP CF – the same approach can be applied to other services in SCP CF.
Access to a service instance through an SSH tunnel to an application container in SCP CF
The diagram provided above, which depicted the communication flow from a debugger to an Inspector port of a Node.js application, can be extended to illustrate the communication flow from a service client (such as a MongoDB client) that runs on a local machine to a service instance (such as a MongoDB server) that has been created and runs in SCP CF:
Following general principles of containerization, a service instance runs within its own container (with an allocated container IP assigned to it) and exposes certain ports that are required to interact with the service. For example, a MongoDB service exposes port 27017, the default listener port of the MongoDB primary daemon. Neither the container IP nor the exposed service port is accessible over the network by containers that run on other hosts – the exposed port is published by the container to the host where the container runs, and the container becomes reachable via that host's IP and the published port when connections to it are made from other hosts of SCP CF.
For instances of some SCP CF services, it is possible to retrieve this kind of information from the corresponding service instance dashboards – MongoDB is a relevant example here. Let us create a new MongoDB service instance and put theory into practice:
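The instance can be created from the SCP CF cockpit or with the CF CLI – a minimal sketch, where the service name mongodb, the plan v3.0-dev and the instance name mongodb-demo are assumptions that depend on what the marketplace in your environment actually offers:
cf marketplace                                   # check the available service offerings and plans
cf create-service mongodb v3.0-dev mongodb-demo  # service name, plan and instance name are assumptions
cf service mongodb-demo                          # check the provisioning status of the new instance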
Now we can navigate to the MongoDB dashboard of that service instance. Here, several properties are of interest in the context of the demo:
- Service Information > Service Instance – in particular, the GUID,
- Container Information > Environment Variables – in particular, the MongoDB database name, user name and password,
- Container Information > Network Settings,
- Container Information > Exposed Ports.
At this point, we already possess the information required to establish a connection from a local machine to this MongoDB service instance. Before we proceed, the Node.js application has to be restarted – why and when an application container restart is necessary is described later in this blog.
Assuming SSH access is still enabled for the Node.js application in SCP CF, we can set up port forwarding through the SSH tunnel for the MongoDB server's port using the CF CLI command cf ssh – following the example with the created MongoDB instance, the complete command is:
cf ssh -N -L 27017:10.11.241.46:41513 weather-demo
After this is done, let us use a MongoDB client to test the connection. I use Robo 3T – any other MongoDB client that is compatible with the version of the MongoDB server we connect to will work just as well.
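As a command-line alternative, the connection can also be tested with the mongo shell – a minimal sketch, assuming the mongo shell is installed locally and using the database name, user name and password observed in the service instance dashboard (shown as placeholders below):
# connect through the locally forwarded port 27017
mongo --host localhost --port 27017 -u {user name} -p {password} --authenticationDatabase {database name} {database name}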
Note that the Node.js application is not really required here – we only need an application container that can be used for SSH access, but not the application that runs in this container.
Alternatively, connection and authentication information (as well as other service instance properties) can be obtained in either of two ways (see the CLI sketch after this list):
- Bind the service instance to an application that has been deployed to SCP CF. Connection and authentication information for the service instance can then be retrieved from the application's VCAP_SERVICES environment variable.
- Create a service key for the service instance. Connection and authentication information exposed for the service instance can then be retrieved from the service key.
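Both options can be exercised with the CF CLI – a minimal sketch, assuming the mongodb-demo instance and the weather-demo application used above, and a made-up service key name mongodb-demo-key:
# option 1: bind the instance to an application and read VCAP_SERVICES
cf bind-service weather-demo mongodb-demo
cf env weather-demo
# option 2: create a service key and read the credentials it exposes
cf create-service-key mongodb-demo mongodb-demo-key
cf service-key mongodb-demo mongodb-demo-key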
Why and when is an application container restart required?
Some resources suggest restarting an application after binding a service instance to it. Although this is technically fine, strictly speaking, we can restart the application right after service instance creation to achieve the required effect – even if the binding has not yet been created. Let us look at the mechanics under the hood to understand why that is the case.
SCP CF employs the concept of application security groups, which can be defined as collections of egress allow rules that describe destinations (protocol, IP addresses or ranges, and ports) to which applications (or, to be more precise, application containers) are allowed to send traffic.
By lifecycle, application security groups are divided into:
- Staging application security groups – egress rules that have to be in place for an application staging process to execute successfully with respect to accessing external resources. For example, when a Node.js application is staged, it needs to download dependency modules from NPM repositories – hence, it needs to be able to access those repositories over the network,
- Running application security groups – egress rules that have to be in place for an application to access required external resources at runtime. Commonly, running application security groups are less permissive than their staging counterparts.
By scope, application security groups are divided into:
- Platform-wide,
- Space-scoped.
Application security groups can be browsed using the CF CLI command cf security-groups or using the SCP CF cockpit.
After we created the MongoDB service instance, the list of application security groups was extended by one more space-scoped group, which was automatically created by SCP CF for the MongoDB service instance.
Note that the group name contains a reference to the service instance GUID that we observed when browsing through the service instance properties in its dashboard.
For each available application security group, it is possible to list the egress rules it contains – this can be done using the CF CLI command cf security-group {application security group} or using the SCP CF cockpit.
The application security group that was created for the MongoDB service instance contains two rules, one of which exposes the MongoDB server port.
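To illustrate the shape of such rules – a hedged sketch of how an equivalent space-scoped group could be defined manually with the CF CLI; the destination IP and port are the values observed in the service instance dashboard earlier, while the group name, file name, org and space are placeholders:
# rules file: allow egress TCP traffic to the MongoDB service instance container
cat > mongodb-demo-asg.json << 'EOF'
[
  {
    "protocol": "tcp",
    "destination": "10.11.241.46",
    "ports": "41513",
    "description": "allow access to the MongoDB service instance"
  }
]
EOF
cf create-security-group mongodb-demo-asg mongodb-demo-asg.json
cf bind-security-group mongodb-demo-asg {org} {space}    # bind the group to a space as a running security group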
It shall be noted that newly created application security groups – regardless of how they have been created (manually by an administrator or automatically by the SCP CF platform) – do not affect already started applications. As a result, if an application was started before the corresponding application security group was created, the destinations maintained in the rules of this group will not yet be accessible to the application.
This can be checked by accessing the application container over SSH and testing the network connection from the application container to the MongoDB server that runs in the service instance container – at this point, the network connection test fails.
Now we restart the application container by triggering a restart of the corresponding application, access the application container over SSH again and repeat the network connection test against the same MongoDB server – this time the attempt is successful.
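A hedged sketch of this restart-and-retest sequence, assuming the weather-demo application and the network settings observed earlier; nc may or may not be available in the application container, so a plain bash TCP probe is shown as an alternative:
cf restart weather-demo      # the new container instance picks up the newly created application security group
cf ssh weather-demo          # open an interactive shell in the application container
# inside the application container:
nc -zv 10.11.241.46 41513                                          # connectivity test with netcat, if available
(echo > /dev/tcp/10.11.241.46/41513) && echo "port is reachable"   # alternative: plain bash TCP probe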
The ability of the application container to establish a connection to the host and port where the service instance container runs and listens is a good indicator that port forwarding through the SSH tunnel can be set up for the service instance following the technique described earlier.
It is worth mentioning here that Cloud Foundry now provides an alternative to application security groups – namely, dynamic egress policies, which aim to remove some of the restrictions faced when working with application security groups: dynamic egress policies take immediate effect (no need to restart the application) and make it possible to configure a policy on a more granular, per-application level. Dynamic egress policies are currently in beta, so stay tuned and follow updates in this space.