
Intro


This is a recap of the session “Troubleshooting Node.js applications: Debugging in SAP Cloud Platform Cloud Foundry” that took place at the SAP Inside Track Belgium 2019 event.

The slide deck of the session is available on SlideShare.

Overview of the session recap blog series:

Part 1: Remote Debugging - Environment Preparation and General Techniques
Part 2: Remote Debugging in Production Environment
Part 3: Accessing Service Instance
Part 4: Time Travel Debugging


 

Demo application and environment


The application used to illustrate the debugging techniques is a Node.js application written in TypeScript and compiled (to be more precise, transpiled) with the standard TypeScript compiler into JavaScript code that can be interpreted by the JavaScript engine of the Node.js runtime. Since the application is developed in TypeScript and its code requires transpiling, source map file generation has been enabled in the TypeScript compiler configuration to make debugging more straightforward: source maps make it possible to map the originally developed TypeScript source code to the corresponding JavaScript code generated by the TypeScript compiler:
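A minimal tsconfig.json sketch with source map generation enabled (compiler options other than sourceMap are illustrative and may differ from the actual project):

    {
      "compilerOptions": {
        "target": "es2017",
        "module": "commonjs",
        "outDir": "dist",
        "sourceMap": true
      },
      "include": ["src/**/*"]
    }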



The application exposes two APIs: one returns the current temperature in a specified city using data provided by OpenWeatherMap and consumed via its public API, and the other is an echo service that sends back the originally submitted text with a predefined prefix.

Several application configuration properties (such as the URL of the consumed OpenWeatherMap API and the application key required to consume it) have been externalized as environment variables – see the sketch below.
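A condensed sketch of what such an application could look like (an Express-based sketch with hypothetical route paths, variable names, and environment variable names; the actual implementation is in the GitHub repository):

    import express from "express";
    import fetch from "node-fetch";

    // Configuration externalized as environment variables (names are illustrative)
    const OWM_URL = process.env.OWM_API_URL;
    const OWM_KEY = process.env.OWM_API_KEY;

    const app = express();

    // API 1: current temperature in a specified city, via the OpenWeatherMap API
    app.get("/temperature/:city", async (req, res) => {
      const response = await fetch(`${OWM_URL}?q=${req.params.city}&appid=${OWM_KEY}&units=metric`);
      const data = await response.json();
      res.json({ city: req.params.city, temperature: data.main.temp });
    });

    // API 2: echo service that sends back the submitted text with a predefined prefix
    app.get("/echo/:text", (req, res) => {
      res.send(`Echo: ${req.params.text}`);
    });

    app.listen(Number(process.env.PORT) || 8080);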

The application’s source code and accompanying deployment descriptors can be found in the GitHub repository.

The application has been deployed to the SAP Cloud Platform Cloud Foundry (SCP CF) environment.

Visual Studio Code has been used for developing and debugging the application.

 

High level overview of Node.js debugger and debuggee interaction


A simplified illustration of how a debugger interacts with a debuggee (a Node.js application) is provided in the picture below:



It is necessary for the Node.js application to run with an enabled Node Inspector, which listens on a specific port (the default is 9229, but it can be redefined) for incoming connections from a debugger.

When a debugger (to be more precise, an Inspector client) successfully attaches to an Inspector, a debugging session is established and a developer can proceed with traditional debugging steps. It should be noted that the Node.js runtime currently supports both the legacy V8 Debugging Protocol (enabled with the arguments --debug and --debug-brk) and its successor, the V8 Inspector Protocol (enabled with the arguments --inspect and --inspect-brk). Since the V8 Debugging Protocol is deprecated as of the most recent Node.js release (Node.js 12) and its use is discouraged except in reasonably exceptional cases, we will make use of the V8 Inspector Protocol as much as possible.
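For reference, the standard ways of starting a Node.js process with an enabled Inspector (the flags are standard Node.js options; host, port, and file names are illustrative):

    node --inspect app.js                 # Inspector on the default port 9229
    node --inspect=9230 app.js            # Inspector on a custom port
    node --inspect=0.0.0.0:9229 app.js    # Inspector bound to all network interfaces
    node --inspect-brk app.js             # additionally break before user code starts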

 

When the Node.js application is deployed to SCP CF and runs remotely, the debugger has to establish a connection to the Inspector over the network, using port forwarding through an SSH tunnel to the application container where the debugged Node.js application runs – this can be achieved using SCP CF capabilities. An enhanced illustration of debugger and debuggee interaction in such an environment is provided below:



 

Several preparatory steps are required to achieve the illustrated state:



 

Preparation of Node.js application for remote debugging


In the Node.js application manifest (package.json), define a script to start the application with an enabled Inspector (the argument --inspect / --inspect-brk):



It is also possible to modify the default ‘start’ script so that it starts the application with an enabled Inspector, but I would rather keep the default start script unchanged (starting the application with a disabled Inspector) and use a separate, dedicated script to start the application with an enabled Inspector. This way, it is possible to choose between starting the same application with an enabled or a disabled Inspector by running the corresponding script, as sketched below.
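A package.json sketch along these lines (the script names and the entry file path are illustrative):

    {
      "scripts": {
        "start": "node dist/server.js",
        "start:debug": "node --inspect dist/server.js"
      }
    }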

 

Next, in the Cloud Foundry deployment manifest (manifest.yml), use the defined script as the startup command of the deployed application:



This is required to instruct SCP CF that the deployed application shall be started not by running the default ‘start’ script, but by running the custom script defined earlier.

Alternatively, it is also possible to specify the application start command with the argument -c when deploying the application using the CF CLI command cf push.
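A manifest.yml sketch along these lines (the application name matches the demo application; other attributes are omitted, and the script name follows the earlier sketch):

    applications:
      - name: weather-demo
        command: npm run start:debug

The same effect can be achieved at deployment time with: cf push weather-demo -c "npm run start:debug".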

 

After this is done, the Node.js application is pushed to SCP CF – it is now deployed to the SCP CF environment and started with an enabled Inspector.

 

Preparation of Cloud Foundry environment for remote debugging


SCP CF provides the possibility to enable SSH access to an application container – when such access is enabled, the container can be accessed via SSH using the CF CLI.

By default, SSH access is enabled at the SCP CF space level, but it is not enabled at the application level.

These settings can be checked, and SSH access can be enabled or disabled, using the following CF CLI commands:

 

Space level

  • Check if SSH access is enabled at space level: cf space-ssh-allowed {space}

  • Enable SSH access: cf allow-space-ssh {space}

  • Disable SSH access: cf disallow-space-ssh {space}


Application level

  • Check if SSH access is enabled at application level: cf ssh-enabled {app}

  • Enable SSH access: cf enable-ssh {app}

  • Disable SSH access: cf disable-ssh {app}


 

After enabling SSH access to the application, it is essential to restart the application – otherwise, although the CF CLI command cf ssh-enabled will report that SSH access is enabled, an attempt to connect to the application container using SSH (with the corresponding CF CLI command, cf ssh) will fail. A typical sequence is sketched below.
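A typical command sequence (the space name is illustrative; the application name matches the demo application):

    cf allow-space-ssh dev          # allow SSH at the space level (usually already allowed)
    cf enable-ssh weather-demo      # enable SSH access for the application
    cf restart weather-demo         # restart so that the setting takes effect
    cf ssh-enabled weather-demo     # verify the application-level setting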

When SSH access has been enabled, but the application has not been restarted:



And after the application has been restarted:



 

After SSH access to the application has been enabled and the application has been restarted, it is possible to forward remote ports through an SSH tunnel and bind them to local ports of the machine where the CF CLI runs. Later in this blog series, we will have a closer look at how knowledge of some aspects of networking in SCP CF – in particular, an understanding of application security groups – can help leverage other capabilities of SSH tunnelling to SCP CF, such as accessing service instances running in it. For the time being, we will note that the Inspector of the running Node.js application is just one of the services that can be accessed through an SSH tunnel to the application container, and that the Inspector port used during startup of the Node.js application can be forwarded and bound to a local port.

It is essential to note that, when started, port forwarding through an SSH tunnel does not verify whether the forwarded port is open and whether a corresponding listener service is ready to accept requests. In other words, it forwards requests sent to a local port to a remote port through the SSH tunnel, but does not verify the readiness of a remote listener/service to accept requests on the specified remote port. As a result, it is technically possible to successfully start port forwarding for a remote port that is closed or not ready – an error will only be thrown when an actual request is forwarded to that remote port and hits a closed port, or a service that listens on that port but is not ready to accept requests.

The corresponding CF CLI command for port forwarding through an SSH tunnel is: cf ssh [-N] -L [bind host:]{bind port}:{host}:{port} {app}. The argument -N is used to skip starting an SSH shell in the application container right after setting up port forwarding. If the argument is provided, port forwarding through the SSH tunnel is enabled, and an SSH shell to the application container can still be started at a later time using the CF CLI command cf ssh.

For the demo, the CF CLI command to enable port forwarding of the default Inspector port of the Node.js application deployed and running in SCP CF is: cf ssh -N -L 9229:127.0.0.1:9229 weather-demo:



For the sake of simplicity, the default Inspector port (9229) is used and is mapped to the same bind port on the local machine. A bind port can be any unused port and does not need to be the same as the forwarded port.
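For example, to bind the remote Inspector port to a different local port (9230 here is an arbitrary unused port):

    cf ssh -N -L 9230:127.0.0.1:9229 weather-demo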

 

At this point, not only has the Node.js application been deployed to the SCP CF environment and started with an enabled Inspector, but the Inspector port has also been forwarded and bound to a local port. Hence, from the perspective of a locally running Node.js debugger, it looks as if the Inspector of the Node.js application were running locally and listening on the local port 9229 – usage of the specified local port can be verified with the command netstat executed on the local machine:
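For instance (the filtering command depends on the operating system):

    netstat -an | grep 9229       # macOS / Linux
    netstat -an | findstr 9229    # Windows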



 


 

Debugging configuration


The last preparatory step is to set up a debugging configuration that will allow a Node.js debugger to attach to the running process of the Node.js application. As mentioned earlier, with the help of port forwarding through the SSH tunnel, we now have a local port 9229 that listens for incoming requests; when requests are received on it, the CF CLI will forward them through the SSH tunnel to the Inspector port of the Node.js application that runs in the application container in SCP CF.
In Visual Studio Code, debugging configuration is maintained in launch.json, and a sample relevant debugging configuration is provided below:
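A launch.json sketch consistent with the settings discussed below (the configuration name is illustrative):

    {
      "version": "0.2.0",
      "configurations": [
        {
          "type": "node",
          "request": "attach",
          "name": "Attach to SCP CF app",
          "address": "127.0.0.1",
          "port": 9229,
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "/home/vcap/app"
        }
      ]
    }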



A debuggee (the Node.js application) is already running – hence, a debugger does not need to launch it, but needs to be instructed to attach to it (request = attach).

The port to which a debugger needs to attach is the local port that was used when starting port forwarding through the SSH tunnel for the Inspector port – in the demo, it is port 9229.

Special attention shall be drawn to the parameters localRoot and remoteRoot. These parameters specify the root directory of the application source code in the local location (workspace) and the root directory of the deployed application in the remote location (application container). They are essential for correct source code mapping between a debugger and a debuggee: if these parameters are not provided, a debugger will not be able to map the debugged source code located in the local workspace to the interpreted code located in the application container of SCP CF. While the setting for the local root is common, it is worth looking into how the value for the remote root is determined. To do so, let us connect to the application container via SSH (using the CF CLI command cf ssh) and explore the application environment.

It shall be noted that when a Node.js application is deployed to SCP CF, the pushed application is placed in the directory /app in the application container, for which the full path is $HOME/app, resolving to /home/vcap/app. As a result, we can list this directory and observe the Node.js application’s files that were pushed when deploying the application:
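For example:

    cf ssh weather-demo         # open an SSH shell in the application container
    ls -la /home/vcap/app       # list the pushed application files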



 

This can be checked in an alternative way, using the Node REPL (Read-Eval-Print-Loop) console. When a debugger is attached to a debuggee process, we can check a few attributes of the process – such as process.execPath and process.mainModule – to see which executable was used to start the Node process (the full path to the Node executable) and what the main module of the Node.js application is (the full path to the module), respectively:
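For example, the following expressions can be evaluated in the debug console (the paths in the comments are illustrative):

    process.execPath              // e.g. /home/vcap/deps/0/node/bin/node
    process.mainModule.filename   // e.g. /home/vcap/app/dist/server.js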



In the demo, I use the Visual Studio Code debugger, but it is perfectly possible to use alternative tools – such as Google Chrome DevTools:



 

Remote debugging in action


Now that we are done with the preparatory steps, we can launch the respective debugging configuration and get a debugging experience similar to debugging a locally running Node.js application:



Visual Studio Code is equipped with a feature-rich Node.js debugger, so I will just highlight a few of its features that come in handy.

 

Breakpoints and logpoints


Breakpoints are probably the most common technique that developers are used to when debugging programs, regardless of programming language, framework, and runtime. Node.js is not an exception here, and breakpoints are first-class citizens in a Node.js debugger. A specific flavour of breakpoints is conditional breakpoints, which pause execution of the debugged program only when a certain condition is met and the corresponding expression evaluates to true:
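For example, a condition entered for a breakpoint set inside the weather API handler might look like this (the variable names are hypothetical):

    city === "Antwerp" && temperature < 0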



 

Although it is doubtlessly helpful to pause a program and explore its internals – variables, call stack, etc. – there are practical cases when a developer only needs to observe the content of specific variables or evaluate certain expressions, and pausing the debuggee becomes an unwanted side effect in the troubleshooting process. In such cases, it is more convenient to use logpoints instead of traditional breakpoints.

A logpoint is a type of breakpoint that produces a specified log entry and outputs it to the debug terminal without suspending program execution. Essentially, a logpoint is a console.log() statement that is added at a specific location of the analyzed program at debug time, rather than at development time:
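In Visual Studio Code, a logpoint message can interpolate expressions enclosed in curly braces; a hypothetical message for the weather API handler could be:

    Temperature requested for {city}: {temperature}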



 

This can be verified by enabling a trace of the debugger and debuggee interaction, which can be achieved by activating the debug adapter trace in the corresponding debugging configuration:
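This corresponds to the trace parameter of the debugging configuration in launch.json (shown here added to the attach configuration sketched earlier):

    {
      "type": "node",
      "request": "attach",
      "name": "Attach to SCP CF app",
      "port": 9229,
      "trace": true
    }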





As can be observed, for the location of a logpoint that was set in the TypeScript source code of the application, the respective mapped location in the generated JavaScript code was determined in the local workspace (thanks to the source map files generated by the TypeScript compiler) and translated to the matching location of that same generated JavaScript code in the remote application container where the application runs (thanks to the local root and remote root settings in the debugging configuration).

 

Watched variables


There can be a lot of variables and properties declared by the time certain processing steps are reached, whereas a developer might be interested in continuously observing only a few of them. In such cases, it is worth using watch expressions to get access to the content of a selection of variables and to evaluate expressions:



 

Outro


In the next blog of this series, we will look into how the described remote debugging technique can be used when troubleshooting Node.js applications that have been deployed to a production tenant of SCP CF, or when some of the prerequisites (preparatory steps) described in this blog cannot be fulfilled.

 

There are helpful materials in SAP Community about the usage of SSH and port forwarding through SSH in a Cloud Foundry environment that illustrate this technique with some other examples and that are worth further reading.

If you want to familiarize yourself with more hands-on examples of how the technique described in this blog can be applied to other application runtimes – in particular, the Java runtime – I would strongly recommend reading the blog Max’s Adventure in SAP Cloud Platform: Debug your Java application in Cloud Foundry by iinside, where he provides a detailed explanation and demonstration of how a Java application deployed to SCP CF can be debugged remotely.