Maxim Afonin

How to monitor PostgreSQL with SAP Focused Run.




In this article we are going to set up Open Component Monitoring, available in the SAP Focused Run system. We assume that you already have FRUN installed and configured, so I will not spend time explaining the basics. If you want to make yourself more familiar with the product, please visit the expert portal space. The latest feature pack release adds additional features to FRUN, so we will use this version, as the OS script execution feature is nicely presented there 🙂

Working Environment

We have one test server running SUSE Linux 12.2, where SAP Hybris is installed. SAP Hybris runs on the PostgreSQL DB that we would like to monitor. The Host Agent is already installed and the system is registered in FRUN as a host:


PostgreSQL version is 9.4.19. Our test landscape is the following:

Our Focused Run system is 2.0 with FP01.

Technical Background

PostgreSQL is not a DB that SAP supports “out of the box”, which means we cannot connect it with the standard FRun capabilities – Host Agent (SHA) and secure storage.

So the best way to collect the required metrics is to use standard PostgreSQL tools, which we will wrap into a bash script, and then pass the outcome to FRun via Open Component Monitoring.

We will run SQL queries against the PostgreSQL statistics tables with the help of the psql tool.

Custom scripts are executed by the Host Agent using the ExecuteOperation data collector. In general the command syntax looks like this:

../../exe/saphostctrl -function ExecuteOperation -name [conf file name] [passing parameter]=[value1]

The whole architecture:


  1. We have to prepare a configuration file that contains the execution parameters for step #4
  2. Once the file is created, we can start with the configuration from the FRun tool side
  3. The metric is created, and the configuration is distributed to the SDA
  4. The SDA uses the Host Agent in order to collect metrics according to the FRun config
  5. The Host Agent executes the command saphostctrl -function ExecuteOperation
  6. The custom script operations are executed
  7. The data is returned back to the Host Agent
  8. Then to the SDA
  9. The metric is transformed into the FRun metric format
  10. The metric is sent to FRun

Prepare PostgreSQL

In order to enable monitoring capabilities we have to prepare our DB.

1. Adjust postgresql.conf file by adding the following parameters:

track_activities – Enables the collection of information on the currently executing command of each session, along with the time when that command began execution. This parameter is on by default. Note that even when enabled, this information is not visible to all users, only to superusers and the user owning the session being reported on, so it should not represent a security risk. Only superusers can change this setting.

track_counts – Enables collection of statistics on database activity. This parameter is on by default, because the autovacuum daemon needs the collected information. Only superusers can change this setting.
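Both parameters are on by default, but it is worth setting them explicitly. A minimal postgresql.conf fragment could look like this:

```
# postgresql.conf – statistics collection (both are on by default)
track_activities = on
track_counts = on
```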

2. In order to allow execution of SQL from the CLI, we have to adjust the pg_hba.conf configuration to allow establishing connections to the postgres service. Basically, this config is responsible for the client authentication rules. Additional info on how to restrict access can be found on the official PostgreSQL wiki page.
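As an illustration only (the address and authentication method below are assumptions and should follow your own security rules), a pg_hba.conf entry allowing local password-based connections to our hybris database could look like this:

```
# TYPE  DATABASE  USER      ADDRESS       METHOD
host    hybris    postgres  127.0.0.1/32  md5
```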

Once pg_hba.conf is adjusted, do not forget to reload the service config with the pg_ctl reload command:

OK, done. Now we can check that we can execute psql commands as the postgres user. For example, let us request the DB uptime:

psql -qAtX -h <host> -p 5432 -U postgres -d hybris -c "select date_part('epoch', now() - pg_postmaster_start_time())::int"

It should return a number of seconds. Details on the functions that we used to calculate the time – date_part, now(), and pg_postmaster_start_time() – can be found in the PostgreSQL documentation.

Prepare Config File

Before we start to create metrics on the FRun side, we have to create a config file that will be used by the Host Agent (SHA). This config provides the parameters for the ExecuteOperation collector. The config file should be placed in the SHA operations folder (typically /usr/sap/hostctrl/exe/operations.d):


For example, here is our conf file for the postgres scripts:

Inside we have to specify parameters. In our case this will be:

Command: /usr/sap/hostctrl/psql_scripts/ $[METRIC_NAME] $[ENTITY]
Description: Execute psql wrapper
ResultConverter: flat
Mode: Sync
Platform: Unix

Now about the parameters:

Command: what should be executed. If our script takes parameters, then we should also pass them via variables in the config file. In our case these are $[METRIC_NAME] and $[ENTITY].

Description: just a description of our operation. Nothing more and nothing less 🙂

ResultConverter: how the output should be converted; we use the flat converter.

Mode: how you want to execute the operation. In our case, since it is a metric, we need to execute it in Sync mode.

Platform: the type of platform where the scripts will be executed.

Prepare PSQL Wrapper

As a next step we need to build the custom scripts that will connect to our DB. In my example I will create one big script with a case switch inside, so that based on the selected metric, different commands are executed. My postgres script collection can be found here on GiT. These are quite basic metrics that can be enhanced or replaced. A short example of how it should work:

# Variables
metricname=$1 # name of the metric to collect
input=$2 # additional parameter, mainly to specify either a DB name or a table name
connection="-h <host> -p 5432 -U postgres -d hybris"

# Metric Functions
CacheHit() {
psql -qAtX $connection -c "select round(sum(blks_hit)*100/sum(blks_hit+blks_read), 2) from pg_stat_database"
}
UpTime() {
psql -qAtX $connection -c "select date_part('epoch', now() - pg_postmaster_start_time())::int"
}

# Here we will create the selection tree.
case "$metricname" in
  CacheUsage)
    printf '{"type":"integer", "name":"CacheUsage", "value":"%.0f"}\n' "$(CacheHit)" ;;
  UpTime)
    printf '{"type":"integer", "name":"UpTime", "value":"%s"}\n' "$(UpTime)" ;;
esac

echo "00"

As you can see, the script returns values in JSON format. Why do we need to spend our time on this? Here is the answer:

The SHA data collector can consume either exit code values or values in JSON format. In case of exit code usage we do not need to return the result as JSON: the custom operation checks the result of the script, and the script result is evaluated by its exit code. This use case only supports the usage of rating and text, and for each script execution only one metric can be defined.

EXITCODE = 0 is mapped to green

EXITCODE = 1 is mapped to yellow

EXITCODE >= 2 is mapped to red
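The exit-code evaluation above can be sketched as a small helper function (the function name is my own, not part of the SHA interface):

```shell
#!/bin/bash
# Map a script's exit code to an alert rating, following the
# EXITCODE conventions listed above.
rating_for_exitcode() {
  case "$1" in
    0) echo "green" ;;
    1) echo "yellow" ;;
    *) echo "red" ;;
  esac
}

rating_for_exitcode 0   # green
rating_for_exitcode 1   # yellow
rating_for_exitcode 3   # red
```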

However, we decided to return actual values, since this allows us to build thresholds and graphs. Uhh… that is not really obvious, but I will try to show some examples below to make it clear. So, here is the JSON structure that should be returned if we want to capture a real value:


{"type":"string", "name":"HealthCheck", "value":"Health check failed: Failure in flux capacitor", "rating":"3"}

{"type":"integer", "name":"ElapsedTime", "value":"4603125"}

Ok, so, we have the config file and we have the custom script collection; let us check that we can collect a metric via the Host Agent:
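Following the ExecuteOperation syntax shown earlier, a test call could look like this (the conf file name psql_wrapper and the parameter values are illustrative assumptions):

```
/usr/sap/hostctrl/exe/saphostctrl -function ExecuteOperation -name psql_wrapper METRIC_NAME=UpTime ENTITY=hybris
```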

Great, you’re breathtaking! Values are returned and you can see your custom metrics. And the last part – we should set this up in FRun, and then we can create dashboards, set up alerts, whatever…

Prepare Focused Run

I assume that the required host is already registered in FRun. So now we should open the Open Component Monitoring tile from the home screen. Then we should enable a use case:

Ok, done. More details can be found on the Focused Run Expert Portal.

Now it is time to set up the metrics:

Done, now our metric is created. We should wait for a while (the config is distributed to the agent) and then the metric will be collected. As a next step we can create a dashboard in the System Analysis tool. Details on how to create a custom metric can be found here.

P.S. Custom scripts can also be used in Solution Manager :), for details please follow this link.
