Martin Pankraz

BTP private linky swear with Azure – keep the auditor happy with Private Link Service

This post is part 6 of a series sharing service implementation experience and possible applications.

Find the table of contents and my curated news regarding series updates here.

Looking for part 7?

Find the associated GitHub repos here.

Dear community,

Continuing the implementation journey of the BTP Private Link Service (PLS), we will take a closer look at limiting the exposure of your HTTP endpoints and RFCs. Your ERP ships with roughly 40k RFCs that can be remote-enabled and a similar magnitude of HTTP endpoints. Rumour has it you won’t be using all of them 😉

Fig.1 Pinkie hitting only tip of SAP endpoints iceberg, iceberg image source here

So, if left unsecured they pose an attack surface. And since we are connecting from a shared environment in BTP, good security practice requires enforcing guard rails. Your virtual network on Azure is private, and so is the tunnel exposed by the PLS. However, your environment on BTP itself – before traffic enters the PLS – is still a multi-tenant setup isolated by CloudFoundry security groups.

Back in the day the SAP Cloud Connector (SCC) in combination with SAP Unified Connectivity (UCON) gave you the power to expose to BTP only the web-based endpoints and RFCs that you wanted. For many of you this is an audit-relevant topic: proving your security measures and reducing the attack surface. UCON ships as of NetWeaver 7.40 and is enabled by default. You need to configure it before you can start using it, though.

SAP Web Dispatcher (WDisp), WebSocket RFC (as of S/4HANA 1909) + SAP Java Connector (JCo) and UCON enable the same capability for PLS.

Before we start, bear in mind that this post offers guidance on limiting access through PLS with standard SAP means – as you would apply them with the SAP Cloud Connector – not an overarching, comprehensive security practice to protect your backend from outside access. I will share SAP resources and documentation to get you started on the latter.


SAP offers the mentioned security features and governs and controls them. Security best practices etc. are therefore driven by SAP. I am not a security consultant, but since the guidance is based on standard SAP security components, I still consider it sound. SAP implemented PLS on top of the Azure Private Link Service, exposing it as a CloudFoundry service in BTP for layer-4 connectivity into your private Azure virtual network. Azure provides the capability, but SAP owns the product built with it.

Let’s have a look at the moving parts

We need to tackle HTTP communication and WebSocket RFC slightly differently. Both sit on layer 7 of the ISO/OSI protocol stack but are defined differently. This matters because the SAP WDisp allows URL filtering for HTTP, but not for RFC.

HTTP at SAP means OData in most cases, which is why OData is discussed throughout this series. The guidance for OData can also be applied to plain HTTP or REST, since those are less specific protocols than OData.

Securing OData is straightforward with URL filtering

The communication setup is done in three places and relies on standard SAP functionality independent of the PLS.

  1. The OData service needs to be published/enabled on the source system via the SAP Gateway. Use transactions /IWFND/MAINT_SERVICE, SMGW, SICF etc. to handle this.
  2. The SAP WDisp and the ICM on the NetWeaver app server offer Access Control Lists (ACLs). Furthermore, the WDisp offers URL filtering based on the authentication and rewrite handlers.
  3. The destination service on BTP encapsulates the whole communication config, including protocol setup, credentials, authentication flow, and trust store.

Fig.2 Architecture overview for secure OData setup

See below a snippet from a Web Dispatcher URL filter config for the public ping service. `P` permits the URI, while `S` additionally restricts access to HTTPS.

# We allow access to the "ping" service, but only if
# accessed from the IP of the PLS load balancer and only via https

P /sap/public/ping * * *
S /sap/public/ping * * *
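To make the semantics of those filter lines concrete, here is a small, purely illustrative Java sketch of how I read the permit table: P permits a URI, D would deny it, and S additionally enforces HTTPS. This is my simplified model of the evaluation, not the actual Web Dispatcher implementation – consult the SAP Web Dispatcher documentation for the authoritative rules.

```java
import java.util.List;

/** Illustrative sketch of the P/D/S permit-table idea, NOT SAP's implementation:
 *  a URI is allowed iff a P(ermit) rule matches, no D(eny) rule matches,
 *  and every matching S rule's HTTPS requirement is met. */
public class UrlFilterSketch {
    record Rule(char kind, String uriPrefix) {}

    static boolean isAllowed(List<Rule> rules, String uri, boolean isHttps) {
        boolean permitted = false;
        for (Rule r : rules) {
            if (!uri.startsWith(r.uriPrefix())) continue;
            if (r.kind() == 'D') return false;              // explicit deny wins
            if (r.kind() == 'S' && !isHttps) return false;  // TLS enforced
            if (r.kind() == 'P') permitted = true;
        }
        return permitted; // no matching permit -> rejected by default
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(new Rule('P', "/sap/public/ping"),
                                   new Rule('S', "/sap/public/ping"));
        System.out.println(isAllowed(rules, "/sap/public/ping", true));   // permitted via https
        System.out.println(isAllowed(rules, "/sap/public/ping", false));  // blocked: S demands TLS
        System.out.println(isAllowed(rules, "/sap/opu/odata/foo", true)); // no permit rule -> blocked
    }
}
```

The default-deny at the end mirrors the key point of URL filtering: anything not explicitly permitted stays unreachable.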

I am not providing any more details on the destination setup for OData, since it was discussed at length in the other parts of this series.

Securing RFCs requires SAP UCON and a specific destination setup

As mentioned, for RFCs we need to use a different protocol. SAP refers to it as WebSocket RFC; I translate that as web-based communication for the low-level RFCs. You will also see the term Remote Function Module (RFM), which refers to the actual ABAP object being called through the WebSocket RFC.

In general, the approach stays the same as with OData: restrict caller access at IP level on the web dispatcher (check the ACL) and make conscious decisions on your backend about exposed endpoints.

Fig.3 Architecture overview for secure WebSocket RFC setup

But first things first: my development system had no UCON config yet, so below I describe how to complete that initial setup quickly. Your productive systems should already be set up with UCON, if you intend to put up at least a little resistance against network intrusions 😉

UCON is enabled by default but not configured. As a first step you need to maintain profile parameters in transaction RZ10.

Fig.4 Screenshot from RZ10 config

The first one activates UCON for RFCs, the second one specifically for WebSockets, and the third parameter allows external RFC consumption. Parameter value 2 enforces UCON for every WebSocket RFC call; reducing it to 1 bypasses UCON, which can be useful during initial testing and troubleshooting.

After a quick restart of the app server, we can continue with the UCON Cockpit (transaction UCONCOCKPIT).

Fig.5 UCON initial config

Since my system was untouched regarding UCON, I needed to run the Setup Operation for the “RFC Basic Scenario” first. Afterwards, I was able to configure the Setup for WebSocket RFCs. Check the SAP UCON documentation for more details.

Once the initial setup of both scenarios is done, you are presented with the remote function modules that your system currently exposes by default (the RFM whitelist). We will be calling the remote-enabled function module MONTH_NAMES_GET. At this stage it is not reachable and my requests from BTP fail.

Fig.6 RFM positive list

After adding my RFC-enabled function module MONTH_NAMES_GET to the list, it becomes reachable from BTP 😊

Fig.7 Output from RFM MONTH_NAMES_GET

Great, but what does the destination config look like? Wait no more – here it is.

Fig.8 RFC destination config

I added a new destination configuration entry of type RFC. That brings a whole set of additional JCo properties, which have interdependencies. These properties were designed with hybrid connectivity via the Cloud Connector in mind and are therefore tailored to it (e.g., ashost or sysnr). At this point of the PLS beta we still need to configure proxy type Internet and the JCo wshost/wsport, even though we are by no means making a connection over the Internet.

Depending on your technical user strategy you might have a special RFC user. For my prototype I was lazy and re-used my test user with extensive rights.
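For orientation, the destination from Fig.8 boils down to something like the following sketch. Destination name, host, port, client and user are placeholders I made up; only Type=RFC, ProxyType=Internet and the jco.client.wshost/wsport pair mirror what is described above.

```text
Name:              S4_WEBSOCKET_RFC   # hypothetical destination name
Type:              RFC
ProxyType:         Internet           # required during the PLS beta, despite private traffic
jco.client.wshost: 10.1.2.3           # placeholder: private IP reached through the PLS tunnel
jco.client.wsport: 44313              # placeholder WebSocket RFC port
jco.client.client: 100                # placeholder ABAP client
User:              RFC_TEST_USER      # better: a dedicated technical RFC user with minimal rights
```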

The PLS host name feature will be added soon; until then I am relying on the private IP address and a custom trust store for end-to-end SSL. There will be a dedicated post on that topic next in the series, no worries 😉

For the actual RFC request I added a Java servlet to the CF app that you already know from the beginning of this series.

In addition to that, I need a new environment variable setting for JCo. This enables my Cloud SDK setup to leverage the pre-shipped JCo.

Fig.9 Screenshot from manifest.yml
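For reference, a minimal manifest.yml sketch along those lines. App name, memory and path are placeholders; USE_JCO is, to my understanding, the switch the SAP Java buildpack expects in order to activate its bundled JCo – verify against your buildpack version.

```yaml
---
applications:
  - name: my-pls-app            # placeholder app name
    memory: 1024M
    path: target/my-pls-app.war # placeholder artifact path
    buildpacks:
      - sap_java_buildpack      # ships its own JCo
    env:
      USE_JCO: true             # lets the Cloud SDK use the pre-shipped JCo
```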

Thanks to the SAP Cloud SDK I can load the RFC destination the same way as any other. Using the class RfmRequest we can directly execute our request to the now exposed remote function module MONTH_NAMES_GET. See below the snippet from the servlet.

protected void doGet( final HttpServletRequest request, final HttpServletResponse response ) throws ServletException, IOException {
    // extract the RFM name from the request path, e.g. /myRFC/MONTH_NAMES_GET
    final String fmNameFromPath = request.getRequestURI().trim().split("/myRFC/")[1];
    logger.info("***fm name: " + fmNameFromPath);
    final Destination destination = DestinationAccessor.getDestination(DESTINATION_NAME);
    try {
        // clear the RFC cache to avoid structural errors on interface changes with cached values
        logger.info("***calling destination " + DESTINATION_NAME);
        final RfmRequestResult result = new RfmRequest(fmNameFromPath, false).execute(destination);
        final String resp = result.toString();
        logger.info("***great-success " + resp);
        response.getWriter().write(resp);
    } catch (final RequestExecutionException e) {
        response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
    }
}
Going forward you would likely provide a dedicated class and marshal the response into a Java object. Have a look at this example and the corresponding GitHub source in the Cloud SDK docs to get started. They refer to BapiRequest, but you can transfer the approach to RfmRequest.
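To illustrate that marshalling idea without pulling in the SDK classes, here is a self-contained sketch that maps generic result rows into a typed Java object. The field names MNR and LTX are meant to resemble the month table returned by MONTH_NAMES_GET, but treat them, like the class itself, as hypothetical.

```java
import java.util.List;
import java.util.Map;

/** Hypothetical sketch: map raw RFM result rows into a typed Java object.
 *  In the real servlet the rows would come from RfmRequestResult; here we
 *  accept plain maps to keep the example self-contained. */
public class MonthNameMapper {
    public record MonthName(int number, String longText) {}

    public static List<MonthName> fromRows(List<Map<String, String>> rows) {
        return rows.stream()
                .map(r -> new MonthName(Integer.parseInt(r.get("MNR")), r.get("LTX")))
                .toList();
    }

    public static void main(String[] args) {
        List<Map<String, String>> rows = List.of(
                Map.of("MNR", "01", "LTX", "January"),
                Map.of("MNR", "02", "LTX", "February"));
        System.out.println(fromRows(rows));
    }
}
```

Returning typed objects instead of result.toString() keeps the servlet response stable even when the RFM interface gains new fields.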

Further Reading and SAP Docs references

Final Words

Uhh, that was quite the ride. I showed you how to limit the exposure of your remote-enabled RFCs and OData endpoints when communicating with BTP through the BTP Private Link Service. Since we are applying the same reasoning as with the Cloud Connector, we can assume your auditor will be equally annoyed 😉

I am investigating further into RFCs for ECC, since WebSocket RFCs are available only from S/4HANA 1909 onwards. Stay tuned for updates.

Any further inputs from you @Developers and Security Experts? Any more details you would like to see covered?

Kudos to Robert and Markus for nudging me in the right direction on the UCON setup 😊

In part 7 I will talk about the end-to-end SSL setup that gets activated with the upcoming host name feature for BTP Private Link Service.

Find the related GitHub repos here. Find your way back to the table of contents of the series here.

As always feel free to ask lots of follow-up questions.


Best Regards

