Technology Blogs by Members
Martin-Pankraz
Active Contributor

This post is part of a series sharing service implementation experience and possible applications of SAP Private Link Service on Azure.

Find the table of contents and my curated news regarding series updates here.

Looking for part 2?

Find the associated GitHub repos here.


🛈Note:

Nov 2022: SAP released Azure Application Gateway for SAP Private Link. It simplifies your architecture further and increases security with a managed web application firewall compared to the standard load balancer setup described in this post.

Oct 2022: SAP added guidance for "bring your own domain and certificate" for the SAP Private Link. In addition to that you can obtain the certificate from well-known certificate authorities automatically via Azure Key Vault like so.

22nd of June 2022: SAP announced General Availability of the service. References to the Beta state no longer apply!

24th of Nov 2021: SAP introduced the hostname feature for PLS. Going forward, host names are used instead of private IPs. Not all screenshots below have been updated!

Dear community,

With the release of the Beta of SAP Private Link Service (PLS), exciting times dawned upon us. We finally get a managed solution to securely connect from our apps running on BTP (deployed on Azure) to any IaaS workload running on Azure without even traversing the internet. The first options that come to mind would be SAP Web Dispatcher, ECC, S4, HANA, SAP CAR, anyDB, Jenkins, Apache or an HPC cluster, to name a few.

Before Private Link Service, you would typically have deployed an SAP Cloud Connector (a reverse-connect tunnel over the public internet) or a sophisticated internet-facing setup involving gateway components with web application firewalls to allow inbound traffic. Often this required two separate VMs (primary + shadow instance) or at least additional processing power on the primary application server, the web dispatcher or the like.

In addition to that, you needed to open outbound ports for the Cloud Connector to reach the public BTP endpoints and initiate the reverse connect. These outbound ports to the public internet are often a no-go for customers and have only been tolerated for the lack of a suitable alternative.

If you feel brave enough for the Beta and your focus is layer 4 connectivity, those days are gone 😊 See part 8 of the series for a deep dive on Cloud Connector vs. PLS.

I am referring to layer 4 and layer 7 of the OSI model throughout the post. Layer 4 addresses network-level communication such as TCP/IP, and layer 7 addresses application protocols such as http and RFC.


Fig.1 pinkies “swearing”


The first scenario we are going to look at in this series is the consumption of an OData service living on my S4 system, which is locked up in an Azure private virtual network (VNet). My BTP workloads are provisioned in an Azure-based subaccount in west Europe (Amsterdam) and my S4 is based in north Europe (Dublin).

The Beta release covers one-way scenarios from BTP to Azure VMs for now. Check philipp.becker's post on the next planned steps.

SAP’s docs and developer tutorial focus on the CF CLI commands. In my blog I will show the process with the BTP UI instead.


Fig.2 architecture overview



Let’s look at the moving parts


To get started we need to identify our VM, its location, and the VNet where it is contained.


Fig.3 S/4Hana VM properties


We can see the system has no public IP. Furthermore, my Network Security Group on the mentioned subnet is set to allow inbound from my VNets and my P2S VPN but not from the Internet. This reflects common setups. So, my way in to reach my S4 from BTP will be the private link service.

Next, I deployed a standard load balancer within the same resource group as my S4 and configured it to target my two SAP web dispatchers. Make sure you choose NIC instead of IP Address for your backend pool configuration. The dispatchers will be addressed round-robin to achieve optimal throughput and address high availability to some extent.
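For reference, attaching a web dispatcher VM's NIC to the load balancer backend pool can be scripted with the Azure CLI. A sketch with placeholder resource names (adjust resource group, NIC, pool and IP config names to your landscape):

```shell
# Add the NIC's IP configuration to the load balancer backend pool.
# All <...> values are placeholders for your own resources.
az network nic ip-config address-pool add \
  --resource-group <your-rg> \
  --nic-name <webdispatcher-nic> \
  --ip-config-name ipconfig1 \
  --lb-name <your-lb-name> \
  --address-pool <your-backend-pool>
```

This is the NIC-based pool membership mentioned above; repeat the command for the second web dispatcher's NIC.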


Fig.4 Screenshot of Load balancer configuration


I pointed my health probes at the SSL port that was configured in backend transaction SMICM. SAP NetWeaver exposes two “pingable” endpoints, which you can verify in backend transaction SICF.

  • /sap/bc/ping needs authentication and

  • /sap/public/ping, which is open to be called by everyone in line of sight of the system
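If you prefer scripting over the portal wizard from Fig.5, the probe could be created like this with the Azure CLI. A sketch under the assumption that the web dispatchers listen for HTTPS on port 44300; names are placeholders:

```shell
# Health probe against the unauthenticated NetWeaver ping endpoint.
# <...> values and the port are placeholders/assumptions for illustration.
az network lb probe create \
  --resource-group <your-rg> \
  --lb-name <your-lb-name> \
  --name s4-https-probe \
  --protocol Https \
  --port 44300 \
  --path /sap/public/ping \
  --interval 15
```

Using /sap/public/ping avoids storing credentials in the probe, since /sap/bc/ping would require authentication.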



Fig.5 Screenshot of Load balancer health endpoint configuration


The load balancing rule finally ties together everything and establishes the route.


Fig.6 Screenshot of Load balancer rule configuration


Using this rule, I receive https traffic on the standard port 443 and pass it on to the https port my web dispatchers are listening on. Usually that is 443 plus the SAP instance number (e.g. 44300 for instance 00).
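The rule from Fig.6 can be scripted along the same lines. A sketch with placeholder names, assuming the 443-to-44300 port mapping described above:

```shell
# Tie frontend IP, backend pool and health probe together in one rule.
# <...> values are placeholders; ports follow the 443 -> 443<instance> assumption.
az network lb rule create \
  --resource-group <your-rg> \
  --lb-name <your-lb-name> \
  --name https-443 \
  --protocol Tcp \
  --frontend-port 443 \
  --backend-port 44300 \
  --frontend-ip-name <your-frontend-ip-config> \
  --backend-pool-name <your-backend-pool> \
  --probe-name <your-health-probe>
```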

Note: Introducing an internal standard load balancer into your subnet will break your outbound connectivity (e.g. to the internet) when there is no additional routing mechanism like a user-defined route table, a NAT gateway or a public load balancer in place.

The Azure NAT Gateway gets you a fully managed solution for outbound-only internet connections (no inbound considerations needed). An additional public load balancer instead will be less costly but needs to be locked down in terms of inbound connectivity. Find more details on the process in the Azure docs, and see below for the Azure CLI command to add the required outbound rule to the load balancer.

az network lb outbound-rule create \
  --resource-group <your resource group> \
  --subscription <your azure subscription> \
  --lb-name <your public lb name> \
  --name MyOutBoundRules \
  --address-pool <your public lb backend pool> \
  --frontend-ip-configs <plb-pip-config> \
  --protocol All \
  --idle-timeout 30 \
  --outbound-ports 10000 \
  --enable-tcp-reset true

Now, we are all set to create the Private Link Service on Azure using the VNet info from the VM and the load balancer config.


Fig.7 Screenshot of Private Link Service Deployment settings


On the access security tab, I chose “Role-based access control only” but you can adapt to your needs.

Once the deployment finishes, navigate to the Properties pane (under Settings) of the Private Link Service on Azure and retrieve the Resource ID. You will need it to complete the process on the BTP side.
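Instead of copying the Resource ID from the portal, you can also query it via the Azure CLI. A sketch with placeholder names:

```shell
# Print only the resource ID of the private link service as plain text.
# <...> values are placeholders for your own resources.
az network private-link-service show \
  --resource-group <your-rg> \
  --name <your-pls-name> \
  --query id --output tsv
```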


Fig.8 Screenshot of Private Link Service Deployment settings


With that we move over to BTP. Open your subaccount, ensure that you assigned the Private Link service (Beta) in your entitlements, and create the service. Use the name az-private-link in case you want to plug & play with my examples. Type a message to ensure you can identify the connection request on the Azure side. This is useful if there are multiple requests on the same service and you need to act on them separately.
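If you prefer the CF CLI route that SAP's docs and developer tutorial describe, the equivalent of the UI wizard looks roughly like this. Offering and plan names are as per SAP's tutorial at the time of writing; the Resource ID value is a placeholder:

```shell
# Create the private link service instance in your CF dev space.
# "privatelink"/"standard" per SAP's docs; <...> is a placeholder.
cf create-service privatelink standard az-private-link \
  -c '{"resourceId": "<your-azure-pls-resource-id>"}'

# Check the creation status until the request shows up for approval on Azure.
cf service az-private-link
```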


Fig.9 Screenshot of Private Link Service deployment wizard on BTP


Once you hit submit, an approval request gets forwarded to the Azure Private Link Service associated with the resourceId you supplied before. Mark the request and hit approve.


Fig.10 Screenshot from approval request on Azure portal


So far so good. Finally, we need to bind this new SAP Private Link Service to an app to be able to send http calls through that tunnel and see the private IP on the BTP side. Without that first binding it won't be generated. Re-use my naming to be able to run my Java or CAP project right away.
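On the command line, that first binding boils down to two standard CF CLI calls (the app name is a placeholder):

```shell
# Bind the private link instance to any app to trigger generation
# of the private connectivity details; <your-app> is a placeholder.
cf bind-service <your-app> az-private-link

# Restage so the app picks up the new VCAP_SERVICES entry.
cf restage <your-app>
```

Afterwards you can inspect the app's environment (cf env) to find the private IP, or host name respectively, issued for the link.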

Note: As of 24th of November SAP introduced generated host names (instead of plain private IPs). I kept the screenshots using IPs anyways, because part 7 of the blog series discusses this feature upgrade in detail.

Going forward, I will reference my Java app using the SAP Cloud SDK, but you could do the same with any other BTP-supported runtime. Harut Ter-Minasyan provided another nice CAP example targeting the Business Partner OData service, tested against an S4 CAL deployment (be aware you might need to change/delete the public IP when using CAL).

But wait, plain http calls on code level? We have destinations to abstract away the configuration and authentication complexity.

So, let’s create the destination service “az-destinations” on our dev space to cater for that. We maintain the connection to our S4 using the private IP we got from “az-private-link”. Eventually you need to bind the destination service to your app too. With my implementations that happens automatically on deployment, because it is listed as required in the mta.yaml.


Fig.11 Destination config on CF space dev for private link service


The additional properties make the destination available to SAP Business Application Studio and set the sap-client. In my case that is 000.
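In text form, the destination from Fig.11 boils down to something like the following. The name and URL host are placeholders from my setup; adjust them, the authentication method and the credentials to your landscape:

```text
# Sketch of the destination configuration; <...> values are placeholders
Name:            az-s4-destination
Type:            HTTP
URL:             https://<private-link-ip-or-hostname>:443
ProxyType:       Internet
Authentication:  BasicAuthentication

# Additional properties:
sap-client:               000
WebIDEEnabled:            true
HTML5.DynamicDestination: true
```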

Ok great, let’s test this!


Fig.12 Screenshot from Java app start page


I open the Java app and follow the link to the Servlet as highlighted above. Aaaand private linky linky link don’t break our swear! …


Let’s check on the application log what happened.


Fig.13 SSL error message from Java app via private link service


Ah ok, fair enough. The SSL handshake checks if the response originates from a responder that we expect. Since our app on BTP “sees” 10.220.0.4, there is a mismatch on the received server certificate. My S4 sits behind the Azure load balancer and a pool of web dispatchers, which send a certificate that doesn’t mention 10.220.0.4. Mhm, what now? There are various options to tackle this. Here are a few.

  1. Override the SSL peer verification process in your code with the private IP of the private link service. Check my BTPAzureProxyServletIgnoreSSL.java class for more details.

  2. Change the Destination config from https to plain http.

  3. Add property “TrustAll” to your Destination.

  4. Use a dedicated SSL config like Server Name Indication (SNI) on your web dispatcher or NetWeaver setup and import the associated certificate into the trust store of your BTP destination.

  5. Or the most desirable: bring your own domain and certificate. See this post to learn more about how to achieve that.


The first three options relax the end-to-end trust verification. For maximum security in a shared environment such as BTP you would want to tackle this properly. The fourth option requires you to generate an additional Personal Security Environment (PSE) and configure the Internet Communication Manager (ICM) parameters, so that requests coming from BTP will be answered with the expected certificate. That allows you to keep your existing trust setup on the SAP backend untouched. Have a look at part 7 for more details.

If you don't want to dive in with a fully blown SSL setup to start with, TrustAll is your friend.

For productive purposes I highly recommend a managed certificate issued by a well-known certificate authority through Azure Key Vault and a CNAME mapping for the BTP-internal SAP Private Link hostname. Read more about that here.



Consuming OData via the private linky app is a piece of cake now


From here on all implementation topics like app roles, XSUAA, logging, monitoring, staging, scaling etc. stay the same as if you were using any CloudFoundry implementation. For simplicity I exposed the private linky app via another destination and created a Fiori app based on that.


Fig.14 Destination config for consuming app



Fig.15 Fiori app consuming OData via private linky app


The complete feature set of that OData service is available. We only created connectivity using this new beta service after all 😉

Restricting access to exposed SAP backend services further


If you need to lock down the SAP backend services (ICF nodes) that are visible from the SAP Private Link Service, you can do so with a proxy component. SAP offers custom SAP Web Dispatcher URL filters, for example. You can specify the source IP (in our case the standard load balancer behind the Azure private link service) and your desired OData service paths, for instance. This is analogous to what the Cloud Connector provides, except that the web dispatcher acts on the http protocol level. This approach doesn't work for RFC connections. Have a look at post 6 of the series to learn about securing RFCs in detail.
# We allow access to the "ping" service, but only if
# accessed from IP of PLS load balancer and only via https

P /sap/public/ping * * * 192.168.100.35/32
S /sap/public/ping * * * 192.168.100.35/32
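To activate such a permission table, the web dispatcher's ICM auth handler is pointed at the filter file via a profile parameter. A sketch with a hypothetical file path:

```text
# Instance profile of the SAP Web Dispatcher:
# register the URL filter file for all paths (PREFIX=/).
# The file location is a placeholder - adjust to your installation.
icm/HTTP/auth_0 = PREFIX=/, PERMFILE=$(DIR_INSTANCE)/sec/ptabfile.txt
```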

One could argue that this approach is extra cautious, because the traffic travels through Azure VNets only and enters your private tunnel directly behind your app on BTP, and the SAP backend authorizations for the "calling" user will take care of what to show or hide. However, zero-trust efforts would demand such filtering.

These URL filters act as an extra layer of governance, as a fail-safe for too-open authorizations and as a safety net to mitigate the shared-tenant nature of BTP. You can apply any proxy (e.g. Apache, NGINX etc.) for the filtering. Just install it on a VM behind the standard load balancer to inspect traffic and forward it to your SAP backend system as per your allowed configuration. The SAP Web Dispatcher would be an SAP-native approach to implement this.

Thoughts on production readiness


The connectivity components of the setup are managed by Microsoft or SAP at enterprise grade. SAP's development best practices are not touched: you code your apps without any need to know about the private link service.

All your traffic stays on the Microsoft backbone, it is private, and you get rid of the additional infrastructure-component overhead mentioned at the beginning. No outbound ports need to be opened, which makes life easier for deployments such as HEC, for instance. This kind of simplification speeds up roll-out and increases resiliency.

The Cloud Connector continues to play a role for layer 7 functionalities like audit logging and dedicated allow-listing of proprietary interfaces such as RFCs in one place. With PLS acting on layer 4, many of those concerns shift into the SAP Web Dispatcher (e.g. access control lists), the Azure Network Security Group or even the SAP ERP backend. This can pose a challenge for existing landscapes that relied on the isolation of the Cloud Connector. Read more about that in part 6 of the series.

A self-signed certificate is not optimal. You may consider bringing your own domain and certificate as described by SAP here. Furthermore, you might favor trusting the root or intermediate certificate of your certificate authority in BTP. That way there is no need to populate individual certificates anymore.

See here how to create a certificate with a well-known certificate authority like DigiCert or GlobalSign automatically from Azure Key Vault.

Cloud Connector and PLS can co-exist. However, you need to make sure that you close the interfaces exposed to PLS on the network that you want to restrict to the Cloud Connector only. Otherwise the Cloud Connector could simply be bypassed.


Final Words


Linky swears are not to be taken lightly. I believe SAP is making good use of the Azure portfolio, creating another integration scenario that will become popular going forward. Today we saw the setup process for this new private link service that keeps your BTP traffic private and on the Microsoft backbone, drilled a little into the security configuration with destinations, and verified that the usual development approach can be applied to apps routing through the private link service with a standard Fiori app. As they say, “trust is good, control is better” 😉

In part two of this series, we will look at applying this approach with SAP Integration Suite. Any other topic you would like to see discussed in that regard? Just reach out via GitHub or the comments section below.

Find the mentioned Java, CAP and Fiori projects on my GitHub repos here. Find your way back to the table of contents of the series here.

As always feel free to ask lots of follow-up questions.

 

Best Regards

Martin