By Martin Pankraz

SAP private linky swear with Azure – connecting to PaaS DBs with SAP Private Link Service

This post is part of a series sharing service implementation experience and possible applications of SAP Private Link Service on Azure.

Explore the available database spectrum here.

Find the table of contents and my curated news regarding series updates here.

Find the associated GitHub repos here.

Dear community,

Continuing with the implementation journey of SAP Private Link Service (PLS), we will take a closer look at the latest PLS feature upgrade, which adds MySQL and MariaDB from the Azure PaaS portfolio.

Storage, or more specifically databases, are a must-have for applications. On BTP you have HANA Cloud as the SAP-native in-memory option with nice SAP-opinionated integration capabilities. However, this might be a “pricey” overkill for your app. Simple extension scenarios often require only a couple of tables, or sometimes just storage for some key-value pairs. Flexibility and customer choice are king here, I would say.

You might consider piggybacking on your existing SAP backends (ERP, SAP Gateway, etc.) to host your BTP app data. Make a conscious decision about that, because the data lives next to your precious productive ERP data and requires a separate schema for isolation. In a cloud-native world you want to treat this app with modern release mechanisms like canary deployments, A/B testing, etc. That would add risk and burden to the database powering your SAP backend.

As a result of that desire for flexibility, we have seen PostgreSQL and Redis being offered as hyperscaler options on BTP for some time now. This means SAP deploys them on its Azure subscription on your behalf and registers them on your BTP instance for you to integrate with. That addresses a good number of use cases regarding database technology, cost, and feature flexibility.

But what about private connectivity? All the approaches mentioned above with SAP BTP require public endpoints (except the hyperscaler option, though its traffic still travels the shared hyperscaler backbone) and are built for communication over the Internet by default.

PLS to the rescue!

Fig.1 pinkies using Azure PaaS

To set the stage: a quick view on existing BTP capabilities regarding Azure storage or PaaS offerings. Learn more about the overarching database spectrum here.

Open Service Broker for Azure

This approach has been deprecated by Microsoft in favour of the Azure Service Operator (for Kubernetes). The new implementation has not yet been picked up by SAP.

Find the legacy post on the approach by Holger here.

Chip in on the community discussion to shape the guidance and replacement here.

SAP BTP Hyperscaler Option

As mentioned before, BTP offers PostgreSQL and Redis out of the box on Azure.

They are integrated with the BTP cockpit, supported by CF CLI, and are serviced through your entitlements.

They are isolated by CF security mechanisms (not internet-facing) but your connection traverses the respective shared hyperscaler backbone.

Get started with SAP CAP and PostgreSQL here and with Redis here.


The options above offer “tighter” integration into BTP, with features like governance of the service from the cockpit, credential management, and service binding, for instance. But of course, a direct integration from your app without intermediate steps through BTP services is also possible.

Microsoft offers rich SDKs for major programming languages, REST APIs and Azure Resource Manager (ARM) interfaces for all services.

Microsoft's documentation provides REST API and SDK examples for each of these services.
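As an illustration of the REST route, a resource such as a MySQL server can be read through the Azure Resource Manager (ARM) REST API. The sketch below builds the management URL and shows (commented out) how the GET call could be issued with Java 11's `HttpClient`; the subscription id, resource group, server name, and bearer token are placeholders, not values from this post.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ArmMySqlDemo {

    // Builds the ARM resource URL for a MySQL single server.
    static String serverUrl(String subscription, String resourceGroup, String server) {
        return "https://management.azure.com/subscriptions/" + subscription
                + "/resourceGroups/" + resourceGroup
                + "/providers/Microsoft.DBforMySQL/servers/" + server
                + "?api-version=2017-12-01";
    }

    public static void main(String[] args) {
        // Placeholder identifiers -- replace with your own values.
        String url = serverUrl("<subscription-id>", "<resource-group>", "<server-name>");
        System.out.println(url);

        // Uncomment to issue the call with a valid Azure AD bearer token:
        // HttpRequest request = HttpRequest.newBuilder(URI.create(url))
        //         .header("Authorization", "Bearer " + "<token>")
        //         .GET().build();
        // HttpResponse<String> response = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
        // System.out.println(response.body());
    }
}
```

The same resource can of course be managed through the language-specific Azure SDKs instead of raw REST calls.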

So far so good. Now on to the really private party of the house – usually in the kitchen, right? Let’s cook ourselves something tasty from the recipes in MySQL.


Fig.2 Screenshot from MySQL client with cookbooks

The MySQL database shown above in fig.2 is configured so that it can only be accessed via Azure private links. You will see in the next section how that is done in detail.

Fig.3 Architecture overview of private access to MySQL PaaS DB on Azure (private link only access)

Once connectivity is established, you can interact with the MySQL instance through database clients and libraries like you are used to.

Let’s look at the moving parts

But first things first. Our starting point is the creation of an instance of the Azure PaaS database service. For the sake of this example, I chose MySQL. Type “mysql” into the Azure portal search if you prefer a UI-driven approach.

Fig.4 Screenshot from Azure portal for mysql creation wizard

Choose MySQL single server (the first entry). The remaining configurations are straightforward (see the Quickstart guide for more info). However, I would like to highlight the “Location” and “Compute + storage” settings. The app and database layers should be close to each other, so choose the same Azure region where your BTP subaccount is deployed. In my case that is West Europe (BTP EU20).

Also, the private link feature is not available for the Basic Azure SKU. Choose at least General Purpose or Memory Optimized.

Fig.5 Screenshot of MySQL create wizard

Once deployed, we need to adjust the security settings to accept connections only via private endpoints (set “Deny public network access”). Before you enable that, I recommend testing the database connection once with a firewall rule for your local machine (“Add current client IP address”) to make sure everything is fine.

Fig.6 Screenshot of MySQL security settings

Now that we are “locked out”, we need to enable access for BTP and our corporate network (in my case a P2S VPN) via private endpoints.

Establish access to the locked-up MySQL instance

Let’s start with the corporate network. Navigate to Security -> Private endpoint connections -> hit “+ Private Endpoint”. In the wizard, choose the region where your MySQL instance lives, identify your MySQL resource, and move on to the next blade.

Fig.7 Screenshot of private endpoint creation for VPN access

Identify the subnet to integrate with so that your VPN/ExpressRoute gateway can reach the private endpoint for MySQL. Finally, consider DNS within that VNet. I chose to add a private DNS entry for my private endpoint; I will need it shortly to configure routing on my local machine.

Fig.8 Screenshot of subnet and DNS setting for private endpoint for VPN access

After successful deployment, check the DNS entry, copy it, and put the corresponding values into your local hosts file (on Windows: C:\Windows\System32\drivers\etc\hosts).

Fig.9 Screenshot of generated DNS A record for MySQL private endpoint

See below the snippet from my hosts file for reference.

# P2S Azure DNS for SAPLab PaaS
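For illustration, such an entry maps the private DNS name of the MySQL endpoint to its private IP. Both values below are placeholders; use the A record and address that your own private endpoint deployment shows:

```
# P2S Azure DNS for SAPLab PaaS
10.0.1.5    <your-server>.privatelink.mysql.database.azure.com
```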

The hosts entry ensures that requests from your local database client towards the MySQL instance are routed through the VPN. Hurray, now we have access to the database from our local machine again for administrative tasks or database modelling.

Off we go configuring the BTP side of things for application access.

I assume that you have already added entitlements for BTP PLS to your subaccount. Find the inputs for the BTP creation wizard for MySQL here. Find the resource id by clicking “JSON View” on the overview page. I named my BTP service “mysql-pe”. Stick to that name if you want to re-use the example CF apps hosted on my repos out of the box.

Fig.10 Illustration of resource id mapping

Hit Create and move over to your Azure portal to approve the pending private endpoint request.

Fig.11 Screenshot of pending approval request from BTP

You can see the existing endpoint for the VPN connection we created before, plus the second one from BTP. Approve it and move back to the BTP cockpit.

I recommend using a destination for the SQL connection details to abstract them from the app implementation. You can piggyback on an HTTP-type config (a JDBC type is not available), because the provided app extracts the hostname and credentials from the destination and does not construct the actual call from the HTTP config.
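To make that concrete, here is a minimal sketch (not the exact code from the repo) of pulling the host out of an HTTP-type destination and assembling a JDBC URL from it. The property keys `URL`, `User`, and `Password` mirror typical BTP destination fields; the server and database names are placeholders.

```java
import java.util.Map;

public class DestinationToJdbc {

    // Derives a JDBC URL from an HTTP-type destination's host value.
    static String jdbcUrl(Map<String, String> destination, String database) {
        // Strip the scheme that the HTTP-type destination forces on us.
        String host = destination.get("URL").replaceFirst("^https?://", "");
        return "jdbc:mysql://" + host + ":3306/" + database + "?useSSL=true";
    }

    public static void main(String[] args) {
        // Illustrative destination values -- in the real app these come
        // from the BTP Destination service at runtime.
        Map<String, String> destination = Map.of(
                "URL", "https://myserver.mysql.database.azure.com",
                "User", "admin@myserver",
                "Password", "<secret>");
        System.out.println(jdbcUrl(destination, "cookbooks"));
        // jdbc:mysql://myserver.mysql.database.azure.com:3306/cookbooks?useSSL=true
    }
}
```

The user and password from the same destination are then passed to `DriverManager.getConnection(url, user, password)` as usual.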

Fig.12 Screenshot of BTP destination for MySQL

As with the load-balancer-based implementation with PLS, we need to create a CF service binding to finish the setup. An easy approach would be to uncomment “mysql-pe”, build the project, and deploy to BTP from my GitHub repos.
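For reference, a hedged sketch of what such a service resource could look like in the project's mta.yaml. The service name `privatelink`, plan `standard`, and config shape follow the SAP PLS documentation; the resourceId below is a placeholder for the value copied from the Azure portal's JSON View:

```yaml
resources:
  - name: mysql-pe
    type: org.cloudfoundry.managed-service
    parameters:
      service: privatelink
      service-plan: standard
      config:
        resourceId: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforMySQL/servers/<server-name>
```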

Fig.13 Screenshot from SAP BAS about PLS binding for MySQL

After deployment, the “Credentials” become available on the PLS service. Note that the hostname is identical to the server name provided in the Azure portal.

Fig.14 Credentials view for PLS on BTP cockpit

Awesome, now we are ready to test 😊 Calling the Java servlet through the approuter … drum roll …


Aaaand, voilà.

There we go. All the nice cookbooks retrieved from the “locked up” MySQL database on Azure through our Java servlet.

In case you want to test locally or from SAP Business Application Studio have a look at this dedicated post from the series.

The provided setup can easily be adapted to MariaDB. Have a look at the SAP docs for PLS for the associated JSON config. The JDBC config needs to change slightly too; have a look at the MariaDB docs or my comments in the Java servlet for inspiration.
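As a rough sketch of that change: only the JDBC URL scheme and driver class differ between the two connectors, while the rest of the servlet code stays the same. The server and database names below are placeholders.

```java
public class JdbcFlavor {

    // MySQL Connector/J style URL (driver class com.mysql.cj.jdbc.Driver).
    static String mysqlUrl(String host, String db) {
        return "jdbc:mysql://" + host + ":3306/" + db;
    }

    // MariaDB Connector/J style URL (driver class org.mariadb.jdbc.Driver) --
    // only the scheme and driver change compared to MySQL.
    static String mariadbUrl(String host, String db) {
        return "jdbc:mariadb://" + host + ":3306/" + db;
    }

    public static void main(String[] args) {
        System.out.println(mysqlUrl("<server>.mysql.database.azure.com", "cookbooks"));
        System.out.println(mariadbUrl("<server>.mariadb.database.azure.com", "cookbooks"));
    }
}
```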

Anyone also hearing the song in their minds when thinking about MariaDB that goes something like this: “Oh, Maria, Mariaaaaa…She reminds me of a West Side story … played by Carlos Santana…”? Just curious.

How about SAP Cloud Application Programming Model?

Until this point, I showed the setup using a plain Java servlet built with the SAP Cloud SDK for Java. With little effort you can map that to SAP CAP too. The setup described above stays the same, but in addition there is a nice community effort led by Theo Sun to support a MySQL adapter for CAP.

SAP provides HANA and SQLite support only. The community added PostgreSQL and MySQL in parallel. With those you can deploy database schemas, populate data, and support your app development nicely.
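Assuming the conventions of the community cds-mysql adapter (this shape comes from the community project, not from SAP-standard tooling, so double-check against its README), the database kind is switched in the CAP project's package.json roughly like this, with the credential values as placeholders:

```json
{
  "cds": {
    "requires": {
      "db": {
        "kind": "mysql",
        "credentials": {
          "host": "<server>.mysql.database.azure.com",
          "user": "<user>",
          "password": "<secret>",
          "database": "cookbooks"
        }
      }
    }
  }
}
```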

Be aware of Microsoft backbone routing for public sources. If you are testing or even integrating MySQL PaaS on Azure from BTP on Azure, you need to make one more firewall configuration to be able to connect Azure services across two different subscriptions (SAP's, running BTP, and your own). It is called “Allow access to Azure services”.

Otherwise, even if you allow the whole IP address space on the firewall, you will still not be able to connect when both workloads run on Azure. Make the switch and you are fine.

This is not relevant when you deploy the SAP Private Link for PaaS. Resolution there happens through private DNS.

Anyone looking to join the implementation effort for the CAP MySQL adapter to push it over the finish line for production readiness?

Check out Theo Sun’s GitHub repos to get started. See the Azure docs for SSL config and have a look at this other repos by Theo for deployment guidance.

Thoughts on production readiness

The connectivity components of the setup are managed by Microsoft or SAP for enterprise-grade apps. MySQL and MariaDB as PaaS offerings come with built-in disaster recovery with geo-redundancy and high-availability features to meet your business requirements.

SAP's development best practices are untouched. You code your apps without any need to know about the private link service; you simply bind the BTP service and use it as is.

All your traffic stays on the Microsoft backbone once it enters the PLS on BTP.

For a complete development practice, and to strive towards cloud-native development, I would recommend adding CI/CD capabilities to your setup. That way you can ensure consistency between your app logic and database model changes. Otherwise, it becomes a challenge to keep up with daily releases using canary approaches, for instance. Have a look at my introductory post on this topic for more details.

Azure Pipelines and GitHub Actions support MySQL natively, and the community covers MariaDB. The approaches can be mapped to Jenkins too. At first glance I didn’t find any popular projects to link here yet. Do you happen to know any? Leave them in the comments and I will add them 😊

Regarding database access, it is advisable to create a dedicated non-admin user for your app connection. Only for simplicity did I re-use the pre-provisioned admin in this blog.
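For example, a least-privilege app user in MySQL could look like the following; the user name, password, and schema are placeholders, and you should grant only what your app actually needs:

```sql
-- Create a dedicated application user instead of re-using the admin.
CREATE USER 'btp_app'@'%' IDENTIFIED BY '<strong-password>';

-- Grant only data-access rights on the app's own schema.
GRANT SELECT, INSERT, UPDATE, DELETE ON cookbooks.* TO 'btp_app'@'%';
FLUSH PRIVILEGES;
```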

Given that the Azure side of the house already supports enterprise-grade apps today, you are all set to move ahead once the PLS beta implementation enters general availability.

Considering the above, the beta is already advanced enough for non-prod scenarios, in my opinion.

Further Reading

Final Words

Uhh, that was quite the ride. We shed some light on the options that SAP BTP offers today to connect to databases on Azure (learn more about the complete spectrum here). After that, we built on top of the new beta features of BTP PLS that enable MySQL and MariaDB on Azure.

In addition, we saw how to make them truly private, investigated the app implementation using a Java servlet, made a small detour on how to get started with the SAP Cloud Application Programming Model for MySQL, and finished with some thoughts on the production readiness of the architecture.

Any further inputs from you @Developers?

Find the related GitHub repos here.

As always feel free to ask lots of follow-up questions.


Best Regards



Comments

Rajesh PS

      Martin Pankraz

I have a similar use case wherein Azure microservices upsert data to Cosmos DB and S4HANA reads it for further processing. I would like to understand which integration would be the more robust and future-proof solution:
1) Microservice -> Apache Kafka -> SAP PO -> SAP S4HANA
2) Microservice -> Cosmos DB -> S4HANA
3) Microservice -> HANA DB -> S4HANA
Martin Pankraz (Blog Post Author)

      Hi Rajesh PS,

It is hard to comment on that without knowing the larger requirements. In general, SAP recommends migrating from PO to SAP Cloud Integration.

Either way, you need compute to orchestrate communication between the microservice and your S4.



Rajesh PS

      Martin Pankraz

True, that's valid. But I wanted to understand whether we can connect SAP S4HANA or SAP BTP to Cosmos DB. Is there any way to do this with a best-practice, SAP-recommended approach?
Rajesh PS

      Martin Pankraz

Microservice -> HANA DB/Cosmos DB -> S4HANA: this approach requires no middleware (BTP IS/CPI/PO) and reduces hops in the integration pipeline.
Is it possible to use OData services/APIs or DB-mirroring-style replication from S4H?
The requirement is simple: events are orchestrated in microservices with data enrichment and logic/rules, then sent to Cosmos DB (5,000 events per day, each not exceeding 1 MB).
Apache Kafka -> Azure microservices -> Cosmos DB -> ? -> SAP S4HANA
Martin Pankraz (Blog Post Author)

Technology-wise it is very much possible and offered through various parties. Have a look here for an example. You may of course hit Cosmos DB directly without a messaging component, depending on your requirements. We maintain an ABAP SDK to simplify integration with Microsoft services.



Rajesh PS

      Martin Pankraz

But custom coding/SDKs bring a lot of problems and surprises in the future, like new features not being adopted, frequent version changes, high dependency on custom code and its related libraries, and occasional performance lockups. I see this library is 3 years old.
Any strong alternative solution from SAP? 🙂
Martin Pankraz (Blog Post Author)

      Double posted here.