Technical Articles

Custom Schema Separation in Multitenant Applications

A bit of background…

 

When approaching multitenant application development, you quickly come up against the topic of data persistence.  How do you keep each client's data separate from every other client's?

The answer lies in how critical it is that the data be separated and to what extent security around the handling of that data must be enforced.  Should separation happen in the application layer or in the database layer?

Several blog posts cover the background of multitenant development and discuss the theory of data separation.  Some are mentioned here.

Multitenancy Architecture on SAP Cloud Platform, Cloud Foundry environment

Using SaaS Provisioning Service to develop Multitenant application on SAP Cloud Platform, Cloud Foundry Environment

Developing Multitenant Applications on SAP Cloud Platform, Cloud Foundry environment

Developing multi-tenant applications on the SAP Cloud Platform- Introduction

There is also some guidance in the official SAP docs that addresses the subscription mechanism, but it doesn't address the requirement to store each client's data.

Developing Multitenant Business Applications in the Cloud Foundry Environment

Thankfully my colleague Philip of the SAP HANA Academy has put together a series of videos, with an accompanying project, that lays out the critical mechanisms and an approach using a single database schema (with a discriminator column).

Hands-on Video Tutorials for Developing Multitenant Business Applications

 

I’ve taken the groundwork of Philip’s sample framework, MTApp, and extended it to include a strategy for custom schema separation using HDI containers.  You can find the extended repo here.

https://github.com/alundesap/MTApp

 

Per-Client (Subscriber) schema separation with a common (Master) container

 

Although keeping your clients’ data separated is a key goal, you often don’t want to duplicate common data that all your clients need to access.  For data that is relatively static (think postal codes or configuration data) or that updates often (think stock prices or weather forecasts) but is common to all clients, you’ll need a strategy for sharing access from one common place.  This extended example illustrates such a strategy.

Keep in mind that this is one person’s idea of how to approach this issue; your project’s needs may differ significantly or have hard requirements that render this approach insufficient.

The project contains four modules.  The MTAppRouter and MTAppBackend modules are taken from the MTApp example, with some modifications to the MTAppBackend (which we’ll get to in a minute).

I’ve added two new database (DB) modules: one to hold the common data, called MTAppDB_Master, and one to be used as a per-client template, called MTAppDB_Client.  All the configuration that allows the client container to access the common container is provided in the example.
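In HDI, cross-container access is typically configured with synonyms (plus grants and a service binding on the consuming side).  As a rough sketch only — the object and schema names below are hypothetical, and the repo's MTAppDB_Client module is the authoritative version — a synonym in the client container pointing at a table in the Master container looks something like this:

```json
{
  "POSTAL_CODES": {
    "target": {
      "object": "POSTAL_CODES",
      "schema": "MTAPP_MASTER"
    }
  }
}
```

A real setup also needs a matching .hdbgrants file so the client container's runtime user is granted SELECT on the Master objects.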

 

Per-Client deployment doesn’t cut it

 

My first inclination was to create an MTA extension file (mtaext) that I could generate for each client with the client-specific info, then deploy the same project mtar file over and over, passing a unique mtaext file for each client.  This didn’t work due to the fact that, when you redeploy over an existing deployment, the current behavior is to unbind the existing containers.

I manually pushed a client container, bound it to the MTAppBackend of the already deployed project, and was able to convince myself that this would be a workable solution as long as I could programmatically distinguish which of the (now) multiple containers should be selected for data access.  This led me to create a few utility scripts to automate the process.  The bash scripts add_client and del_client in the tools folder do this and also set up the routes for the subscriber properly.  You may find that these need customization for your needs, but they should be fairly easy to modify.  The scripts were written for Linux, so if you have difficulty adapting them to your preferred platform, you can use the docker container I’ve put together with all the tools set up for you.  The scripts also echo the commands they will execute to the console, so you can cut and paste them one at a time to make sure they’re working as you expect.
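As a rough illustration only — the service and app names here mirror the example later in this post, but the actual add_client script in the tools folder is the authoritative version — onboarding a client boils down to assembling a short sequence of cf CLI calls:

```javascript
// Hypothetical sketch of the command sequence an add_client-style
// script assembles for one subscriber. Names are illustrative.
function addClientCommands(serviceName, subdomain) {
  return [
    // Create a tagged HDI container with a predictable schema name
    `cf create-service hana hdi-shared ${serviceName}` +
      ` -t subdomain:${subdomain}` +
      ` -c '{"schema": "${serviceName}_DEV"}'`,
    // Bind it to the backend and restage so the new binding is picked up
    `cf bind-service MTAppBackend ${serviceName}`,
    `cf restage MTAppBackend`,
  ];
}

// Echo the commands so they can be reviewed before running, as the
// real scripts do:
console.log(addClientCommands("ALPHA_V0", "sub1-multi").join("\n"));
```

The real scripts additionally map a route for the subscriber's subdomain, which is omitted here.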

 

Meeting in the middle(ware)

 

In order not to hard-code the mechanism for determining which container to select (following the https://12factor.net/config principles), I’m leveraging a convention of tagging each container in a way that lets me discriminate which is the proper one.  I did this when creating the container by passing the -t parameter.

cf create-service hana hdi-shared ALPHA_V0 -t subdomain:sub1-multi -c '{"schema": "ALPHA_V0_DEV"}'

Notice the -t subdomain:sub1-multi.  Here the subscriber’s subdomain is sub1-multi and must match its value in the subaccount configuration.  I’m also passing a configuration that forces the actual schema to be a human-legible name rather than the long, system-generated random name that’s normally produced when this is omitted.  This is nice when you want a predictable schema name for connecting via Business Objects or another non-SAP system.
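Under the hood, tags passed with -t end up in the service binding's metadata (VCAP_SERVICES), and the tag filter in @sap/xsenv simply selects the binding whose tag list contains the requested value.  Here is a self-contained sketch of that selection logic — the binding objects below are made up for illustration, not taken from a real environment:

```javascript
// Minimal stand-in for the tag-based lookup that xsenv.getServices
// performs over the app's bound services. Binding data is illustrative.
function findServiceByTag(bindings, tag) {
  return bindings.find((b) => (b.tags || []).includes(tag));
}

const boundServices = [
  { name: "ALPHA_V0", tags: ["hana", "subdomain:sub1-multi"],
    credentials: { schema: "ALPHA_V0_DEV" } },
  { name: "BETA_V0", tags: ["hana", "subdomain:sub2-multi"],
    credentials: { schema: "BETA_V0_DEV" } },
];

const match = findServiceByTag(boundServices, "subdomain:sub1-multi");
console.log(match.credentials.schema); // "ALPHA_V0_DEV"
```

With one bound container per subscriber, the subdomain from the incoming request is enough to pick the right credentials, which is exactly what the middleware below relies on.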

I wanted the schema (container) selection to live in a single place in my MTAppBackend code.  While I didn’t break it out as a separate file, the container selection code is consolidated into a middleware component, defined in the following section of the MTAppBackend/server.js file.

function mtMiddleware(req, res, next) {

	// Assumes the HDI container was tagged with a tag of the form
	// subdomain:<subdomain> and is bound to the MTAppBackend
	var tagStr = "subdomain:" + req.authInfo.subdomain;

	try {
		// xsenv throws if no bound hana service carries this tag
		var services = xsenv.getServices({
			hana: { tag: tagStr }
		});
		req.tenantContainer = services.hana;
	} catch (error) {
		// No container bound for this subscriber's subdomain
		return res.status(500).send(error.message);
	}

	next(); // Call the next request processing function
}

You can then add it to the request handling chain like so.

app.use(mtMiddleware);

Once this is set up, the details of the current client’s container are available on the request as req.tenantContainer, and you can connect to the container in the normal way.

hdbext.createConnection(req.tenantContainer, (err, client) => {
	// ... use client to query the tenant's schema
});

You could break this code out into a separate file and control access to it with source control, to limit your development team’s ability to tamper with the middleware component.

 

In conclusion…

 

This post focuses on an approach to implementing custom container schema separation of client(subscriber) data.  I invite you to look at the code in the project for more details.

 

https://github.com/alundesap/MTApp

 

Again, thanks to Philip for all the initial work!

 

-Andrew
