
(even when you're the vendor)



I came to this issue when trying to assist one of our strategic partners.  They needed to perform some CloudFoundry operations across a large set (approximately 300) of existing customer applications, each set up in its own subaccount, org, and space.  Naturally, this is a perfect job for some sort of batch process, so I asked the partner to provide a list of the customers (orgs) and space names so that I could set up a test in my own account.  This way I could test the batch process and also work out a way of automating the kind of customer provisioning that is typical of multitenant applications.

WARNING: This blog post and the techniques described apply only to BTP global accounts of type "Feature Set B".  Many existing productive global accounts will be of type "Feature Set A", and the btp CLI will not work with them.  As of the writing of this blog post, new productive accounts and also trial accounts are created as type "Feature Set B".

If you need similar functionality now and have a "Feature Set A" account, you can try to use SAP Intelligent Robotic Process Automation or a "headless browser" technique described in this blog post.  Find out more about how to check the type of your global account here.

 

I've created a sample project with the results of my efforts, but I think it's an important lesson to walk through the process that got me to what ended up being a somewhat acceptable result.  If you're anxious to start hacking at my example, clone this repo and look into the tools/btp_cli_batch_create script first.

 

https://github.com/SAP-samples/btp-batch-admin

 

The Goal


 

My goal was to perform these tasks in a loop and make sure any asynchronous tasks were allowed to complete before continuing.  There may be a way to tighten up the timing (like spawning off the deploy step in a separate process), but I didn't want to get into any additional dependencies.

While reading lines from a text file of colon-separated org and space names...

  1. Create a new subaccount

  2. Enable the CloudFoundry environment in the new subaccount (this creates an org as well)

  3. Give the subaccount enough entitlements to allow a deploy of a simple app

  4. Create a CloudFoundry space in the new CloudFoundry org

  5. Permit a HanaCloud instance in a different org/space to accept connections from this new org/space

  6. Deploy the simple app that depends on those entitlements and permissions


There is also a delete version of the batch file that does basically the reverse of what's described above, but I won't go into detail on that one.
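To make the overall shape concrete, here is a minimal sketch of the driver loop, with the steps above as placeholder comments.  The variable and loop structure are illustrative, not the actual script.

# Minimal sketch of the driver loop (illustrative, not the actual script)
while IFS=':' read -r org space; do
  echo "Provisioning org=$org space=$space"
  # 1. create subaccount        2. enable CloudFoundry
  # 3. assign entitlements      4. create space
  # 5. permit HanaCloud access  6. deploy the app
done < org1.txt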

 

Approach


 

I wanted the sample to be written in a batch language, so I picked bash, since this is what most DevOps folks are likely familiar with, and they will need to modify some of the global values or extend the script to be more flexible.  I could have written it in NodeJS or Python or some other programming language, but I didn't want the user to have to set up anything else for such a basic script.  Also, bash is available on most desktops, like Mac (which I'm using) or Windows 10 (with Windows Subsystem for Linux), and even in SAP's cloud-based Business Application Studio (BAS).

However, initially I wanted to exercise the API directly, the same way you would if you were writing a utility program in NodeJS, for instance.  More on this later.  I would have used the curl command-line program to invoke the API directly, and I actually ended up doing some of this eventually, but for other reasons that we'll get to.

In the current incarnation of this sample, I am using two different command-line programs to perform the steps, except for step #5 above.  If you don't need to do something like step #5, life will be simpler for you, but I think figuring out a way to tackle #5 is in itself a useful exercise.

 

Business Technology Platform (BTP) Command-Line Interface (CLI)


 

Steps 1-3 are account tasks that can be accomplished with the btp CLI tool.  You can find the tool for your platform on the SAP tools page: https://tools.hana.ondemand.com/#cloud .  Scroll down to the section titled SAP BTP Command Line Interface (btp CLI) and follow the directions to install the version for your platform.

If you want to use BAS instead of your local system, create a DevSpace of type SAP Cloud Business Application and check the SAP HANA Tools additional SAP extension.  After you launch BAS, open a new terminal window and execute these commands.
mkdir -p /home/user/default-plugins
cd /home/user/default-plugins
curl -LO https://open-vsx.org/api/sap-partner-eng/sap-bas-extras/0.0.19/file/sap-partner-eng.sap-bas-extras-0...
echo "Stop/Start this DevSpace to load the extensions."

Stop and restart your DevSpace by going to your browser URL for BAS, removing everything to the right of index.html (including the #), and hitting enter.  This will bring you back to the DevSpace manager page.  I don't know why there isn't an easier way to do this.  Hit the icon that's a circle with a square in it (stop), then hit it again once it's stopped to restart.

Click the DevSpace name to launch it in your browser again.  If everything worked, you should find a bunch of new commands that start with BAS: under the View -> Find Command... menu item.

The sample scripts rely on another command-line tool called jq (short for JSON Query) that makes parsing JSON responses much easier and more reliable.  In BAS, it is installed as a side effect of installing something called NOTROOT.  NOTROOT is a way of installing many of the standard Linux packages without being the root user.  While this works for many things, a package that relies on binaries and libraries being in certain system-defined locations may not work as expected.  Still, it's a handy approach, and you can find out more here.

Under View -> Find Command... click BAS: Install NOTROOT and monitor its progress under View -> Output, selecting the NOTROOT Installer in the output window's dropdown.  Open a new terminal window and check that jq is available with this.
jq --version

If you are using your local desktop, you'll need to follow your platform's instructions for installing packages and find the appropriate jq package.

If you are using BAS, you can now install the btp CLI: under View -> Find Command... click BAS: Install SAP BTP CLI and monitor its progress under View -> Output, selecting the SAP BTP CLI Installer in the output window's pull-down.

 

Cloud Foundry (CF) CLI


 

In BAS, the cf CLI is installed by default.  If you're using Mac or Windows 10, follow the instructions here to get the CLI itself and here for the MultiApps plugin for Multi-Target Application (MTA) deployment features.  Make sure you can log in and target an org/space.

 

Digging into the Batch


 

Use git to clone this sample repo locally.

https://github.com/SAP-samples/btp-batch-admin

The repo at the top level is a fully functional Multi-Target Application that can be loaded into BAS, VSCode, or Codium, built as an MTAR, and deployed to CloudFoundry.  The batch files are located in a folder called tools.  We will look primarily at the btp_cli_batch_create bash script.  The _delete version undoes what the _create version does.  The btp_api_batch variations are a work in progress that perform the same steps by calling the API directly.  I'll post a follow-up blog post to cover them.

 

Before attempting to run this batch file, make sure that you've got a valid logged-in context for both the btp and cf tools.  Otherwise you'll see messages like "Refresh token expired, please log in again".

For btp:
btp login --url https://cpcli.cf.eu10.hana.ondemand.com --subdomain <global_subdomain> --user <user_email>

For cf:
cf api https://api.cf.<landscape>.hana.ondemand.com
cf login -u <email_name>

You may have to log in again after a while if your session times out.  On to the script.

 

In the first section I have a bunch of variables set up to conditionally execute sections of the batch file.


I like to put the commands that will be run into a variable first and then echo it out to make sure it looks right before I attempt to execute it.  The do_run and do_echo variables control whether I want to see the actual commands or actually execute them.  You can set these to 0 to disable.  The force variable allows you to pass a "-f" on the command line and override the do_run setting.  This way you don't have to change the file itself to actually run it once you're happy with what you see.
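The pattern boils down to something like this (a sketch using the script's variable names; the command shown is just an example):

# Echo-before-execute pattern (sketch; the real script wraps every command this way)
do_echo=1
do_run=0
[ "$1" = "-f" ] && do_run=1   # the force flag overrides do_run
cmd='btp --format json list accounts/subaccount'
[ "$do_echo" = 1 ] && echo "$cmd"
[ "$do_run" = 1 ] && eval "$cmd"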

The next section is a bunch of globals you'll have to set for your situation.


The global account subdomain can be found on its subaccount page.

 

hanacloudorg is the org where your HanaCloud instance is located.

 

hanacloudspace is the space where your HanaCloud instance is located.

 

Get hanacloudspaceguid with: cf space prod --guid (substituting your HanaCloud space name for prod).

 

Get hanaclouddbinstguid with: cf service <hana_cloud_service_name> --guid

 

cloudfoundryuser is the email of the user that will have initial ownership of the space.

relativepathtomtar is the file path to the generated mtar that will be deployed in each org/space.

 

 

Each line of the input file looks like this.  I just used the top ten cryptos as of this writing as my test input.  You'll find the whole list in orgs10.txt, while this is the contents of org1.txt.  Start small.

 
orgBitcoin:spaceBTC

 

This section just splits the line at the colon and puts the bits into the $org and $space variables.
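One way to do that split with plain bash parameter expansion (a sketch; the actual script may use a different mechanism):

# Split "orgBitcoin:spaceBTC" at the colon into $org and $space
org="${line%%:*}"
space="${line#*:}"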


Now let's look at the first invocation of the btp command.  As mentioned, I like to put it into a variable first so that I can see that the other variables that comprise the command are getting set as I expect, without actually running the command.


Line 62 is truncated, but let's look at the gist of it.  First, I'm sticking the result into a variable called lastsubguid.  This is a string that is returned by the btp command.  Also, by passing it --format json, I'm asking it to give me valid JSON instead of just text as output.  This is important when we pass the result to the jq command.  The trick here is to use jq to pick out the guid of the subaccount that we just asked it to create.
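In spirit, that line looks something like this (a sketch, not the exact truncated line from the script; the flag values are illustrative):

# Create the subaccount and capture its guid from the JSON response
lastsubguid=$(btp --format json create accounts/subaccount \
  --display-name "$org" --subdomain "$org" --region us21 \
  | jq -r '.guid')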

HINT:  First run the script with $do_run turned off but with $do_echo turned on.  Then cut and paste the command starting with the btp part and see what gets returned in the result and how you want to handle it.  Build up slowly...
btp list accounts/subaccount

The results are easy to read but parsing the output would be a pain.


Add the --format json option to request the results as json.
btp --format json list accounts/subaccount


Now we can see that an array called "value" is being returned and its elements are objects.

This is where the magic of jq comes in.  Pipe the json output to jq and start asking jq to parse it.
btp --format json list accounts/subaccount | jq .

Just return the member called "value".  I'm piping the output through head so I can look at the beginning.
btp --format json list accounts/subaccount | jq .value | head -n 5


If we ask for the array elements with [], we get them as a list.
btp --format json list accounts/subaccount | jq .value[] | head -n 5


Now we need to select which element of the list we want.  The jq command itself can pipe its output back to itself for further operations.  Since we are piping the final output, we need a way to distinguish what part is for the jq command and what is handled after the jq command is done.  This is accomplished by wrapping everything you want jq to handle in single quotes.  Here we pipe the array output to jq's select function and look for a specific element with a specific displayName.
btp --format json list accounts/subaccount | jq '.value[] | select(.displayName == "subA")'

Now we're getting just that element from the list.


Say we want to pick out the guid member of that element.  We continue piping within the context of what jq is parsing.



btp --format json list accounts/subaccount | jq '.value[] | select(.displayName == "subA") | .guid'


Almost there, but we don't want the double quotes.  We can use the tr command to translate the double quotes to nothing.  We do this by piping jq's output to tr, but that needs to happen outside of jq's single quotes.
btp --format json list accounts/subaccount | jq '.value[] | select(.displayName == "subA") | .guid' | tr -ds '"' ''


By taking it a little at a time, we can build up the command we want and not get lost.  If you want all the details of what the jq command can do, look at the docs here.

When you try to take the above btp command and stick it in a string, you run into the issue of wanting to wrap the whole command in single quotes.  Once you do this, you need to replace any desired single quotes with a single quote inside double quotes, inside single quotes.  That is why this:
| tr -ds '"' ''

Turns into this in the script.
| tr -ds '"'"'"'"'"' '"'"''"'"'

When scripting, you have to be careful to understand how to control what gets expanded, and when.

Now where were we?  Oh yes, checking to see if the subaccount had been successfully created.

 

Since the account is created asynchronously, we need to check to see if it's ready.  That's what the next section does, by repeatedly getting its state.


When the state reports "OK", we can move on to enabling the CloudFoundry environment in it.
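The waiting loop amounts to something like this (a sketch of the pattern; the sleep interval is illustrative):

# Poll the subaccount until its state reports OK
state=""
while [ "$state" != "OK" ]; do
  sleep 5
  state=$(btp --format json get accounts/subaccount "$lastsubguid" | jq -r '.state')
  echo "Subaccount state: $state"
done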

 

The next step is to enable the CloudFoundry environment inside the newly minted subaccount.  This is pretty much the same, but using the btp create accounts/environment-instance subcommand.  You can see I was trying to use curl on line 82 but couldn't figure out how to specify the subaccount properly.  I left some links to the docs, but still couldn't figure it out.  The same method checks whether it's finished, this time looping until status == Processed.
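For reference, the enable step looks roughly like this (a hedged sketch; the plan and parameter names are illustrative, so check the btp CLI help for your landscape):

# Enable the CloudFoundry environment; the org name is passed as a parameter
btp --format json create accounts/environment-instance \
  --subaccount "$lastsubguid" \
  --environment cloudfoundry --service cloudfoundry --plan standard \
  --display-name "$org" --parameters "{\"instance_name\": \"$org\"}"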


The next section of the script performs the entitlement.  Use the docs and the methods described above until it's working the way you want.
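An entitlement assignment takes roughly this form (a sketch; substitute the service and plan your app actually needs):

# Grant the new subaccount quota for a required service plan
btp assign accounts/entitlement \
  --to-subaccount "$lastsubguid" \
  --for-service <service_name> --plan <plan_name> --amount 1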


Now, before we can permit deploys into a new space, we need to create it first.  That's what the next section of the script does, and it also gets the relevant guids for later.  This should start looking familiar by now.
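With the cf CLI, that part is roughly (a sketch):

# Target the new org, create the space, and capture its guid for later
cf target -o "$org"
cf create-space "$space" -o "$org"
newspaceguid=$(cf space "$space" --guid)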


 

Avoiding Vendor Lock-In


 

If you were wondering when I was going to get to the vendor lock-in part of the post that I tease in the title, well, this is it.  Before HanaCloud there was Hana As A Service (HaaS), which was closer to a virtualized on-premise version of Hana.  When you created a service instance in CF, you could control which other orgs/spaces within the same global account were allowed to connect to it.  This was done by performing an update-service command and passing the specifics.  This would be a naturally scriptable way to do it.

For some reason this doesn't currently work the same way in HanaCloud, or I'm completely missing the documentation somewhere.  You can see in my comments the way it was working in HaaS.


But wait, you say.  I know I can do this with the Hana Cockpit!



 

This comes up quite often when you're trying to do something that isn't well documented, isn't documented at all, is missing, or is broken.  By the time this post hits the presses, this issue may have been fixed, but the point is that there are things we can do in the interim.

Whether it's SAP or any other vendor for that matter, you want to feel that you can get your job done even when things are misunderstood or not working the way you want or need them to.  One thing that helps with transparency is when vendors publish their projects under open source licenses.  That way, at the least, you can inspect the source code to see what it's doing (or not doing).

Another thing we can leverage is the fact that modern UIs often use an internal API to perform their tasks.  Why not call these same APIs the same way the UI does, but from our batch file?

 

HUGE CAVEAT!:  Since the UI is loaded from the server pretty much any time your browser refreshes, and the back-end it depends on can change without warning or reason, the UI developer and the back-end developer can coordinate their changes without any apparent impact on the user of the UI.  When you call the API directly, the API can change without warning.  These interfaces are often for internal use and aren't published or supported in any way.  As a result, any time you depend on an API like this, your result will by definition be fragile and may break at any time.  However, you might just need it to work long enough to get your work done so that you can move on to your next project.  That's the assumption we'll continue with here.

 

Let's poke under the covers of the browser and see what's happening.  I'm using Chrome on a Mac, so your situation may be a bit different, but most browsers allow you to pull up the developer tools.


You should see something like this at the bottom of your browser window.  Make sure you're in the "Network" tab.  Initiate a create mapping request by clicking "Create Mapping".  Select an Org/Space and, before clicking "Add", click the clear icon to the left.  Now click "Add".


You should see 2 new transactions in the lower section of the screen.  The first one is the create mapping request.  Note the URL, the method (PUT), and the JSON payload that contains the details of the org/space, etc.  The subaccount is part of the URL.

This is what I used to form the curl command on line 172.  If you try to re-create the same PUT request manually in some tool (say Postman, for example), you will find that it fails.  Why?

The reason is that when you manually create the request, you are not supplying a valid logged-in context.  This is usually passed to the backend in some combination of headers (cookies).  You have to figure out what's important to replicate.  In this case I know that I need to preserve the JSESSIONID cookie and the x-csrf-token in order for the PUT to be accepted.  You may even need to "spoof" the User-Agent so that you look like a real browser.


Now, you could cut/paste these values into your manual request, but they are a chore not to mess up, and they often come with a validity period that might expire before you've had a chance to run your batch to completion.  Who knows, they may even be tied to a specific IP.  Better to find a way to pick them up automatically and supply them on subsequent requests automatically as well.
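To illustrate the general shape of that dance in plain curl (hedged: the host and endpoints are placeholders, not the actual cockpit API):

# Fetch the session cookie and CSRF token, then replay both on the PUT
token=$(curl -s -c cookies.txt -D - -o /dev/null \
  -H "x-csrf-token: fetch" https://<cockpit_host>/<some_get_endpoint> \
  | grep -i '^x-csrf-token:' | tr -d '\r' | awk '{print $2}')
curl -s -b cookies.txt -H "x-csrf-token: $token" \
  -H "Content-Type: application/json" \
  -X PUT -d "$mapping_json" \
  https://<cockpit_host>/<mapping_endpoint>
# $mapping_json is the payload captured from the browser's Network tab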

 

Debugging By Proxy


 

While the browser developer tools are great for gaining insight, they don't help you much in a batch-processing context.  There are many client-based proxy debuggers (Fiddler, Charles Proxy, Proxyman), but they all rely on the desktop windowing system and aren't consistent across platforms.  The one I like best is mitmproxy.  It runs in a console in interactive mode, or just logs to stdout, and it's customizable with Python so you can script it as well.  Oh, and it's open source to boot and can be installed with the Python package manager pip.  So wherever Python can run, mitmproxy can run.  This is handy when you want to run it on a cloud server without any user interface.
python -m pip install --upgrade pip
pip install pipx
pip install mitmproxy

Run in interactive mode.
mitmproxy

You should see your terminal look like this.


The *:8080 tells you that it's listening on all IP addresses and port 8080.  Now test it with this curl command.
curl -x localhost:8080 https://jsonplaceholder.typicode.com/posts/1

You should see a single "flow" which is a request/response pair.


Press Enter while in the terminal window to bring up the request detail.


Hit the tab key to move to the response detail.


We see the content-type is application/json.  Now use the down-arrow to scroll to the response payload.


Use the "q" key to back out of the section and exit mitmproxy.

If you are in the root of the sample repo, you can invoke mitmproxy like this and pick up the configuration provided.  Otherwise it will use the default configuration found in ~/.mitmproxy.
mitmproxy --set confdir=./mitmproxy

 

The next thing to do is to set up a browser on your local system to use the proxy debugger.  I've found that most browsers like to pick up the system's proxy settings, but that means all the traffic generated by your system (and you'd be surprised how much there is) gets sent through the proxy.  A better pick is the Firefox browser.  I like Firefox for this type of use because it ignores the system settings and abides by its own configuration.  It also uses its own certificate store, which is handy when you want to override certificate permissions for testing but don't want to open up your entire system.  You can get Firefox if you don't already have it installed.


Configure it for manual proxy, localhost and port 8080 to match your mitmproxy settings.  Check the "Allow use for HTTPS" and click "OK".


Now when you browse using Firefox, you'll see all the traffic that flows through it.

In mitmproxy, press the "z" key to clear the window.

If you get "The proxy server is refusing connections" in Firefox, your mitmproxy is not running or is somehow stuck.

I have written a special configuration file and a Python script that modifies the requests so that certain headers are persisted between calls.  I won't go into the details, but if you look at the README.md file in the mitmproxy folder, it will show you how to run it with the script.
mitmproxy --set confdir=./mitmproxy -s ./mitmproxy/x-csrf-token.py

You'll see that it's set to only pay attention to and process requests from hana-cockpit.cfapps.us21.hana.ondemand.com.  You'll need to adjust the us21 part unless your landscape is Azure running in the US.


Press the "E" key to view the output.  Now using Firefox browser, go the the hana-cockpit screen and authenticate.  Then browse to the instance mapping screen.  Press the "G" key in mitmproxy to go to the end of the output.  Look for the message "Ready for batch operations!".  You can also invoke mitmproxy in non-interactive "dump" mode in order to view the output easier.
mitmdump --set confdir=./mitmproxy -s ./mitmproxy/x-csrf-token.py


Once you see this message, you can run your batch file, and it should work, picking up the current logged-in context.

The browser, or curl itself, will by default check the validity of the SSL certificates it receives.  When we run our requests through a debugging proxy, the proxy supplies its own certificate, and that won't match the original request.  You can either tell the client to make an exception for the certificate or tell it not to check the certificate at all.  The latter is what the --insecure flag does in curl.

This is why we are directing our curl command to use the proxy debugger and ignore the certificates returned to it.
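Concretely, the calls in the script take this general form (the URL is a placeholder):

# Send the request through mitmproxy and skip certificate verification
curl --insecure -x localhost:8080 https://<hana_cockpit_host>/<mapping_endpoint>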


 

The rest of the story


 

The rest of the script just targets the new org/space and performs a deploy.


You will need to prepare the mtar file by running the mbt command, but this blog post is getting way too long, so I'll leave that as an exercise for the reader.  The important thing to note here is that the deploy is synchronous, so the script will wait for its completion before continuing.  You might want to spawn a new shell and issue the "cf deploy" command there in order to free up the batch file to get started with the next loop.
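For orientation, a sketch of that tail end (build the MTAR once up front with mbt, then deploy into each org/space; the -f flag skips the confirmation prompt):

# Build once from the project root, then deploy into each new org/space
mbt build
cf target -o "$org" -s "$space"
cf deploy "$relativepathtomtar" -f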

 

In conclusion


 

If you're still with me, the most important thing I want to impress on you by going through all this detail is to not be intimidated by things that seem to be closed off from a vendor point of view.  There is almost always a workaround, and if there's not, at least you will be able to concisely describe the problem and drive it to resolution.  I hope I've inspired you to not be afraid to roll up your sleeves and get a little dirty.

 

Let me know if you have any questions or issues by leaving a comment below or, better yet, asking on the SAP community.

-Andrew






Partners: If you have a question, click here to ask it in the SAP Community.  Be sure to tag it with Partnership and leave your company name in the question so that we can better assist you.

 