Announcements at SAPPHIRE NOW
Many of you have probably heard the announcements made at SAPPHIRE NOW 2017. Microsoft released a bunch of new offerings and announced quite a few exciting new products and features around the SAP and Microsoft story. If you want a quick summary, check out the blog by Jason Zander (Corporate Vice President of Microsoft Azure): https://azure.microsoft.com/de-de/blog/the-best-public-cloud-for-sap-workloads-gets-more-powerful/
SAP on Azure
Compared to this, I want to focus on something much more basic: the benefits of running your SAP workload on Azure. In the last few weeks I have talked to quite a few customers and heard the same story: reducing costs, becoming more agile, handling heavy productive SAP workloads, what about training systems, … The “heavy productive SAP workload” part is something that certainly can be done (and is already being done by several customers) with the big instances that are available on Azure (revisit the blog above: 20 TB of memory for a single instance on Azure! 20 terabytes of memory!), but customers still require plenty of small systems as well.
Coming from a strong SAP Gateway background, I really love the SAP developer system ES4 at http://sapes4.sapdevcenter.com (see https://www.sap.com/developer/how-tos/2017/02/gateway-signup-faq.html). I use this system when I want to follow a quick tutorial, or in my demos when consuming an OData service.
But sometimes I want to have a little more control. I want to be able to open my local SAP GUI, call the service builder and quickly create my own OData service. Thanks to Andre and others there are plenty of great blogs out there to get you started.
One of the drawbacks of not working at SAP anymore is that it is a little more challenging to get access to SAP systems that are just “sitting” around in your network, ready for you to work on (if you have the permissions).
In my new role at Microsoft this is unfortunately not the case…
Luckily I do have access to Microsoft Azure and also to the Service Marketplace.
Spinning up a first system
So the first thing I did when starting at Microsoft was to go through the process of getting an S-User. I have to admit that the process was not as easy as I had hoped (thanks to the support teams from both the SAP Partner Network and the official SAP Support, who worked on my ticket very quickly!), but I finally got everything I needed (including the important download permissions).
Downloading the software
The next step was to get the software. Again I have to admit: although the new Fiori-like UI is really nice, it was still a little complicated to figure out where to find what.
Once I had the required software I got going. Obviously I used Microsoft SQL Server as the database for my SAP NetWeaver system. Our MSFT/SAP team has created so many fantastic blogs on this topic that it is pretty easy to get started with installing NetWeaver (and lots of other SAP workloads) on Azure: https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started
When everything was up and running, I started to do the basic configurations and had my working SAP Gateway system up and running in no time.
In order to produce some load, I also ran SGEN, which caused a good spike in CPU utilization:
Snoozing the system
Once that was done, I actually didn’t need the system anymore for the time being. So I just stopped the SAP system and shut down (deallocated) the VM, and from then on I was no longer paying for the compute.
Over the week I used the system from time to time, and starting the VM and running its ABAP stack is extremely simple. In addition to an “ad-hoc” start, I could also configure a schedule for when the system should be shut down. By this simple “trick” I was able to reduce the paid uptime of my system from 24×7 to probably 12 hours per week. Obviously this is not possible for all systems, but think of your training systems, sandbox systems, regression test systems, … They don’t have to run 24×7, and by using a simple shutdown schedule you can save quite a lot.
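To get a feeling for what such a shutdown schedule is worth, here is a minimal back-of-the-envelope sketch. The hourly rate is a made-up placeholder, not an actual Azure price, and storage costs for the deallocated VM are ignored:

```python
# Rough estimate of the compute savings from a shutdown schedule.
# The hourly rate below is a placeholder, not an actual Azure price;
# deallocated VMs still incur storage costs, which are ignored here.

HOURS_PER_WEEK = 24 * 7          # always-on: 168 hours per week
SNOOZED_HOURS_PER_WEEK = 12      # the schedule described above

def weekly_compute_cost(hours_on, hourly_rate):
    """Compute cost for the hours a VM is actually running."""
    return hours_on * hourly_rate

rate = 0.50  # placeholder $/hour for a mid-size VM
always_on = weekly_compute_cost(HOURS_PER_WEEK, rate)
snoozed = weekly_compute_cost(SNOOZED_HOURS_PER_WEEK, rate)

saving = 1 - snoozed / always_on
print(f"Always-on: ${always_on:.2f}/week")
print(f"Snoozed:   ${snoozed:.2f}/week")
print(f"Saving:    {saving:.0%}")   # roughly 93% of the compute cost
```

Whatever the actual rate, the ratio is what matters: 12 hours out of 168 means you pay for about 7 percent of the compute you would pay for with an always-on system.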
Downsizing the system
One other thing that I learned (and saw when looking at the Azure optimization recommendations) was that the average CPU utilization of my system was extremely low. Yes, there were some spikes, but overall the system was idling quite a lot. So instead of the VM size I had originally used, I downsized my system. Why should I use a 12-core machine if 8 or maybe even 4 cores are more than enough for the work that I am doing?
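The downsizing decision can be sketched as a simple rule: pick the smallest size whose capacity still covers the observed peak with some headroom. The available core counts and the 70 percent headroom threshold below are illustrative assumptions, not Azure sizing guidance:

```python
# Toy right-sizing check: given observed CPU usage on a 12-core VM,
# find the smallest core count that still leaves headroom at peak.
# The sizes and the 70% headroom threshold are illustrative assumptions.

CURRENT_CORES = 12
AVAILABLE_CORES = [4, 8, 12, 16]   # hypothetical VM sizes to choose from
HEADROOM = 0.70                    # keep peak usage below 70% of capacity

def smallest_sufficient_size(avg_util, peak_util, current_cores, sizes):
    """Return the smallest size whose capacity covers the observed peak."""
    peak_cores_used = peak_util * current_cores
    for cores in sorted(sizes):
        if peak_cores_used <= cores * HEADROOM:
            return cores
    return current_cores  # nothing smaller fits; keep the current size

# A mostly idle system with occasional spikes (e.g. an SGEN run):
recommended = smallest_sufficient_size(
    avg_util=0.05, peak_util=0.45, current_cores=CURRENT_CORES,
    sizes=AVAILABLE_CORES)
print(f"Recommended size: {recommended} cores")  # 8 cores in this example
```

Note that it is the peak, not the average, that bounds the size: a system idling at 5 percent but spiking to 45 percent of 12 cores still needs more than 4 cores under this rule.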
Productive example from Microsoft IT
Later I heard something similar from Microsoft IT (which is a big SAP customer as well). Rick Ochs has a super interesting presentation about this, in which he shares some very interesting numbers! The average CPU utilization across VMs tends to be pretty low, so Microsoft IT analyzed the overall system usage and then recommended other VM sizes or other schedules to their users. Just a few highlights from the blog post at https://www.microsoft.com/itshowcase/Article/Content/861/Optimizing-resource-efficiency-in-Microsoft-Azure:
- A 38 percent reduction in cloud spending from optimization activities such as:
  - 9,000 Azure resize requests.
  - Almost 30,000 Azure virtual machine snooze requests.
  - 7.7 million cumulative virtual machine hours in the snoozed state, during which we weren’t being charged.
- An increase of almost 400 percent in CPU utilization. In six months, Microsoft IT has moved from 4.5 percent average CPU utilization to 16 percent average CPU utilization across our Azure IaaS instances.
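Why snoozing and resizing raise average utilization is easy to see with a small model: the same amount of actual work runs on fewer provisioned core-hours. The numbers below are illustrative, not Microsoft IT’s data:

```python
# Back-of-the-envelope check on why snoozing and resizing raise average
# CPU utilization: the same work runs on fewer provisioned core-hours.
# All numbers here are illustrative, not Microsoft IT's data.

def avg_utilization(work_core_hours, provisioned_core_hours):
    """Average utilization = work done / capacity provisioned."""
    return work_core_hours / provisioned_core_hours

work = 7.56  # core-hours of actual work per day (held fixed)

before = avg_utilization(work, 12 * 24)   # 12 cores, running 24h a day
after = avg_utilization(work, 8 * 12)     # 8 cores, running 12h a day

print(f"Before: {before:.1%} average utilization")
print(f"After:  {after:.1%} average utilization")
```

The provisioned capacity drops by two thirds while the work stays constant, so utilization triples and the bill shrinks accordingly. That is exactly the combined effect of resize and snooze requests at scale.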
Check out his video at: http://Aka.ms/itsca?id=885