Only a couple of days remain in 2013. I cast my last few glances into life’s rear-view mirror and watch the Christmas holiday fall away into the past. As much as I enjoyed the time off with family and friends, I always enjoy the view of the future from my perch inside the SAP Co-Innovation Lab, and I am happy to carve out some time at the office today to start prepping for a fast start to 2014.


I gave some thought to making my last blog of the year a predictable recap of all of our amazing projects, but with the future of the lab steadily embracing all things cloud, I feel more inclined to look back on a couple of COIL cloud projects done in and around the Cloudframe initiative that I think set the stage for the project focus in 2014. In an earlier blog post on the topic, I described COIL’s own affinity for cloud and how enabling the Cloudframe project work from COIL made perfect sense from COIL’s own perspective of cloud value. We have also seen first hand, through this project work and the commitment of the project teams and stakeholders, that SAP can and will continue to exceed customer expectations with an exceptional cloud strategy, executed to their benefit.

It came as no surprise to COIL when SAP announced (May 7) its HANA Enterprise Cloud (HEC), which the company is confident can address the aspirations and goals of its customers to run not only simple applications but mission-critical systems from the cloud in an open, real-time fashion, securely and without compromise. As the world comes to understand the features and functionality of HEC today, we can tell you that much of what you find there was first explored in the COIL.


A glimpse into the Cloudframe project work shows why the value propositions of SAP HEC resonate even more when we consider the time and energy committed to the ongoing work in COIL, and how its results contribute to the productive, innovative services and overall delivery now possible from the HANA Enterprise Cloud. The richness of HEC is built on countless hours of highly iterative engineering engagements and tacit knowledge exchange among the SAP colleagues and partners continuously active in co-innovation work. The successful outcomes of this project work demonstrate how adept SAP and its partners have become at exploring new and optimal ways to deliver SAP platforms and applications from the cloud.


Customers respond to HEC’s compelling vision for myriad reasons, but one business driver is certainly the same one that most influenced COIL’s own need for cloud architecture: the simple desire to address power and cooling challenges while simultaneously optimizing how cloud resources and capabilities are implemented and managed with a limited pool of skilled technical talent. These are real constraints today for anyone tasked with meeting the computational needs of a business while reducing known dependencies and time to provision.


The COIL Cloudframe Project(s)

The Cloudframe project work at COIL has become a key enabler of HEC, automating the provisioning of large-scale SAP HANA systems and reducing provisioning time from days to minutes, all done in the cloud using an open, multi-vendor approach. The scope of the Cloudframe project work includes exploring and optimizing the use of high-speed interconnects and fabrics, using lower-cost Intel x86 Xeon computing technology, and leveraging service provider-centric storage. Additionally, the team is exploring new persistence models for in-memory computing and developing cloud manageability frameworks for elastic, physical provisioning and management of high-performance compute, network, and storage for large-scale in-memory computing.


There are multiple COIL projects which consider the SAP HANA database a powerful solution for the analysis and management of Big Data. Necessarily, such a solution requires a highly tuned cluster of compute, storage, and networking nodes. With today’s converged-infrastructure focus in the datacenter, there are several frameworks and utilities available to build an SAP HANA compliant cluster.


In an effort to unify the integration of SAP HANA into several vendor datacenter solutions, SAP created a thin provisioning and orchestration framework called SAP Cloudframe. Using Cloudframe, developers have a unified API for creating, expanding, reducing, suspending, and resuming HANA databases.

Such unification allows applications to point at different SAP HANA service providers and use the same API to provision the SAP HANA service. In this way, applications can use public cloud HANA service offerings for test and development phases, switch to the HANA Enterprise Cloud for qualification and production, or point to an on-premise SAP HANA Cloudframe instance and run production there. By taking advantage of existing software-defined networking and dedicated high-speed communication fabrics, and by being visionary in its approach to multi-level certificate-based security, Cloudframe has enabled a software company like SAP to create a vast resource pool of high-performance machines with cloud-like properties.
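Cloudframe’s actual interface has not been published, but the lifecycle described here (create, expand, reduce, suspend, resume, with the provider swappable behind the same calls) can be sketched as a thin provisioning facade. Everything below, the class, the method names, and the in-memory state, is a hypothetical illustration in the spirit of the framework, not the real Cloudframe API.

```python
# Hypothetical sketch of a unified HANA provisioning facade.
# Names and state handling are illustrative only, not the actual Cloudframe API.

class HanaProvisioner:
    """Uniform lifecycle operations against any HANA service provider."""

    def __init__(self, provider):
        # 'provider' could be a public cloud trial, HEC, or an on-premise
        # Cloudframe instance; the calling code stays identical.
        self.provider = provider
        self.instances = {}

    def create(self, name, nodes=1):
        # Provision a new HANA instance with the requested node count.
        self.instances[name] = {"nodes": nodes, "state": "running"}
        return self.instances[name]

    def expand(self, name, extra_nodes):
        # Scale out: add nodes to an existing instance.
        self.instances[name]["nodes"] += extra_nodes

    def reduce(self, name, fewer_nodes):
        # Scale in, never dropping below a single node.
        inst = self.instances[name]
        inst["nodes"] = max(1, inst["nodes"] - fewer_nodes)

    def suspend(self, name):
        self.instances[name]["state"] = "suspended"

    def resume(self, name):
        self.instances[name]["state"] = "running"


# Pointing the same application at a different provider is just a
# constructor change:
dev = HanaProvisioner(provider="public-cloud-trial")
dev.create("analytics", nodes=2)
dev.expand("analytics", extra_nodes=2)   # scale out for a load test
dev.suspend("analytics")                 # park it between test cycles
```

Moving from a test-and-development provider to HEC for qualification and production would then be a one-argument change, which is exactly the unification the post describes.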


While we sometimes speak of the Cloudframe project at COIL as if it were one project, it is really an initiative comprised of many projects. Here is a list of some of the key Cloudframe projects that have taken place in the COIL since early 2011:

HA Storage integration with private HANA SAN
40GbE for private HANA Cluster Network
SDN development for Cloud Frame Provisioning
High Density/Power Saving Cells for Cloud Frame delivery in HEC

For the sake of keeping this to a manageable blog posting (and yes, I am aware that brevity and I often part company when I blog), I’ll share some detail from just one of these project focus areas: SDN development for Cloudframe provisioning, if for no other reason than that it continues to receive a lot of industry attention and relates to the overall expansion of virtualization in the network.

However, one contrast of interest is that the Cloudframe infrastructure is not predicated upon virtualization. Instead, the Cloudframe initiative emphasizes a vertical architecture that alleviates the need for virtualization by providing its own management tools, which offer similar flexibility while preserving the full performance of bare metal.


We already know SAP HANA as a data platform is game changing for the industry. Its core architecture comprises a columnar database and a data processing engine, and it presents an entirely new paradigm for data management, application development, and analytics. It furnishes linear scalability, so its special capabilities can be readily delivered from public, private, or hybrid clouds.

Both historically and today, databases are often scaled vertically by adding resources, most specifically memory, to the existing logical unit. By adding more memory to the SQL server, and more resources in general to address the identified problem, an IT implementer always wants to obtain more effective use of the available infrastructure resources and technology.

Some solutions attempt to offer a bit of scale-up and scale-out ability, but this is not necessary in an SAP HANA enterprise deployment, or as implemented from the cloud; there is no reason to deploy and manage separate cache servers.

With Cloudframe, SAP supports both vertical (scale-up) and horizontal (scale-out) scaling seamlessly. Cloudframe can alleviate interference between co-located deployments by entirely separating the two commonly encountered bottlenecks, networking and storage. The Cloudframe project’s focus is managing the infrastructure via a unified API for creating, expanding, reducing, suspending, and resuming HANA databases. This lends itself to a much better way of vertically scaling SAP HANA in the cloud without resorting to hybrid scaling, which adds complexity even when the results are acceptable.

The way we currently manage SAP HANA within the cell enables us to scale only horizontally (scale out); because we only have 1TB nodes, scale-up and scale-down are not options. This is a limitation of the current equipment, not of the architecture itself. The progression is to have multiple node sizes within the cell and HEC, with the degree of scale-up and scale-down bounded only by physical barriers (minimum and maximum server size).


Accelerator technologies are being investigated for both storage (flash arrays) and compute (FPGAs), but these have not yet made it into the standard design. We will take a closer look at this dimension of the project work further into the New Year. It seems safe to predict that in 2014, many in our industry will be looking to climb out over the top of the known I/O barriers, and it will be interesting to see if the breakthroughs remain anchored to hardware or if new concepts take root in SDN.


With respect to the SDN project work the Cloudframe team pursued at the COIL, it’s interesting to learn a bit about how the team collaborated with engineers from Arista Networks. Working with Arista led to an improved ability to derive more intelligence from the fabric itself. This starts with manageability, but there has been ongoing interest in taking it even further.


One very positive outcome of the two companies’ collaboration at COIL came during a team effort to develop the SAP HANA provisioning framework, where the team first set out to automate much of the network functionality: ACLs, VLANs, and so on. The team had planned to do this using the previous standard method of scripting ssh command lines and parsing return values. Through a number of iterative and useful discussions with Arista, it became evident that there was an opportunity to replace this approach with Arista’s eAPI. Following a thorough demonstration of the eAPI’s ease of use and power, the Cloudframe team was able to program the switch directly from the Python code used to develop the framework.
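Arista’s eAPI exposes the switch CLI as JSON-RPC over HTTP(S), so configuration that would otherwise be scripted over ssh and screen-scraped becomes an ordinary function call returning structured data. The sketch below builds a `runCmds` request using only the standard library; the hostname, credentials, and VLAN commands are placeholders, and the actual Cloudframe framework code is not public.

```python
import json
from urllib import request

def eapi_payload(cmds, req_id="coil-1"):
    """Build a JSON-RPC 2.0 request for the eAPI's runCmds method."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": req_id,
    }

def run_cmds(host, cmds):
    """POST the commands to the switch's /command-api endpoint."""
    body = json.dumps(eapi_payload(cmds)).encode()
    req = request.Request(
        "https://" + host + "/command-api",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    # Production code would also attach HTTP basic auth and verify TLS.
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Provisioning a VLAN for a HANA cluster becomes structured data in,
# structured data out, rather than parsing ssh terminal output:
vlan_cmds = ["enable", "configure", "vlan 100", "name hana-cluster"]
payload = eapi_payload(vlan_cmds)
```

The payoff the team saw is visible even in this sketch: the provisioning framework manipulates Python dictionaries and checked return values instead of analyzing command-line text.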

 

As we dive into the new year, I am very much looking forward to seeing HEC technology continue to progress and evolve from the efforts of the Cloudframe team. One related project we are eager to watch gain even more momentum focuses directly upon security in the cloud for SAP HANA. Natively, SAP HANA is secure, but with respect to the cloud there is a real imperative to explore security from the perspective of the entire stack. A COIL project including participants from SAP, Intel, Virtustream, and Vormetric directly examines how Virtustream can leverage the Vormetric Data Firewall in its cloud infrastructure as a means of delivering an enhanced security architecture, offering customers the capability to implement and fully control even stronger data encryption with more granular access controls. We will share more details as this project moves deeper into its second and third phases, but I already appreciate the defense-in-depth lessons being learned and applied.

There will be some new things for the COIL team to develop and implement across its own cloud infrastructure as it prepares not only to support these two important projects but to further enable an array of new cloud-focused projects already being proposed and developed. It’s an exciting time for our lab as we anticipate new projects between SAP and its partners spanning the entire stack, from applications to provisioning, deployment, and infrastructure. We look forward to seeing a spectrum of such projects, all hopefully producing results that will continue to influence cloud services and advance worldwide adoption of SAP HANA.
