This year has brought some of the most significant announcements in SAP's cloud computing journey. A new strategic partnership between SAP and Google was announced, focused on developing and integrating Google's cloud and machine learning solutions with SAP enterprise applications. We then witnessed the renaming of the platform-as-a-service offering from "HANA Cloud Platform" to "SAP Cloud Platform", which brought the general availability of Cloud Foundry within SAP Cloud Platform, together with the rollout of the multi-cloud architecture underneath. All of these announcements reflect the choice today's customers expect: extending the reach of the service to operate on the major public cloud infrastructures such as AWS, MS Azure and Google Cloud Platform.
The goal of this blog series is not to compare GCP with AWS or MS Azure. These are all solid platforms, and you will find no shortage of material to guide your cloud journey. Instead, the focus of this series is on the possibilities of the multi-cloud journey and how quickly one can migrate an existing S/4HANA instance to the Google Cloud Platform infrastructure-as-a-service offering.
Architecting your SAP landscape on GCP does require a mindset change to take advantage of the many GCP features on offer. You will also want to understand how SAP architectures leverage the various GCP services. The following picture shows the basic details of a 2-tier architecture running on Google Compute Engine:
- To access GCP, you will need a GCP account. If you do not already have one, sign up for GCP first.
- A basic understanding of the overall Google Cloud Platform landscape: knowing what is available and how all the parts work together in various scenarios. When running SAP NetWeaver on GCP, our primary focus is on the IaaS services offered through Google Compute Engine, Cloud Networking and Google Cloud Storage, as well as some platform-wide features such as the tooling.
- The SAP NetWeaver Planning Guide, which provides details you should follow when planning the migration of your existing SAP NetWeaver system.
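With the prerequisites in place, a 2-tier instance like the one pictured could be provisioned with the gcloud CLI roughly as sketched below. This is an illustrative outline only: the instance name, zone, machine type and disk sizes are placeholders and not certified SAP sizing advice, and the script prints the commands instead of executing them unless you opt in.

```shell
#!/usr/bin/env bash
# Sketch only: provision a single Compute Engine VM for a 2-tier SAP system.
# All names, zones and sizes below are illustrative placeholders.
set -u

# Print commands instead of executing them unless DRY_RUN=0 is set,
# so the sketch is safe to run without touching a real project.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run gcloud compute instances create s4hana-app-db \
  --zone australia-southeast1-a \
  --machine-type n1-highmem-32 \
  --boot-disk-size 100GB \
  --create-disk size=1024GB,type=pd-ssd

# A separate SSD persistent disk for the HANA data and log volumes is a
# common layout; adjust the sizes to your own HANA sizing report.
```

Run it as-is to review the commands, or set `DRY_RUN=0` once the placeholders have been replaced with your own values.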
To keep the exercise simple, we will focus on one relatively straightforward migration type: VM migration of SAP to GCP. This guide does not cover the specifics of deploying or patching the S/4HANA system. The baseline assumption for all of the work going forward is that you are on a supported maintenance level and have successfully completed all initial tests.
We will take an existing MS Azure deployment of S/4HANA as is, without any major change to the way its services work. We begin with the most highly patched system in our landscape (S/4HANA 1610 FP02 with HANA DB 2.0 SP1).
GCP supports 'lift and shift' of any VM via a process known as VM Import/Export. The joint CloudEndure/GCP VM Migration Service allows you to migrate virtual machines and physical servers from any existing environment. Log in to your GCP Console and navigate to Compute Engine to import your VM.
To use the CloudEndure VM Migration Service, you will need to link it to your Google Cloud Platform Console project using a service account key. You will need this service account key for the CloudEndure portal.
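The service account and key could be created with the gcloud CLI along these lines. The project ID, service account name and the `roles/compute.admin` role are assumptions chosen for illustration; check the CloudEndure documentation for the exact permissions it requires. As above, the script only prints the commands unless you opt in.

```shell
#!/usr/bin/env bash
# Sketch: create a service account whose JSON key is uploaded to the
# CloudEndure portal. Project and account names are placeholders.
set -u
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

PROJECT_ID="my-sap-migration"   # placeholder project ID
SA="cloudendure-migration"      # placeholder service-account name

run gcloud iam service-accounts create "$SA" \
  --project "$PROJECT_ID" --display-name "CloudEndure VM migration"

# CloudEndure needs rights to create Compute Engine resources in the
# target project (assumed role; verify against CloudEndure's docs).
run gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/compute.admin"

# The generated JSON key file is what you upload in the CloudEndure portal.
run gcloud iam service-accounts keys create cloudendure-key.json \
  --iam-account "${SA}@${PROJECT_ID}.iam.gserviceaccount.com"
```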
The CloudEndure import implicitly provisions enough disk space and VM capacity to match the source system. Below is a high-level network diagram showing the networking and port requirements for the migration setup and explaining how the VM migration process works. It is advisable to spend some time mapping out your individual network topology and port requirements. This weblog intentionally avoids the network security topic, as it requires special considerations that will be discussed in upcoming weblogs.
When the initial transfer is complete, CloudEndure enters 'continuous data replication mode', in which delta changes made to the source VM since the initial transfer are mirrored to the GCP target as snapshots of the initial replication disks. If you have not already stopped your running source SAP instance, do so now and wait for the delta replication to complete, then test* a cutover of the VM or proceed straight to cutting over.
* A test cutover starts a target VM on GCP without stopping continuous data replication from the source VM, so replication continues in case there is an issue with the test. Proceeding straight to cutover stops source VM replication, uninstalls the CloudEndure agent on the source VM and starts the target VM.
As you can see, three VMs were created on the target infrastructure. As per the source import, we expect two VMs for the S/4HANA system; the third VM is the temporary infrastructure required to facilitate CloudEndure's transfer of the source VM to GCP, and it disappears on cutover. The transfer from MS Azure to GCP in Sydney took around 4 hours (an un-optimised transfer rate for approximately 1.5 TB of data).
Once the target SAP system is started on GCP, we can compare it with the baseline created on the previous MS Azure infrastructure.
We do not need to go into too much detail here (transaction ST06), but you should be able to see that the application server is running on Google Compute Engine:
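Beyond ST06, a couple of quick command-line checks could confirm the cutover from the GCP and SAP sides. The instance-name filter and the instance number 00 below are placeholders, and the script again prints the commands rather than executing them unless you opt in.

```shell
#!/usr/bin/env bash
# Sketch: quick post-cutover checks. Names and instance numbers are
# placeholders; substitute your own values.
set -u
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Confirm the migrated VMs exist and are RUNNING on Compute Engine
# (filter pattern is a placeholder for your instance naming convention).
run gcloud compute instances list --filter "name~'s4hana'"

# On the application server itself, sapcontrol should report all
# processes GREEN (instance number 00 is a placeholder).
run sapcontrol -nr 00 -function GetProcessList
```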
The Fiori 2.0 Launchpad on SAP S/4HANA 1610 running on GCP – with fully functional Fiori 2.0 overview pages
This concludes part one “The Good” of this blog series. By now, you should have a high-level understanding of how Good it is to transfer a running S/4HANA 1610 instance to the GCP.
Please look out for part two of this blog series, where we will disclose "The Bad" experiences during the transfer process, followed by part three, "The Ugly".
I trust this blog has provided some insight into how GCP can form part of a robust multi-cloud roadmap for your organisation, and of course showcased how ready S/4HANA truly is for any cloud offering.
This initiative would not have been possible without the collaborative technical capabilities of the SAPWorks team.