on 08-01-2019 8:09 PM
Background:
We are evaluating systems for hosting S/4HANA on Google Cloud (a conversion from ECC):
(1) The sizing report run against ECC has estimated the initial memory requirement for SAP HANA to be ~16 TB
(2) The largest HANA-certified GCP VM today is an m2-ultramem-416 with 12 TiB of memory. Since our requirement (~16 TB) exceeds this, S/4HANA will need to be deployed in a scale-out architecture, which requires GCP VMs with clustering support enabled.
(3) The largest HANA-certified GCP VM today with clustering support enabled is a 4 TiB system (n1-ultramem-160).
(4) According to SAP note 2408419 (SAP S/4HANA - Multi-Node Support), S/4HANA scale-out on Intel hardware is only supported on nodes with a minimum of 8 CPUs and 6 TB RAM per node, up to a maximum of 4 nodes.
Given the above, the question is:
(a) What are the recommended VMs for hosting S/4HANA on GCP given the ~16 TB memory requirement and the constraints above? I assume we cannot use 4 x n1-ultramem-160 nodes, since note 2408419 does not support S/4HANA on nodes with less than 6 TB RAM.
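To make the reasoning in (a) concrete, here is a minimal sketch of the constraint arithmetic. The figures come from the post itself (the sizing estimate and SAP note 2408419); the function name and structure are purely illustrative, not an SAP or Google tool:

```python
# Constraint check for S/4HANA scale-out sizing, using the numbers above.
TOTAL_RAM_TB = 16      # sizing report estimate against ECC
MIN_NODE_RAM_TB = 6    # per-node floor from SAP note 2408419 (Intel)
MAX_NODES = 4          # node cap from SAP note 2408419

def scale_out_supported(node_ram_tb, total_tb=TOTAL_RAM_TB):
    """Return (supported, nodes_needed) for a given node memory size."""
    if node_ram_tb < MIN_NODE_RAM_TB:
        # Node is below the 6 TB floor, so it is unsupported
        # for S/4HANA scale-out regardless of node count.
        return (False, None)
    nodes = -(-total_tb // node_ram_tb)  # ceiling division
    return (nodes <= MAX_NODES, nodes)

# n1-ultramem-160 (4 TiB): clustering-capable, but under the 6 TB floor.
print(scale_out_supported(4))   # → (False, None)
# A hypothetical 6 TB node: ceil(16/6) = 3 nodes, within the 4-node cap.
print(scale_out_supported(6))   # → (True, 3)
```

This confirms the suspicion in the question: 4 x 4 TiB nodes fail the per-node minimum, so a supported scale-out would need nodes of at least 6 TB each.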
Thanks,
Rakesh
Mario,
Thanks for your thoughts and tips. We're working on the tests currently.
Regards,
Rakesh