Disclaimer: The views presented in this blog are my own and do not represent the views of my employer. This article is for idea and knowledge sharing; it is neither promoting any platform nor marketing-related.

 

SAP HANA on Google Cloud + NetApp Cloud Volumes Service: Resizing volume size and performance to fit your workload needs in a non-disruptive manner.

If your HANA instance is running on Google Cloud and uses NetApp Cloud Volumes Service (CVS), you can take advantage of its non-disruptive, flexible volume scaling to match your performance needs. It gives you the flexibility to increase or decrease the volume size during uptime, letting you balance performance against cost.

For example, you can easily increase the volume size to boost disk throughput and shorten the duration of HANA startup, data loading, system migration, S/4HANA conversion, import/export, and backup/restore. It also helps avoid system standstills or performance issues during critical workloads (month-end processing, a high volume of change activity, and so on) that can be caused by long savepoint durations resulting from disk I/O bottlenecks. Once the ad-hoc workload is completed, the volume can be scaled back during uptime to a size that fits your HANA database and still meets the HANA disk KPIs for normal operation, saving unnecessary cost.
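If you want to script the scale-up / scale-down instead of clicking through the console, a rough Python sketch could look like the one below. It calls the Cloud Volumes Service REST API with the requests library; the base URL, project number, region, volume ID and auth headers are placeholders/assumptions on my side, so check the NetApp CVS API documentation for the exact endpoint and payload in your region before using anything like this.

```python
import requests

# --- Illustrative values only: replace with your own CVS API details ---
CVS_API = "https://cloudvolumesgcp-api.netapp.com/v2"   # assumed base URL
PROJECT = "<gcp-project-number>"                        # placeholder
REGION = "<region>"                                     # e.g. "europe-west4"
VOLUME_ID = "<volume-id>"                               # placeholder
HEADERS = {"api-key": "<api-key>", "secret-key": "<secret-key>"}  # assumed auth headers

def resize_volume(size_gib: int) -> None:
    """Request a new allocated size (in GiB) for an existing CVS volume.

    CVS resizes are online, so the NFS mounts used by HANA stay available
    while the quota (and therefore the throughput ceiling) changes.
    """
    url = f"{CVS_API}/projects/{PROJECT}/locations/{REGION}/Volumes/{VOLUME_ID}"
    payload = {"quotaInBytes": size_gib * 1024**3}
    resp = requests.put(url, json=payload, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    print(f"Resize to {size_gib} GiB accepted: {resp.status_code}")

# Scale up before the heavy workload, scale back down afterwards
resize_volume(10 * 1024)   # 10 TiB for the ad-hoc workload
# ... run HANA startup / migration / backup ...
resize_volume(3 * 1024)    # back to 3 TiB for normal operation
```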

Below, you'll see how disk throughput varies between volume sizes.

Testing environment:
- HANA DB size: ~2 TB
- Row store size: ~120 GB (intentionally sized this way)
- The server is rebooted before initiating HANA startup, to ensure the row store is loaded from persistence instead of shared memory.

 



 

HANA Startup:


Although row store startup is I/O-intensive, the overall time is also significantly affected by the amount of log replay, undo, garbage collection, consistency checks, and so on.



With a 3 TB Performance Extreme volume, you can see the disk throughput averaging around 390 MB/s, and it takes close to 2 hours for HANA to return to full operation.
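If you want to measure the throughput yourself instead of reading it off the charts, one simple option is to sample HANA's cumulative I/O counters while the database is loading. Below is a minimal sketch using the hdbcli Python driver and the M_VOLUME_IO_TOTAL_STATISTICS monitoring view; the connection details are placeholders and the column names are my assumption of the standard view, so verify them against your HANA revision.

```python
import time
from hdbcli import dbapi   # SAP HANA Python client (pip install hdbcli)

# Placeholder connection details - adjust to your system
conn = dbapi.connect(address="hanahost", port=30015, user="MONITORING_USER", password="***")

# Cumulative bytes read/written across all data and log volumes
# (column names assumed from the standard monitoring view)
QUERY = """
    SELECT SUM(TOTAL_READ_SIZE), SUM(TOTAL_WRITE_SIZE)
    FROM M_VOLUME_IO_TOTAL_STATISTICS
"""

def sample():
    cur = conn.cursor()
    cur.execute(QUERY)
    row = cur.fetchone()
    cur.close()
    return int(row[0]), int(row[1])

interval = 60  # seconds between samples
r_prev, w_prev = sample()
for _ in range(30):                       # sample for roughly 30 minutes
    time.sleep(interval)
    r_now, w_now = sample()
    read_mbs = (r_now - r_prev) / interval / 1024**2
    write_mbs = (w_now - w_prev) / interval / 1024**2
    print(f"read ~{read_mbs:.0f} MB/s, write ~{write_mbs:.0f} MB/s")
    r_prev, w_prev = r_now, w_now

conn.close()
```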

Next, adjust the volume from 3 TB to 10 TB during uptime. Shut down HANA, disperse the row store shared memory, and issue "HDB start". You will notice that disk throughput dynamically increases up to 1 GB/s, and the time to a fully operational startup drops from 2 hours to around 50 minutes.
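To compare startup times between the two volume sizes in a repeatable way, it can help to wrap the restart in a small timer. The sketch below simply runs "HDB stop" / "HDB start" as the <sid>adm user and waits until a SQL connection succeeds again; host, port and credentials are placeholders. Note that it only measures the time until HANA accepts connections; full operation (column store reload finished) takes longer and is better tracked via the load trace or monitoring views. Clearing the row store shared memory between runs, as mentioned above, is assumed to be done separately.

```python
import subprocess
import time
from hdbcli import dbapi

HOST, PORT = "hanahost", 30015             # placeholders
USER, PASSWORD = "MONITORING_USER", "***"  # placeholders

# Run as <sid>adm so that the HDB command is on the PATH
subprocess.run(["HDB", "stop"], check=True)
# (clear the row store shared memory here so it is reloaded from persistence)

started = time.time()
subprocess.run(["HDB", "start"], check=True)

# Wait until the indexserver accepts SQL connections again
while True:
    try:
        dbapi.connect(address=HOST, port=PORT, user=USER, password=PASSWORD).close()
        break
    except dbapi.Error:
        time.sleep(15)

print(f"HANA accepting SQL connections after {(time.time() - started) / 60:.1f} minutes")
```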





After achieving what we wanted (shortening the startup and column store table reload duration), we can easily reduce the volume size back to its initial value without bringing HANA down, even with SGEN currently running.





During the reallocation back to 3 TB, there is no disruption or system standstill for the ongoing SGEN run:



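One way to convince yourself that the reallocation really is non-disruptive is to keep an eye on savepoint behaviour while it runs; an I/O stall would show up as unusually long savepoint or critical-phase durations. A hedged sketch, again with hdbcli and assuming the standard M_SAVEPOINTS monitoring view (column names and units should be verified for your revision):

```python
from hdbcli import dbapi

conn = dbapi.connect(address="hanahost", port=30015, user="MONITORING_USER", password="***")
cur = conn.cursor()

# Most recent savepoints: an I/O stall during the resize would show up here
# as unusually long durations (column names assumed; check your revision).
cur.execute("""
    SELECT TOP 10 START_TIME, DURATION, CRITICAL_PHASE_DURATION
    FROM M_SAVEPOINTS
    ORDER BY START_TIME DESC
""")
for start_time, duration, critical_phase in cur.fetchall():
    print(start_time, duration, critical_phase)

cur.close()
conn.close()
```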
NetApp CVS + Google Cloud is SAP HANA certified, and it provides the flexibility to scale volume size dynamically to fit performance needs in a non-disruptive manner. If your systems are running on this solution, play around with it in your sandbox environment and test the improvement on different workloads before running it on production systems.

Refer to https://cloud.netapp.com/blog/introduction-to-cloud-volumes-service-for-gcp for how to calculate and obtain the desired performance. It would also be interesting to hear the highest disk throughput you manage to achieve after playing with it in your own environment.
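As a quick sanity check against the numbers above: my understanding of the referenced post is that CVS throughput scales linearly with allocated capacity per service level, with the Extreme tier at roughly 128 MiB/s per TiB (please verify against the current NetApp documentation, as tiers and limits can change). That lines up nicely with the measurements in this test:

```python
# Approximate CVS throughput model: allocated capacity x per-TiB rate for the tier.
# The 128 MiB/s-per-TiB figure for the Extreme tier is my reading of the
# referenced NetApp post; confirm it against the current documentation.
EXTREME_MIB_PER_TIB = 128

for size_tib in (3, 10):
    print(f"{size_tib} TiB -> ~{size_tib * EXTREME_MIB_PER_TIB} MiB/s ceiling")

# 3 TiB  -> ~384 MiB/s  (matches the ~390 MB/s observed during startup)
# 10 TiB -> ~1280 MiB/s (the test above peaked at roughly 1 GB/s, so other
#                        limits such as network bandwidth may cap it earlier)
```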

 