SAP ERP and SAP HANA in scale-out configuration
I want to share our approach to using scale-out SAP HANA together with SAP ERP.
If you run SAP ERP on Oracle and use SAP HANA in a scale-out configuration to accelerate reports (SAP HANA Accelerator), you may run into a problem with load distribution in HANA.
The problem is that HANA in scale-out cannot distribute the load correctly across all nodes, and SAP ERP connects to only one node by default; as a result, the master node goes down under load.
What we have:
- SAP ERP with EHP 7 or EHP 8 on Oracle
- SAP HANA in a scale-out configuration (3 worker nodes and 1 standby node)
- A connection from SAP ERP to SAP HANA
- Custom reports that connect to SAP HANA to accelerate some heavy operations
- SAP ERP can’t work correctly with SAP HANA in a scale-out configuration (SAP Note 1825774 – SAP Business Suite Powered by SAP HANA – Multi-Node Support)
Steps to solve the problem:
- Create a Z-class that finds the least-loaded node and opens a connection to it
- Create connections to all nodes (worker and standby) in the DBCON table
- Replicate the heaviest and most frequently used tables to all nodes
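The DBCON entries from the second step can be maintained with transaction DBCO. A sketch of what they might look like — the connection names, user, hosts, and port are all examples, not values from our landscape:

```text
CON_NAME       DBMS  USER_NAME  CON_ENV
HANA_MONITOR   HDB   SAPABAP    hana-node1:30015
HANA_NODE1     HDB   SAPABAP    hana-node1:30015
HANA_NODE2     HDB   SAPABAP    hana-node2:30015
HANA_NODE3     HDB   SAPABAP    hana-node3:30015
HANA_STANDBY   HDB   SAPABAP    hana-node4:30015
```

One entry per node lets the ABAP code pick any worker by name; a separate monitoring entry keeps the load check independent from the report connections.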
An example of ABAP code that gets a HANA connection name from the DBCON table and checks that it can be opened:
```abap
METHOD get_hana_con_and_check.
  DATA: lv_conn_name TYPE dbcon-con_name,
        lr_conn      TYPE REF TO cl_sql_connection.

  SELECT SINGLE con_name
    INTO lv_conn_name
    FROM dbcon
    WHERE dbms = 'HDB'.
  IF sy-subrc <> 0.
    RAISE no_connection.
  ENDIF.

  TRY.
      lr_conn = cl_sql_connection=>get_connection(
                  con_name = lv_conn_name
                  sharable = abap_true ).
      lr_conn->close( ).
    CATCH cx_sql_exception.
      RAISE no_access.
  ENDTRY.

  r_hana_conn = lv_conn_name.
ENDMETHOD.
```
To select the needed node, connect to HANA and run this SQL query:
```sql
SELECT HOST, INSTANCE_TOTAL_MEMORY_USED_SIZE
  FROM "SYS"."M_HOST_RESOURCE_UTILIZATION"
 WHERE HOST != ( SELECT HOST
                   FROM "SYS"."M_LANDSCAPE_HOST_CONFIGURATION"
                  WHERE INDEXSERVER_CONFIG_ROLE = 'STANDBY' )
 ORDER BY INSTANCE_TOTAL_MEMORY_USED_SIZE ASC
 LIMIT 1
```
The query returns the hostname of the worker node with the lowest memory usage, excluding the standby node.
In the ABAP code, find the DBCON entry for that host and connect to it.
The first connection is used only to check memory usage; the second, third, and so on connect to the particular worker nodes.
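Putting these pieces together, a possible sketch of the Z-class logic using ADBC — the method name, the `HANA_MONITOR` connection, and the assumption that `CON_ENV` contains `host:port` are all hypothetical, not part of the original solution:

```abap
METHOD get_least_loaded_node.
  " Sketch: use a dedicated monitoring connection to find the worker
  " host with the lowest memory usage, then return the name of the
  " DBCON entry that points to that host.
  DATA: lv_host    TYPE string,
        lr_host    TYPE REF TO data,
        lv_pattern TYPE string.

  TRY.
      DATA(lr_conn) = cl_sql_connection=>get_connection( 'HANA_MONITOR' ).
      DATA(lr_result) = lr_conn->create_statement( )->execute_query(
        `SELECT HOST FROM "SYS"."M_HOST_RESOURCE_UTILIZATION"` &&
        ` WHERE HOST != ( SELECT HOST FROM "SYS"."M_LANDSCAPE_HOST_CONFIGURATION"` &&
        ` WHERE INDEXSERVER_CONFIG_ROLE = 'STANDBY' )` &&
        ` ORDER BY INSTANCE_TOTAL_MEMORY_USED_SIZE ASC LIMIT 1` ).
      GET REFERENCE OF lv_host INTO lr_host.
      lr_result->set_param( lr_host ).
      IF lr_result->next( ) = 0.
        RAISE no_connection.
      ENDIF.
      lr_result->close( ).
      lr_conn->close( ).
    CATCH cx_sql_exception.
      RAISE no_access.
  ENDTRY.

  " Map the host to a DBCON entry, assuming CON_ENV stores host:port.
  lv_pattern = '%' && lv_host && '%'.
  SELECT SINGLE con_name
    INTO r_hana_conn
    FROM dbcon
    WHERE dbms = 'HDB' AND con_env LIKE lv_pattern.
  IF sy-subrc <> 0.
    RAISE no_connection.
  ENDIF.
ENDMETHOD.
```

The reports would then call this method before each heavy operation and open their SQL connection with the returned name.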
Finally, find the heaviest and most frequently used tables and replicate them to all worker nodes (table replication).
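Replication can be set up with HANA's asynchronous table replication DDL; a sketch with a hypothetical schema, table, and replica host — check the exact syntax against your HANA revision:

```sql
-- Add a replica of a heavy table on another worker node
ALTER TABLE "SAPSR3"."ZHEAVY_TABLE"
  ADD ASYNCHRONOUS REPLICA AT 'hana-node2:30003';

-- Activate replication for the new replica
ALTER TABLE "SAPSR3"."ZHEAVY_TABLE" ENABLE ASYNCHRONOUS REPLICA;
```

With replicas on every worker, a report can read the same table from whichever node the Z-class selects.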
I hope this helps you.