
SAP Analytics Cloud Story with HANA Live Connection ABAP Cache Warmer

Next to the most current and correct data, performance is another very important factor in analytics. The blog "How to trace widgets in SAP Analytics Cloud stories connected via HANA Live Data Connection" (https://blogs.sap.com/2020/02/24/how-to-trace-widgets-in-sap-analytics-cloud-stories-connected-via-hana-live-data-connection/) already described how to capture performance data and link it to a widget in an SAP Analytics Cloud story with an SAP HANA Live Connection. That analysis was done on the browser frontend side.

This blog describes how to capture the performance data in the backend. Furthermore, it outlines a concept for using this data as a cache warmer for an SAP Analytics Cloud story with an SAP HANA Live Connection.

If you want to improve the performance of a certain story, for certain users, or simply for calls exceeding a certain runtime, the described concept can be an option for you.

Capture HANA Live Connection Performance in the Backend

When a story is executed in SAP Analytics Cloud with a Live Data Connection, MDS statements are executed through the backend stack. In SAP Analytics Cloud these statements can be analyzed with the help of the Google Chrome Developer Tools.

In the backend, the same statements can be found by accessing the database view "SYS"."M_MULTIDIMENSIONAL_STATEMENT_STATISTICS" via SQL.

The following SQL accesses "SYS"."M_MULTIDIMENSIONAL_STATEMENT_STATISTICS" and restricts the result set to a specific story with ID 24B912FD7C2FC0C88E82179312EB5E03.

SQL to get Performance Data of a story

select
    STATEMENT_HASH
    ,STATEMENT_STRING
    ,LAST_PERFORMANCE_DATA
    ,USER_NAME
    ,APPLICATION_USER_NAME
    ,APPLICATION_NAME
    ,STATEMENT_TYPE
    ,LAST_EXECUTION_TIMESTAMP
    ,LAST_EXECUTION_STATUS
    ,TOTAL_METADATA_CACHE_HIT_COUNT
    ,TOTAL_DATA_CACHE_HIT_COUNT
    ,AVG_EXECUTION_TIME
from
    "SYS"."M_MULTIDIMENSIONAL_STATEMENT_STATISTICS"
where
    SUBSTR_REGEXPR('(?<="STORY:t.G:).*?(?=")' IN json_query(STATEMENT_STRING, '$.ClientInfo.Context.StoryId' EMPTY OBJECT ON EMPTY EMPTY ARRAY ON ERROR)) = '24B912FD7C2FC0C88E82179312EB5E03'
order by
    STATEMENT_HASH,
    LAST_EXECUTION_TIMESTAMP desc
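To make the WHERE clause easier to follow, here is a minimal Python sketch of what the SUBSTR_REGEXPR pattern does: json_query pulls the StoryId value out of the statement JSON, and the lookbehind/lookahead regex then strips the "STORY:t.G:" prefix and the surrounding quotes. The sample statement below is an assumed minimal JSON shape for illustration, not a real MDS payload.

```python
import json
import re

# Assumed minimal shape of an MDS STATEMENT_STRING (illustration only)
statement_string = json.dumps({
    "Analytics": {"Definition": {}},
    "ClientInfo": {"Context": {"StoryId": "STORY:t.G:24B912FD7C2FC0C88E82179312EB5E03"}},
})

# json_query(..., '$.ClientInfo.Context.StoryId') returns the JSON value as text,
# i.e. the story id still wrapped in double quotes
story_id_json = json.dumps(json.loads(statement_string)["ClientInfo"]["Context"]["StoryId"])

# Same lookbehind/lookahead pattern as in the SQL above
match = re.search(r'(?<="STORY:t.G:).*?(?=")', story_id_json)
print(match.group(0))  # 24B912FD7C2FC0C88E82179312EB5E03
```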

The result of the SQL delivers the performance data of the last MDS call in column LAST_PERFORMANCE_DATA in JSON format. Furthermore, the MDS statement executed by SAP Analytics Cloud can be retrieved from column STATEMENT_STRING.

The column STATEMENT_HASH represents a hash value generated from the STATEMENT_STRING and marks identical statements.
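HANA computes STATEMENT_HASH internally and the exact algorithm is not part of this blog. As a minimal sketch of the idea only, assuming nothing more than a stable digest over the statement text, any deterministic hash serves the same purpose of marking identical statements:

```python
import hashlib

# Illustration only: this is NOT HANA's internal STATEMENT_HASH algorithm.
# Any stable digest over the statement text marks identical statements.
def statement_hash(statement_string: str) -> str:
    return hashlib.md5(statement_string.encode("utf-8")).hexdigest()

a = statement_hash('{"Analytics": {}}')
b = statement_hash('{"Analytics": {}}')
c = statement_hash('{"Analytics": {"other": 1}}')
print(a == b, a == c)  # True False
```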

AVG_EXECUTION_TIME shows the average execution time of a statement and can also be used as a performance indicator for the statement.

 

With the help of this SQL we are able to identify badly performing MDS calls coming from SAP Analytics Cloud. In addition, we can extract the MDS statements causing them.

Execute MDS via ABAP

SAP Note 2550833 (https://launchpad.support.sap.com/#/notes/2550833) describes how to execute MDS statements using EXECUTE_MDS via an SQL interface, and what you have to keep in mind when doing so. Executing an MDS statement also warms the cache for that statement. A manual way to warm the cache for a specific MDS statement would be to extract the statement from view "SYS"."M_MULTIDIMENSIONAL_STATEMENT_STATISTICS" as described and execute it in the SQL interface via EXECUTE_MDS.

With the help of a custom ABAP report named ZPA_UTIL_MDS_CACHE, such cache warming can also be done on a regular basis or triggered by certain events like data loads. Cache warming could be scheduled with SAP BW process chains and integrated into the daily process.

ABAP Implementation

The following code enables you to execute all MDS statements whose average runtime exceeds a defined threshold. The report acts as an MDS caller, fed by the statements from view "SYS"."M_MULTIDIMENSIONAL_STATEMENT_STATISTICS".

One very important remark at this point: there is a size limit of 65 KB when accessing column STATEMENT_STRING via ABAP. Statements over this limit are truncated and won't be executed correctly.
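A simple guard against the size limit can be sketched as follows. This check is an assumption on my part, not part of the blog's report: it skips statements whose text exceeds the limit instead of replaying a truncated statement via EXECUTE_MDS.

```python
# Illustrative guard (an assumption, not from the blog's report): skip MDS
# statements whose STATEMENT_STRING exceeds the 65 KB ABAP access limit,
# since truncated statements would fail when replayed via EXECUTE_MDS.
MAX_STATEMENT_BYTES = 65 * 1024

def is_executable(statement_string: str) -> bool:
    """Return True if the statement text fits within the access limit."""
    return len(statement_string.encode("utf-8")) <= MAX_STATEMENT_BYTES

statements = ['{"Analytics": {}}', "x" * (70 * 1024)]  # one short, one oversized
runnable = [s for s in statements if is_executable(s)]
print(len(runnable))  # 1
```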

In our case we execute all MDS statements of the story with ID "24B912FD7C2FC0C88E82179312EB5E03" that have an average execution time over 1000 milliseconds, parameterized via the two parameters on the initial screen.



*&---------------------------------------------------------------------*
*& Report ZPA_UTIL_MDS_CACHE
*&---------------------------------------------------------------------*
REPORT zpa_util_mds_cache.

" Selection screen: story ID and minimum average execution time (ms)
PARAMETERS:
  cv_stid TYPE string DEFAULT '24B912FD7C2FC0C88E82179312EB5E03',
  cv_perf TYPE string DEFAULT '1000'.

" Build the SQL statement
DATA: lv_select TYPE string.
lv_select = | select | &
  |    "STATEMENT_HASH" | &
  |    ,"STATEMENT_STRING" | &
  | from | &
  |    "SYS"."M_MULTIDIMENSIONAL_STATEMENT_STATISTICS" | &
  | where | &
  |     "AVG_EXECUTION_TIME" > | && cv_perf && | | &
  |     and substr_regexpr('(?<="STORY:t.G:).*?(?=")' IN json_query(statement_string, '$.ClientInfo.Context.StoryId' EMPTY OBJECT ON EMPTY EMPTY ARRAY ON ERROR)) = '| && cv_stid && |' |.

" Execute the SQL and fetch the MDS statements
TYPES: BEGIN OF ty_return,
         statement_hash   TYPE string,
         statement_string TYPE string,
       END OF ty_return.
DATA: rt_return TYPE STANDARD TABLE OF ty_return.
DATA: mv_conref TYPE REF TO cl_sql_connection.

TRY.
    CREATE OBJECT mv_conref.
    DATA(lo_statement_ref) = mv_conref->create_statement( ).
    DATA(lo_result_ref)    = lo_statement_ref->execute_query( statement = lv_select ).
    lo_result_ref->set_param_table( REF #( rt_return ) ).
    lo_result_ref->next_package( ).
    lo_result_ref->close( ).
  CATCH cx_sql_exception INTO DATA(lo_sql_exception).
    MESSAGE lo_sql_exception TYPE 'E' DISPLAY LIKE 'E'.
  CATCH cx_parameter_invalid_type INTO DATA(lo_invalid_type_exception).
    MESSAGE lo_invalid_type_exception TYPE 'E' DISPLAY LIKE 'E'.
ENDTRY.

" Loop over the MDS statements and warm the cache
DATA:
  cv_startdate       TYPE dats,
  cv_starttime       TYPE tims,
  cv_runtime         TYPE i,
  cv_starts          TYPE tzntstmpl,
  cs_performancedata TYPE string.

" Only one execution per statement
SORT rt_return BY statement_hash.
DELETE ADJACENT DUPLICATES FROM rt_return COMPARING statement_hash.

" Execute the statements
LOOP AT rt_return ASSIGNING FIELD-SYMBOL(<fs_return>).
  " Trace the runtime of each call
  cv_startdate = sy-datum.
  cv_starttime = sy-uzeit.
  GET TIME STAMP FIELD cv_starts.
  GET RUN TIME FIELD DATA(l_timestart).
  zpa_util_mds_call=>call_mds( EXPORTING iv_request  = <fs_return>-statement_string
                               IMPORTING ev_response = cs_performancedata ).
  GET RUN TIME FIELD DATA(l_timefinish).
  cv_runtime = l_timefinish - l_timestart.
  WRITE / | STATEMENT_HASH: | && <fs_return>-statement_hash && |  ExecutionTime: | && cv_runtime.
ENDLOOP.
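For readers not fluent in ABAP, the report's control flow above can be sketched in a few lines. This is my own language-neutral mapping of the logic, not SAP code: deduplicate the fetched statements by hash, execute each one once via the MDS caller, and record the runtime per statement.

```python
import time
from typing import Callable, List, Tuple

# Sketch of the report's control flow (assumed mapping of the ABAP logic,
# not SAP code): dedupe by hash, execute once, measure each runtime.
def warm_cache(rows: List[Tuple[str, str]],
               execute_mds: Callable[[str], str]) -> List[Tuple[str, float]]:
    seen = set()
    results = []
    for statement_hash, statement_string in rows:
        if statement_hash in seen:      # DELETE ADJACENT DUPLICATES equivalent
            continue
        seen.add(statement_hash)
        start = time.perf_counter()
        execute_mds(statement_string)   # stands in for zpa_util_mds_call=>call_mds
        runtime = time.perf_counter() - start
        results.append((statement_hash, runtime))
    return results

rows = [("h1", "{mds-1}"), ("h1", "{mds-1}"), ("h2", "{mds-2}")]
report = warm_cache(rows, execute_mds=lambda s: "{}")
print([h for h, _ in report])  # ['h1', 'h2']
```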

An ABAP class (ZPA_UTIL_MDS_CALL) is used as an SQL interface to call SYS.EXECUTE_MDS for executing the MDS statement. The class includes the interface marker IF_AMDP_MARKER_HDB, which is required for the AMDP-based SQL call.

CLASS zpa_util_mds_call DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.

  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    CLASS-METHODS call_mds
      IMPORTING VALUE(iv_request)  TYPE string
      EXPORTING VALUE(ev_response) TYPE string.
  PROTECTED SECTION.
  PRIVATE SECTION.
ENDCLASS.

CLASS zpa_util_mds_call IMPLEMENTATION.
  METHOD call_mds BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT.
    CALL SYS.EXECUTE_MDS('Analytics',
                         '',
                         '',
                         '',
                         '',
                         iv_request,
                         ev_response);
  ENDMETHOD.
ENDCLASS.

Warm up MDS Cache

Before creating new MDS cache entries, we flush the cache with the following statement via the SQL interface.

CALL SYS.EXECUTE_MDS('Analytics',
                     '',
                     '',
                     '',
                     '',
                     REQUEST=>'{"Analytics":{"Actions":[{"Type":"FlushCubeCache"}]}}',
                     RESPONSE=>?);

Now we check view M_CACHES via the SQL interface, looking at column ENTRY_COUNT in the MDS row.

select * from M_CACHES

Now we execute our ABAP report for the story with the following parameters.

As output we get a list of the executed statement hashes together with their execution times.

Now we check view M_CACHES again, looking at column ENTRY_COUNT in the MDS row, which now shows 8 new entries added to the cache.

select * from M_CACHES

Now we take a closer look at the performance data of the statements with the help of the following SQL.

select
    STATEMENT_HASH
    ,USER_NAME
    ,APPLICATION_USER_NAME
    ,APPLICATION_NAME
    ,STATEMENT_TYPE
    ,LAST_EXECUTION_TIMESTAMP
    ,TOTAL_METADATA_CACHE_HIT_COUNT
    ,TOTAL_DATA_CACHE_HIT_COUNT
    ,AVG_EXECUTION_TIME
from
    "SYS"."M_MULTIDIMENSIONAL_STATEMENT_STATISTICS"
where
    SUBSTR_REGEXPR('(?<="STORY:t.G:).*?(?=")' IN json_query(STATEMENT_STRING, '$.ClientInfo.Context.StoryId' EMPTY OBJECT ON EMPTY EMPTY ARRAY ON ERROR)) = '24B912FD7C2FC0C88E82179312EB5E03'
    and LAST_EXECUTION_TIMESTAMP > '20201221000000'
order by
    STATEMENT_HASH,
    LAST_EXECUTION_TIMESTAMP desc

Lines 13 and 14 show the same STATEMENT_HASH for two different USER_NAMEs. In this case SAPZW2 is our ABAP system user executing the statement via our ABAP report. Since we flushed the cache beforehand, this execution could not hit any cache, and column TOTAL_DATA_CACHE_HIT_COUNT shows 0. Line 14 with APPLICATION_USER_NAME MNADEGER already shows 10 TOTAL_DATA_CACHE_HIT_COUNTs.

Now we call the story in the browser in private mode and check the performance data again.

Conclusion

Performance is always an issue in analytics, especially for dashboards. The old principle of warming up the cache can help to optimize the end-user experience. Steering this warm-up for SAP Analytics Cloud stories connected via HANA Live Data Connection with an ABAP report can be an option to apply this old principle to your new cloud-based reporting.
