
While exploring the features of BW on HANA, I came across HANA-optimized transformations. Most of you are probably aware of this already, but I just wanted to share my observations on it.

____________________________________________________________________________________________________________________________________

In BW 7.4 on HANA, we can even push transformation processing down to the HANA DB level.

As we can see in the figure below, the transformation (mappings, formulas, conversions, etc.) can now also be processed at the HANA DB level.

/wp-content/uploads/2014/09/overview_532692.png

Figure 1 (taken from an SAP document)


A HANA-optimized transformation is possible only in the following cases:

  • Only a DSO as the target.
  • The source should be one of these: PSA, DSO, InfoCube, MultiProvider, SPO.
  • No routines are written in the transformation (start, end or field routines, transfer routines, expert routines).
  • Only mappings, conversions, formulas, and "Read data from Master Data/DSO" rule types are used.
  • Transformations with an SAP HANA Expert Script (see the sketch below).
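
For context, the SAP HANA Expert Script mentioned in the last point is a SQLScript procedure that replaces the entire rule set of the transformation. Below is a minimal sketch of what such a script body might look like; the parameter names (inTab, outTab, errorTab) follow the generated template as far as I know, while the field names and the error-table columns are purely illustrative.

-- Minimal sketch of an SAP HANA Expert Script body (SQLScript).
-- All field names are illustrative; only inTab/outTab/errorTab come from the template.
outTab = SELECT "DOC_NUMBER",
                LPAD("CUSTOMER", 10, '0') AS "CUSTOMER",   -- ALPHA-style padding
                "AMOUNT" * 100            AS "AMOUNT",     -- simple formula pushed to the DB
                "CURRENCY"
           FROM :inTab;

-- Return no error records in this sketch (two-column error structure assumed):
errorTab = SELECT '' AS "ERROR_TEXT",
                  '' AS "SQL__PROCEDURE__SOURCE__RECORD"
             FROM DUMMY
            WHERE 1 = 0;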

__________

  • There is a check in the transformation that tells us whether the processing can be pushed down to the HANA DB.
  • On activation, the system enables the checkbox "SAP HANA Processing possible" if the transformation only uses rule types supported for HANA processing.

/wp-content/uploads/2014/09/trans1_532693.png

Figure 2: Transformation ("SAP HANA Processing possible" checkbox)

  • On activation, we can see two programs generated for the transformation.
  • One for normal processing and the other for HANA processing.

/wp-content/uploads/2014/09/trans2_532835.png

Figure 3: Transformation (generated programs)

  • If you click on the HANA transformation, you can see that a HANA Analysis Process (Source -> Function/Script -> Target) has been created for our object.

/wp-content/uploads/2014/09/trans3_532836.png

Figure 4: Generated HANA Transformation

  • Since the source used here is a PSA, the analysis process was created with the PSA table as the source database table and my DSO as the target.
  • In between, a standard class is used for the mappings, etc.
  • The data analysis part is executed at the DB level.

  • In the DTP, we can choose to process the transformation either in HANA or with normal processing (on the application server).
  • For HANA execution, the semantic groups / error handling settings must be disabled in the DTP; otherwise, the SAP HANA execution option will not appear as a processing mode.

/wp-content/uploads/2014/09/dtp_532837.png

Figure 5: DTP

So this is how the transformation is pushed down to the HANA DB level: by generating a HANA transformation (which is a HANA Analysis Process).

P.S.: I have not been able to compare the performance of these two modes yet. I will share the results once I have done so.


18 Comments


  1. Aaron Batchac

    Great post, Sakthi! Nicely explained and illustrated 🙂. The only thing missing is a reference to the "Transformation with SAP HANA Expert Script". I assume that this is somehow accessible and adaptable for custom logic.

  2. Srinivasa Tanigundala

    Good one. One thing I noticed is that the system-generated analysis process created by this method accepts standard DSOs as targets, but when you try to create a HAAP from scratch, only direct update DSOs are accepted as data targets.

  3. Michael Bruhn

    We have experienced really bad performance with HANA-optimized vs. "traditional" execution of transformations (both tests were carried out on the same HANA box; the only difference was the flag on/off on the DTP). Our case had some 65M rows. The traditional approach completed in 1h 20min, while the HANA-optimized one took 2h 45min. During the HANA-optimized execution we noticed heavy CPU utilization (spiking several times at 99%) throughout the duration of the load. We had an even larger load which dumped due to memory issues when executed through HANA but ran OK using the old approach.

    Does anyone here have other experiences or pointers? 😉

    /MiB

    1. Andreas Tenholte

      Hi Michael,

      Actually, I have had the opposite experience with a similar amount of data.

      When we did our performance testing for the 0EC_PCA_3 data flow with a very similar number of data records (61 million), we measured the following:

      DTP runtime with ‘classic’ mode took 1 hour 35 minutes.

      DTP runtime with HANA execution took only 11 minutes 12 seconds.

      As your observation was made a few months ago I'm not really sure, but maybe support packages / SAP Notes have solved the runtime issue in the meantime.

      So I would also be interested in other experiences regarding DTP runtime in HANA execution mode…

      Best regards,

      Andreas

      1. Michael Bruhn

        Hi Andreas,

        Thanks for sharing some positive numbers ;-). Would you happen to know which HANA revision and BW version/patch you are on?

        /MiB

        1. Andreas Tenholte

          Hi Michael,

          we are on BW 7.40 SP10 and we have implemented all SAP Notes related to the HANA execution as mentioned in SAP Note 2067912.

          Our HDB is on a very high revision now (HDB Release 1.00.101.00.1435831484), but I think initially we were on Revision 93 or 94 with the same performance.

          Reading your other post, it might also be related to the logic implemented in the transformation. Our BI Content transformation for PCA is straightforward and does not contain much transformation logic.

          Best regards,

          Andreas

  4. Andreas Tenholte

    Hi,

    As Sakthi mentioned in his blog, one of the restrictions for HANA execution is that no ABAP routines (start routines, field routines, end or expert routines) may be used in any transformation called by the DTP.

    How can you achieve this without writing the entire transformation logic in SQL (SAP HANA Expert Script)?

    Here are some hints and recommendations:

    1) Use ‘Formulas’, ‘Read from Master Data’ and ‘Read from DSO’ instead of field routines

    There are some new formula functions provided by SAP in the formula builder that should allow you to convert more ABAP field routines into formula functions (see the SQLScript sketch after this list for a rough idea of what they correspond to at the DB level).

    Some examples (added in BW 7.40 SPs):

    – LTRIM (to remove leading white spaces)

    – RTRIM (to remove white spaces at the end of the character string)

    – IS_INTEGER (to identify pure numeric strings)

    – ALPHA (Alpha Conversion)

    Note: If you have use cases of ABAP field routines that cannot be expressed by formula functions yet, please post them here!

    2) Switch off unnecessary time conversions, for example PERI7

    Some date conversions are not yet supported for HANA execution, e.g. PERI7 for InfoObject 0FISCPER.

    In many cases you actually do not need the conversion as the source provides the field already in PERI7 format. In this case you can simply switch off the conversion exit in the “Rule details…” for the time characteristic.

    (See SAP Note 2170312 for more information).

    3) Avoid InfoObjects with Transfer Rules

    Some BW Content InfoObjects are defined by Transfer Rules in RSD1. If you use one of those InfoObjects as target in the transformation, the transformation cannot be pushed down to HANA.

    Example:

    0DOC_CATEG (Sales Document Category)

    A new InfoObject is provided here as part of the HANA-optimized content (0IMODOCCAT) that can be used instead.

    Good to know: The same does not apply to source system InfoObjects, e.g. 0SOURSYSTEM and 0LOGSYS, which are also defined with transfer rules in RSD1. You can use these InfoObjects in your transformation without a problem; the DTP will still be HANA-executable.

    4) Move Start Routines to DTP filter if possible

    In many use cases start routines are used as a kind of filter. You might want to consider using a DTP filter instead.
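
    Relating to hint 1) above, and purely for orientation: at the DB level these formula functions roughly correspond to standard HANA SQL functions. The snippet below is an illustrative sketch only; the code BW actually generates is different, the field names are made up, and :inTab simply stands for the source package.

    -- Rough SQL counterparts of some formula functions (illustrative only):
    SELECT LTRIM("MATERIAL")              AS "MATERIAL",     -- LTRIM: drop leading blanks
           RTRIM("DESCRIPTION")           AS "DESCRIPTION",  -- RTRIM: drop trailing blanks
           LPAD("DOC_NUMBER", 10, '0')    AS "DOC_NUMBER",   -- ALPHA-style conversion (numeric input)
           CASE WHEN "VALUE" LIKE_REGEXPR '^[0-9]+$'
                THEN 'X' ELSE '' END      AS "IS_NUM"        -- similar in spirit to IS_INTEGER
      FROM :inTab;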

    Feel free to add your own observations, comments, tips and tricks or wishes!

    Best regards,

    Andreas

  5. Michael Bruhn

    Sharing..

    We did some tests comparing the standard "read master data" rule type (with HANA transformations) vs. doing the read in an end routine (classic transformation). We could see that with a small number of reads the performance was similar, while with more reads (our test was 10 from the same master data object) the end routine was faster. So again, depending on your scenario, you could choose to factor this in.
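
    For orientation only: a pushed-down "read master data" rule essentially becomes a join against the InfoObject's attribute (P) table inside HANA. A hypothetical SQLScript sketch of such a lookup, with table and field names chosen purely for illustration:

    -- Hypothetical master data lookup for 0MATERIAL (all names illustrative):
    outTab = SELECT src."DOC_NUMBER",
                    src."MATERIAL",
                    md."MATL_GROUP"                      -- attribute read from master data
               FROM :inTab src
               LEFT OUTER JOIN "/BI0/PMATERIAL" md
                 ON  md."MATERIAL" = src."MATERIAL"
                 AND md."OBJVERS"  = 'A';                -- active version only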

    ..do note the earlier discussion with Andreas – it might be that there is something seriously wrong with our box 😉

    /MiB

  6. Dario Haberkorn

    Hi all, I found that in some transformations the switch to push the transformation logic down to HANA is not available. If I check the transformation or the DTP I get the following error:

    Transformation RSDS 3FI_GL_XX_TT –> ADSO ZXXXXXX must not be executed on Hana.

    The transformation is from an ECC DataSource to an advanced DSO and I am not using any kind of routines; it is a 1-to-1 mapping. Am I missing any configuration in HANA Studio to allow this processing?

    1. Andreas Tenholte

      Hi Dario,

      It might be best to implement all SAP Notes mentioned in collective Note 2067912 "SAP HANA transformations and analysis processes: SAP Notes for SAP NetWeaver 740 with Support Package 8 or higher" that are valid for your BW 7.40 SP.

      Usually the check whether a transformation can be executed in HANA should give you some more context, like:

      – Target ADSO XXXX not supported for HANA execution (e.g. all characteristics are key)

      – Time conversion PERI7 not supported

      – …

      Best regards,

      Andreas

  7. Marcel Scherbinek

    Hi all,

    I'm testing SAP HANA transformations as well, and one point can be made more specific: "Read data from Master Data" is only supported without time-dependent data.

    As soon as you have a "Read data from Master Data" rule with a time dependency, no SAP HANA optimization is possible… And I'm really upset that time dependencies in general are not supported for SAP HANA execution.

    Regards,

    Marcel

  8. Vince Lu

    Hi Sakthi,

    Regarding the P.S. ("I have not been able to compare the performance of these two modes yet. I will share the results once I have done so"), may I know if there is any follow-up on that?

    Plus, I see this post is from 2014… quite a lot has happened during these 3 years for HAAP. I'm wondering if you'd like to update this blog a bit? Thanks!

     

    regards,

    Vince

     

    My P.S.: I just realized I already commented on this once, in Sept 2016…

