Last Update:

25 August 2016: Added Performance on ABSL video.

In order to develop quality and fast applications with the SAP Cloud Applications Studio, you need to educate yourself about how to use the toolset properly.


Check for updated features


SAP Cloud for Customer is updated every three months. The added features are highlighted in the What’s New section of the documentation at help.sap.com/studio_cloud. These features can often replace expensive workarounds you had to build in the past to achieve the same functionality.

Event execution iterations


Be aware of the execution logic and always think about how often events are called at runtime and whether you can reduce the number of iterations.


  • AfterLoading: Executed when a document is loaded (the buffer is filled, the UI is not yet displayed).
  • AfterModify: Executed for each node update. Can cause long loops when it updates other nodes.
  • BeforeSave: Executed for each node save. Not as expensive as AfterModify.


Keep in mind that modifications made inside any script can trigger other scripts. For example, suppose you have AfterModify scripts on both the Root and the Item node of the Opportunity BO. If you change Root node fields inside Item->AfterModify, then Root->AfterModify is also called. If Root->AfterModify in turn changes Item fields, the Item’s AfterModify is called again and you are stuck in a loop. Hence it is best to think about the implications of a modification on a node before coding it in a script.
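A minimal guard against such a cascade could look like the sketch below; the field and association names (Quantity, ListPrice, ToRoot, TotalAmount) are hypothetical:

```absl
// Item -> AfterModify (sketch; field and association names assumed)
var newTotal = this.Quantity.content * this.ListPrice.content;
// Only write to the Root node when the value actually changed,
// so Root->AfterModify is not retriggered on every item update.
if (this.ToRoot.TotalAmount.content != newTotal) {
  this.ToRoot.TotalAmount.content = newTotal;
}
```

Comparing before writing keeps the modify chain from firing when nothing has changed.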


Also think about alternative channels that can update a business object:

  • Integration: You can exclude code execution for integration calls by checking whether Identity.BusinessPartnerUUID() is set.
  • Migration: You can exclude code execution for migration by adding an identifier to the migration template that can be filled and checked in the code.
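A sketch of such a channel check in an AfterModify script, based on the Identity check mentioned above (the exact reuse-library call may differ between releases):

```absl
// AfterModify (sketch): skip expensive logic for non-interactive channels.
// Integration users typically have no business partner identity set.
if (!Identity.BusinessPartnerUUID().IsInitial()) {
  // Interactive user: run the expensive determinations here.
}
```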


Sometimes it is advisable to do expensive calculations within a dedicated action that can be called from the UI. For example, calculate the item summary on the header only when the add or delete button is clicked on an item list.
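For example, a dedicated action on the Root node could recalculate the item summary only on demand; the action, node, and field names below are assumptions:

```absl
// Action RecalculateSummary on the Root node (sketch; names assumed).
// Bound to the add/delete buttons of the item list instead of AfterModify.
var total = 0;
foreach (var item in this.Item) {
  total = total + item.Amount.content;
}
this.TotalAmount.content = total;
```

This way the summary is computed once per user interaction instead of on every node modification.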

Too many retrieves by navigating through instances


The ABSL language makes it very easy to navigate through objects. Behind the scenes, objects are being retrieved and discarded. In a nutshell, every dot retrieves something: either a node or an association. Accessing a node is fast; retrieving an object by association is much slower, and doing it over and over again easily adds up to several seconds.


For performance reasons, retrieves by association should be kept to a minimum. Results of retrieves by association should be buffered in the code where possible.


Example:

In the code below, toParent, toBusinessPartner, EmployeeResponsible and DefaultAddress represent associations. The code results in 13 retrieves on the server side.



if (this.toBusinessPartner.IsSet()) {
  if (this.toBusinessPartner.EmployeeResponsible.IsSet()) {
    if (this.toBusinessPartner.EmployeeResponsible.Address.DefaultAddress.IsSet()) {
      this.toParent.RespEmplAddrStreet = this.toBusinessPartner.EmployeeResponsible.Address.DefaultAddress.Street;
      this.toParent.RespEmplAddrPostCode = this.toBusinessPartner.EmployeeResponsible.Address.DefaultAddress.PostCode;
    }
  }
}

Better code saves the instances in local variables. The example below gets by with 7 association retrieves, roughly doubling the speed.

if (this.toBusinessPartner.IsSet()) {
  if (this.toBusinessPartner.EmployeeResponsible.IsSet()) {
    var emplResp = this.toBusinessPartner.EmployeeResponsible;
    if (emplResp.Address.DefaultAddress.IsSet()) {
      var emplRespAddress = emplResp.Address.DefaultAddress;
      var parent = this.toParent;
      parent.RespEmplAddrStreet = emplRespAddress.Street;
      parent.RespEmplAddrPostCode = emplRespAddress.PostCode;
    }
  }
}

Keep in mind that .IsSet() also follows the association. Avoid redundant retrieve-by-association operations by storing the result of an operation in a variable or collection.

Using associations in trace statements: even if the trace is not active, the content inside the trace statement is evaluated. This can also lead to retrieve-by-association operations that are easily avoided.

Remove trace statements


A pretty easy improvement is the removal of trace statements. If you have code like this in your project:


Trace.Info("Instance Count", this.toAnotherBO.Count());


The expression this.toAnotherBO.Count() is retrieved and executed even when the trace is not set to active.
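If a trace is needed temporarily during development, at least avoid paying the retrieve twice; a sketch:

```absl
// Sketch: retrieve the value once and reuse the variable, instead of
// traversing the association again inside the trace statement.
var childCount = this.toAnotherBO.Count();
// ... business logic that needs childCount ...
Trace.Info("Instance Count", childCount.ToString()); // remove before go-live
```

For production code, the safest option remains deleting the trace statements entirely.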


Avoid save events


As a best practice, you should not trigger a save from the UI at all. While users are in edit mode, they should be able to work on a document; if they hit cancel, everything they did since the last save should be rolled back. This does not work when a save is triggered as part of the application logic.


Use buffered retrieves instead of queries


The retrieve method fetches an object from the current buffer. This is the fastest and best choice for getting access to an object. A query bypasses the buffer and is therefore slower in most cases.

Database queries


Because a query bypasses the buffer and is executed on database level, it is slow. Try to find ways to use retrieve instead. Even if you have to retrieve an intermediate object first, this is often faster than using a query.


If you have to use a query, use query.ExecuteDataOnly() when you are only interested in the result data and not in the object instances.
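A sketch of a query executed with ExecuteDataOnly; the business object and element names are assumptions:

```absl
// Sketch (names assumed): read only the result data, not the instances.
var query = CustomerInvoice.QueryByElements;
var selParams = query.CreateSelectionParams();
selParams.Add(query.ID.content, "I", "EQ", "INV-1000");
var resultData = query.ExecuteDataOnly(selParams);
```

If you later need to modify the found objects you need Execute and the instances; ExecuteDataOnly only helps in read-only scenarios.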


Queries without or empty selection parameter


If you are using queries, make sure the result set is restricted. If your code runs without a query selection parameter, or the selection is set but empty, the full database table is returned. Most of the time, this is not what you expect. Before you call query.Execute, check that the selection parameters are set.
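A sketch of such a guard; the business object, element, and input names are assumptions:

```absl
// Sketch: never fire an unrestricted query against the full table.
var buyerID = this.BuyerID.content;          // hypothetical input value
var query = SalesOrder.QueryByElements;
var selParams = query.CreateSelectionParams();
if (!buyerID.IsInitial()) {
  selParams.Add(query.BuyerID.content, "I", "EQ", buyerID);
  var result = query.Execute(selParams);
}
// If no selection parameter can be set, do not execute the query at all.
```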


Usage of QueryByElements (auto generated query)


The default query QueryByElements does not support full index search. In general its runtime depends linearly on the number of instances in the business object node (t = O(n), where n is the number of BO instances in the database).


Therefore it should be used only if:


  • The expected number of records in the node is small (< 1000), for example for an object with configuration data, or
  • The selection parameter list contains an equal condition on an element that is an alternative key of the node. An alternative key is supported by an index, so the runtime dependency is t = O(log n).


In all other cases, an application-defined query (defined with the Query Wizard) should be used. An application-defined query supports full index search on all query elements (t = O(log n)). This advice holds for query calls in BO implementations, UIs, web services, etc. Regardless of the query used, the number of selected instances must be kept as small as possible, because the runtime depends linearly on it (t = O(m), where m is the number of selected(!) BO instances). If possible, define a join query in the wizard instead of selecting a large amount of data and filtering in your code.


Where and Sort operations on collections are available and make it possible to reduce the number of nested loops.
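A sketch of Where on an already-loaded collection; the status field and value are assumptions:

```absl
// Sketch: filter the loaded item collection instead of writing a nested loop.
var openItems = this.Item.Where(i => i.Status.content == "1");
foreach (var item in openItems) {
  // process only the open items
}
```

Note that Where still iterates the collection internally (see the comments at the end of this post); it removes the nesting from your code, not the iteration itself.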


Mass enabled events


Mass-enabling of actions and events is supported. In mass-enabled script files, the “this” operator is a collection of business object nodes instead of a single instance. A mass-enabled script is not invoked once per instance; instead the instances are bundled into a single call. This makes sense on large nodes. On the root node, it only has a benefit in case of file upload and integration, when multiple instances are imported at once.

You can also further optimize your logic by doing retrieves and queries you need for all nodes or instances once, and then loop through the “this” collection.
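A sketch of that pattern in a mass-enabled event; the indicator field is hypothetical:

```absl
// Mass-enabled AfterModify (sketch): "this" is a collection of instances.
// Shared lookups (a retrieve or query used by all instances) go here, once.
foreach (var instance in this) {
  // Per-instance logic reusing the shared data.
  instance.IsProcessed = true;   // hypothetical indicator field
}
```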

There is a dedicated document about this topic from Pradeep Kumar N: Performance best practice with Mass enabled event


Nested loops


Nested loops (foreach, while) on collections with a large number of members should be avoided, because they lead to a runtime t = O (n * m * …).


  • Where and Sort operations on collections are available that make it possible to reduce the number of nested loops.
  • Mass-enabling of actions and events is supported. In mass-enabled script files, the “this” operator is a collection of business object nodes instead of a single instance.


You can also improve performance by avoiding Retrieve calls in a loop. Every dot access inside a loop triggers a retrieve on each iteration. If you need to read static data, do it outside the loop and store it in a variable. This way you avoid redundant calls.
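A sketch of hoisting an association retrieve out of a loop; the element names are assumptions:

```absl
// Retrieve the partner once, outside the loop ...
var partner = this.toBusinessPartner;
var partnerName = partner.FormattedName;     // hypothetical element
// ... then only work with the buffered value inside the loop.
foreach (var item in this.Item) {
  item.PartnerName.content = partnerName;
}
```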


Reuse Library

Generally, a reuse library is created to perform a specific function without having to write the code for it every time. But be careful when calling a reuse library in a loop. If, for example, the reuse library runs a query to read data and the ID for the query’s selection parameter is passed to the library from inside the loop, performance suffers. This can be avoided by passing a collection of IDs to the reuse library in one go, so that the query result contains all the data you need and you do not have to call the library again.
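A sketch of the collection-based call; the library name, function, and collection declaration syntax are assumptions:

```absl
// Collect all IDs first ...
var ids : collectionof ProductID;            // declaration syntax sketched
foreach (var item in this.Item) {
  ids.Add(item.ProductID);
}
// ... then call the reuse library once instead of once per item.
var details = ProductLibrary.GetDetails(ids);   // hypothetical library
```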


Execution times 

These numbers have been collected on a small test solution and may be higher in bigger objects. These are by no means official numbers and not meant to be a KPI or performance indicator.


  • Retrieve: 44ms
  • Create Node: 67ms
  • Execute Query (1000 records): 16ms
  • Raise message: 67ms

Keep in mind that the retrieve runtime is stable, while the query runtime grows logarithmically with the table size, which makes queries expensive on larger tables. Also keep in mind that these numbers only apply to defined queries, not to the auto-generated QueryByElements query, which is much slower.

Lazy load UI components


It is possible to influence the UI component loading sequence by enabling Lazy Loading.


  • Lazy Loading can be activated when adding Embedded Components to standard screens. The embedded component is then initialized when it is displayed instead of when the host object is loaded. This is often a good idea, but it can lead to unintuitive behavior, for example when logic in the embedded component writes data back to the host BO: you would then see different data on the host BO before the first navigation to the embedded component than afterwards.
  • Lazy Loading can be activated on custom thing inspector level by enabling the data scope handling attribute. This leads to lazy loading of all thing inspector facets.

Enable operation clubbing

This feature allows packing multiple UI resources (JavaScript, CSS, etc.) into one package, which effectively reduces the number of requests required on the client side. It can be enabled by setting the floorplan property “Enable Backend Operations Clubbing” to “true”. It mostly has an effect on very slow connections with high latency (mobile 2G/3G networks).

Performance Tips on ABSL

This is a recording from a session held by Sameer Kumar regarding performance optimizations on ABSL scripts.


20 Comments


    1. Stefan Hagen Post author

      Hi Rafael,

      I’m rarely using it. But if you end up looping at this.toParent.Node all the time, it’s better and faster to do it in a mass enabled event.

      One use case would be a duplicate check. Or do calculations on all node items.

      However, the use is limited, as you can’t create a normal AND a mass enabled script. I have used it for XML file uploads where operations need to be fast and are always performed for all items.

      Best Regards,
      Stefan

  1. Prasad Sundara Raghavan

    I was shown in one of our technical meetings that it is also a good idea to clearly separate the READs and WRITEs on the CONTEXT to improve the performance.

    Probably it is usual programming convention to use the CONTEXT and play around (READ / WRITE) to fulfill the requirement.

    However, a clear separation of READ and WRITE using some local data structures (and having the CONTEXT update in the END) can have a major impact on the performance.

    1. The idea is to first perform the READ (of the context) into READ_local_data_structures.
    2. The WRITE operations could also be made on the WRITE_local_data_structures(instead of the context)
    3. Finally update the context using the WRITE_local_data_structure (and this could be done only if a change is detected comparing context and WRITE_local_data_structure).

    The reasoning behind this is that any WRITE / UPDATE on the context triggers some system flush operations; Each flush operation could also trigger some redeterminations and buffer updates. So if there are multiple WRITE / UPDATE operations on the CONTEXT, there would be those (hidden)system operations which potentially has an impact on the overall performance.

    Probably this can also be added to the best practices guide (with refinements to the above sentences).

    Thanks

    Prasad

    1. Stefan Hagen Post author

      Hi Prasad,

      Are you sure you’re talking about the SAP Cloud Applications Studio? Unfortunately I’m not able to understand you.

      The “Context.*” reuse Library gives context information and is always read only.

      If you’re talking about the runtime context variable “this.*”, it doesn’t matter when you update it within a roundtrip. Of course you gain performance if you just don’t update anything in a roundtrip as the BO will not update at all in this case.

      As far as I know, a flush is not triggered on a buffer write. It’s only triggered when executing an action or performing a roundtrip.

      Best Regards,

      Stefan

      1. Prasad Sundara Raghavan

        Hi Stefan

        Apologies for not being clear earlier.

        I was referring to the runtime context information “this.*” in the Cloud Application Studio. 🙂

        I remember pretty well about the information shared in the meeting that the updates to “this.*” being pushed to the end to improve performance.

        Let me recheck on this and revert so as to not confuse the other members in the community. If you think the information is misleading, please feel free to mark it inappropriate.

        Thanks

        Prasad

          1. Horst Schaude

            Hello Stefan, Prasad,

            The update of the changed data in the “this” structure happens normally at the end of the script. So, this kind of separation will not help here.

            BUT: Every traversal of an association triggers such an update too, because the retrieve needs to collect the most current data.

            A suggestion from my side is (a little bit similar to Prasad’s):

            1. READ all data, especially the data which must be accessed via associations into local data
            2. UPDATE the “this” structure and any other node data structures

            No separation between writing local structures and any context is required. 🙂

            Bye,

                Horst

  2. Davis Krastins

    Hi Stefan,

    Is this best practice still actual?

    I would assume that since C4C now runs on HANA, accessing database should be much faster than before.

    Thus are these points still accurate:

    Use buffered retrieves instead of queries

    Database queries

    Execution times

    Usage of QueryByElements (auto generated query)


    1. Stefan Hagen Post author

      Hi Davis,

      HANA helped us greatly in reducing data persistencies as well as speeding up analytics. Regular database access is not automatically 1000 times faster just because it is HANA. HANA allows in-memory operations, so parts of the application logic can be moved into the database layer, where they can be performed extremely fast. C4C is not (yet?) leveraging these benefits in all areas.

      I don’t know the details, but I think this is a huge architectural change that’s difficult to do on a platform where customers expect it to be stable.

      In short: The blog article is still up to date.

      Best Regards,
      Stefan

  3. Uldis Kalviskis

    Hi Stefan,

    right now I’m trying to figure out what is less expensive, when I need to find specific record in collection.

    So main question is Where() vs. foreach. So in one case I retrieve a new sub-collection which I could use as validation against the collection. Is it effective over loop and executing if statement?

    1. Stefan Hagen Post author

      Hi Uldis,

      I can’t say for sure, but I would assume that Where() is faster. If it’s really performance relevant, think about modelling a query for the sub collection and use this to search.

      Best Regards,
      Stefan

      EDIT: Correction: Where() internally translates to a “for” loop and is therefore very slow (and doesn’t scale very well)

      1. Daniel Weinberg (DUSER)

        Hi Stefan,

        I attended a performance session given by SAP, where the teacher made a clear statement when it comes to performance: do not use Where. The reason is that Where does a table copy, which is more expensive than traversing with foreach. We applied this correction to some of our coding that had really bad performance (> 8 hours) and improved it immensely (< 1 hour).

        Hope that helps.

        Daniel

        1. Stefan Hagen Post author

          Hi Daniel,

          this is very true. Internally, the Where statement gets translated to a simple sequential “for” loop.

          Best Regards,
          Stefan

  4. Sampath Kumar Narayanan

    Nice Blog! Thanks for sharing the same with us.
    I am specifically interested in buffered Retrieve.
    In ServiceRequest (SR), at item level, the Add button opens a new window (where on-save validation is not possible).
    So the validation has to be done in the OnSave event of the Item node of the ServiceRequest BO.
    However, this event (Item OnSave event) does NOT reflect the changed value of a field on the SR Item Add screen (let’s say an amount field).
    Instead, the object value shows the previous value (checked during debugging), which is not correct.

    (Ex.)
    “New” Pop-up Screen – Entered value for amount as £105.00. Click OK/Add.
    Item List Screen – Change the amount value to £100.00 instead and press enter. During Save of Service Request – it still shows Item Amount value as £105.00 rather than changed value of £100.00

    Any inputs to overcoming this limitation would be helpful.

    Thanks
    Sampath

    1. Stefan Hagen Post author

      Hi Sampath,

      I don’t know all the details about how your screen is built, but there are a couple of things you can check.

      – Is the field that contains the changed value “roundtrip relevant” (a property in the data model)?
      – Are the popup screen fields bound to the data model fields?
      – Does a UI event SyncDataContainer help to bring the change into the data model?
      – In the AfterModify, you can call an empty action. This triggers a flush and should sync the data model.

      Just ideas… I think I never experienced the situation you described.

      Best Regards,
      Stefan

  5. Uldis Kalviskis

    Hi Stefan,

    is there any more detailed deep-dive on how exactly “Backend Operations Clubbing” works? Can it impact the way events are triggered or data is buffered?

    Thanks,
    Uldis

