
Dear Readers,

In this blog, I provide step-by-step details on how to perform table redistribution and repartitioning in a BW on HANA system. Several of the steps described in this document are specific to a BW on HANA system.

In a scaled-out HANA system, tables and table partitions are spread across several hosts. The tables are distributed initially as part of the installation. Over time, as more and more data is pumped into the system, the distribution can become skewed, with some hosts holding very large amounts of data while others hold much less. This leads to higher resource utilization on the overloaded hosts and can cause various problems there: high CPU utilization, frequent out-of-memory dumps, and increased redo log generation, which in turn can cause problems in system replication to the DR site. If the rate of redo log generation on an overloaded host is higher than the rate at which the logs are transferred to the DR site, the buffer-full count increases and pressure builds on the replication network between the primary nodes and the corresponding secondary nodes.

The table redistribution and repartitioning operation applies advanced algorithms to ensure that tables are distributed optimally across all active nodes and are partitioned appropriately, taking into consideration several parameters, mainly:
➢ Number of partitions
➢ Memory Usage of tables and partitions
➢ Number of rows of tables and partitions
➢ Table classification

Apart from these, there are several other parameters that are considered by the internal HANA algorithms to execute the redistribution and repartitioning task.



You can consider the following criteria to decide whether redistribution and repartitioning is needed in the system.

➢ Execute the SQL script “HANA_Tables_ColumnStore_TableHostMapping” from OSS note 1969700. The output of this query shows the total size on disk of the tables on each host. In our system, as you can see in the screenshot below, the distribution was uneven, with some nodes holding far more data than others.

➢ Frequent out-of-memory dumps are generated on a few hosts due to high memory utilization by column store tables on those hosts. You can execute the following SQL statement to see the memory occupied by the column store tables.
select host, count(*), round(sum(memory_size_in_total/1024/1024/1024)) as size_GB from m_cs_tables group by host order by host
As you can see in the screenshot below, on some hosts the memory occupied by column store tables is much higher than on others.

➢ New hosts have been added to the system. Tables are not automatically redistributed to them; only new tables created after the hosts were added may be stored there. The “optimize table distribution” activity needs to be carried out to move tables from the existing hosts to the new ones.

➢ Too many of the following error messages are showing up in the indexserver trace file. The table redistribution and repartitioning activity also takes care of these issues.
Potential performance problem: Table ABC and XYZ are split by unfavorable criteria. This will not prevent the data store from working but it may significantly degrade performance.
Potential performance problem: Table ABC is split and table XYZ is split by an appropriate criterion but corresponding parts are located in different servers. This will not prevent the data store from working but it may significantly degrade performance.
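The size and row-count criteria above can be combined into a single check. The following query is a sketch: M_CS_TABLES exposes both MEMORY_SIZE_IN_TOTAL and RECORD_COUNT, but verify the column names against your HANA revision.

```sql
-- Per-host totals for column store tables: count, memory, and rows.
-- A strongly skewed result across hosts suggests redistribution is needed.
SELECT host,
       COUNT(*)                                        AS table_count,
       ROUND(SUM(memory_size_in_total)/1024/1024/1024) AS size_gb,
       ROUND(SUM(record_count)/1000000)                AS rows_millions
FROM m_cs_tables
GROUP BY host
ORDER BY host;
```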



✓ Update table “table_placement”
✓ Maintain parameters
✓ Grant permissions to SAP schema user
✓ Run consistency check report
✓ Run stored procedure to check native HANA table consistency and catalog
✓ Run the cleanup python script to clean virtual files
✓ Check whether there are any business tables created in row store. Convert them to column store
✓ Run the memorysizing python script
✓ Take a database backup
✓ Suspend crontab jobs
✓ Save the current table distribution
✓ Increase the number of threads for the execution of table redistribution
✓ Unregister secondary system from primary (if DR is setup)
✓ Stop SAP application
✓ Lock users
✓ Execute “optimize table distribution” operation
✓ Startup SAP
✓ Run compression of tables



Update table “table_placement”

OSS note 1908075 provides an attachment with several scripts for different HANA versions and different scale-out scenarios. Download the attachment and navigate to the folder matching your HANA version, number of slave nodes, and amount of memory per node.
In the SQL script, replace the $$PLACEHOLDER with the SAP schema name of your system and execute the script. This updates the table TABLE_PLACEMENT in the SYS schema, which the HANA algorithms consult when making table redistribution and repartitioning decisions.
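After running the script, you can sanity-check the result with a simple query (the exact column set of TABLE_PLACEMENT varies between HANA revisions, so selecting everything is safest):

```sql
-- Verify the placement rules were loaded; the redistribution and
-- repartitioning algorithms read this table.
SELECT * FROM "SYS"."TABLE_PLACEMENT";
```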

Maintain parameters

Maintain HANA parameters as recommended in OSS note 1958216 according to your HANA version.

Grant permissions to SAP schema user

For HANA 1.0 SPS10 onwards, ensure that the SAP schema user (SAPBIW in our case) has the system privilege “Table Admin”.
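Granting the privilege is a one-liner; substitute your own SAP schema user for SAPBIW:

```sql
-- Grant the TABLE ADMIN system privilege to the SAP schema user
GRANT TABLE ADMIN TO SAPBIW;
```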

Run consistency check report

SAP provides an ABAP report, rsdu_table_consistency, specifically for SAP systems on a HANA database. First, ensure that you apply the latest version of this report, and apply OSS note 2175148 (SHDB: Regard TABLE_PLACEMENT in schema SYS (HANA SP100)) if your HANA version is >= SPS10; otherwise you may get short dumps when executing this report with the option “CL_SCEN_TAB_CLASSIFICATION” selected.
Execute this report from SA38, making sure to select the options “CL_SCEN_PARTITION_SPEC” and “CL_SCEN_TAB_CLASSIFICATION” (you can select all the other options as well). If any errors are reported, fix them by running the report in repair mode.

Note: This report should be run after the table TABLE_PLACEMENT is maintained as described in the first step. The report refers to that table to determine and fix errors related to table classification.

Run stored procedure to check native HANA table consistency and catalog

Execute the stored procedures check_table_consistency and check_catalog for the non-BW tables and ensure no critical errors are reported. If any critical errors are reported, fix those first.
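As a sketch, the calls look like the following (signatures as documented in SAP's consistency-check notes; NULL arguments mean all schemas/tables):

```sql
-- Check all tables for consistency; the 'CHECK' action only reports,
-- it does not repair anything
CALL CHECK_TABLE_CONSISTENCY('CHECK', NULL, NULL);

-- Check the database catalog
CALL CHECK_CATALOG('CHECK', NULL, NULL, NULL);
```

Review the result sets and apply repair actions only per the guidance in the relevant SAP notes.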

Run the cleanup python script to clean extra virtual files

If there are extra virtual files, the table redistribution and repartitioning operation may fail. Run the Python script available in the python_support directory to determine whether there are any extra virtual files. Before you run the script, open it in the vi editor and adjust the parameters self.port, self.user, and self.passwd for your system.
Execute the script first without options to determine whether there are any extra virtual files. If it reports extra virtual files, execute it again with the remove option to clean them up:
python <script_name>.py --removeAll

Check whether there are any business tables created in row store

The table redistribution and repartitioning operation considers only column store tables, not row store tables. So if anyone has created business tables in the row store (by mistake, or without knowing the implications), they will not be considered by this activity. Big BW tables should not be created in the row store in the first place. Convert them to column store using the following SQL:
ALTER TABLE <table_name> COLUMN;

Run the memorysizing python script

Before running the actual “optimize table distribution” task, execute the command below:
CALL REORG_GENERATE(6, '');
This generates the redistribution plan but does not execute it. The number 6 here is the algorithm ID of “Balance Landscape/Table”, which is what the “optimize table distribution” operation executes.
After the procedure completes, in the same SQL console, execute the following query:
CREATE TABLE REORG_LOCATIONS_EXPORT AS (SELECT * FROM #REORG_LOCATIONS);
This creates the table REORG_LOCATIONS_EXPORT in the schema of the user with which you executed it. Then execute the query:
SELECT memory_used FROM reorg_locations_export;

If the memory_used column contains several negative numbers, as shown in the screenshot below, it indicates a problem.

You can also execute the below query and check the output.
select host, round(sum(memory_used/1024/1024/1024)) as memory_used from reorg_locations_export group by host order by host
If you get output like the screenshot below, the memory statistics have not been updated. If you execute the “optimize table distribution” operation now, the distribution won’t be even: some hosts may end up with far more tables and much higher memory usage, while others end up with very few tables and very little memory usage.

This is due to a HANA internal issue that has been fixed in HANA 2.0, where an internal housekeeping algorithm corrects these memory statistics. As a workaround for HANA 1.0 systems, SAP provides a Python script in the standard python_support directory (check OSS note 1698281 for more details about this script).

Note: This script should be executed during low system load (preferably on weekends).

After the script run finishes, generate a new plan using the same method as described above and create a new reorg locations table with a new name, say reorg_locations_export_1. Execute the query SELECT memory_used FROM reorg_locations_export_1. You will no longer see negative numbers in the memory_used column. Executing the query select host, round(sum(memory_used/1024/1024/1024)) as memory_used from reorg_locations_export_1 group by host order by host will now show a much better result. As you can see below, after executing the memorysizing script the values in the memory_used column are fairly even across all nodes, and there are no negative values.

Take a database backup

Take a database backup before executing the redistribution activity.

Suspend crontab jobs

Suspend jobs that you have scheduled in crontab, e.g. backup script.

Save the current table distribution

From HANA Studio, go to Landscape –> Redistribution and click Save. This generates the current distribution plan and saves it, in case you need to restore the original table distribution later.

Increase the number of threads for the execution of table redistribution

For faster execution of the table redistribution and repartitioning operation, you can set the parameter indexserver.ini [table_redist] -> num_exec_threads to a higher value, such as 100 or 200, based on the CPU capacity of your HANA system. This increases parallelization and speeds up the operation. The value should not exceed the number of logical CPU cores per host. The default value of this parameter is 20. Make sure to unset this parameter after the activity completes.
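The parameter can be set and later removed with ALTER SYSTEM statements, for example:

```sql
-- Increase redistribution parallelism for the duration of the activity
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('table_redist', 'num_exec_threads') = '100' WITH RECONFIGURE;

-- After the activity completes, revert to the default
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  UNSET ('table_redist', 'num_exec_threads') WITH RECONFIGURE;
```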

Unregister secondary system from primary (if DR is setup)

If you have system replication set up between primary and secondary sites, you will need to unregister the secondary from the primary. Performing table redistribution and repartitioning with system replication enabled slows down the activity.

Stop SAP application

Stop SAP application servers

Lock users

Lock all users in HANA except the system users (SYS, _SYS_REPO, SYSTEM), the SAP schema user, etc. Ensure there are no active sessions in the database before you proceed to the next step. This also ensures that SLT replication cannot run; if you want, you can additionally deactivate SLT replication separately.
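For illustration (JDOE is a hypothetical user), locking a user and confirming the database is quiet can look like this:

```sql
-- Deactivate a dialog user; repeat for every non-system user
ALTER USER JDOE DEACTIVATE USER NOW;

-- Confirm no application sessions remain before starting the redistribution
SELECT host, user_name, connection_status
FROM m_connections
WHERE connection_status = 'RUNNING';
```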

Execute “optimize table distribution” operation

From HANA Studio, go to Landscape –> Redistribution, select “Optimize table distribution”, and click Execute. On the next screen, leave the “Parameters” field blank; this ensures that repartitioning is performed along with the redistribution. If you want to run only redistribution without repartitioning, enter “NO_SPLIT” in the Parameters field. Click Next to generate the reorg plan, then click Execute.

Monitor the redistribution and repartitioning operation

Run the SQL script “HANA_Redistribution_ReorganizationMonitor” from OSS note 1969700 to monitor the redistribution and repartitioning activity. You can also execute the command below to monitor the reorg steps (the subquery selects the most recent reorg run):

select IFNULL("STATUS", 'PENDING'), count(*) from REORG_STEPS where reorg_id = (select max(reorg_id) from REORG_OVERVIEW) group by "STATUS";

Startup SAP

Start SAP application servers after the redistribution completes.

Run compression of tables

The changes to the partition specification of tables during this activity leave tables in uncompressed form. Although the “optimize table distribution” process carries out compression as part of the activity, due to a bug tables can still be uncompressed after it completes, which leads to high memory usage. Compression runs automatically after the next delta merge on these tables, but you can also trigger it manually. Execute the SQL scripts “HANA_Tables_ColumnStore_TablesWithoutCompressionOptimization” and “HANA_Tables_ColumnStore_ColumnsWithoutCompressionOptimization” from OSS note 1969700 to get the list of tables and columns that need compression. The output of these scripts provides the SQL statements for executing the compression.
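For a single table, a manual compression run can be forced like this (the table name is a made-up example; the scripts from note 1969700 generate the exact statements for you):

```sql
-- Force compression optimization for one affected column store table
UPDATE "SAPBIW"."/BIC/AZEXAMPLE00" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');
```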



As an outcome of this activity, the distribution of tables evened out across all hosts of the system, and the memory occupied by column store tables became more or less even. You can also see that the size of tables on the master node has decreased: some of our BW projects had created big tables on the master node, and these were moved to the slave nodes as part of the redistribution. This is the ideal scenario.

Size of tables on disk (before)

Size of tables on disk (after)

Count and memory consumption of tables (before)

Count and memory consumption of tables (after)

I hope this article helps anyone planning to perform this activity on a BW on HANA system.





OSS note 1908075 – BW on SAP HANA: Table placement and landscape redistribution

OSS note 2143736 – FAQ: SAP HANA Table Distribution for BW

In Fiori 2.0, SAP introduced the “Me Area”, where you can view user information, log off, find apps, edit Launchpad groups, set default values, and so on. One of the things I find nice about this area, and that adds a bit of charm, is the user's photo.

It is always present in SAP's presentations, but when you start using Fiori you may ask yourself: “Where on earth do I put my photo in this system? Is it in SU01? Is it by clicking the little person icon?”

Note 2381470 ends the guessing game: the user photo is only available through integration with SAP Jam.

One option to work around the integration is to create a plugin for the Launchpad, as shown in the blog below.

Below is another option, in which we trick the services that handle the SAP Jam integration.

In the first part, the system searches for user information. One of the pieces of information retrieved is whether Jam is active.


Create an enhancement in class cl_smi_collab_platf_checker, method IS_JAM_CONFIGURED, activating SAP Jam:

rv_is_jam_configured = abap_true.


With Jam active, it is time to bypass the next two calls.

The code below can be found in the ContainerAdapter file, explained at the end.

            //set user image
            if (oUser.isJamActive && oUser.isJamActive()) {
                    function (oResponseData) {
                        var sJamUserId = oResponseData.results.Id,
                            sJamUserImageUrl = "/sap/bc/ui2/smi/rest_tunnel/Jam/api/v1/OData/Members('" + sJamUserId + "')/ProfilePhoto/$value";

                    function (message) {
              "Could not recieve JAM user data");

Create an enhancement right at the beginning of the method IF_HTTP_EXTENSION~HANDLE_REQUEST.

Insert the code below.

  DATA: lvz_json_content TYPE string.

  " If the request was only the XSRF token handshake, stop here
  IF abap_true = me->handle_xsrf_token( server = server ).
    RETURN.
  ENDIF.

  DATA(lvz_path_info) = server->request->get_header_field( '~path_info' ).

  me->parse_path_info( EXPORTING iv_path_info        = lvz_path_info
                       IMPORTING ev_service_provider = DATA(lvz_serv_provider)
                                 ev_resource_path    = DATA(lvz_resource_path) ).

  " Part 1 - return the user ID
  IF lvz_resource_path CS 'api/v1/OData/Self'.

    " Returned in JSON format. Example: { "d" : { "results" : { "Id" : "MY_USERNAME" } } }
    lvz_json_content = '{' && |"d":| && '{' && |"results":| && '{' && |"Id": "{ sy-uname }"| && '} } }'.

    server->response->set_cdata( data = lvz_json_content ).

    server->response->set_content_type( 'application/json; charset=UTF-8' ).

    server->response->set_status( code = 200 reason = 'OK' ).

    RETURN.

  " Part 2 - with the user ID, return the image
  ELSEIF lvz_resource_path CS 'api/v1/OData/Members'.

    DATA(loz_mr_api) = cl_mime_repository_api=>if_mr_api~get_api( ).

    " Read the photo from the MIME repository
    CALL METHOD loz_mr_api->get
      EXPORTING
        i_url              = '/SAP/PUBLIC/ZUser/userphoto'
        i_check_authority  = ' '
      IMPORTING
        e_content          = DATA(lvz_mime_content)
        e_mime_type        = DATA(lvz_mime_type)
      EXCEPTIONS
        parameter_missing  = 1
        error_occured      = 2
        not_found          = 3
        permission_failure = 4
        OTHERS             = 5.

    IF sy-subrc EQ 0.

      server->response->set_content_type( lvz_mime_type ).

      server->response->set_data( data = lvz_mime_content ).

      server->response->set_status( code = 200 reason = 'OK' ).

    ELSE.

      server->response->set_status( code = 500 reason = 'MIME: Error occurred' ).

    ENDIF.

    RETURN.

  ENDIF.





  • In the first part, the system returns the user ID, which is used to compose the URL that fetches the photo content.
  • In the second part, with the user ID in hand, it returns that user's photo.
    Note: in the example I used a fixed path, but you can parse the user ID from the URL and return the correct photo for that user.




Finally, to find the code with the calls to the Jam service, open the Fiori Launchpad in debug mode.

URL: …/shells/abap/FioriLaunchpad.html?sap-client=100&sap-language=PT&sap-ui-debug=true

The ContainerAdapter file is located in the path below.

This post is based on the open online course “Using SAP Screen Personas for Advanced Scenarios” Week 5 Unit 5. You can go there for more details.

1. Purpose of flavors versioning

Instead of copying flavors again (e.g. to add or test a new feature), you should start versioning them. That keeps your flavors in order and allows you to easily go back to a previous version when something goes wrong.

Flavor copies should not replace flavor versioning

2. Creating a new version of flavor

Open your flavor and click on Edit Flavor:

Go to Release tab and click on New Version:

Add Description and Create it:

Now in Release tab you can see version History:

Version History contains three types of versions: User Versions (versions created manually by yourself), Generated Versions (versions created automatically on save) and All Versions (User and Generated versions together):

3. Restoring version of the flavor

You can easily restore a previous version of your flavor: just pick the version you want to restore from the Version History and click on Rollback Version:

After you confirm the rollback, you will be able to see it in the History:

So far we have looked at Top 1 to 5 of the Top 10 IBCS rules, so this time we look at the Top 6 topic, which is about chart labels.


Chart labels are often something people struggle with when adopting the IBCS recommendations, not because they disagree with the approach, but because it is something users are not accustomed to, and very different from what they have used so far.


As you can see in the small screenshot above, Top 6 on Labels concerns the data labels of a chart, and it talks about avoiding axes and grid lines; the “avoiding axes” part in particular is something many users are not used to.

Let’s look at a concrete example:

Above you see a bar chart created with SAP Analytics Cloud showing the scale on the X-axis; in this example, it shows the sales revenue per product line in million USD.


So if we now follow the IBCS recommendations:

  • We remove the X-axis
  • We enable the data labels


.. and perhaps I should have mentioned that this was the default chart SAP Analytics Cloud created to start with; I switched on the X-axis and axis labels and removed the data labels to show the “standard” chart.


So far we have looked at the data labels, the axis line, and the axis labels. In addition to those elements, there is also a very specific recommendation on category labels.

Above you can see another example trying to follow the recommendations of Top 6.

  • We moved the data labels to the inside of the stacked column chart
  • We added a total on top of the stacked column
  • We removed the Y-Axis and Y-Axis labels

… and we removed the legend, and placed the category labels on the outside of each category on the right hand side.


I hope you find these steps helpful and look at options to apply some of these recommendations in your next dashboarding / data visualization project.



Additional Information:

SAP Analytics Cloud and Visualization Standards – Why ?

SAP Analytics Cloud and IBCS – Part 1

SAP Analytics Cloud and IBCS – Part 2

SAP Analytics Cloud and IBCS – Part 3

openSAP – Semantic Notations





This episode of ‘On the way to…’ is brought to you by…my house! I thought I would get this blog out before I actually travelled this time around.

So, I’m off to the 2018 installment of ASUG/Eventful’s BI+A Conference. I have really come around to this conference because it *feels* different. And different is a good thing, to be honest. There are plenty of conferences that have tons of detailed sessions, others that have a few high-level overview sessions, and yet others that are analytics-industry focused. I would describe BI+A as ‘all of the above’. I like that I have a few good sessions in each of those categories: when I want to go high-level overview, I can. Deep dive into a product? Analytics industry discussions? Check.

And then there’s location. Nashville in 2017 was pretty cool. But as Philly is forecasting for snow on Saturday night, I will gladly wake up in California. (Huntington Beach seems to be a very nice place for a February conference.) I know it’s a bit of a ‘humble brag’, but I am definitely ready for a winter break!! And BI+A has a keynote from Pixar…if you know me, you know I’m a Disney nerd and am excited for this keynote. °O°

By now, most folks who have an ear to the SAP BI ground have heard about the updates SAP has made to their data discovery tool roadmap, specifically around SAP Lumira Discovery. (If you haven’t, please check out the ‘SAP 2018 Strategy and Roadmap for Business Intelligence’.) I suspect that the sessions around the roadmap, the Ask the Expert booth, and the overall conference will be buzzing with this news. If you are interested in following the conversation, I would keep an eye out for news articles, blog posts, and tweets during and post-conference. There should be some good content coming out.

For those on-premise (ha,ha), my ‘3 Sessions on Derek’s Agenda (+1)’ are below. If you’re looking for a list of recommended sessions delivered in GIF form via a Twitter thread (and if you’re not, you are missing out) then Jamie Oswald has you covered:

3 Sessions on Derek’s Agenda (+1)

One- Monday, 11am
SAP Analytics Cloud Hybrid Use Cases (Adrian Westmoreland – Product Manager, SAP)

Hybrid is the way of the future according to the latest SAP BI Roadmap analysis. Adrian is a longtime SAP Product Manager and hails from the old BusinessObjects side of the house. And I trust him, so if you want real information and opinions, then I would definitely put this session on your agenda.

Two- Monday, 2:55pm
Leaner Staff, Deeper Analytics: Driving Greater Accountability with Fewer Resources Using BusinessObjects (Chris Josefy – Manager, Business Solutions, EP Energy)

Another reason I like ASUG/Eventful conferences is the customer stories. As a BI professional with limited resources on my team, this one seems to call to me!

Three- Tuesday, 2:15pm

Interactive Session: Come with Your Analysis for Office Migration Concerns and Leave with an Action Plan (Mustafa Mustafa – Senior Director – IT, Ferrara Candy Company)

Folks always look for a reason to attend conferences, right? If you are thinking about migrating Analysis for Office, then you may just come home with some information to justify sending you to a conference!

+1- Monday, 2:00pm

Panel Discussion: BI + Analytics in Healthcare (Derek Loranca – SAP Mentor & Lead, BI COE, Aetna | Jamie Oswald – SAP Mentor & Manager, Data Analytics & Engineering, Mercy | Jennifer Cofer – ASUG Volunteer)

As always, my +1 is my session. Jamie, Jennifer and I will be hosting a panel discussion to talk about how some folks in healthcare utilize BI and analytics. A few things you can expect: jokes, knowledge and a special guest moderator!

There are a few conferences covering SAP Analytics in the next few weeks, so I expect that there’s going to be a lot of news and analysis coming out. And then there’s ASUG Annual Conference/SAPPHIRE in June, where I’m sure there will be more information to cover. Should be interesting, so buckle in!

(Cross-posted on my personal blog)

Perhaps you heard the news at SAP TechEd last year, or read Murali Shanmugham‘s excellent blog about developing Fiori Apps, or you’ve seen this video:

That’s right: Mendix and SAP are joining forces, and this partnership will bring a lot of exciting opportunities for SAP developers to be even more productive!

Join product managers from Mendix and SAP on Monday, March 5th to learn more about the partnership and about ways to take advantage of new solutions available to you now. As always, these SAP Community Calls are hosted by SAP Mentors.

Stand Out in a Software-Driven World

Abstract of the session:

Do you want to be more productive, love to try out new software, and work with innovative technology? Then make sure you attend this session!

Speed up your app development process with SAP’s low-code solution partner: Mendix. The Mendix Platform provides a visual modeling tool that lets you quickly build apps. It simplifies and speeds up app development by adding layers of abstraction on top of user experience, interface design, and logic.

In this SAP Community Calls session, we’ll focus on the basics of SAP Cloud Platform RAD Mendix, where Erno Rorive from Mendix will cover the key principles of the platform, followed by a demo and round of Q&A.

During this webinar we will present

  • The key principles of Mendix
  • A short demonstration of the Mendix Platform
  • Examples of Mendix on top of SAP
  • Q&A

This webinar will be delivered by Ohad Navon, Product Manager @ SAP, and Erno Rorive, Product Manager @ Mendix.


Joining the session:

Date: Monday, March 5, 2018

Time: 9:00am PST; 12:00pm EST; 6:00pm CET

Webinar link:

Dial-in: Either have the system dial your phone (free); or find a dial-in number here.

Register for the call if you would like to download the event to your calendar and receive an email reminder from us.

See you there!

Save the date: on June 21, 2018 you can take part in the second edition of SAP Inside Track Bogotá.

This is the most important gathering of the SAP Community, bringing together the best professionals in the SAP market to share their knowledge and experience with all attendees.

The speakers will present the most relevant topics, covering success stories, system tools, and the main innovations SAP has delivered in recent years.

It is a day to learn, teach, and share with one another.

This event is free of charge.


Some possible topics include:

SAP S/4 HANA Ariba Java
SAP ECC SAP Solution Manager Development
Fiori SAP ERP SAP Transportation Management
SAP Activate UI5 SAP Lumira
SAP Hybris Project Management SAP BO
MRP on Hana Application Lifecycle Management SAP HCM
Analytics ABAP SAP Screen Personas
SAP Leonardo BRF+ SAP Ariba
Design Thinking with purpose Exponential Thinking SAP Next Gen



To be defined

Venue and Date

When: June 21, 2018

Time: 9:00 to 14:00

Cost: Free

Number of participants: 50

Where: SAP offices – Carrera 9 # 115-06, piso 24, oficina 2404
Edificio Tierra Firme, Bogotá, Colombia

For more information and comments:





Twitter hashtags:


Organization: SAP Mentors LAC





And more!

Want to put your logo here?

Become a sponsor

Write to find out how:

If you have been following along with my blogs about SAP Build, you have heard from me about why I love the product, and from the Product Managers about some of the latest and greatest features; now it is time to hear from someone who actually uses BUILD in their day-to-day work.

I reached out to a community member who works with BUILD in their daily work: not an SAP employee, but someone from our vast ecosystem of developers. I wanted to get an outsider's perspective on SAP tooling.

So thank you to Rajen Patel for answering my questions about how he and his team work with SAP Build! And check out some of his other contributions to the community like this blog.

Q: With all the available design and prototyping tools available, why did you choose SAP Build?

A: Our main reason for choosing SAP Build is that it’s being promoted by SAP as a prototyping tool for developing Fiori-compliant UI5 apps. We wanted to go to the source. The BUILD team at SAP did an amazing job developing this easy-to-use prototyping tool, and the OpenSAP course for Build was a great way to learn the basic functionality.

SAP Build provides integration with SAP Web IDE, and that helps us jump-start development. Whatever we developed in the prototyping phase is not throwaway work: all screen elements and UI designs can be imported directly into SAP Web IDE, saving development time. One last key feature we liked was the availability of Fiori-compliant controls. It goes without saying that the ability to design adaptive screens is a big bonus, i.e. design one prototype that fits all three major screen sizes: desktop, tablet, and phone.


Q: Are there any features in BUILD that you have found to be very helpful? If so, what and why?

A: SAP Build has almost all the features we needed for prototyping. Some were more useful than others in our case. Here I have listed a few features that were useful for our last prototype.

  • Screen building using ‘drag and drop’ functionality was great in creating forms (that’s bulk of our prototype)
  • Data modeling tool was very helpful in creating data to mimic real life scenarios. This allowed us to make the prototype very personal for our end-user community
  • Get feedback: This allowed users to test new business process ‘hands on’. We were able to view user’s interaction with the screen using the heatmap functionality and screen flow that users followed.


Q: What is your use case for BUILD?

A: We used SAP ECC scenario for building maintenance and repair functionality. Our prototype included worklist, forms (transactions), and reports.


Q: How do you (and your team) utilize it?

A: This was the first time in my career I was able to create an interactive prototype. Most analysts like to design screens for end users, and BUILD allowed our inner designers to go wild. Okay, not wild, but it at least brought out creativity that can be channeled through the standard UI5 framework, and the team can collaborate in a meaningful way. We created a project and shared it among our team. Personas were defined with the key information for each user group. This helped us create empathy for end users within our project team. It also provided a focal point for discussions, i.e. how will a user react to this feature?

We collaborated using features such as comments and feedback. We also used the project to conduct various demos.


Q: Is there a feature missing from BUILD that you wish was available?

A: Yes, a few extra features could have helped us. For example, it would have been great if end users could provide feedback without creating a ‘BUILD login’. One more user ID and password reduces user engagement.

From a development perspective, we could have used a few more ‘Fiori Launchpad’ elements, such as the Fiori screen header. Going forward, we will need to create more complex forms for desktop applications. The current form design is difficult to work with when one needs to create forms with 30–40 fields. (Yes, 30–40 fields to map a really complex business scenario.)


Q: Would you recommend for other developers to try/use BUILD? If so, why?

A: Yes. If you are developing a prototype for SAP products, SAP Build provides great value. It covers all three elements of a high-fidelity prototype: 1) screens (UI), 2) realistic data, and 3) process flow.

You can’t go wrong with Build for developing Fiori-compliant UI5 screens. It’s easy to use and provides a lot of value for your effort.


Have you heard about the SAPinsider Basis & SAP Administration and BI & HANA 2018 conference coming up February 26 – March 1 in Las Vegas?

I will be there to present and demo the latest and greatest about SAP Landscape Management, or LaMa. To show you most of its key capabilities, I have divided the LaMa topic into two sessions:

Please join me in my two sessions to hear everything about this great solution!

I am happy to answer your questions in the Q&A part of each session or afterwards, and I am also available for a longer conversation. Please contact your SAP Account Executive to schedule a meeting with me.

I am looking forward to this exciting event!

Best regards,


When introducing SAP Fiori, including the SAP Fiori launchpad as the entry point for end users, several deployment options are available. Which deployment option fits best depends on several aspects, such as the usage scenarios, the system landscape setup, and the general strategic direction regarding a cloud or on-premise environment.

The new document SAP Fiori Deployment Options and System Landscape Recommendations describes the main SAP Fiori scenarios, the recommended system landscape setup, and the SAP Front-end server deployment options. It also includes insights into the SAP Front-end server hub and embedded deployments and which aspects to consider. An FAQ section provides answers to common questions regarding the SAP Fiori deployment options, and finally you will get an idea of the strategic direction regarding the central entry point of the future.

Please note that with SAP S/4HANA there is an important update on the recommended SAP Fiori Front-end server deployment: for SAP S/4HANA systems, an embedded deployment is recommended instead of a hub deployment. This means that every SAP S/4HANA system has its own SAP Fiori launchpad.

To offer a single entry point to multiple SAP S/4HANA systems and to central services, such as the My Inbox app, you can still define one ‘leading’ SAP Fiori launchpad on a dedicated SAP Front-end server. From there you can access the other system-local launchpads or apps in a new browser tab or window.

This setup simplifies lifecycle management and version compatibility, and allows a clear separation of FLP content responsibility between business areas.

Please check out the document for details on the SAP Fiori scenarios: SAP Fiori Deployment Options and System Landscape Recommendations

For further information, see also: