Fiori My Paystubs v3 and Performance in a Non-HANA Environment
You’ve implemented a shiny new Fiori Front End Server, configured Fiori Launchpad, and enabled the latest My Paystubs app (v3, from Fiori for SAP HCM 2.0). You’ve connected it to your ERP backend. You’re excited to show your management how easy it’s going to be for employees to get access to their entire history of payslips.
But there’s a problem. When you run the app, it just spins for a minute, and eventually terminates with the dreaded error:
500 Connection Timed Out
You close the error dialog, and a secondary message tells you "No paystubs currently available."
No worries, you think, this is freshly implemented, perhaps it just needs to compile and load cache. You run the app again, but you get the same result. Indeed, every time you run it, you get the same result: it spins for a full minute, then times out.
You check to see if you’ve missed a critical Note, but no, you’re fully up-to-date. You have the latest and greatest support pack for UIHR002 in your frontend, and the latest and greatest for GBX01HR5 in your backend. The implementation guide from the Fiori app library definitely states that this app is fully compatible with an EhP8 backend; S/4HANA is not required. It’s also fully compatible with AnyDB; you don’t require HANA as your database.
This is an important point, because your organization is one of the many that have not yet migrated to HANA. You’re running on SQL Server (or perhaps Oracle). So what could be wrong?
You can see with browser developer tools that the connection to the HCM backend is working, as the app is able to pull your PERNR. There are no obvious errors showing up in either /IWFND/ERROR_LOG in the frontend or /IWBEP/ERROR_LOG in the backend. Other apps using the same alias to connect to the same backend work just fine. And, what's more, your old-fashioned Enterprise Portal, running Web Dynpro Java apps for ESS, can pull up payslips without a problem.
You pull your hair out for a while, but eventually you realize that the app simply performs horribly. It is attempting to load at least ten years' worth of payslips from the database before displaying anything other than a spinning icon. If you test with a "new hire" user, one with only a year or two of pay history, the app returns results. It's slow, but it shows payslips.
So it’s a performance issue.
With some debugging and tracing, you determine that the app, or rather the OData service HCMFAB_MYPAYSTUBS_SRV that it calls, invokes the backend method CL_HR_PAY_ACCESS~GET_PARTICIPATING_PERNRS. This method spends an inordinate amount of time reading the table HRPY_RGDIR (Directory for Payroll Results). The specific lines of code where it gets hung up are:
* Read from the transparent payroll cluster directory
  SELECT DISTINCT pernr FROM hrpy_rgdir "#EC CI_NOFIRST
    INTO wa_py_rgdir
    WHERE persdata = period-persdata
      AND person   = period-person
      AND seqnr    = period-seqnr.
    APPEND wa_py_rgdir TO pernr_tab.
  ENDSELECT.
All well and good, except when you look at the definition in SE11 for HRPY_RGDIR, the only index is the primary key, which is composed of fields MANDT, PERNR, SEQNR, in that order.
MANDT, of course, is supplied automatically by the ABAP database interface, so that's not the problem. However, PERNR is the field you're trying to select, rather than supply as a WHERE condition, and your WHERE condition includes two fields that aren't in the index, PERSDATA and PERSON, ahead of the only other field that is, SEQNR.
That means that this SELECT statement cannot make effective use of the primary key, so in order to satisfy the WHERE condition, it has to scan the entire table. If you’ve been using this system for payroll for a while, you could easily have millions of rows in this table. No wonder it times out!
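On SQL Server, the statement that reaches the database looks roughly like the sketch below (illustrative only; the exact quoting and parameter markers vary by kernel release). Because the primary key is ordered (MANDT, PERNR, SEQNR), neither of the selective predicates lines up with the leading key columns, so the optimizer has little choice but to scan:

```sql
-- Approximate SQL generated for the ABAP SELECT above (illustrative sketch)
SELECT DISTINCT "PERNR"
  FROM "HRPY_RGDIR"
 WHERE "MANDT"    = ?   -- added automatically by the ABAP database interface
   AND "PERSDATA" = ?   -- not in any index
   AND "PERSON"   = ?   -- not in any index
   AND "SEQNR"    = ?   -- in the primary key, but only after PERNR
```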
You're going to need a secondary index. We can discuss why SAP doesn't include one as standard in a minute, but meanwhile your boss is expecting results, and you can't show him an app that takes forever to load and only works for recently hired employees.
DBACOCKPIT clearly shows you the statement resulting from the above code consuming an inordinate amount of time on your database, and what's more, it suggests an improvement. Because your underlying database is SQL Server, there's a built-in index analyzer tool that DBACOCKPIT is able to make use of. It recommends an index of:
- Columns in equality predicates = MANDT, SEQNR, PERSON, PERSDATA
- Included column = PERNR
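In native T-SQL, that recommendation corresponds roughly to the definition below. The index name here is illustrative; in an SAP system you should create the index through the data dictionary (as described next) so that the ABAP dictionary knows about it and it survives upgrades:

```sql
-- T-SQL equivalent of the recommended index (illustrative; in practice,
-- create it via SE11/RSMSS_ANALYZER so it is known to the ABAP dictionary)
CREATE NONCLUSTERED INDEX [HRPY_RGDIR~Z01]
    ON [HRPY_RGDIR] ([MANDT], [SEQNR], [PERSON], [PERSDATA])
    INCLUDE ([PERNR])
    WITH (ONLINE = ON);
```

The included PERNR column is what makes this a covering index: the query can be answered entirely from the index, without touching the table at all.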
The index analyzer tells you that such an index will improve the performance of this query by 99.96%. That's a lot of improvement!
There’s just one issue, however. When you go to SE11 to create the index, there’s no function for “Included columns.” How to solve this?
The answer is hinted at in Note 1775008 (Definition of index include columns for MSSQL Server). This Note is intended for BW systems, and yours is not a BW system; however, the technique works just fine in your ECC system.
I won’t make you go read the Note, don’t worry. I’ll spell it out for you here.
Use SE11 to create an extension index for HRPY_RGDIR:
Call it Z01, or something that works for your naming convention, set it as Non-Unique, and For selected database systems. Choose MSS as your database. Add the index fields MANDT, SEQNR, PERSON, and PERSDATA, in that order.
Save and Activate the index. Don’t forget that you’ll need to assign it to a Z package. On my small sandbox system, even with over 3 million rows in the table, the activation took only about a minute.
Now go to SA38 and run program RSMSS_ANALYZER. Switch to the Index Processing tab, enter table HRPY_RGDIR, then hit Refresh. The refresh is necessary before you'll be able to see your custom index. Enter your new index, and hit Refresh again. Now you should see the columns assigned to your index, and a list of available table columns. From that right-hand list, find and select PERNR, then click the left-arrow button to move it into the Assigned include columns list.
Check the box for Online creation, then click Schedule index update. Set it for immediate execution. In my system, it took about 75 seconds to run.
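If you want to confirm that the include column actually made it down to the database, a query against SQL Server's standard catalog views will show it (adjust the table name's schema if your SAP tables don't live in the default one):

```sql
-- List the key and included columns of every index on HRPY_RGDIR
SELECT i.name  AS index_name,
       c.name  AS column_name,
       ic.is_included_column
  FROM sys.indexes       AS i
  JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
  JOIN sys.columns       AS c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
 WHERE i.object_id = OBJECT_ID('HRPY_RGDIR')
 ORDER BY i.index_id, ic.key_ordinal;
```

PERNR should show up under your new index with is_included_column = 1.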
Now go try your app again!
Voila! Now you can pull up a decade or more of payroll results, beautifully formatted, in five seconds or less, and you don’t even need HANA to do it. What’s more, your new secondary index should not have any deleterious effect upon payroll processing performance (so far, we’ve not detected any).
So why doesn’t SAP include this index as standard? We can’t know for sure. My suspicion is that testing for this app, and perhaps most Fiori 2.0 apps, is done with HANA in the backend, even if it’s Business Suite and not S/4HANA. On HANA, this table may be columnar indexed, which is entirely different, so this sort of secondary index is likely unnecessary. Or perhaps just the raw performance of HANA is such that a little extra time spent scanning an entire table goes relatively unnoticed.
So my advice to SAP is, as long as you continue to support non-HANA systems, perhaps you should test your Fiori performance with them and adjust accordingly. Yes, eventually we’ll all be on HANA, but there are still six years to go before that deadline.
One caveat: I have only tested this secondary index on SQL Server. I do not know how this might work (or how it might be achieved) on an Oracle system. It’s quite possible it would be significantly different.