SAP Lumira Data Geek Challenge: HCM Challenge
After accidentally stumbling across the SAP Lumira Data Geek Challenge, I was quite interested in seeing what this was about. Although I am largely a functional consultant these days, the techie inside me still likes to play and my passion for analytics got the better of me. I was in.
I wasn’t 100% sure what my aim was for doing this, but I wanted to see how intuitive the solution was to use and how easy it was to find out what SAP Lumira is and what it can do. Finding a single source of information on what SAP Lumira is was a challenge, and the best I could find was this FAQ in the SAP Lumira space on SCN. Even the Product Tour seemed a bit salesy and not really that helpful.
I downloaded and installed SAP Lumira and also downloaded the Workforce Distribution Report dataset from the Data Samples for SAP Lumira document, as this is an HR-focused dataset. To get prepped I read the great blog by Yunus Kaldirim called SAP Visual Intelligence Data Geek, since he also used the same dataset that I chose, and Tammy Powlas’ blog SAP Visual Intelligence #SAPVisi 1.0.6 is now out – Can you use Visi to Predict the Future?.
I ran SAP Lumira and got the welcome screen, as seen below. Following the instructions I selected New Document and located the Excel file of the dataset I had downloaded.
I got the preview of the dataset – as seen below – and after quickly checking it I selected Acquire to get the ball rolling.
Once the dataset had been “acquired” by SAP Lumira I was presented with the dataset Grid, as displayed in the below screenshot. So far, so good.
As Yunus had done, I checked the data for duplicate records. I sorted the Name column A-Z and immediately found a high number of duplicates. Since the dataset does not contain effective dates, it is hard to understand why this data has duplicates. Nevertheless, I looked for a way to remove them. By selecting the Facets button I was able to see all of the unique values in each column and the number of times they occurred, as shown in the screenshot below. The first record I had spotted duplicates for – Adam Cook – showed 10 beside his name, which aligned with the number of records I had seen before. However, there were not 10 identical records for Adam Cook, but rather around 3 unique records listed multiple times. Moreover, the Facets option had not identified unique records; it had only identified unique values in each column. My quest to clean the dataset was still ongoing, so I selected the Grid option to take me back to the previous screen.
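For readers who want to reproduce this duplicate check outside SAP Lumira, here is a minimal pandas sketch. The sample rows and the "Name" column are illustrative assumptions, not values taken from the actual Workforce Distribution Report file.

```python
import pandas as pd

# Hypothetical stand-in for the sample dataset; the actual file has more columns.
df = pd.DataFrame({
    "Name": ["Adam Cook", "Adam Cook", "Adam Cook", "Beth Jones"],
    "Department": ["Sales", "Sales", "HR", "Finance"],
})

# Roughly what the Facets view shows: how often each value occurs in a column.
print(df["Name"].value_counts())

# Drop fully identical rows -- this is what Facets does NOT do, since it
# counts unique values per column rather than unique records.
deduped = df.drop_duplicates()
print(len(deduped))  # 3 -- the two identical "Adam Cook"/"Sales" rows collapse into one
```

Note that `drop_duplicates()` compares whole rows by default, which matches the "unique records" behaviour I was looking for in Lumira.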
I decided to take a look at the Manipulation Tools sidebar – as seen in the below screenshot – but couldn’t find anything to help me fix my duplicates. I did, however, find a number of helpful options to manipulate the dataset, such as find and replace, fill with, and trim. I decided that I really needed to evaluate the data being fed into SAP Lumira, rather than use the solution to correct issues in the data. I continued my investigation into the solution.
I decided to add some Measures and some Hierarchies to my Object list so that I could analyze my data. It was here I decided that I wanted to measure the average performance appraisal score based on the year of hire. This would tell me whether the seasoned veterans among the workforce performed better or worse than the more recent hires.
First of all, the dataset would not support this type of measurement by default. I had to create a copy of the Appraisal Rating Level Status column and replace each value with a numeric value using the Manipulation Tools sidebar. This was relatively straightforward, if a bit basic. I then selected my new column – called Appraisal Rating Level Status (2) – and from the column menu selected Convert to Number. This created a new column called Appraisal Rating Level Status (3) that contained genuine numeric values.
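The replace-then-convert step above can be sketched in pandas as a simple mapping. The rating labels and the numeric values assigned to them are my own illustrative assumptions; the actual dataset may use different labels.

```python
import pandas as pd

# Hypothetical appraisal ratings as they might appear in the text column.
ratings = pd.Series([
    "Exceeds Expectations",
    "Meets Expectations",
    "Below Expectations",
])

# Equivalent of the find-and-replace pass: map each label to a number.
rating_map = {
    "Below Expectations": 1,
    "Meets Expectations": 2,
    "Exceeds Expectations": 3,
}

# map() produces the numeric column in one step, covering both the
# replace and the "Convert to Number" actions done in Lumira.
numeric_ratings = ratings.map(rating_map)
print(numeric_ratings.tolist())  # [3, 2, 1]
```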
I then created a Measure for the new Appraisal Rating Level Status (3) column by selecting Create a measure from the column menu, as demonstrated in the screenshot below. I clicked on the menu icon next to it to change the type of calculation to Average and also renamed my Measure to Appraisal Rating Level Status – Average. After this I created a time hierarchy for the Hire Date column to measure my values on a time period based on when employees were hired, which was achieved by selecting the Create a time hierarchy… option in the column menu.
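The Measure and time hierarchy combination above boils down to a group-by-year average, which can be sketched in pandas as follows. The column names mirror those in the text, but the dates and scores are invented for illustration.

```python
import pandas as pd

# Hypothetical slice of the dataset after the numeric conversion.
df = pd.DataFrame({
    "Hire Date": pd.to_datetime(["2001-03-15", "2001-09-01", "2008-06-20"]),
    "Appraisal Rating Level Status (3)": [3, 1, 2],
})

# The "Year" level of the time hierarchy plus the Average measure:
# group rows by hire year and average the numeric rating.
avg_by_year = (
    df.groupby(df["Hire Date"].dt.year)["Appraisal Rating Level Status (3)"]
      .mean()
)
print(avg_by_year)  # 2001 -> 2.0, 2008 -> 2.0
```

This is essentially what dragging the Measure to the Y axis and the Year level to the X axis computes behind the chart.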
Once my Measure and Hierarchy were prepared, I used the Split button to bring up an empty chart and dragged my Appraisal Rating Level Status (3) Measure to the Y axis and the Year value from the Hire Date Hierarchy to the X axis. The result can be seen in the screenshot below.
Hovering over a column in the chart gives me more details about the year and value:
By using the Visualize button I can show just the chart, and by selecting the Maps chart I can see a more interesting graph of my values:
By selecting the Save button in the bottom corner I can save my chart and then I can use the Share button at the top to export it. Below is the export that I made of my Maps chart:
That concludes my Data Geek Challenge for SAP Lumira. I can see how, with other data – such as sales or financial data – this tool could be extremely useful. As Tammy demonstrated in her blog, the predictive analytics functionality can provide real value to end users in these scenarios.
I think with better HR data it could also be very valuable for an HCM professional, but without the right type of data available to me I am not able to validate this claim. Still, with the ability to import Excel/CSV files (which could be data from SuccessFactors, for example), data from BI/BW, or databases, there is certainly enough opportunity to introduce data into an extremely user-friendly interface for data visualization. I believe there is a lot of potential, and I enjoyed my brief “play around” with SAP Lumira.