“If I had known this earlier, I would’ve saved so much time.” That’s a phrase many of us utter when working with emerging technologies (and established ones, too). As I wrap up my first HANA project, I thought it would be a good time to share my first impressions and some of the development practices I’ve come to adopt for my future HANA projects.

It’s Fast…

I really mean it, it’s fast. I had done my research and found the underlying technologies to be really neat, but I still had to see it for myself. When the appliance was installed and I ran my first trial joins, I was thoroughly impressed with the results. Having seen these speeds, I eagerly started the development of the business scenario. Lo and behold, after many days of developing I went to the customer, we tested the design, and they did not share my enthusiasm with the end results. Sure enough, I had forgotten about the human factor. I knew that manipulating billions of records in a handful of minutes was a really impressive feat, but to the end users it was still taking too long. The first thing I realized was that a grand design for the business scenario was not going to work; it was time to rethink my strategy. Which brings me to my next point.

…But it can probably be faster.

I realized that my problem did not come from misunderstanding HANA itself, but from the design. Sure, I was not going to put everything into a single calc view, but a lot of the main components were still encapsulated in one main view. The first thing I did was decentralize the design, and this was a great first step toward better performance. Testing sessions with the customer started to improve; they were still adamant about the timing, but they acknowledged an improvement in performance. At this point we had functionally achieved the required business scenario, but I still had to find a way to improve the speed. The next step was to start looking at the data itself.
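To illustrate what I mean by decentralizing, here is a minimal SQL sketch. The table and view names are hypothetical, not from the actual project; the point is splitting one monolithic view into smaller, stackable views that can each be tested and tuned on their own:

```sql
-- Monolithic approach: one big view joining everything at once.
CREATE VIEW sales_enriched_all AS
  SELECT s.doc_id, s.amount, c.customer_name, p.product_name, r.region_name
  FROM sales s
  JOIN customers c ON s.customer_id = c.id
  JOIN products  p ON s.product_id  = p.id
  JOIN regions   r ON c.region_id   = r.id;

-- Decentralized approach: smaller building blocks, layered on each other.
CREATE VIEW sales_with_customer AS
  SELECT s.doc_id, s.amount, s.product_id, c.customer_name, c.region_id
  FROM sales s
  JOIN customers c ON s.customer_id = c.id;

CREATE VIEW sales_with_customer_region AS
  SELECT sc.doc_id, sc.amount, sc.product_id, sc.customer_name, r.region_name
  FROM sales_with_customer sc
  JOIN regions r ON sc.region_id = r.id;
```

Each layer can then be profiled separately, and problems are much easier to isolate than in one all-in-one view.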



Trim the fat


Reducing your data size from the start will make everything faster. It looks like something very simple to do, but it requires a lot of end-user input. “Is there data that you will not use?” is one of the questions I regret not asking earlier in this project, as it would have saved me a lot of time during development and would also have made the customer happier with the performance. Analyze your data and look for patterns where data is not being used. For example, if a join condition only ever occurs for certain values, I can include only those values at my data foundation level.
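As a sketch of filtering at the foundation level (the table, column names, and values here are hypothetical): once the business confirms which slices of the data are actually used, push that restriction down as low as possible so every join above it works on less data.

```sql
-- Hypothetical: the business confirmed only document types 'A' and 'B'
-- are ever analyzed, and nothing older than 2012. Filter at the data
-- foundation so all higher-level joins and aggregations see less data.
CREATE VIEW sales_foundation AS
  SELECT doc_id, doc_type, customer_id, amount, posting_date
  FROM sales_documents
  WHERE doc_type IN ('A', 'B')           -- prune unused document types
    AND posting_date >= '2012-01-01';    -- prune unused history
```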


Take advantage of HANA parallelization.

It can happen that I get so involved in the business scenario that I forget to view it from a technology perspective. Once I had worked with the design enough times, I started seeing patterns and identifying which cases were independent of each other, even though the customer had described an algorithmic, step-by-step approach (as is usually the case). This is the perfect opportunity to take advantage of HANA’s capabilities. I realized that I could start the design at any point, as long as the results were independent of one another. For example, the joining of two tables did not affect the joining of two others, even though they had been given to me in a sequence, so I could meet them in the middle. This sped up a lot of the data manipulation. Obviously this will vary by business case, but chances are that somewhere in the business case there is an opportunity for parallelization. Given that data sets are independent of each other at certain points in time, it is also helpful to start thinking about the integration of the end-user display applications.
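A minimal SQLScript sketch of the idea, with hypothetical table names and a hypothetical table type: the two intermediate results below do not depend on each other, so the engine is free to compute them in parallel before the final join, instead of being forced through the serialized order the business described.

```sql
-- Hypothetical sketch: lt_orders and lt_deliveries are independent,
-- so HANA can evaluate them in parallel; only the final join waits on both.
CREATE PROCEDURE combine_results (OUT result tt_order_plant)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  lt_orders = SELECT h.order_id, i.material
              FROM orders h
              JOIN order_items i ON h.order_id = i.order_id;

  lt_deliveries = SELECT d.material, d.plant
                  FROM deliveries d
                  JOIN plants p ON d.plant = p.plant_id;

  result = SELECT o.order_id, o.material, d.plant
           FROM :lt_orders o
           JOIN :lt_deliveries d ON o.material = d.material;
END;
```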

Leverage HANA with the front end data consumption tools

The big break in the project really came from how the front-end tools queried the HANA designs. I figured that the end users would only be seeing small data sets at certain points during their data manipulation. Working closely with my front-end colleagues, we found that querying the HANA views for only the necessary data brought the speeds to what the end users wanted.
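In practice this means the front end never asks for the whole result set. A hypothetical example (the view and columns are made up): the filter chosen in the UI and the page size are pushed down into the query, so HANA only materializes the slice the user is actually looking at.

```sql
-- Hypothetical: fetch only the user's current slice, not the whole view.
SELECT customer_name, SUM(amount) AS total
FROM sales_with_customer
WHERE region_id = ?            -- filter selected in the front-end UI
GROUP BY customer_name
ORDER BY total DESC
LIMIT 50;                      -- users only ever see one page at a time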

Play around with the design.

“If it ain’t broke…” It tends to happen to us developers that once we’ve spent enough time with a design, achieved the desired performance, and maintained data correctness, we become scared to “break” it. The customer and I were coming to terms with the design reaching a steady state. I figured I had done all that was technically possible to speed it up without compromising data correctness, and that if they wanted even better performance we had to start thinking of other designs. It so happened that there was a simpler design, but they had never used it because their previous tools could not handle those data volumes. I explained that HANA was capable of dealing with such volumes, and I started working on this new design using all the practices I had been learning along the way. The new design took me a week of work, as opposed to the couple of months the old design had taken.

Stay up-to-date with HANA

Updating your HANA system regularly will give you access to the new tools that SAP rolls out to aid your design. For one particular part of my design I had a hard time dealing with 1:n relationships: I had to write a SQL script, wrap it in a calc view, and expose it to the main design. With newer HANA versions, ranking became available, and it has become much easier to handle these types of scenarios.
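As a sketch of the ranking approach (hypothetical table and columns): a window function can pick one row per key directly, replacing the hand-written SQL script wrapped in a calc view.

```sql
-- Hypothetical: keep only the latest status row per document (a 1:n
-- relationship collapsed to 1:1) using a window function.
SELECT doc_id, status, changed_at
FROM (
  SELECT doc_id, status, changed_at,
         RANK() OVER (PARTITION BY doc_id
                      ORDER BY changed_at DESC) AS rn
  FROM document_status
)
WHERE rn = 1;
```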

…In conclusion

There is still a lot of work ahead of me, and I’m looking forward to more HANA projects; I’ll try to post more design philosophies as I go along. I’ll leave some pointers that I believe will be helpful in your HANA development:

–  Decentralize your design. Piecewise is better.

–  Reduce your data size as much as possible.

–  Ask your business: is there another way of doing this?

–  Don’t be afraid to play around once you finish your design.

–  Don’t always go for the serialized approach.

–  Stay up-to-date with HANA.

Best,

Luis


1 Comment


  1. Lucas Oliveira

    Hi Luis,

    You described your success story, but at no point did you provide a glimpse of technical content, even though it seems it would fit perfectly here. That leaves the reader wondering.

    It would be great to address (with deeper technicality) questions such as: what exactly was fast/slow, and what was done to make it quicker? Where was the ‘fat’ in your application and how did you trim it off? How did that affect your performance? How did you manage to make better use of parallelism? And so on…

    It’s difficult to even start imagining what went on in your case. Chances are others will feel the same.


    BRs,

    Lucas de Oliveira

