Performance Impacting Mistake No 4: Easing Your (SAP Business One) Performance Anxiety by following a few simple design rules
In architectural circles there is an expression, "form follows function", which sets out the premise that a building's design should reflect, first and foremost, how that building will be used.
Is it domestic or commercial? Will it be open plan or compartmentalised? Will it see high traffic, or only a minimal area reserved for thoroughfare? All these considerations are weighed up during the design phase to make sure that the building's users have the best experience possible and that the building is as efficient as possible.
A poorly designed building, or one not used for the purpose it was designed for, uses resources badly, is wasteful and delivers a working environment for its tenants that is at best unpleasant and at worst unusable.
The same rules need to be applied when designing your SAP Business One deployments: the solution should be built from the ground up to meet the needs and purpose of the company and its users.
Of course, like any building you need a strong foundation, and in our case that is the basic hardware platform the solution will run on. The solution should fit the working practices of the users, which is what we cover in workflow, alert configuration and report design. And it should do the job it was designed for; at this point we are talking about complementary solutions, custom UI and DI API code and, of course, the configuration of the core Business One solution.
So what are the design principles that tend to be problematic from a performance perspective and need to be called out as a set of "Golden Rules" for deploying an SAP Business One solution?
I believe there are three key things that tend to be overlooked. I see this time after time in customer escalations, and I often see the beginnings of these problems when I am asked to give my opinion on whether SAP Business One is a "good fit" for a particular company or size of deployment.
By the way, these rules are actually appropriate for any transaction processing system when thinking about maximising system performance.
Rule No 1 – Consider the Transaction Patterns (aka Measure Twice, Cut Once)
Rule No 2 – Consider the User Experience Level
Rule No 3 – Schedule regular Performance Logging and Review
So let’s take a look at these rules in a bit more detail.
Rule No 1 is basically about making sure that you analyse the way transactions will flow through the system. Will they be evenly spread throughout the day? Are there periods of peak load? Are there periods where high availability is more critical for certain groups of users?
This is one of the areas where it helps to take the same approach a network design engineer takes when thinking about prioritisation of network traffic: do you have to apply some kind of Quality of Service (QoS) rules to the system to deliver what is required? Do you need to over-specify the hardware to guarantee system availability? What will the utilisation look like?
For example, I can usually look at the performance logs for a company database and tell you what industry the company is in, based on the disk traffic that occurs at particular times of day. Often the system will fail during those peak periods because it was designed for the average throughput and not the peak load.
So talk to the users, map out their transaction loads and consider their peak usage times. In a distribution business, for example, there will often be a morning peak of data entry as the preceding day's orders are keyed in, and a corresponding afternoon peak as orders are picked, packed and shipped. Scale the hardware for those times; otherwise that is exactly when users will complain about system speed.
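To make the "design for the peak, not the average" point concrete, here is a small sketch that quantifies the gap from a day's transaction counts. The hourly order figures are invented for illustration; real numbers would come from your own workload mapping.

```python
# Hypothetical hourly order-entry counts for a distribution business:
# a morning keying peak and an afternoon pick/pack/ship peak.
hourly_orders = {
    8: 40, 9: 180, 10: 210, 11: 120, 12: 50,
    13: 60, 14: 150, 15: 220, 16: 190, 17: 70,
}

average_load = sum(hourly_orders.values()) / len(hourly_orders)
peak_hour, peak_load = max(hourly_orders.items(), key=lambda kv: kv[1])

# Sizing for the average would under-provision the peak by this factor.
peak_to_average = peak_load / average_load

print(f"average: {average_load:.0f}/hour, peak: {peak_load}/hour at {peak_hour}:00")
print(f"peak-to-average ratio: {peak_to_average:.1f}x")
```

With this made-up profile the busiest hour carries roughly 1.7 times the average load, which is exactly the headroom an average-based sizing exercise would miss.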
Also, a critical area that often gets forgotten is reporting. How often are reports being run? What reports are needed? Are reports even required, or could the need be met with dashboards or simple queries? Should report processing be offloaded to a replicated database or an OLAP cube?
I have seen multiple implementations where the performance issues were solved simply by adjusting the reporting schedule so that transaction audit reports were not being run during the actual peak of transaction entry.
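That scheduling check can be sketched as a tiny piece of logic. The peak windows below are assumed, not measured; the point is only to show how a report run can be tested against the peak transaction-entry windows before it is scheduled.

```python
# Hypothetical schedule check: does a report run overlap a peak entry window?
# Times are hours of the day (24h clock); windows are half-open [start, end).
peak_windows = [(9, 11), (14, 17)]   # assumed morning and afternoon peaks

def overlaps_peak(report_start, report_end):
    """True if the proposed report window overlaps any peak window."""
    return any(report_start < p_end and p_start < report_end
               for p_start, p_end in peak_windows)

# An audit report at 10:00 collides with the morning keying peak...
print(overlaps_peak(10, 12))   # True
# ...but moved to midday it runs in the quiet window between the peaks.
print(overlaps_peak(12, 14))   # False
```

The same interval-overlap test works for any pair of windows, so it extends naturally to backup jobs, integrations or batch imports competing with the same peaks.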
So it's like being a carpenter: measure twice, cut once. It is a lot easier to avoid issues than to fix them once they have become a sore point.
Rule No 2 is about taking the time to understand the level of experience the system's users have with the kinds of processes you are proposing, and what they are already used to.
As an example (and it doesn't happen that often any more), users who are used to older character-based systems often don't even look at their screens during data entry: they are so familiar with the process, and they are used to inflexible systems that require minimal user interaction, where all the exceptions during data entry are handled via pre-configured rules.
So this can be reported as a performance issue ("the system is slow" or "the system can't keep up with me") when in fact it comes down to understanding the users' level of experience and what they see as important, and making sure your system design factors those things in.
It sounds simple and like common sense, but it pops up time and time again, and it could easily be avoided by spending a little extra time understanding the users' capabilities and needs.
Of course, you don't always get things right the first time, you don't always get the right information from the client, and things change over time. This is where Rule No 3 kicks in.
I have found it is always good practice to set up a key series of performance measurements using the standard Windows performance monitoring tools, run them on a regular basis to keep an eye on the system, and have the results sent to you in the background for analysis, so you can proactively make suggestions and tune the system before performance bottlenecks appear.
The three key areas to look at are, of course, CPU, memory and disk throughput, and a set of performance counters covering these can be quickly set up and run on a monthly basis to track how things are going. You can also consider setting up alerts to let you know there is a pending issue. Take a look at a free tool called Spiceworks, which can help you monitor your clients' systems and provide them with a regular system report; it's a great value-add that gives your customers additional reasons to renew their annual maintenance.
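The monthly review itself can be automated. The sketch below assumes counter samples have already been exported (for example from Performance Monitor as CSV) into simple lists of readings. The counter names are standard Windows performance counters, but the threshold values here are illustrative starting points only, not SAP-recommended limits.

```python
# Assumed review thresholds; tune these per deployment.
THRESHOLDS = {
    r"\Processor(_Total)\% Processor Time": 80.0,          # ceiling, percent
    r"\Memory\Available MBytes": 500.0,                    # floor, not ceiling
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length": 2.0,  # ceiling
}

def review(samples):
    """samples: {counter_name: [reading, ...]} -> list of warning strings."""
    warnings = []
    for counter, readings in samples.items():
        avg = sum(readings) / len(readings)
        limit = THRESHOLDS[counter]
        # Available memory is a floor (breach when low); the rest are ceilings.
        breached = avg < limit if "Available" in counter else avg > limit
        if breached:
            warnings.append(f"{counter}: average {avg:.1f} breaches {limit}")
    return warnings

# Invented sample data: CPU and disk are fine, memory is running low.
samples = {
    r"\Processor(_Total)\% Processor Time": [35.0, 60.0, 55.0],
    r"\Memory\Available MBytes": [450.0, 380.0, 420.0],
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length": [0.8, 1.2, 1.0],
}
for warning in review(samples):
    print(warning)
```

A real monthly job would read the exported counter log, run a check like this and mail the warnings, which is exactly the proactive loop Rule No 3 describes.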
These are the main areas where I see issues manifest as performance problems that could have been avoided with a little more time spent at design time.
What's your experience? Are there other areas you have found to be critical that you think should also be considered?
If you can, please take the time to share with the community what you have discovered as best practices that help you avoid performance anxiety in your SAP Business One implementations.