
In architectural circles there is an expression that form follows function, which basically sets out the premise that a building's design should first and foremost reflect the usage of that building.

Is it domestic or commercial? Will it be open plan or compartmentalised? High traffic, or minimal area reserved for thoroughfare? All these considerations are weighed up during the design phase to make sure that the users of the building have the best possible experience and that the building is as efficient as possible.

A poorly designed building, or one not used for the purpose it was designed for, uses resources badly, is wasteful and delivers a working environment for its tenants that is at best unpleasant and at worst unusable.

The same rules need to be applied when designing your SAP Business One deployments: the solution should be built from the ground up to meet the needs and purpose of the company and its users.

Of course, like any building, you need a strong foundation, and in our case that is the basic hardware platform the solution will run on. The solution should also match the work practices of the users (that is what we cover in workflow, alert configuration and report design), and it should do the job it was designed for, which brings in complementary solutions, custom UI and DI API code and, of course, the configuration of the core Business One solution.

So what are the design principles that tend to be problematic from a performance perspective and need to be called out as a set of "Golden Rules" for deploying an SAP Business One solution?

I believe there are three key things that tend to be overlooked. I see them time after time in customer escalations, and I often see the beginnings of these problems when I am asked to give my opinion on whether SAP Business One is a "good fit" for a particular company or size of deployment.

By the way, these rules are actually appropriate for any transaction processing system when thinking about maximising system performance.

Rule No 1 – Consider the Transaction Patterns (aka Measure Twice, Cut Once)

Rule No 2 – Consider the User Experience Level

Rule No 3 – Schedule regular Performance Logging and Review

So let’s take a look at these rules in a bit more detail.

Rule No 1 is basically about making sure that you analyse the way transactions will flow through the system. Will they be evenly spread throughout the day? Are there periods of peak load? Are there periods where high availability is more critical for certain groups of users?

This is one of the areas where it can help to take the same approach a network design engineer takes when prioritising network traffic: do you have to apply some kind of Quality of Service (QoS) rules to deliver what is required? Do you need to over-specify the hardware to guarantee system availability? What will the utilisation look like?

For example, I can usually look at the performance logs for a company database and tell you what industry the company is in, based on when the disk traffic occurs during the day – and often the system will fail under those peak periods because it was designed for the average throughput and not the peak load.

So talk to the users, map out their transaction loads and consider their peak usage times. In a distribution business, for example, there will often be a morning peak of data entry as the preceding day's orders are keyed in, and a corresponding afternoon peak as orders are picked, packed and shipped. Scale the hardware for those peaks; otherwise users will complain about system speed at exactly those times.
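To make that peak-versus-average analysis concrete, here is a minimal sketch of the idea: bucket transaction timestamps by hour and compare the busiest hour against the average. The timestamps would come from wherever your system logs transaction entry; the sample data and numbers below are purely illustrative.

```python
from collections import Counter
from datetime import datetime

def hourly_load(timestamps):
    """Bucket ISO-format timestamps by hour of day and return the per-hour
    counts plus the average load over the hours that saw any activity."""
    counts = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    average = sum(counts.values()) / len(counts)
    return counts, average

# Illustrative sample: a distribution business with a morning data-entry peak
# (10 orders keyed around 09:00) and a smaller shipping peak around 14:00.
sample = (
    [f"2024-03-04T09:{m:02d}:00" for m in range(0, 50, 5)]
    + [f"2024-03-04T14:{m:02d}:00" for m in range(0, 30, 5)]
    + ["2024-03-04T11:30:00"]
)
counts, average = hourly_load(sample)
peak_hour, peak_count = max(counts.items(), key=lambda kv: kv[1])
print(f"peak: {peak_count} transactions at {peak_hour}:00, average {average:.1f}/hour")
# → peak: 10 transactions at 9:00, average 5.7/hour
```

If the peak hour is several times the average, as it is here, sizing the hardware for the average will guarantee complaints during the peak.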

Also, a critical area that often gets forgotten is reporting. How often are reports being run? What reports are needed? Are reports even required, or could the needs be met with dashboards or even simple queries – or should report processing be offloaded to a replicated database or an OLAP cube?

I have seen multiple implementations where the performance issues were solved simply by adjusting some of the reporting processes so that transaction audit reports were not being run at the same time as the peak transaction entry.
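That kind of fix boils down to a simple overlap check between the report schedule and the peak entry windows. Here is a sketch of the idea; the window times and report names are entirely illustrative.

```python
def overlaps(a, b):
    """True if two same-day (start_hour, end_hour) windows overlap."""
    return a[0] < b[1] and b[0] < a[1]

def conflicting_reports(schedule, peak_windows):
    """Return the reports whose scheduled window collides with a peak
    transaction-entry window, i.e. candidates for rescheduling."""
    return [name for name, window in schedule.items()
            if any(overlaps(window, peak) for peak in peak_windows)]

# Illustrative setup: the audit report currently runs inside the morning peak
peaks = [(8, 11), (14, 17)]              # peak data-entry windows (24h clock)
schedule = {
    "transaction audit": (9, 10),        # clashes with the 08:00-11:00 peak
    "stock valuation": (12, 13),         # safely in the quiet midday window
}
print(conflicting_reports(schedule, peaks))
# → ['transaction audit']
```

Anything the check flags is a candidate to move into a quiet window before you reach for bigger hardware.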

So it's like being a carpenter: measure twice and cut once. It is a lot easier to avoid issues than to fix them once they have become a sore point.

Rule No 2 is about taking the time to understand the level of experience the users of the system have with the kinds of processes you are proposing, and what they are already used to.

As an example – and it doesn't happen that often any more – users of older character-based systems are often so familiar with the process that they barely look at their screens during data entry. They are used to systems with very little flexibility that require minimal user interaction, where all the exceptions during data entry are handled via pre-configured rules.

This can then be reported as a performance issue – "the system is slow" or "the system can't keep up with me" – when in fact it comes down to understanding the users' level of experience and what they see as important, and doing some system design to factor those things in.

It sounds like simple common sense, but it pops up time and time again and could easily be avoided with a little extra time spent understanding the users' capabilities and needs.

Of course, you don't always get things right the first time, you don't always get the right information from the client, and things change over time – and this is where Rule No 3 kicks in.

I have found it is always good practice to set up a key series of performance measurements using the standard Windows Performance Monitor tools, run them on a regular basis to keep an eye on the system, and have the results sent to you in the background for analysis, so you can proactively make suggestions and tune the system before performance bottlenecks appear.

The three key areas to look at are, of course, CPU, memory and disk throughput, and a small set of performance counters covering those areas can be set up quickly and run on a monthly basis to track how things are running. You can also consider setting up alerts to let you know about a pending issue – take a look at a free tool called Spiceworks, which can help you monitor your clients' systems and provide them with a regular system report. It's a great value-add that gives your customers an additional reason to renew their annual maintenance.
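As a starting point for that monthly review, here is a minimal sketch that scans a Performance Monitor CSV export and flags samples breaching some warning thresholds. The counter names follow the usual perfmon conventions, but the exact column headers produced on your system, and the threshold values themselves, will vary, so treat both as illustrative assumptions.

```python
import csv
import io

# Illustrative warning thresholds; tune these to your own environment
THRESHOLDS = {
    r"Processor(_Total)\% Processor Time": 80.0,        # sustained CPU above 80%
    r"Memory\Available MBytes": 500.0,                  # flag when BELOW this floor
    r"PhysicalDisk(_Total)\Avg. Disk Queue Length": 2.0,
}
LOWER_IS_BAD = {r"Memory\Available MBytes"}  # counters where a LOW value is the problem

def flag_breaches(csv_text):
    """Scan a Performance Monitor CSV export and return (timestamp, counter, value)
    tuples for every sample that breaches a threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    breaches = []
    for row in reader:
        timestamp = row[reader.fieldnames[0]]  # first column holds the sample time
        for counter, limit in THRESHOLDS.items():
            value = float(row[counter])
            too_high = counter not in LOWER_IS_BAD and value > limit
            too_low = counter in LOWER_IS_BAD and value < limit
            if too_high or too_low:
                breaches.append((timestamp, counter, value))
    return breaches

# Two illustrative samples: a CPU spike at 09:00 and a low-memory dip at 12:00
sample = (
    'Time,"Processor(_Total)\\% Processor Time",'
    '"Memory\\Available MBytes",'
    '"PhysicalDisk(_Total)\\Avg. Disk Queue Length"\n'
    "09:00,95.2,1200,1.1\n"
    "12:00,40.0,450,0.5\n"
)
for timestamp, counter, value in flag_breaches(sample):
    print(f"{timestamp}: {counter} = {value}")
```

A script like this, run against the monthly counter logs, turns the review into a short exception report rather than a manual trawl through the raw data.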

These are the main areas where I see issues that manifest themselves as performance problems and that could have been avoided with a little more time spent at the design stage.

What's your experience? Are there other areas you have found to be critical that you think should also be considered?

If you can, please take the time to share with the community the best practices you have discovered that help you avoid performance anxiety with your SAP Business One implementations.


1 Comment


  1. Johan Hakkesteegt

    Hi Richard,

    First, great post, thank you so much !

    Per your request, my 2 cents on this subject: I would like to share a few tips for high transaction rate systems, which I have learned about the hard way.

    Background:

    We have an “old” database, started in December 2003. With some 10K items and 4K BPs, and an average of 300 transactions per business day, it is now about 80GB in size. We have been through 4 major version upgrades and countless patch level upgrades in between.

    Tips:

    • We are several months away from buying our third server. The first was specified according to expected transaction rates and usage. It worked perfectly, and all we did after two years was add an extra CPU and 2GB of memory (32-bit Windows). With the second server I made the mistake of looking only at the CPU and memory, and of letting the seller set it up and configure it. Our first server used straightforward mirroring (RAID 1). The new (second) server, following general server administration practice, was configured with a striped disk configuration (RAID 5). Lesson learned: hard disk configuration is key. Databases like to have hard disks to themselves. Tip: do not use RAID 5; use a configuration that gives your database its own dedicated hard disk.
    • Each system upgrade, be it B1 or SQL, inherently brings with it the risk of system corruption (I do not mean literal database corruption), along the lines of: a field that was not needed in the old version, and therefore got NULL values, is now suddenly needed, but the new version expects 0. Or a table that was used by the old version has been deprecated but does not get deleted by the upgrade process. Lesson learned: the older the database and the more upgrades, the more problems. Tip: if at all possible, try to start with a clean database every once in a while. Some B1 partners have ready-made implementation plans for such scenarios.
    • The larger the database gets, the slower your system gets. Take the deprecated table from the previous tip: not a problem in normal use, but when such a table is 500MB in size, it is dead weight on performance. Another source of big, useless tables is upgrading: the upgrade process will sometimes create intermediate tables that it does not remove afterward. Lesson learned: size matters. Tip: anything you can do to reduce the size of your database (other than shrinking) will improve performance; you do not necessarily always need to improve the hardware. Find deprecated and temporary tables (preferably with help from SAP) and remove them. In some cases, splitting the database files over several hard disks may be the answer.

    Regards,

    Johan

