
SAP has grown at an exponential rate in terms of the solutions it provides to solve business problems. Until a few years ago, implementing SAP meant implementing a product called SAP R/3, which had various modules focused on different business processes or functions in the organization. These business processes have since become so complex, due to changing market dynamics and customer focus, that R/3 alone is no longer enough. For example, the planning function, which traditionally was the PP (Production Planning) module in the R/3 days, today has complex needs incorporating functions such as Network Planning, Demand Planning and Available to Promise. Creating a Sales Order on behalf of a customer was already a complex process in the SD (Sales & Distribution) module in R/3; today it also involves servicing the same customer's disputes, managing discounts under various pricing agreements … and I could just keep going.

However, the bottom line is that today's SAP installations don't start and stop with the ERP product but go beyond it, for all the reasons mentioned above. They are also getting more complex with the implementation of new technologies and products such as SAP SCM, CRM, BW, PI, Portal, etc. Being smart at planning is essential, so standardization is important even during the installation of these multiple SAP systems. Remember, installation is a complex process that requires various teams: the Server & OS team, DBAs, the Storage team, of course the Network team and, not least, the Basis team.

In this blog, I will try to point out various opportunities for standardizing your SAP installation process. The goals to drive towards are to shorten the deployment time of the SAP systems, reduce the chances of errors and … be disciplined – just because it makes life less stressful (trust me on that one!)

System IDs:

Standardizing on System IDs is not a new practice. However, when you choose a SID naming convention, think about the future expansion of your company. A three-letter SID is too small to carry all the information; the key is to agree on a convention and standardize on it. A SID should at least convey – 1. the type of the system (D, P, Q, S, etc.), 2. the type of product installed (ECC, CRM, SCM, etc.) and either 3.a. the business segment or 3.b. just a number to allow a similar product to be implemented again (such as two CRMs, depending upon the business requirement). We have also had our SIDs standardized for the n+1 landscape.
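As a sketch of such a convention – the letter codes below are my own illustration, not an SAP standard – the three positions could be composed like this:

```shell
#!/bin/sh
# Hypothetical SID scheme: position 1 = system type (D/Q/P/S),
# position 2 = product (E=ECC, C=CRM, M=SCM, B=BW),
# position 3 = sequence digit to allow a second system of the same product.
make_sid() {
  printf '%s%s%s\n' "$1" "$2" "$3"
}

make_sid D E 1   # first development ECC  -> DE1
make_sid P C 2   # second production CRM  -> PC2
```

Whatever codes you pick, document them once and reuse them for every new system, including the n+1 landscape.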

Server Names:

You might not have a lot of play here, since your server naming conventions might already be set. Some companies have strictly rule-based names; others less so. However, ensure the server names are less than 13 characters, and make sure all the servers for a given type of system (DEV, QA, etc.) are under the same domain. I'd like to add one more important aspect of server names here, considering the role of virtualization and cloud computing: make sure each SAP instance is installed using a virtual host name. For example, the CI of a system whose SID is AEM runs on a server called aemci01, the dialog instances of the same system are installed using virtual names such as aemdi01, aemdi02, etc., and the database instance as aemdb01. This can be achieved using the sapinst parameter SAPINST_USE_HOSTNAME during installation. It helps in a number of ways, such as – 1. moving applications from one server to another, 2. HA, 3. DR, or maybe just even if you were to 4. upgrade your hardware. It is also helpful (in fact mandatory) if you were to use SAP Adaptive Computing Controller (ACC).
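As a minimal sketch – the extraction path is a hypothetical example, and the virtual host name is the aemci01 example from above – the installer would be started like this so the instance binds to the virtual name rather than the physical one:

```shell
# Run on the physical host after the virtual IP / host name aemci01
# is active and resolvable. /install/SWPM is a hypothetical directory
# where the installation media was extracted.
cd /install/SWPM
./sapinst SAPINST_USE_HOSTNAME=aemci01
```

The same parameter is used again for each dialog instance (aemdi01, aemdi02, …) and for the database instance (aemdb01).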

Instance Numbers:

If you think about it, instance numbers are already SAP's way of standardizing application port numbers – 36<nr>, 32<nr>, etc. are a few examples. How about standardizing them further, so that they serve a larger purpose in other aspects of networking and application management?

I like to standardize the instance numbers based on the products. For example, if the instance number for ECC systems, irrespective of system type (DEV, QA, PRD, etc.), is 10, that for CRM could be 20. All app servers could use the same instance number, since we rarely install multiple dialog instances on the same server. In the worst case, if you do for some creative reason, you can define rules around that too.

This approach has many possible use cases: standardization of the SAP Logon Pad entries, of the firewall rules, of the services file, and so on.
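To illustrate the payoff – the patterns 32<nr>, 33<nr> and 36<nr> are SAP's well-known dispatcher, gateway and message server ports, while the 10/20 assignment is just the example convention from above – the firewall rules become a simple function of the instance number:

```shell
#!/bin/sh
# Derive the standard SAP ports from the instance number <nr>:
# dispatcher 32<nr>, gateway 33<nr>, message server 36<nr>.
ports_for_instance() {
  nr=$1
  echo "dispatcher=32${nr} gateway=33${nr} message_server=36${nr}"
}

ports_for_instance 10   # ECC (per the example convention)
ports_for_instance 20   # CRM (per the example convention)
```

With one instance number per product across DEV, QA and PRD, the same firewall rule set and services file entries can be rolled out to the whole landscape.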

File Systems:

In the case of Windows, try to use separate drives for the SAP executables, the database and the operating system.

In the case of UNIX or Linux, try to use NAS, if not SAN, to build the file systems for the SAP executables. The goal is to avoid plain NFS served from a single host, since that is a single point of failure: in case of network issues, the whole system is impacted. A NAS appliance provides better protection against failures (depending, of course, upon how the file systems are architected in your specific case).

For the database, SAP obviously provides a list of file systems for archive logs and data, plus some others such as stage in Oracle and the instance home in DB2. A small note from a performance perspective – make sure the disks are striped appropriately to provide maximum throughput for data, and that RAID protection is chosen appropriately for data (read and write intensive) and logs (write intensive).

Also keep HA and DR in perspective. Mapping LUNs to file systems in a standardized way will go a long way towards keeping the entire suite of SAP systems similar at the infrastructure level.

Bottom line: try to standardize the file systems as much as possible, down to the level of the sapdata directories. This will allow your UNIX/Linux and database teams to be efficient. As I mentioned earlier, it's a joint effort between various technical teams to speed up the entire SAP installation process.

UNIX group IDs:

Admittedly, this could be over-engineering, but at least some standardization of the GIDs for groups such as sapsys or the database groups will go a long way. It helps especially if you are using identity management tools such as Vintela, and it is a huge advantage when using SAP ACC to relocate instances from one server to another.
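A minimal sketch of what this could look like on a freshly built host – the GID values here are illustrative assumptions for your own standard, not SAP defaults:

```shell
# Create the common SAP OS groups with fixed, landscape-wide GIDs so that
# file ownership on shared or relocated file systems stays consistent
# across all hosts (run as root during server provisioning).
groupadd -g 2000 sapsys
groupadd -g 2010 dba    # database group; Oracle-style example
groupadd -g 2011 oper
```

The same fixed IDs should then be referenced in your identity management tool so local and centrally managed accounts agree.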

Server capacities:

I probably should have mentioned server capacity standardization at the beginning, but this seems a logical point to bring it up, because as Basis administrators this is an area where we traditionally don't push ourselves. We typically end up sizing a server and handing the SAPS figure to our hardware teams / vendors.

I think we can do a better job here, working with the server teams and/or hardware partners to bring in standardization. Depending on your company's structure and size, defining standards around server sizing helps in a huge way. Try to have a standard around building blocks of server configurations. For example, a DEV server starts with 4 vCPUs and 16 GB RAM as the base configuration, and the next building blocks are 1 vCPU and 4 GB RAM each.

This model should help in a number of ways. It may feed into your chargeback model and may also help to budget the infrastructure cost during the early phases of a project. Another area where such standardization might help is cloud strategies – public or hybrid clouds and managing the shifting of workloads. Obviously this will help with SAP ACC as well.
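The building-block model above reduces to simple arithmetic – the base of 4 vCPUs / 16 GB and the 1 vCPU / 4 GB increment are the example values from the text:

```shell
#!/bin/sh
# Size a DEV server as base config plus N building blocks:
# base = 4 vCPUs / 16 GB RAM, each block adds 1 vCPU / 4 GB RAM.
size_dev_server() {
  blocks=$1
  vcpu=$((4 + blocks))
  ram=$((16 + blocks * 4))
  echo "${vcpu} vCPU / ${ram} GB RAM"
}

size_dev_server 0   # base configuration
size_dev_server 3   # base plus three building blocks
```

Because every server is some whole number of blocks, chargeback and budget estimates become a multiplication rather than a per-server negotiation.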

Database schema:

The standard database schema for SAP systems is sap<sid>, so you would think there is not much standardization required. Well, think about System Refresh / System Copy: what if, after a system copy, you didn't have to worry about the schema? If we standardize the schema names during installation, there are many situations where we can save time on maintenance tasks. One idea that specifically works for me is standardizing schemas based on the product installed. For example, the schema for all ECC systems (DEV, QA, etc.) can be saperp, that for BW sapbiw, etc.

RFC Destinations:

This is by no means something new, so I am not going to write a lot about it here; it is just a placeholder or checklist item for your consideration and for the completeness of this document.

Transport directories:

Mount the transport directory from a single location, preferably NAS, onto all SAP servers. This will avoid the issues arising from NFS failures and will also let you manage just one file system centrally. Typically this is also a file system that Basis folks like to use as a shared directory across all servers.
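As an illustration – the filer name and export path are hypothetical – a single mount entry repeated on every SAP server could look like:

```shell
# /etc/fstab excerpt: shared transport directory exported once from the NAS
# filer "nasfiler" and mounted identically on every SAP server.
# nasfiler:/vol/sap_trans  /usr/sap/trans  nfs  rw,hard,intr  0 0
```

One export, one set of mount options, one place to extend when the file system fills up.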

Have a single /usr/sap/trans mounted from a single location, but here is an approach to manage the transport files more effectively. If you use just the standard /trans and its subdirectories, it won't take long before it becomes difficult to find a cofile or data file when the need arises. Directory listing commands will also run longer, and this defeats the purpose.

Create a subdirectory called <sid>_trans within the …/trans directory for each SID. You will also have to create the other transport subdirectories, such as cofiles, data, tmp, etc., under …/trans/<sid>_trans and set the permissions appropriately. The SAP system then needs to be configured to use the appropriate …/<sid>_trans as its transport directory. This can be achieved by changing 2 profile parameters – DIR_TRANS and DIR_EPS_ROOT.
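For example, a profile excerpt for SID AEM might look like this – the paths follow the aem_trans example from this blog; adjust them to your own layout:

```shell
# Profile excerpt: point the Change and Transport System of AEM at its
# SID-specific subdirectory under the shared /usr/sap/trans mount.
# DIR_TRANS = /usr/sap/trans/aem_trans
# DIR_EPS_ROOT = /usr/sap/trans/aem_trans/EPS
```

The same two parameters are set on every application server of the system so all instances resolve the same transport directory.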


These are some of the key opportunities for standardization, and there could be more. I'd like to repeat that SAP installation is an infrastructure effort and all infrastructure teams play an important role. Standardization not only brings ease of administration but also plays a huge role in speeding up the total SAP build process (installation is one part of the end-to-end build process).



    1. Pise Mangesh Post author
      What you would do is have one single /usr/sap/trans created on the NAS and mounted on all the SAP servers (CIs as well as DIs of the ECCs, SCMs, CRMs, etc.). That way all the servers see one single …/trans directory. You then create subdirectories within …/trans, such as …/trans/aem_trans (for example, if the ECC SID was AEM and it had, say, 4 application servers), and configure the 2 profile parameters mentioned on AEM so that the transport directory is /usr/sap/trans/aem_trans, which will also be true for all 4 application servers.

      The goal of this exercise is to bring flexibility and standardization. That way, when you ask for disk space to extend trans, you should be able to ask for just one extension and not for individual trans directories across all landscapes. Also, if you add environments in the future, you are already standardized on your transport directory.

      Let me know if that does not answer your question.

        1. Pise Mangesh Post author
          That is correct to the best of my knowledge, since storage is not my area of expertise. There might be a way to present the same LUN or disk to multiple servers in read-write mode, but I am not sure about that. I've also stated using NAS in my earlier response as well as in the blog:
          “Mount the transport directory from a single location, preferably NAS, onto all SAP servers.”
          Hope this answers what you are looking for 🙂
