SAP IQ Best Practices From The Field
First and foremost, let me apologize for the look and feel of this blog. This is a “blog in transition”. I had initially written a white paper to cover this topic. I am converting this over to the blog format so that it is much easier to keep up to date and get the information out to those interested.
This section tries to cover some general best practices for SAP IQ implementations. This is meant to draw our attention to areas that may not have been tuned in the past. It is not an indication of issues that were observed while onsite.
As customers progress through various versions of SAP IQ, it is important to consider that newer versions and new features can change system sizing.
The current SAP IQ Sizing Guideline is updated frequently and can be found here:
There are many post-installation steps outlined in the IQ manuals when migrating to IQ 16 from IQ 15. It is important to call out certain areas that should be visited as part of best practices when migrating.
Documented details can be found in this manual: SAP IQ 15.4 to 16.1 SP02 Upgrade Tasks
What is not documented in this section are the following best practices:
- If using IQ UNIQUE, make sure to remove that option from all DDL or set it to 0. Using this option takes power away from IQ to properly format the n-bit indexes for current data sets
- Rebuild all columns using the following syntax
- sp_iqrebuildindex '<TABLE>', 'column <COLUMN>'
- This will convert all existing FP indexes to the more efficient n-bit indexes
- If moving to IQ 16.1 SP03 or later, this also builds in the Zone Map feature to each column index
- Discontinue the use of LowFast (LF) and HighNonGroup (HNG) indexes (covered in other sections)
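Rather than typing one sp_iqrebuildindex call per column by hand, the system catalog can generate the script. This is a sketch, not official tooling; the server_type filter and the DBA owner below are assumptions to adapt for your schema owners:

```sql
-- Generate one sp_iqrebuildindex call per column of each IQ user table.
-- Run this query, review the output, then execute it in a maintenance window.
select 'sp_iqrebuildindex ''' || t.table_name || ''', ''column ' || c.column_name || ''';'
  from SYS.SYSTABLE t
  join SYS.SYSCOLUMN c on c.table_id = t.table_id
 where t.server_type = 'IQ'         -- IQ store tables only (assumption)
   and t.creator = user_id('DBA')   -- placeholder owner; adjust as needed
 order by t.table_name, c.column_id;
```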
Using login profiles with SAP IQ is a great way to enforce security to groups of users. It is important to think through the full effect of this security and whether or not it can make a system unusable.
For instance, the DBA account and any account with elevated privileges should not be bound to a login profile that locks an account indefinitely. Should this happen, there is no way to unlock the DBA account(s) or administer them.
The DBA login, or any login with DBA authorizations and elevated privileges, should be handled by a different login profile that allows the account to be automatically unlocked after a certain period of time. This is critical to production instances as having locked DBA accounts will force that system to be recovered from a known good backup or rebuild altogether.
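As a sketch of the idea (verify the exact login policy option names against your IQ version's reference manual; auto_unlock_time in particular is version-dependent):

```sql
-- Separate policy for privileged accounts: lock after repeated failures, but
-- unlock automatically so a DBA login can never be locked out indefinitely.
CREATE LOGIN POLICY dba_login_policy
    max_failed_login_attempts = 5
    auto_unlock_time          = 15;   -- minutes (hypothetical value)

ALTER USER DBA LOGIN POLICY dba_login_policy;
```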
There are a core set of options that are used internally by engineering to set SAP IQ behavior. When opening cases with SAP Support you may be asked to use some of these options as a workaround for an issue. It is imperative that these options be reviewed with every version change to SAP IQ.
These options are controlled by SAP IQ engineering and are used to implement various code execution paths or behavior changes in IQ. The behavior of these options can change from release to release. Most of them don’t change, but do you want to risk performance issues because you didn’t check?
And I don’t mean check them with every major or minor version. Check them with every version change you make to your SAP IQ system.
Not checking these options can have a drastic effect on performance. For instance, one query that was running in 1hr 15min, had four options set and not changed since IQ 15.4 was released. These options changed behavior over the past few years and consequently, query performance was impacted. When the options were set back to default, the query now ran over 35% faster, in just over 45 minutes.
The most critical options to watch are those that begin with these phrases:
While we’ve covered the internal options, this is also a good time to review the options that you’ve set as part of your normal setup. More often than not, the current options that you’ve set will be fine. Take this time, though, to go back through and review them.
How do we reset an option to default without having to look through the manuals? Quite easily, actually:
set option PUBLIC.OPTION_NAME=;
The “=;” tells IQ to reset the value to default. Simply replace OPTION_NAME with the actual name of the option that you wish to reset for the PUBLIC group. If you have options that are set to other groups of users, the syntax is the same. Simply replace PUBLIC with the user or group name.
Other documented options should also be reviewed to make sure that the current value is in keeping with the new version, new features, optimizer enhancements, etc.
These options are set differently when upgrading to IQ 16. We recommend setting these options back to the default values so that you can take advantage of the new features in IQ 16 and realize the performance and space gains associated with this migration.
- FP_NBIT_IQ15_COMPATIBILITY (upgrade default is ON, should be reset to OFF)
- CREATE_HG_WITH_EXACT_DISTINCTS (verify that this is ON)
- CREATE_HG_AND_FORCE_PHYSICAL_DELETE (verify that this is ON)
- REVERT_TO_V15_OPTIMIZER (upgrade default is ON, should be reset to OFF)
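Applied as SQL, the post-upgrade settings above would look like this (the values are the recommendations from the list; run as a user with the required privileges):

```sql
SET OPTION PUBLIC.FP_NBIT_IQ15_COMPATIBILITY = 'OFF';
SET OPTION PUBLIC.CREATE_HG_WITH_EXACT_DISTINCTS = 'ON';
SET OPTION PUBLIC.CREATE_HG_AND_FORCE_PHYSICAL_DELETE = 'ON';
SET OPTION PUBLIC.REVERT_TO_V15_OPTIMIZER = 'OFF';
```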
It is imperative that all columns be rebuilt post-migration. The rebuild (using the COLUMN syntax in sp_iqrebuildindex) will force a rebuild of all FP indexes to the n-bit structures.
Rebuilding columns also converts the variable structures (varchar, varbinary) into variable width structures. This was a new feature in IQ 16.0.
Rebuilding columns also allows IQ to build some statistics of the variable structures (varchar, varbinary) that store the actual width along with the defined width. This optimization was introduced in IQ 16.0 SP08 and allows IQ to use the actual width for data processing. This can greatly reduce memory consumption for queries as well as improve performance by having much smaller data sets in memory.
One behavior that I have observed at many customers is the existence of both HG and LF indexes on the same column. This should be avoided as it adds overhead to IQ (greater size) and may lead the optimizer down a path that is not desirable.
This scenario also includes the case where there is a primary key and an LF index on the same column in a table.
This sample code will print out the object names for the duplicate indexes:
drop procedure sp__get_dupe_HGLF_ndx
go
create procedure sp__get_dupe_HGLF_ndx()
begin
    create table #dupes (
        creator     bigint,
        table_id    bigint,
        table_name  varchar(250),
        column_name varchar(250) );

    insert into #dupes
    select creator, table_id, convert( char(250), table_name ), convert( char(250), cname )
    from ( select distinct SYS.SYSTABLE.creator, SYS.SYSTABLE.table_id, SYS.SYSTABLE.table_name,
                  ( select column_name
                    from SYS.SYSIXCOL join SYS.SYSCOLUMN
                    where index_id = SYSINDEX.index_id
                      and SYSIXCOL.table_id = SYSINDEX.table_id ) cname,
                  index_type
           from SYS.SYSTABLE join SYS.SYSINDEX
           where index_type in ( 'LF', 'HG' )
             and ( select count(*)
                   from SYS.SYSIXCOL join SYS.SYSCOLUMN
                   where index_id = SYSINDEX.index_id
                     and SYSIXCOL.table_id = SYSINDEX.table_id
                   group by index_id ) = 1 ) x
    group by creator, table_id, table_name, cname
    having count(*) > 1
    order by creator, table_id, table_name, cname;

    select convert( varchar(20), suser_name(d.creator) ) owner,
           convert( char(30), table_name ), index_type, index_name
    from --#dupes join SYS.SYSINDEX
         #dupes d, SYS.SYSINDEX si
    where d.table_id = si.table_id
      and d.creator = si.creator
      and index_type in ( 'LF', 'HG' )
      and ( select count(*)
            from SYS.SYSIXCOL join SYS.SYSCOLUMN
            where index_id = si.index_id
              and SYSIXCOL.table_id = si.table_id
            group by index_id ) = 1
      and d.column_name = ( select column_name
                            from SYS.SYSIXCOL join SYS.SYSCOLUMN
                            where index_id = si.index_id
                              and SYSIXCOL.table_id = si.table_id )
    order by 1, 2, 3;
end
go
A more detailed write-up on indexes can be found in this blog:
In short, all HNG indexes should be dropped as they add little value any more given the other enhancements to the default index as well as the HG index. All LF indexes should either be dropped or converted to HG indexes due to HG index enhancements.
If migrating to SAP IQ 16.1 SP03 or later, then consider removing indexes and leveraging the new zone maps feature.
This error is seen because the server does not have enough temporary cache left to allocate sort buffers for one or more HG indexes during a load. Three items can contribute to this.
- Temp cache may be too small. Depending on the system size and how much RAM there is, you may simply need more temp cache, and therefore more RAM, on the host.
- The number of HG indexes on the table being loaded. More HG indexes means more threads and more memory are needed in parallel.
- Can use Core_options87 to decrease the number of concurrent HG indexes being loaded.
- The goal is to lower concurrency (was made much more concurrent in IQ 16) for the HG index loading.
- With IQ 16.1 (SP03 and later), you can reduce the number of HG indexes on the table and rely on Zone Maps for performance. Even without Zone Maps, an index review should be done to make sure that all HG indexes on a table are actually needed. As IQ has matured into version 16.1, we don’t have to rely so much on HG indexes on all columns.
- The setting for Sort_Pinnable_Cache_Percent (default of 20%). This option specifies how much temp cache ONE sort can get; it is NOT how much ALL sorts can get.
- Consider a setting of 2-5, especially on medium memory (over 256GB RAM) systems and large memory (over 1TB RAM) systems.
- This limits the number of HGs that can be inserted into during phase 2 of insert/load. The default is unlimited which can overallocate temp cache.
- It is recommended to start with a value in the 20-30 range, and adjust lower if the issue is seen again.
- Can lengthen the load time as fewer HG indexes are loaded in parallel, but a lower value can also eliminate the insufficient buffers error and allow for a more stable system.
- This option controls how much temporary cache to use per HG index during a load
- This is a percentage, per index, but is a percentage across all available temp cache
- A machine with 10 GB temp cache vs 500 GB temp cache will want very different values. I generally use something in the 2-5% range unless the system has less than 128 GB RAM in total.
- It is important to understand the average load profile, too. Using the default percentage (20) on a system with 100GB temporary cache means that the first HG will get 20GB of cache. What if the load is just 10k-100k rows? The amount of memory allocated (20%) is disproportionate to the amount of cache needed.
- Recommend starting with a value of 2-5 (%) and see how this affects the issue. Continue lowering the setting by 1 if the issue is still encountered.
To better understand the “Insufficient buffers for ‘Sort’” error, consider this example and mathematical analysis of the temp cache usage:
- The 1st HG requested 20% of available temp cache, leaving 80%
- The 2nd HG gets 0.8 × 0.2 = 16%, leaving 64% of the originally available temp cache
- And so on
- After the 20th HG, 0.8^20 ≈ 0.0115, so only about 1% of temp cache is left
- The amount of cache allocated is driven ONLY by the percentage set in Sort_Pinnable_Cache_Percent, not by the data being loaded (row counts, data types, column widths, etc.)
This continues on for all HG indexes in the table.
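The compounding described above can be checked with a quick standalone calculation (this is just arithmetic, not IQ code):

```shell
# Fraction of temp cache remaining after 20 HG sorts, each taking 20% of what's left: 0.8^20
remaining=$(awk 'BEGIN { r = 1; for (i = 1; i <= 20; i++) r *= 0.8; printf "%.4f", r }')
echo "$remaining"    # roughly 0.0115, i.e. about 1% of the original temp cache
```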
If there are other loads (insert select, select into, load table) running at the same time, the amount of temp cache that is available is already reduced and may lead to the insufficient buffers error. IQ does contain code to adjust/handle situations like this, but there are cases where that is not aggressive enough.
As a general rule, I recommend the following process when this error is raised:
- Determine if all HG indexes are actually needed. Verify that 80-90% are used all the time. If not, drop HG indexes that offer little or no query benefit.
- Removing indexes, while counterintuitive, is actually beneficial in IQ 16.1 SP03 and later as a new feature, Zone Maps, was introduced to allow IQ to run just as efficiently without so many indexes.
- Lower Sort_pinnable_cache_percent to something in the 2-5% range
- Lower core_options87 to something in the 20-30 range
- Retest with lower settings and fewer HG indexes
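In SQL, the starting points recommended above might look like the following sketch. Core_Options87 is an internal option, so change it only as advised by SAP Support, and re-verify it on every version change:

```sql
SET OPTION PUBLIC.Sort_Pinnable_Cache_Percent = 5;   -- start in the 2-5 range
SET OPTION PUBLIC.Core_Options87 = 25;               -- start in the 20-30 range
```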
With high performance systems, using /tmp (the default) for the catalog store temporary data can lead to issues. The main issue is that this filesystem can fill up and cause IQ to shut down in preservation mode. Because it is /tmp, it is heavily used by system administrators and others as a dumping ground or scratch area for files and data.
It is recommended to use the IQTMP16 environment variable to point IQ to a different location for its temporary files.
- On UNIX/Linux platforms, point the IQTMP16 variable to the desired directory (with the desired permissions) where temporary files are created by SAP IQ.
- You can also use the -dt database server option to specify the location of a temporary file. If this option is not specified, then the database server checks SAP IQ environment variables in this order:
- IQTMP16 (or the SATMP environment variable for SAP SQL Anywhere temporary files)
When placing the temporary work on a separate filesystem or directory, it is important to make sure that this is on a filesystem that only IQ uses and one that the DBAs have control over. If the filesystem is available to all users, you still run the risk of it filling up and IQ shutting down unexpectedly. Typically, this filesystem shouldn’t need to be more than 10-20 GB in size, but it will depend on the amount of catalog temporary activity that the application(s) have.
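A minimal setup sketch on Linux; the directory path, owner, and group below are placeholders to adapt to your installation:

```shell
# Dedicated, DBA-controlled location for IQ catalog temporary files.
mkdir -p /iq/tmp
chown siqadmin:siqadmin /iq/tmp   # placeholder owner/group
chmod 770 /iq/tmp                 # keep non-IQ users out so it cannot fill up
export IQTMP16=/iq/tmp            # set in the IQ start script before start_iq runs
```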
The transaction log for SAP IQ should periodically be pruned so that the size does not cause excess processing and traversal of data that has already been applied to the system.
I would propose an alternate solution that has been in use since IQ 15. An executable is shipped with SAP IQ called dbbackup. The main use of this program is to prune the transaction log on a running system. The key difference between using dbbackup and the -m option is that dbbackup does not require an outage and can be run on a fully active system. This will work for IQ 15 and IQ 16, both simplex and multiplex.
dbbackup -c "uid=dba;pwd=sql" -xo CATALOG_FILE.db
I would start with a schedule that runs daily on each node of the multiplex as well as on the simplex nodes.
This command can throw an error. If a checkpoint is being run, or transactions are in the process of committing, output like this is seen:
SQL Anywhere Backup Utility Version 126.96.36.1992
Waiting for transactions to complete...
Transaction log truncation completed
Any output from this command can be ignored. If it is waiting for transactions to complete, once completed it will prune the log. If a checkpoint is being run, dbbackup will exit and not prune the log. There is no harm in seeing these warnings. If the log isn’t pruned during that run, it will be done the next night.
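A cron entry is a simple way to schedule this on each node. Everything below (paths, environment script, log file, catalog file name) is a placeholder to adapt to your installation:

```shell
# crontab -e : prune the IQ transaction log nightly at 01:30.
30 1 * * * . /opt/sap/IQ-16_1/IQ-16_1.sh && dbbackup -c "uid=dba;pwd=sql" -xo /data/iq/mydb.db >> /var/log/iq_logprune.log 2>&1
```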
In testing done in the lab as well as with customer systems, the recommended sizing for the caches is 33% each: 33% for main cache, 33% for temporary cache, and 33% for the large memory cache. The amount of RAM to be divided into thirds is 70-90% of available RAM on the system.
For systems with less than 512 GB RAM, staying closer to 70% allows the OS to have more memory to avoid paging and swapping. On systems with 512 GB of RAM or more, using 90% of RAM (460 GB) for the caches is advisable. As with any change of this magnitude, testing and monitoring should be done. Initially, the most critical item to monitor is at the OS level: make sure that the OS is not starved for memory to the point that it must page or swap out to swap space.
Generally, customers use the -c option to specify the size of the catalog cache. Some customers use the -cl option to specify the starting size of the catalog cache as well.
The key is to determine whether or not the -ch option is used. -ch sets the absolute maximum memory size of the catalog cache. If this option is not used (it is generally not used with -c), then the default maximum amount of memory for the catalog cache is 90% of physical RAM on the host!
To avoid any issues with the catalog cache growing beyond expectations, it is highly recommended that the -ch option be used to set a maximum. The value of -ch will depend on the implementation.
I’ve found that if -c is used, then setting -ch to 100% to 110% of that value is appropriate. For a -c of 1000 MB, -ch should be set to something in the range of 1000-1100 MB.
When using -cl, the expectation is that the value is low, a starting value, and is expected to grow. The value of -ch is expected to be significantly higher. There is no guidance on the value other than that it must be larger than the value of -cl.
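Two hedged start_iq fragments illustrating the combinations above (config and database file names, and the sizes themselves, are placeholders):

```shell
# Fixed catalog cache: cap -ch at roughly 100-110% of -c.
start_iq @mydb.cfg -c 1000m -ch 1100m mydb.db

# Growing catalog cache: small -cl starting point with a deliberate -ch ceiling.
start_iq @mydb.cfg -cl 250m -ch 2g mydb.db
```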
When setting the -gm option to set the total number of concurrent logins, it is critical to also set the -gn option. If -gn is not set, it defaults to -gm + 5. With -gm set to 200, this means that IQ will allocate only 205 threads for the catalog; of those 205, 200 will be reserved for network connections. This leaves just 5 threads for all other catalog activity, including the critical coordinator workload.
In normal situations, -gn should be set to 1.5x the value of -gm. In this case, with -gm set to 200, -gn should be set to 300.
As a general rule, follow the guidelines below for the -gm and -gn settings:
- If -gm is less than 100, set -gn = -gm + 50
- If -gm is >= 100, set -gn = -gm * 1.5
- On a coordinator node, set -gn to (gn_from_above + number_of_iq_nodes + num_concurrent_loads)
- If -gm is 200, then it would be (300 + 3 + 20), assuming 20 concurrent loads with three nodes
It is of vital importance to understand what happens when setting -gm too high. This option reserves resources at startup so that the connections will always succeed. It is important to set -gm to a value that will meet the peak workload of the system and that the peak be reasonable. If it is set too high, resources will be allocated but never used. Catalog threads are allocated and reserved, one per possible connection. These threads are only for incoming connections. If there are no connections, the threads sit idle.
It is recommended that -gn be updated on all IQ 16 systems to follow the above algorithm.
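The sizing rule above can be expressed as a small calculation (a sketch of the blog's guideline, not an official formula):

```shell
# Derive -gn from -gm per the rule above (integer arithmetic).
gm=200
if [ "$gm" -lt 100 ]; then
    gn=$((gm + 50))
else
    gn=$((gm * 3 / 2))    # 1.5x, e.g. -gm 200 -> -gn 300
fi
echo "$gn"
```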
The system is a hyperthread enabled Intel-based Linux machine. Consequently, it has two logical cores for each physical core. On Intel, historically, it was recommended that hyperthreading be disabled. This is no longer the case. However, leaving IQ options at default where algorithms use the core count can lead to an overburdened system.
When IQ starts, it uses the logical processor count to determine the number of core resources. On Intel platforms with hyperthreading, this means that IQ will think that there are twice as many processor cores as there really are. For this reason, SAP recommends that IQ be started with –iqnumbercpus set to the number of physical cores, disregarding hyperthreads.
Traditionally, we recommend matching this value to the physical core count of the system. However, given the advancement in Intel’s hyperthreading algorithms, we discussed increasing the ratio from 1:1 to 1:1.25; that is, setting –iqnumbercpus to 20 on this 16-physical-core host.
The options that control logging (-zr, -zo, etc) should only be used in non-production environments unless needed to debug issues in production. This option adds some overhead, though minimal, to SAP IQ.
This option dictates the network packet size for all IQ network communication. On most analytical systems that return hundreds of rows, or more, of data it is recommended to set this value to the maximum of 64k, 65535. It will have little or no impact on smaller network communication and greatly improve large scale data movement.
This was not covered during the visit but is a best practice that I try to leave every customer with. There is very little downside to the option being set to the maximum and a significant upside to having a larger packet size.
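As a start_iq fragment (-p is the SQL Anywhere/IQ packet size switch; the config and database file names are placeholders):

```shell
# Maximum network packet size of 64 KB for all client communication.
start_iq @mydb.cfg -p 65535 mydb.db
```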
The LOAD TABLE statement has an option called WITH CHECKPOINT that forces a checkpoint after each load and before the commit can run.
To be clear, when the transaction is committed all data is guaranteed to be safeguarded and written to disk. This is the only way to be ACID compliant as a relational database. There is no need to force a checkpoint operation in IQ.
The SAP IQ Reference: Statements and Options Manual says this about the option:
WITH CHECKPOINT determines whether SAP IQ performs a checkpoint. This option is useful only when loading SAP SQL Anywhere tables in an SAP IQ database.
This option is not applicable to IQ and causes significant overhead in the cluster by forcing checkpoints to run after every load on all servers.
This option should not be used in any LOAD TABLE operations.
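For illustration, a minimal LOAD TABLE sketch with no WITH CHECKPOINT clause. The table, columns, and file path are hypothetical; the commit alone provides durability:

```sql
LOAD TABLE sales ( txn_date, store_id, amount )
FROM '/data/loads/sales_20240101.dat'
FORMAT ascii
DELIMITED BY ','
ROW DELIMITED BY '\n'
ESCAPES OFF
QUOTES OFF
NOTIFY 1000000;   -- progress message every 1M rows

COMMIT;           -- data is safeguarded here; no forced checkpoint needed
```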
A full index review should be done. Many changes have taken place in IQ over the IQ 16.0 and 16.1 versions that lessen the need for indexes, and certainly not as many indexes as we would use in SAP IQ 15.x.
I prefer to look at indexes differently today. Over the years, we simply said that if a column was used in the WHERE clause it should have an index that was appropriate to the cardinality and use. This is no longer necessary and adds significant overhead to SAP IQ data loading while likely not providing a significant benefit in query performance.
There will be columns that are used in nearly every query. I like to refer to these as key columns or primary columns (don’t confuse this terminology with primary keys and referential integrity): indexes that are used in nearly every situation. Then there are columns that are not used nearly as often. I like to refer to these as secondary columns: columns are used but not frequently by a majority of the users and applications.
With IQ 16.x, indexes on secondary columns should be removed. They, generally, don’t help query performance but will degrade load performance and consume space.
It should also be noted that the index advisor is a tool that we typically use to determine if indexes should be added to the system. The index advisor advice should not be blindly followed. Use the rules above before deciding whether or not to follow the index strategy.
I propose to all my customers that they capture post execution, HTML query plans before and after adding indexes. Prove, definitively, that the query significantly benefits from the additional indexes. It must be significant, too. A 1 second improvement on a query that takes 30 seconds to run is not significant. You will find that most indexes don’t provide that significant improvement.
The secondary columns are not nearly as important to index because the default n-bit index contains quite a lot of optimizations already. Also, the goal of indexing the primary or key columns is to greatly reduce the working set of data. Once that is done, the usefulness of the secondary column indexes is greatly reduced if not eliminated altogether.
In the following section we will cover whether or not indexes of any form should be used based on new features in IQ 16.x.
Moving forward, I would recommend the following for all IQ 16 systems:
- Remove HGs that are not used in most queries (primary or key columns)
- Load times increase as more work must be done to load more indexes
- The database size increases to store the indexes
- Load/aggregation issues worsen as these indexes force IQ to consume larger amounts of cache during load operations
- Reducing unused or ineffectual HG indexes will reduce “Insufficient Buffers for Sort” errors, if seen
- LF indexes are obsolete
- These indexes need to be removed from the system
- Follow the aforementioned rules to determine if an index is even needed
- If one is needed, use an HG
- HNG indexes are obsolete
- These indexes need to be removed from the system, if used
- Follow the aforementioned rules to determine if an index is even needed
- If one is needed, use an HG
- Both the LF and HNG were noted as obsolete in IQ 16.0 SP10
- SAP Note 2162805 – covers why the LF should no longer be used
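To find the obsolete LF and HNG indexes, the system catalog can generate the DROP statements. This is a sketch in the same style as the duplicate-index procedure earlier; review the output before running any of it:

```sql
-- List every LF and HNG index as a candidate DROP INDEX statement.
select 'DROP INDEX "' || suser_name(t.creator) || '"."' || t.table_name ||
       '"."' || i.index_name || '";'
  from SYS.SYSINDEX i
  join SYS.SYSTABLE t on t.table_id = i.table_id
 where i.index_type in ( 'LF', 'HNG' )
 order by 1;
```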
This enhancement was delivered in the IQ 16.1 SP03 release in April 2018.
Also known as a storage index, data skipping, or constraint exclusion, it optimizes scan processing by ignoring complete pages of values if the server can guarantee that there is no value of interest to the predicate(s) being executed on that page. A zone map is a data structure containing the min/max values of each page of column data and is stored as part of the FP index (any variation from flat to n-bit).
A zone map is most effective if no other index (besides an FP index) exists on a column or the values in a column are arranged in a sequential order (for example, by transaction date). A zone map is less effective for tables with few rows, or for columns that have a uniform distribution of values across all pages.
The zone map will allow you to remove the HG index on the datetime columns without significant loss of query performance. The incremental load cost of the datetime HG indexes was the primary factor for using multiple fact tables for what should have been a single table.
When you upgrade to 16.1 SP03 or later, I recommend the following:
- Rebuild the FP indexes on all columns (a rebuild of the column is required so that zone maps can be implemented) using sp_iqrebuildindex 'TABLE_NAME', 'column COLUMN_NAME'
- This happens naturally when migrating from IQ 15.x, FP indexes are converted to n-bit. I prefer to choose when the indexes are rebuilt rather than take the hit when data is changed during production cycles.
- When moving from IQ 16.0 to 16.1, the only reason to rebuild the default n-bit indexes would be to use the new Zone Map feature.
- Methodically remove HG indexes and test query performance with and without the HG index
- This sounds counterintuitive to traditional SAP IQ index design. It is.
- The goal of zone maps is to allow customers to deploy a data model with few or no indexes and yet be able to achieve the same or better performance.
- There may still be the need for HG indexes. These should only be used if they prove to significantly improve query performance.
IQ has an option called Max_Hash_Rows that controls how much of the caches can be used for performing the various hash join algorithms at runtime. The default has been set based on systems with just 4 GB of RAM. Most systems today are larger than this and accordingly this option should be increased to help with throughput.
A good starting point for setting Max_Hash_Rows is to account for the 4 GB RAM factor that went in to it:
New Value = 2.5 Million * (IQ Memory / 4 GB)
Where IQ Memory = IQ Main Cache in GB + IQ Temp Cache in GB
For systems with a relatively low user count, the memory component can be raised to include all RAM on the host:
New Value = 2.5 Million * (HostRAM / 4 GB)
Where HostRAM is the total amount of RAM on the IQ host machine
This will allow for each user to use slightly more RAM.
When increasing the maximum number of hash rows to keep in RAM it is also important to increase the amount of RAM in which those hash rows can be pinned. If you just increased the number of rows that can fit into memory, you may not get the full benefit since the hash keys must fit in pinnable cache. Typically, increasing the setting for Hash_Pinnable_Cache_Percent to 30% for a typical system (35% for a lightly loaded system) will help to fit the hash keys into the caches.
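Putting the formula and the pinnable-cache advice together, here is a sketch assuming a hypothetical host with 96 GB main cache and 96 GB temp cache (2.5M * 192/4 = 120M):

```sql
SET OPTION PUBLIC.Max_Hash_Rows = 120000000;          -- 2.5M * (192 GB / 4 GB)
SET OPTION PUBLIC.Hash_Pinnable_Cache_Percent = 30;   -- consider 35 on lightly loaded systems
```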
The –gm startup option (number of connections) also plays a role in this discussion as it directly determines how much RAM each user connection can consume. As such, it is important to set –gm to a realistic, and fairly accurate, value to cover all situations.
This section tries to cover some general Linux best practices for SAP IQ implementations. This is meant to draw our attention to areas that may not have been tuned in the past. It is not an indication of issues that were observed while onsite.
There are two types of HugePages in the Linux kernel. Traditional HugePages in which the application must specifically be compiled to use and transparent HugePages which is an OS level tuning mechanism for large memory programs in which no changes are needed to the application.
The following sections outline SAP IQ’s use of the different types of HugePages and how they should be configured in the kernel.
HugePages is a mechanism that allows the Linux kernel to utilize the multiple page size capabilities of modern hardware architectures. Linux uses pages as the basic unit of memory, where physical memory is partitioned and accessed using the basic page unit. The default page size is 4096 bytes in the x86 architecture. HugePages allows large amounts of memory to be utilized with reduced overhead. Linux uses “Translation Lookaside Buffers” (TLB) in the CPU architecture. These buffers contain mappings of virtual memory to actual physical memory addresses. Utilizing a huge amount of physical memory with the default page size consumes the TLB and adds processing overhead.
SAP IQ does not use HugePages. As such, it is imperative that the Linux kernel not be configured to allocate memory for HugePages use. Any memory that is allocated to HugePages cannot be used by SAP IQ.
A default configured Linux system will typically have just a few MB of memory dedicated to HugePages. The amount can be determined by running this command:
cat /proc/meminfo |grep Hugepagesize
It will typically return this value:
Hugepagesize: 2048 kB
Should more than just a few MB of memory be allocated to HugePages, it is strongly recommended to set the kernel configuration back to default by removing the HugePages options from /etc/sysctl.conf.
For details on HugePages in the Linux kernel, refer to this write-up:
Starting from RHEL6 and SUSE 11, Transparent HugePages are implemented and enabled by default. They are meant to improve memory management by allowing HugePages to be allocated dynamically by the “khugepaged” kernel thread, rather than at boot time like conventional HugePages. In certain circumstances, Transparent HugePages have been shown to negatively impact SAP IQ performance. It is strongly recommended to disable Transparent HugePages for SAP IQ systems until this OS feature can be fully tested under the client-specific workload.
The performance issue with Transparent HugePages typically manifests itself in the form of high system time. Using tools like vmstat and top, monitor the user, system, wait, and idle CPU utilization. The high system utilization is a result of the memory manager as described at
https://www.kernel.org/doc/Documentation/vm/transhuge.txt: “As the system ages, allocating huge pages may be expensive as the system uses memory compaction to copy data around memory to free a huge page for use.”
Should the system CPU utilization increase over time, it is possible that the OS is having to spend more time managing the Transparent HugePages and Anonymous memory segments. Disabling Transparent HugePages can alleviate that bottleneck.
Details on Transparent HugePages can be found here:
Disable on RedHat only until reboot (not persistent):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
Persistent change, follow this link to modify grub.conf:
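On RHEL 7 style systems, the persistent change is typically made on the kernel command line. This fragment is a sketch; consult your distribution's documentation for the exact procedure:

```shell
# /etc/default/grub: append transparent_hugepage=never, then rebuild grub.cfg and reboot.
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet transparent_hugepage=never"
# grub2-mkconfig -o /boot/grub2/grub.cfg   # BIOS-based machines
```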
Consult the manuals for your specific Linux distribution and version for details on disabling HugePages and TransparentHugePages.
Linux provides a number of I/O scheduler alternatives to optimize for different I/O usage patterns. You can use the elevator= option at boot time to set the scheduler for I/O devices or you can assign a specific I/O scheduler to individual block devices.
The I/O scheduler applies to filesystems only and not to raw devices. This section applies only to simplex systems that use filesystems for the IQ devices. As a side note, these options are not detrimental on MPX systems with raw devices. For consistency's sake, I would recommend a single setting for all SAP IQ hosts.
The Completely Fair Queuing (CFQ) scheduler is the default I/O scheduler for Linux. The CFQ scheduler maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O requests. The effort balancing of I/O requests has some CPU costs.
The Deadline scheduler is one alternative to the CFQ scheduler. The deadline scheduler uses a deadline algorithm to minimize I/O latency by attempting to guarantee a start time for an I/O request. The scheduler attempts to be fair among multiple I/O requests and to avoid process starvation. This scheduler will aggressively reorder requests to improve I/O performance.
The NOOP scheduler is another alternative that can help minimize the CPU cost of managing the I/O queues. The NOOP scheduler is a simple FIFO queue that uses the minimal amount of CPU/instructions per I/O operation to accomplish the basic merging and sorting functionality to complete the I/O operations.
The default scheduler in Red Hat Enterprise Linux 7 is now deadline. Generally, both the NOOP and DEADLINE schedulers have shown best results for SAP IQ on Linux. It is recommended to check the current schedulers to make sure that the SAP IQ systems are using either NOOP or DEADLINE.
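The active scheduler can be checked per block device; the bracketed name in the output is the one in use (the device name here is a placeholder):

```shell
# Shows e.g. "noop [deadline] cfq" when deadline is active on sda.
cat /sys/block/sda/queue/scheduler
```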
To make the changes persistent through boot you have to add elevator=noop to GRUB_CMDLINE_LINUX in /etc/default/grub as shown below.
A default system:
[root@localhost ~]# grep GRUB_CMDLINE_LINUX /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 biosdevname=0"
A system where the elevator was set to NOOP:
[root@localhost ~]# grep GRUB_CMDLINE_LINUX /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 biosdevname=0 elevator=noop"
After the entry has been created/updated, rebuild the /boot/grub2/grub.cfg file to include the new configuration with the added parameter:
On BIOS-based machines: ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines: ~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Refer to this RedHat Article: https://access.redhat.com/solutions/5427
To keep the file cache buffers from causing performance issues, it is recommended to set the Linux “swappiness” setting to 10 or lower. This will help keep IQ memory in RAM and lower the likelihood that Linux will swap IQ out in favor of file buffers. A higher value allows the OS to be more aggressive in swapping and paging applications while a lower value tells the system to prefer to keep them in memory and flush out filesystem cache instead. This setting can be changed by editing /etc/sysctl.conf or by using the sysctl utility.
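A sketch of both ways to apply the setting (root is required; check the current value first):

```shell
cat /proc/sys/vm/swappiness                     # show current value (often 60 by default)
sysctl -w vm.swappiness=10                      # apply immediately, not persistent
echo 'vm.swappiness = 10' >> /etc/sysctl.conf   # persist across reboots
```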