Reasonable IQ memory values
For IQ 16 installations in the HEC/MCD environment – typically NLS related – we apply the following recommendations in the database configuration file:
19% of RAM for Load Memory (-iqlm)
39% of RAM for Main Cache (-iqmc)
19% of RAM for Temp Cache (-iqtc)
In IQ Cockpit or sp_iqstatus you will only find Main Cache + Temp Cache, which means that ‘IQ Memory’ amounts to 58% of RAM.
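To make that concrete: on a hypothetical 256 GB host (just my own example numbers, the switches take their values in MB) the entries would look roughly like this:

    # ~19% of 256 GB for load memory
    -iqlm 49800
    # ~39% of 256 GB for main cache
    -iqmc 102200
    # ~19% of 256 GB for temp cache
    -iqtc 49800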
Are these reasonable values?
Axel
To ensure adequate memory for the main and temporary IQ stores, set the -iqlm, -iqtc, and -iqmc startup parameters so that each parameter receives one third of all available physical memory allocated to the IQ server.
regards
John
Hi John,
I'm aware of that recommendation, and it's a reasonable one to start with, but you should mention that it comes with the recommendation to allocate 75 - 80% of machine RAM for IQ (leaving aside that LM is not precisely "allocated"), which leads to percentages of 25% - 27% based on machine RAM size.
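For example (round numbers, just to show the arithmetic): with 80% of a 512 GB machine given to IQ, each of the three settings gets 0.8 * 512 GB / 3 ≈ 136 GB, i.e. about 26.7% of machine RAM, hence the 25% - 27%.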
Cheers,
Volker
Correct, save 20% of total physical memory for the OS in order to prevent the IQ process from being swapped out.
regards
John
It depends on the amount of RAM. I can't imagine reserving 400 GB of memory for the OS on a 2 TB system.
90% is the accepted best practice for IQ 16 nowadays, since servers carry more RAM than they did in the IQ 15 days.
A better approach would be to specify a minimum, say 10%, and a maximum of, say, 50-100 GB.
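To put numbers on that (my own illustration): on a 256 GB box the 10% minimum wins and the OS keeps about 26 GB; on a 2 TB box you would cap the reservation at 50-100 GB rather than the 200 GB a flat 10% would take, so IQ still gets 95% or more.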
Mark
Please look at the best practices and hardware sizing guides, then monitor usage using sp_iqsysmon and adjust accordingly:
SAP Sybase IQ 16 Hardware Sizing Guide
SAP IQ 16: Best Practices Guide
If the machine is dedicated to IQ only, then you can leave 10% for the host and allocate the rest of the memory to IQ.
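As a rough sketch of the monitoring step (the optional section list and its exact section names vary by version, so check the guide), from dbisql you can run

    -- collect a system monitor report over a five-minute interval
    sp_iqsysmon '00:05:00';

and look at the main and temp buffer pool sections (hit rates, pages in use) to decide whether -iqmc and -iqtc need adjusting.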
Hi Axel,
I think they're reasonable start values. As soon as there is any kind of load on the system, monitor utilization. Keep in mind that
- Load memory is not a permanently used resource; it's a limit for temporarily allocated RAM
- Temp Cache is usually more valuable than Main Cache since it contains the most volatile data. However, if it's oversized, RAM remains unused. That's not the case for the Main Cache.
- If you have multiple users working on similar data sets, they share the same persistent data (=Main Cache) but use their own private temporary and work tables (=Temp Cache).
- I've observed (Linux) systems swapping when IQ memory was below 70% of RAM size. There were a lot of local LOAD operations going on, and the file system considered these files worth buffering.
If I have a generous pool of RAM to pick from, I try to set the TC size to a value where 75% of the samples show utilization of 75% or above (disregarding samples taken during idle times, if any), set LM to a value reflecting the demand of typical LOAD operations (here, a percentage recommendation is least robust), and assign whatever IQ RAM is left to MC.
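A made-up example of that procedure: with 300 GB earmarked for IQ, suppose the 75%/75% criterion is met at a Temp Cache of 100 GB and typical LOADs peak at around 40 GB of load memory; the remaining ~160 GB then goes to the Main Cache.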
HTH
Volker
Load memory no longer exists in IQ. This was removed in IQ 15.2. In IQ 16 we added a new cache, Large Memory Accumulator (LMA). The LMA cache is absolutely used all the time and is a permanent resource. Like main and temp cache, LMA is allocated at startup. The OS may not give us all the RAM we want, but that's an OS tuning issue. We ask for all the RAM at startup.
LMA is used to aid in all data loading. Consider this the old load memory. However, this cache also contains all dictionaries and stats information for the FP and n-bit indexes. If you reduce the size of LMA, you then force swapping for those lookup objects.
The IQ caches should all start at 30% of RAM on most systems. Certainly, having 600 GB for LMA is too much on a 2 TB machine. But on systems that are under 1 TB in size, 30% of RAM for main, temp, and LMA is a good starting point and leaves enough RAM for the other little IQ bits, the OS, and other minor apps.
We've done enough testing with IQ 16 over the years to know that 30% for each cache is a good starting point and one that infrequently requires changing. Yes, there will always be exceptions, and detailed monitoring will help drive the changes so that it can be fine-tuned. But as a starting point, 30% is the best practice.
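Purely as an illustration (my numbers, values in MB): on a 512 GB machine that 30/30/30 split would look like

    # roughly 30% of 512 GB each
    -iqmc 157000
    -iqtc 157000
    -iqlm 157000

leaving about 10% for the rest of IQ, the OS, and anything else.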
It should be noted that for NLS, the NLS bible, "SAP First Guidance - SAP-NLS Solution with SAP IQ", sets the best practices slightly differently. They recommend using 80% of RAM and giving each cache 1/3 of that 80%. See page 46.
The Linux swapping of IQ RAM is generally an issue with the swappiness setting at the OS level. I recommend setting that to 0-10 to help avoid swapping of IQ.
Swappiness is a Linux kernel parameter that controls the relative weight given to swapping out runtime memory, as opposed to dropping pages from the system page cache. Swappiness can be set to values between 0 and 100 inclusive. A low value causes the kernel to avoid swapping; a higher value causes the kernel to make more use of swap space.
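One common way to set it (needs root, and assumes you manage sysctl settings directly rather than through a config management tool):

    # apply immediately
    sysctl -w vm.swappiness=10
    # make it persistent across reboots
    echo "vm.swappiness = 10" >> /etc/sysctl.conf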
One other note on memory: make sure that the Linux Out of Memory (OOM) killer is not enabled. I've seen too many times where this is configured and it kills IQ for consuming too much RAM. Never a good thing for production databases!
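One possible safeguard (just a sketch, assuming the server binary is iqsrv16 on your platform) is to exempt the running IQ process from OOM selection:

    # -1000 tells the kernel never to pick this process as an OOM victim
    echo -1000 > /proc/$(pgrep -o iqsrv16)/oom_score_adj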
Mark
Hi
As Mark already stated, have a look at the SAP First Guidance Document for the Implementation of the SAP-NLS Solution.
Best Regards Roland