This is part 3 of the “IdM SQL Basics” blog posts. This was a work in progress and is no longer in progress: currently only the Microsoft SQL Server topic is more or less done, but as this was taking much more time than expected I published the posts in their current state and will update them whenever possible.

Update 25.07.2014: As Matt outlines in The Future of SAP IDM, IdM is moving to Sofia and our Labs team is moving on to other ventures. It is very unlikely that I will update these blog entries any further. I’ve added some short keyword stuff in the Oracle section, but if someone else here on SDN wants to take over and/or use parts of this in improved blog entries, you are most welcome to do so. More might be available in a raw queries-and-useful(?)-stuff infodump, if/when I publish it.

#1: IDStore views, tables, queries & basic examples and some examples of performance impacts => Here

#2: How to locate problem queries in your implementation => Here

#3: Testing and improving queries  => This should be it

This doc tries to give you some ideas on how to troubleshoot long-running queries that you might have found in your IdM solution, perhaps using methods from part #2. Since most queries can be written in a couple of different ways, I’ll try to show how to compare the versions and help decide which one would do your system good and which one would be bad. I also give an approach for temporarily halting single entries in the workflow and getting values from them that can be used to replace %MSKEY%/%MX_MANAGER%/other value references in the queries.

SQL Server

SQL Server has some built-in mechanisms that are very useful, and most of them can be used without additional permissions.


SQL Server Management Studio (SSMS) and SQL Server can give you some fairly accurate execution statistics if you enable them. The ones I use most frequently are IO & TIME, which are enabled by running the following statements in a query window.
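These are standard T-SQL session settings:

```sql
-- Enable per-query timing and I/O statistics for this session
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
```

To turn them off again, run the same statements with OFF.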



I usually use IO & TIME combined as this gives fairly accurate time and IO usage figures. When using SSMS you only need to run these commands once, and they remain valid for the remainder of the session. Running this produces very little to be excited about:


To demonstrate the output they give, I’ve used a query from Part #1. Unfortunately I’ve lost the virtual machine I used in parts 1 and 2, so the data and names of the users are different, but I trust you see my point despite that. The query lists all attributes for entries named USER.AQ.1000%, with the bad example first:

select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in

(select mcmskey from idmv_vallink_basic where mcAttrName = 'MSKEYVALUE' and mcValue like 'USER.AQ.1000%')

The resultset will be listed as usual:


The exciting stuff is sitting in the “Messages” tab:


At the top is the result from the time statistics, showing the time spent parsing the query and creating an execution plan.

Followed by the number of rows returned (86).

Next we see the results of the IO statistics – a list of the tables (or their indexes) that were accessed to produce the resultset and the number of read operations performed on each. In general you can say that the numbers to the left are good (scan count being the lowest-cost index operation) and the cost grows as the numbers go to the right. If you see physical reads you might try executing the query again to see if it just needed to populate caches, but it’s usually not a good sign.

At the end you get the final time statistics, showing the total query execution time.
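As an illustration of the shape of that Messages-tab output (the numbers and table names here are invented for this sketch, not taken from the actual run):

```
SQL Server parse and compile time:
   CPU time = 15 ms, elapsed time = 41 ms.

(86 row(s) affected)

Table 'mxi_values'. Scan count 2, logical reads 412, physical reads 0, read-ahead reads 0.
Table 'mxi_attributes'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 32 ms, elapsed time = 147 ms.
```

The parse/compile block comes first, then one IO line per table touched, then the total execution times at the end.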

Now, let’s do the same with the “better” version of the query (see part #1 again):

select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in

(select mcmskey from idmv_entry_simple where mcMSKEYVALUE like 'USER.AQ.1000%')

The Messages tab shows us that the total execution time is a lot lower, and the IO statistics show that the number of scans/reads is significantly smaller, which means less load on the database from joins, merges of data from different tables, and so on.


So when running into a query that takes a long time to execute this can be a good way to see how much better your improved version is.

There’s an additional statistic I’ve found helpful at times: PROFILE. This gives a very detailed list of what the query execution actually looks like. It’s quite a task to read the results.
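PROFILE is enabled the same way as the other statistics:

```sql
-- Enable the detailed per-operator execution profile for this session
SET STATISTICS PROFILE ON;
```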


Using PROFILE can help locate where excessive read operations or loops occur. It’s still not the definitive all-inclusive list of things the server does, but there are already enough numbers and data in it to interpret badly, so I usually stop here. If you see something like Table Access or Full Scan it’s time to panic: at that point the server has given up trying to use an index and you’re looking at a row-by-row read of all the data in the affected table, which should be avoided whenever possible.

Profile of the bad example and the better example:



Execution Plan

You can do a very quick and dirty comparison of two queries by selecting them both and running them with Include Actual Execution Plan enabled. Click the “Include Actual Execution Plan” button before executing the query, and if you’ve selected multiple statements you will get an automatic query cost relative to the batch for each of them.


This example shows that the second statement without a doubt has the lower cost of the two.


Oracle

To Be Done (Autotrace, Explain plan, Profiler +++). This is in keyword form and will not be completed (see the introduction section for reasons):

Explain plan doesn’t run the query while autotrace does; otherwise they are fairly similar. If you run SET AUTOTRACE ON STAT first you also get a statistics window outlining useful stuff like result byte size, number of reads/sorts, table and index scan counts, etc. This also works with run script (F5). To use the autotrace function, the account you use needs some additional access:
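A typical set of grants for this (idmuser is a placeholder account name here, and the exact set may vary by Oracle version and tool) looks something like:

```sql
-- Access to session statistics views, usually needed for autotrace in SQL Developer
GRANT SELECT ON v_$session  TO idmuser;
GRANT SELECT ON v_$sesstat  TO idmuser;
GRANT SELECT ON v_$statname TO idmuser;
```

In SQL*Plus the PLUSTRACE role serves a similar purpose.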



Then use F6 or the button to run a query with autotrace.


Then apply brain to analyze the output (in my example, TABLE ACCESS (FULL) could be considered bad), improve by changing the query, adding indexes or something more creative, and retest to see if it solves the issue.

Explain plan: use F10 or the button in the UI. Same plan output as above. You can look for plans in the plan cache using:

select sql_id, sql_text from v$sql where sql_fulltext like '%select distinct mskey from (SELECT mskey from mxiv_values VALS where%'

With the SQL ID you can get the plan using

SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('gwp663cqh5qbf',0));

Formatting options allow more or less information; much info on the options is available online. Example:
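One example using the format parameter (documented under DBMS_XPLAN.DISPLAY_CURSOR; actual row counts require the gather_plan_statistics hint or STATISTICS_LEVEL=ALL):

```sql
-- 'ALLSTATS LAST' adds actual rows and buffer gets for the last execution
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('gwp663cqh5qbf', 0, 'ALLSTATS LAST'));
```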


Or, if you’re in luck, a nice HTML report may be available (google dbms_sqltune):

select DBMS_SQLTUNE.REPORT_SQL_MONITOR(type=>'HTML',report_level=>'ALL',sql_id=>'1zf0h9sw688bx') as report FROM dual;  

History and stats of executions for a specific plan

select * from DBA_HIST_SQLSTAT where sql_id = 'a2zqghuq57f2y'  

Executions for a timeframe

select * 
from   dba_hist_sqlstat t, dba_hist_snapshot s 
where  t.snap_id = s.snap_id 
and    t.dbid = s.dbid 
and    t.instance_number = s.instance_number 
and    s.begin_interval_time between trunc(sysdate)-1 and trunc(sysdate)  

Or generate one manually using something like:

EXPLAIN PLAN SET STATEMENT_ID = 'SAPTEST_1' FOR
select * from (
  SELECT mskey from idmv_jmx_entries VALS where mskey in
    (select distinct mskey from mxi_values where SearchValue ='PER' AND Attr_ID=60)
  AND mskey in
    (select distinct mskey from mxi_values where SearchValue ='KRABSETSVE' AND Attr_ID=59)
  AND disabled=0
);

SET LONG 100000;
COL xplan format a100;
SELECT dbms_xplan.build_plan_xml(statement_id => 'SAPTEST_1') AS XPLAN FROM dual;

More useful information here: DBMS_XPLAN

SQL Developer also has a very powerful Profiler function which is very useful when testing procedure improvements, but this is more useful during development than implementation.


See the introduction section; this is now someone else’s thing to write about.

How to test… PVOs, conditionals, switches and uSelects

One of the big challenges is that the data in the IdM solution can be temporary. Pending Value Objects (PVOs), for example, are deleted when processing is completed. If the conditional statement you’re having problems with acts on PVOs, it becomes very difficult to test the query.

A crude but very efficient “breakpoint” is to insert an action before or after the problem step where you simply remove the dispatchers from the job. Sometimes the “Sync” or “Log: <whatever>” tasks are placed perfectly for this and you can use them for something good. This will cause the processing of the task to stop, and all entries/PVOs will halt at that point and remain available for query testing. This more or less requires that you have the system to yourself, since it blocks all processing on that task.

Another alternative is to create a selective block by using a script that checks if the entry has a particular reference or value and blocks only that entry. It’s actually one of the few good uses I’ve found for those “Sync” & “Log: <whatever operation>” actions that are in the framework. You can of course create a new action anywhere and use the same scripts. Here’s a very basic example that you can extend to output the mskey & other values from the entry that you’d need to fill in data in the SQL statements:

1) I’ve defined two global constants:


2) Then I modify my “Log:Create User” task, and set the retry count on the task to a number higher than the haltcount:


3) I also replace the NADA script with a “PauseForEntry” script (attached at the end of this blog), and add some additional values:


If the mskey or entry reference matches the global, it will set the haltcount value in a context variable and count it down to 0 before letting the entry continue, using uSkip(1,2) to fail the entry until then.

And with that I buy myself about 100*30 seconds to test/debug queries on a specific entry without affecting other developers working on the same system and I get the MX_MANAGER and MX_TITLE values I need for my conditional query or whatnot. I can also release the entry at any time by changing the global constant to another mskey, such as 0. I would not recommend leaving this in when migrating to production environments though.

External references

SQL Server

Statistics Time:

Statistics IO:

Statistics Profile:



Showplan XML:

PauseForEntry script:

// Main function: PauseForEntry
function PauseForEntry(Par){
          mskey = Par.get("MSKEY");
          parentmskey = Par.get("PARENTMSKEY");
          haltcount = Par.get("HALTCOUNT");
          haltfor = Par.get("HALTFOR");
          debugValues = Par.get("DEBUGVALUES");
          if (mskey == haltfor) {
                    mStr = "Debug script found entry matching global DEBUG_HALTFOR mskey:"+mskey;
          } else if (parentmskey == haltfor) {
                    mStr = "Debug script found PVO referencing matching global DEBUG_HALTFOR mskey:"+mskey;
          } else {
                    // This is not the entry we're looking for... Move along
                    return;
          }
          currentHaltCount = uGetContextVar("HALTCOUNT",haltcount);
          mStr += "\r\n Current haltcount:"+currentHaltCount;
          currentHaltCountI = parseInt(currentHaltCount);
          if (currentHaltCountI < 1) {
                    // We've given ourselves enough time, let the entry go
                    mStr += "\r\n Releasing entry";
                    uError(mStr);
          } else {
                    // Count down, store the new value and fail the entry so it is retried
                    currentHaltCountI--;
                    mStr += "\r\n Holding entry, debugvalues are:"+debugValues;
                    uError(mStr);
                    OutString = uSetContextVar("HALTCOUNT",currentHaltCountI);
                    uSkip(1,2);
          }
          return;
}
