The IdM 7.2 Config Analyzer – A deeper look inside
In this blog I want to give a broader overview of the IdM Config Analyzer than the SAP document “Using the Configuration Analyzer” provides. I also want to visualize the config analyzer a bit more.
First I will go into the configuration options in detail. After that I will cover the output of the config analyzer, interpret it using some examples, and explain what has to be done in those cases. At the end I will sum up the benefits of the config analyzer and give an outlook on the migration sidesteps we have experienced so far.
The following systems / applications have been used in this blog:
- SAP NetWeaver Identity Management 7.1 SP 4 and SP 5
- Both MSSQL 2005 and Oracle 10.x databases
Knowledge / experience in the following areas is necessary / helpful:
- IdM 7.1 in general
- The new IdM 7.2 database structure
- Understanding and changing database queries
Referenced SAP documents:
How to use the IdM 7.2 config analyzer
Please see the document “Using the Configuration Analyzer” for details about the usage. The most important step is configuring the connection details correctly.
A ready-to-use configuration will look like the picture below. In this case I retrieved the parameters from a dispatcher:
Please note that the dispatcher has to be on the same machine the config analyzer is executed on. If you only have dispatchers on Unix/Linux, you can use the machine on which you created the dispatcher or on which your Identity Center is running. So even if Unix/Linux is your primary operating system, there has to be a Windows machine somewhere. All your correctly configured Identity Centers can be used as a source.
For an Oracle database this looks like this:
Please note that I changed the encrypted password to plain text. This can be necessary if the decryption does not work properly.
This is also a good idea when you use the third option of manually entering the connection data. What has to be entered is as follows:
- The JDBC URL can be found in the Identity Center and adapted as needed
- The driver will be oracle.jdbc.driver.OracleDriver or com.microsoft.sqlserver.jdbc.SQLServerDriver in most cases. Actually, in over four years and a good dozen projects I have never seen drivers other than Oracle and MSSQL
- In the classpath, the paths to the JDBC driver JARs have to be entered
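As a small sketch, the three manual connection settings could be assembled as follows. This is for illustration only; the host names, ports, database names and JAR paths are hypothetical placeholders, while the URL patterns and driver class names match the two databases mentioned above:

```python
# Sketch: assembling the three manual connection settings.
# Host, port, database name and JAR paths are hypothetical examples.

def jdbc_settings(db_type, host, port, db_name, jar_path):
    """Return the JDBC URL, driver class and classpath for the two
    database types used in this blog (Oracle and MSSQL)."""
    if db_type == "oracle":
        return {
            "url": f"jdbc:oracle:thin:@{host}:{port}:{db_name}",
            "driver": "oracle.jdbc.driver.OracleDriver",
            "classpath": jar_path,  # e.g. path to the Oracle ojdbc JAR
        }
    if db_type == "mssql":
        return {
            "url": f"jdbc:sqlserver://{host}:{port};databaseName={db_name}",
            "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
            "classpath": jar_path,  # e.g. path to the sqljdbc JAR
        }
    raise ValueError(f"unsupported database type: {db_type}")

oracle = jdbc_settings("oracle", "idmdb01", 1521, "MXMC", r"C:\jdbc\ojdbc14.jar")
mssql = jdbc_settings("mssql", "idmdb02", 1433, "mxmc_db", r"C:\jdbc\sqljdbc.jar")
print(oracle["url"])
print(mssql["url"])
```

If the decryption problem mentioned above occurs, the plain-text password simply replaces the encrypted one in the same configuration.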
As seen in the SAP document these files are created:
The files in the archive serve more as information than for the actual migration. Yet they are quite helpful indeed. Please see the chapter “Benefits of using the config analyzer regularly” for more details.
The results of the three issue files will be discussed in the next chapter.
Examples from IdM 7.2 migration
If you take a look in the .html file you will find a neat looking overview. This is helpful in the beginning of a migration when you need to understand and find the mentioned issues. I recommend storing the .csv file in Excel format. This allows you to add some more information such as a responsible person, a status, etc. For the .xml file, well, the only thing I could think of is viewing it in Notepad++ or a similar text editor.
The first thing that really catches the eye is the large number of issues from the 7.1 SAP Provisioning Framework. If you have a second development system, or dare to do it anyway (which I do not recommend except in a system created only for this purpose), erase the framework and all the “sap_” scripts. After that, run the config analyzer again. Now there should be considerably fewer issues remaining. If I remember correctly, some 200 to 300 of them were left.
Erasing the old SAP Provisioning Framework is only a good idea if you are really sure that you changed everything to the new one.
The different types of issues are described quite well in the SAP documentation. Instead of describing them again, I will give you some examples. In all queries I replaced the global constants and did some anonymizing.
Example 1: Different logic needed:
Query from a job’s source tab regarding the role model:
SELECT * FROM sapISV_SAProleAssign WHERE
AND (mskey IN (SELECT mskey FROM MXIV_SENTRIES
WHERE attrname = 'ISV_STATUS' AND searchvalue = 'ACTIVE'))))
ORDER BY logonuid, roleAssignments
And here the 7.2 query:
SELECT * FROM sapISV_SAProleAssign WHERE
AND NOT mskey IN (SELECT mskey FROM IDMV_VALUE_BASIC
WHERE attrname = 'ISV_STATUS' AND searchvalue = 'DELETED'))))
ORDER BY logonuid, roleAssignments
The view change is nothing spectacular. But what was special here is that we had to change the query logic a bit, as the role model is quite complex. There were also changes in other parts of the query, but including them would make the query too complex to understand.
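The logic change between the two excerpts can be made concrete with a small Python sketch. It uses an in-memory SQLite table as a stand-in for the IdM attribute views; the table name and sample data are invented for illustration. The point is that “status is ACTIVE” (7.1 excerpt) and “status is not DELETED” (7.2 excerpt) select different entries once an entry has no status value at all:

```python
import sqlite3

# Toy stand-in for the attribute views: one row per
# (mskey, attrname, searchvalue). Data is invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE attr_values (mskey INT, attrname TEXT, searchvalue TEXT)")
con.executemany(
    "INSERT INTO attr_values VALUES (?, ?, ?)",
    [
        (1, "ISV_STATUS", "ACTIVE"),
        (2, "ISV_STATUS", "DELETED"),
        # mskey 3 has no ISV_STATUS attribute at all
        (3, "MX_ENTRYTYPE", "MX_PERSON"),
    ],
)

# 7.1-style filter: keep entries whose status IS 'ACTIVE'.
active = [r[0] for r in con.execute(
    "SELECT DISTINCT mskey FROM attr_values "
    "WHERE attrname = 'ISV_STATUS' AND searchvalue = 'ACTIVE'")]

# 7.2-style filter: keep entries whose status is NOT 'DELETED'.
all_keys = [r[0] for r in con.execute("SELECT DISTINCT mskey FROM attr_values")]
deleted = {r[0] for r in con.execute(
    "SELECT mskey FROM attr_values "
    "WHERE attrname = 'ISV_STATUS' AND searchvalue = 'DELETED'")}
not_deleted = [k for k in all_keys if k not in deleted]

print(sorted(active))       # entry 3 is missing: it has no ACTIVE status
print(sorted(not_deleted))  # entry 3 is included: it is merely not DELETED
```

So a “mechanical” rewrite of such a condition is not always enough; you have to decide which behavior your role model actually needs.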
Example 2: Two views needed instead of only one
We used this query in 7.1:
select distinct m1.mskey from mxiv_sentries m1
join mxiv_sentries m2 on m1.mskey = m2.mskey
join mxiv_sentries m3 on m1.Searchvalue = to_char(m3.mskey)
where m1.attrname = 'MXREF_MX_PRIVILEGE' and
m2.attrname = 'ACCOUNTISV_SAP' and not m2.searchvalue in (select distinct logonuid from sapISV_SAPuser) and
m3.Attrname = 'MX_REPOSITORYNAME' and m3.searchvalue = 'ISV_SAP'
Which turns into this in 7.2:
select distinct L1.mcthismskey as mskey from idmv_link_ext L1
join idmv_value_basic T1 on L1.mcthismskey = T1.mskey
join idmv_value_basic T2 on L1.mcothermskey = T2.mskey
where L1.mcattrname = 'MXREF_MX_PRIVILEGE' and
T1.attrname = 'ACCOUNTISV_SAP' and not T1.searchvalue in (select distinct logonuid from sapISV_SAPuser) and
T2.Attrname = 'MX_REPOSITORYNAME' and T2.searchvalue = 'ISV_SAP'
In this example a triple join on mxiv_sentries turns into a join on two different views. If you were to “just replace” everything with a single view (other than the vallink views), it would not work. Indeed, the vallink views would work in this example, but if you have queries that include columns other than the standard ones, you will get into trouble.
Please note that the mcothermskey column is used instead of the searchvalue column. The latter contains only MSKEYs, so no more to_char is needed. This is a slight improvement, but it should not have a great influence.
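The join pattern above can be sketched with in-memory SQLite tables standing in for idmv_link_ext and idmv_value_basic. The sample data is invented, and the subselect against sapISV_SAPuser is omitted for brevity; the point is that the link view carries both MSKEYs of a link as numeric columns, so the value-to-key conversion from the 7.1 query disappears:

```python
import sqlite3

# Toy stand-ins for the two 7.2 views (invented sample data):
# links: one row per assignment (this entry -> other entry)
# vals:  one row per attribute value
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE links (mcthismskey INT, mcothermskey INT, mcattrname TEXT)")
con.execute("CREATE TABLE vals (mskey INT, attrname TEXT, searchvalue TEXT)")
# user entry 10 holds privilege entry 20, which belongs to repository 'ISV_SAP'
con.execute("INSERT INTO links VALUES (10, 20, 'MXREF_MX_PRIVILEGE')")
con.executemany("INSERT INTO vals VALUES (?, ?, ?)",
                [(10, "ACCOUNTISV_SAP", "JDOE"),
                 (20, "MX_REPOSITORYNAME", "ISV_SAP")])

# 7.2-style pattern: the link table joined against the value table twice,
# once for each end of the link. mcothermskey is already numeric, so no
# to_char()-style conversion is needed for the join.
rows = con.execute("""
    SELECT DISTINCT L1.mcthismskey AS mskey
    FROM links L1
    JOIN vals T1 ON L1.mcthismskey = T1.mskey
    JOIN vals T2 ON L1.mcothermskey = T2.mskey
    WHERE L1.mcattrname = 'MXREF_MX_PRIVILEGE'
      AND T1.attrname = 'ACCOUNTISV_SAP'
      AND T2.attrname = 'MX_REPOSITORYNAME'
      AND T2.searchvalue = 'ISV_SAP'
""").fetchall()
print(rows)  # the user entry holding an ISV_SAP privilege
```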
Personally I prefer value and link over vallink. But it is up to you what you want to use.
Benefits of using the config analyzer regularly
So what are the benefits of this little tool? Well, there are more than just supporting the migration:
- A “one click” summary available any time you want. This comes in very handy if you want to record the system state on a regular basis.
- Detailed information about the actual implementation in some of the most important aspects like the global constants, repositories, jobs, scripts, etc.
- Comparability between your IdM systems. Find out the differences between development, test and production.
- Finding inconsistencies / possible problems in the audit and on the entries
- Job information and their local scripts. This can be very nice if you want to replace local scripts with a global one.
- All global scripts in plain text for documentation, archiving and more
- For all tasks you can see if there is an access control and whether there are child tasks.
Unfortunately the output is a bit thin on this last part. Neither the access queries nor the task names are available.
To be honest, the config analyzer is no universal remedy, but it CAN help a lot in finding problems and taking a deeper look into the implementation.
Just to give an example:
There is a timeout/deadlocking problem somewhere in the system and it cannot be located. Sure, observing the database server is one part of the solution, but if the problem happens quite randomly you can look into the results of the config analyzer in the meantime. Scripts with heavy database access, overly complex queries and the like are much easier to find in the config analyzer's output.
In this chapter I do not want to examine all the steps mentioned in the migration guide further. What I want to mention here are the experiences made during migrations which led to sidesteps along the way.
Simply “solving the issues” found and moving on has not worked and probably will not work in most cases. Sometimes you have to rework the changes or even add workarounds to your implementation. Especially the new 7.2 views can be tricky if you have queries which concern both values and links.
Although there is a new transport mechanism in the portal, so far we have chosen the traditional way of transporting over the new one, just to be sure which changes really happen. The new way might be useful if you have enough time and enough IdM systems to try it out beforehand. I would try it if there are at least four stages and enough time; there I would use the new mechanism between stages two and three (which I assume are test and integration). But if you are under time/budget pressure it is better to transport as before. In addition, the rework mentioned in the chapter “Examples from IdM 7.2 migration” comes into effect.
And never forget to test. Testing every bit of the whole solution is crucial, as you do not want your productive system to produce errors. Sounds like old hat? Yes, but this applies to a 7.2 migration even more than to “normal development”, because it is not the implementation that has changed but the system's foundation.
But all these sidesteps depend on each individual implementation. Some might be necessary in one project, but a waste of time in the next one. Testing, however, is a simple must-have, not a sidestep.
As a conclusion, I can only recommend the IdM Config Analyzer. It is not only a must-have during the migration to 7.2, it can also be an everyday tool.
The usage is quite easy. The configuration for accessing the IdM database can be retrieved from a dispatcher or the Identity Center Console, or entered manually. As output there are several files you can take advantage of. These files carry information about migration issues as well as system and implementation details.
To round things off, I gave some insights into the migration sidesteps which may occur during a 7.2 migration.